Concurrency: An Introduction

This article introduces the basics of Grand Central Dispatch (GCD): how to use GCD for concurrent programming, and the three main kinds of dispatch queues — the main queue, concurrent queues, and serial queues. It also looks at the two ways of submitting tasks (block objects and C functions) and at the different types of operations that Cocoa provides.

Concurrency means having two or more tasks executing at the same point in time. Modern operating systems are capable of concurrency even with a single CPU; they achieve it by giving each task a fixed time slice in turn. With multiple CPU cores, however, two tasks really can run at the same moment: put simply, the operating system hands a task to a core, which works on it until it is done.

Grand Central Dispatch, or GCD for short, is implemented in C and built around block objects. With GCD you hand tasks to the operating system to distribute across the available cores; as a programmer you never need to know which core is executing which task.

At the heart of GCD are dispatch queues. On both iOS and macOS, dispatch queues are pools of threads managed by GCD. You never touch those threads directly; instead you work with the dispatch queues, submitting tasks to them and asking the queues to invoke your tasks: synchronously, asynchronously, after a certain delay, and so on.

To use GCD in your app, you do not need to import any special third-party library. Apple has already folded GCD into various frameworks, including Core Foundation and Cocoa/Cocoa Touch. All GCD functions and data types begin with the dispatch_ prefix; for example, dispatch_async submits a task to a queue for asynchronous execution, and dispatch_after runs a block after a given delay.
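As a minimal Objective-C sketch of these two functions (assuming an iOS/macOS target where Foundation and libdispatch are available):

```objc
#import <Foundation/Foundation.h>

// Submit a block to a background (global concurrent) queue asynchronously.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    NSLog(@"Running on a background thread");
});

// Run a block on the main queue after a 2-second delay.
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(2.0 * NSEC_PER_SEC)),
               dispatch_get_main_queue(), ^{
    NSLog(@"Two seconds later, back on the main thread");
});
```

Note that dispatch_async returns immediately; the block runs later on a thread GCD chooses for that queue.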
There are three kinds of dispatch queue:

Main queue
This queue performs all of its tasks on the main thread, which is where Cocoa and Cocoa Touch require all UI work to happen. Use the dispatch_get_main_queue function to get the main queue.
Concurrent queues
These are queues you obtain from GCD to execute tasks synchronously or asynchronously. Multiple concurrent queues can run multiple tasks in parallel with no effort on your part, and with no thread management required. Use the dispatch_get_global_queue function to get a concurrent queue.
Serial queues
No matter how many synchronous or asynchronous tasks you submit to a serial queue, it executes them one at a time, in FIFO (first in, first out) order. These tasks run not on the main thread but on a non-main thread, in strict submission order. Use the dispatch_queue_create function to create a serial queue.

There are two ways to submit tasks to a queue:
1. Block objects
2. C functions
Block objects are the best way to harness the enormous power of GCD. Some GCD functions also come in variants that accept a C function in place of a block object, so only a handful of GCD functions can be used with C functions.
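As a sketch of the C-function variant, the *_f functions take a context pointer and a plain C function instead of a block (the function and message here are made up for illustration):

```objc
#include <dispatch/dispatch.h>
#include <stdio.h>

// A plain C function used as a task; it receives whatever context
// pointer was passed to dispatch_async_f.
static void my_task(void *context) {
    printf("Task ran with context: %s\n", (const char *)context);
}

// dispatch_async_f is the C-function counterpart of dispatch_async.
dispatch_async_f(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0),
                 (void *)"hello from a C function",
                 my_task);
```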

Cocoa provides three different types of operation:
Block operations: These facilitate the execution of one or more block objects.
Invocation operations: These let you invoke a method on an existing object.
Plain operations: These are classes that need to be subclassed in a simple way; the code to execute is written inside the operation object's main method.

Operations, as mentioned earlier, can be managed by operation queues, whose data type is NSOperationQueue. After instantiating any of the operations just described (block, invocation, or plain operations), you can add them to an operation queue and let the queue manage them.
An operation object can depend on other operation objects, waiting for one or more of them to finish before executing its own associated task. Unless you add dependencies yourself, you have no control over the order in which operations execute. For example, even if you add operations to a queue in a specific order, that is no guarantee they will run in that order; execution order is entirely up to the queue.
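A dependency between two operations can be sketched like this (the "download"/"parse" names are hypothetical):

```objc
#import <Foundation/Foundation.h>

NSBlockOperation *download = [NSBlockOperation blockOperationWithBlock:^{
    NSLog(@"Downloading...");
}];
NSBlockOperation *parse = [NSBlockOperation blockOperationWithBlock:^{
    NSLog(@"Parsing...");
}];

// parse will not start until download has finished,
// regardless of the order in which they are added to the queue.
[parse addDependency:download];

NSOperationQueue *queue = [[NSOperationQueue alloc] init];
[queue addOperation:parse];
[queue addOperation:download];
```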

When working with operation queues and operations, keep the following points in mind:

  • By default, operations run on the thread that starts them via the start instance method. If you want operations to work asynchronously, either use an operation queue, or subclass NSOperation and detach a new thread inside the operation's main method.
  • An operation can wait for another operation to finish before it starts, but be careful not to create a deadlock through circular dependencies.
  • Operations can be cancelled. If you subclass NSOperation to create a custom operation object, you must use the isCancelled instance method to check whether the operation has been cancelled before (and periodically while) performing its associated task.
  • Operation objects are KVO-compliant for various key paths, such as isFinished, isReady, and isExecuting.
  • If you plan to implement an operation by subclassing NSOperation, you must create your own autorelease pool in the operation's main method.
  • Always keep a reference to the operation objects you create. The concurrent nature of operation queues might make it impossible for you to retrieve a reference to an operation after it has been added to the queue.
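The cancellation and autorelease-pool points above can be combined into one sketch of a plain operation subclass (MyOperation is a hypothetical name):

```objc
#import <Foundation/Foundation.h>

@interface MyOperation : NSOperation
@end

@implementation MyOperation
- (void)main {
    // Custom operations should wrap their work in their own
    // autorelease pool.
    @autoreleasepool {
        for (NSUInteger i = 0; i < 1000; i++) {
            // Bail out promptly if the operation has been cancelled.
            if (self.isCancelled) {
                return;
            }
            // ... perform one unit of work here ...
        }
    }
}
@end
```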
