Concurrent Programming with Threads

This article explores the use of threads in a concurrent server. By comparing the process-based and I/O-multiplexing approaches, it shows how threads can improve server throughput and simplify data sharing. It also discusses pitfalls of threaded programming, such as race conditions and memory leaks, that must be avoided.
In the previous two essays, we looked at two approaches for creating concurrent logical flows. With the first approach, we use a separate process for each flow. The kernel schedules each process automatically, and each process has its own private address space, which makes it difficult for flows to share data. With the second approach, we create our own logical flows and use I/O multiplexing to explicitly schedule them. Because there is only one process, all flows share the entire address space. This section introduces a third approach, based on threads, that is a hybrid of these two.
      The code for a concurrent echo server based on threads is shown below.
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include "csapp.h"              // CS:APP support code: open_listenfd and the SA typedef

void echo(int connfd);          // defined in the CS:APP echo.c support file
void *thread(void *vargp);

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <port>\n", argv[0]);
        exit(0);
    }
    int port = atoi(argv[1]);
    int listenfd = open_listenfd(port);

    int *connfdp;
    struct sockaddr_in clientaddr;
    pthread_t tid;
    while (1) {
        socklen_t clientlen = sizeof(clientaddr);
        connfdp = malloc(sizeof(int));                        // one block per connection
        *connfdp = accept(listenfd, (SA *) &clientaddr, &clientlen);
        pthread_create(&tid, NULL, thread, connfdp);
    }
}

// Thread routine
void *thread(void *vargp)
{
    int connfd = *((int *)vargp);

    // Detach so this thread's resources are reclaimed automatically when it exits
    pthread_detach(pthread_self());
    // Free the block that the main thread malloc'd for the descriptor
    free(vargp);

    echo(connfd);
    close(connfd);
    return NULL;
}
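      The echo routine itself comes from the CS:APP support code and is not shown above. Here is a minimal sketch of what it does, using plain read/write rather than the book's RIO buffered wrappers; this is an assumption about its behavior, not the book's exact implementation:

#include <unistd.h>    // read, write

// Read bytes from the connected descriptor and write them straight back
// until the client closes its end of the connection.
void echo(int connfd)
{
    char buf[8192];
    ssize_t n;
    while ((n = read(connfd, buf, sizeof(buf))) > 0)
        write(connfd, buf, n);    // short-write handling omitted for brevity
}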
      The overall structure is similar to the process-based design. The main thread repeatedly waits for a connection request and then creates a peer thread to handle the request. While the code looks simple, there are a couple of general and somewhat subtle issues we need to look at more closely. The first issue is how to pass the connected descriptor to the peer thread when we call pthread_create. The obvious approach is to pass a pointer to the descriptor, as in the following:
connfd = accept(listenfd, (SA *) &clientaddr, &clientlen);
pthread_create(&tid, NULL, thread, &connfd);
Then we have the peer thread dereference the pointer and assign it to a local variable, as follows:
void *thread(void *vargp)
{
    int connfd = *((int *)vargp);  // Assignment statement
    // ... 
    return NULL;
}
      This would be wrong, however, because it introduces a race between the assignment statement in the peer thread and the accept statement in the main thread. If the assignment completes before the next accept, the peer thread's local connfd gets the correct descriptor; if it completes after, the peer thread reads the descriptor of the next connection instead. To avoid this race, the server above assigns each connected descriptor returned by accept to its own dynamically allocated memory block. Another issue is avoiding memory leaks in the thread routine. Since we are not explicitly reaping threads, we must detach each thread so that its memory resources are reclaimed when it terminates. Further, we must be careful to free the memory block that was allocated by the main thread.
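      A different way to sidestep both the race and the explicit free, not used in the server above, is to pass the descriptor by value, packed into the pointer argument itself. A minimal sketch, assuming the descriptor fits in an intptr_t (true on common platforms):

#include <stdint.h>    // intptr_t

// Peer thread: recover the descriptor that was packed into the pointer argument
void *thread(void *vargp)
{
    int connfd = (int)(intptr_t)vargp;
    pthread_detach(pthread_self());
    echo(connfd);
    close(connfd);
    return NULL;
}

// In the main thread's accept loop: no heap allocation, so no race and nothing to free
// pthread_create(&tid, NULL, thread, (void *)(intptr_t)connfd);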
      There are several advantages to using threads. First, threads have less run-time overhead than processes, so we would expect a thread-based server to have better throughput (measured in clients serviced per second) than a process-based one. Second, because all threads share the same global variables and heap storage, it is much easier for threads to share state information.
      The major disadvantage of using threads is that the same memory model that makes it easy to share data structures also makes it easy to share data structures unintentionally and incorrectly. As we learned, shared data must be protected, functions called from threads must be reentrant, and race conditions must be avoided.
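      As a concrete illustration of protecting shared data, here is a sketch, not part of the server above, of a global counter that every peer thread updates after servicing a client; the names total_clients, count_mutex, and record_client are hypothetical:

#include <pthread.h>

// Hypothetical shared state: total clients serviced by all peer threads
static int total_clients = 0;
static pthread_mutex_t count_mutex = PTHREAD_MUTEX_INITIALIZER;

// Each peer thread calls this after servicing a client. The mutex makes the
// read-modify-write on total_clients atomic; without it, concurrent increments
// could be lost to a race condition.
void record_client(void)
{
    pthread_mutex_lock(&count_mutex);
    total_clients++;
    pthread_mutex_unlock(&count_mutex);
}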
