Full Multi-thread Client/Server Socket Class with ThreadPool

This article presents an improved multi-threaded client/server socket class. By combining templates with a thread pool, it improves performance and supports both synchronous and asynchronous operation over TCP and UDP.

To run the application as a client, type SocketServer.exe /client from the command prompt.

Introduction

Recently, I updated one of my first articles here at The Code Project, the ServerSocket. While the base class (CSocketHandle) is quite stable and easy to use, one has to admit that some of the initial design decisions, made to keep the communication interface intact, are starting to be an issue for newer development.

That is the goal of this article: I present the new and improved version of the communication class and show how you can take advantage of thread pooling to increase the performance of your network solutions.

Description

First, I assume that you are already familiar with socket programming and have several years of experience under your belt. If that is not the case, I highly recommend the links in the reference section; they should guide you along the way. For those who are "all fired up and ready to go", please read on. I will try to shed some light on how you can use the new classes to enhance the performance of your system.

Synchronous Sockets

By default, sockets operate in blocking mode, which means you need a dedicated thread to read/wait for data while another one writes/sends data on the other side. This is now made easier for you by the new template classes.

Typically, a client needs only one thread, so there is no problem there. But if you are developing server components and need reliable communication or a point-to-point link with your clients, sooner or later you will find that you need multiple threads to handle your requests.
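
To make the blocking model concrete, here is a conceptual sketch (not the actual template code) of the dedicated read loop that such a worker thread ends up running; SocketClientImpl and SocketServerImpl wrap this pattern for you:

// Conceptual sketch only: the blocking read loop that a dedicated worker
// thread runs. Error handling and shutdown signaling are elided.
#include <winsock2.h>

void BlockingReadLoop(SOCKET s)
{
    char buffer[2048];
    for (;;)
    {
        // recv() blocks until data arrives or the peer closes the connection.
        int bytes = ::recv(s, buffer, sizeof(buffer), 0);
        if (bytes <= 0)
            break;  // 0 = graceful close, SOCKET_ERROR = failure
        // ... hand (buffer, bytes) to the application, e.g. an OnDataReceived handler
    }
}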

SocketClientImpl

The first template, SocketClientImpl, encapsulates socket communication from a client perspective. It can be used to communicate over TCP (SOCK_STREAM) or UDP (SOCK_DGRAM). The good news is that it handles the communication loop and reports incoming data and several important events in an efficient manner, which makes the task really straightforward for you.

template <typename T, size_t tBufferSize = 2048>
class SocketClientImpl
{
    typedef SocketClientImpl<T, tBufferSize> thisClass;
public:
    SocketClientImpl()
    : _pInterface(0)
    , _thread(0)
    {
    }

    void SetInterface(T* pInterface)
    {
        ::InterlockedExchangePointer(reinterpret_cast<void**>(&_pInterface), pInterface);
    }

    bool IsOpen() const;
    bool CreateSocket(LPCTSTR pszHost, LPCTSTR pszServiceName,
			int nFamily, int nType, UINT uOptions = 0);
    bool ConnectTo(LPCTSTR pszHostName, LPCTSTR pszRemote,
			LPCTSTR pszServiceName, int nFamily, int nType);
    void Close();
    DWORD Read(LPBYTE lpBuffer, DWORD dwSize,
		LPSOCKADDR lpAddrIn = NULL, DWORD dwTimeout = INFINITE);
    DWORD Write(const LPBYTE lpBuffer, DWORD dwCount,
		const LPSOCKADDR lpAddrIn = NULL, DWORD dwTimeout = INFINITE);
    bool StartClient(LPCTSTR pszHost, LPCTSTR pszRemote,
		LPCTSTR pszServiceName, int nFamily, int nType);
    void Run();
    void Terminate(DWORD dwTimeout = 5000L);

    static bool IsConnectionDropped(DWORD dwError);

protected:
    static DWORD WINAPI SocketClientProc(thisClass* _this);
    T*              _pInterface;
    HANDLE          _thread;
    CSocketHandle   _socket;
};

The client interface reports the following events:

class ISocketClientHandler
{
public:
    virtual void OnThreadBegin(CSocketHandle* ) {}
    virtual void OnThreadExit(CSocketHandle* ) {}
    virtual void OnDataReceived(CSocketHandle* , const BYTE* ,
				DWORD , const SockAddrIn& ) {}
    virtual void OnConnectionDropped(CSocketHandle* ) {}
    virtual void OnConnectionError(CSocketHandle* , DWORD ) {}
};

Function              Description
OnThreadBegin         Called when the thread starts.
OnThreadExit          Called when the thread is about to exit.
OnDataReceived        Called when new data arrives.
OnConnectionDropped   Called when the connection is lost or the socket is closed.
OnConnectionError     Called when a socket error is detected.

This interface is in fact optional; your program can be implemented like this:

class CMyDialog : public CDialog
{
    typedef SocketClientImpl<CMyDialog> CSocketClient; // CMyDialog handles events!
public:
    CMyDialog(CWnd* pParent = NULL);   // standard constructor
    virtual ~CMyDialog();

    // ...
    void OnThreadBegin(CSocketHandle* ) {}
    void OnThreadExit(CSocketHandle* ) {}
    void OnDataReceived(CSocketHandle* , const BYTE* , DWORD , const SockAddrIn& ) {}
    void OnConnectionDropped(CSocketHandle* ) {}
    void OnConnectionError(CSocketHandle* , DWORD ) {}

protected:
    CSocketClient m_SocketClient;
};
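
A minimal usage sketch could then look like the following. Note that the host, service, and message values are placeholders, and I assume the first parameter of StartClient is the local address (NULL for any); check the declaration above for the exact parameter order:

void CMyDialog::StartNetworking()
{
    // The dialog receives the events (see the handler methods above).
    m_SocketClient.SetInterface(this);

    // Placeholder values: TCP client to localhost:8080.
    // StartClient() spawns the worker thread that runs the read loop.
    if (!m_SocketClient.StartClient(NULL, _T("localhost"), _T("8080"),
                                    AF_INET, SOCK_STREAM))
    {
        AfxMessageBox(_T("Failed to start the socket client"));
        return;
    }

    // Send a small message; address and timeout use the default arguments.
    BYTE msg[] = "hello";
    m_SocketClient.Write(msg, sizeof(msg) - 1);
}

void CMyDialog::StopNetworking()
{
    // Close the socket and wait (5 seconds by default) for the thread to exit.
    m_SocketClient.Terminate();
}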

SocketServerImpl

The second template, SocketServerImpl, handles all the communication tasks from a server perspective. In UDP mode, it behaves pretty much the same way as the client. In TCP mode, it delegates the management of each connection to a separate pooled thread. The thread pool template is a modified version of one published on MSDN by Kenny Kerr. You should be able to reuse it in your projects without any issue; it can be used to call a class member function from a thread pool. Callbacks can have the following signatures:

    void ThreadFunc();
    void ThreadFunc(ULONG_PTR);

Remember, you need Windows 2000 or higher to use QueueUserWorkItem. That should not be a problem unless you are targeting Windows CE. I was told no one uses Windows 95/98 anymore! :)

class ThreadPool
{
    static const int MAX_THREADS = 50;
    template <typename T>
    struct ThreadParam
    {
        void (T::* _function)(); T* _pobject;
        ThreadParam(void (T::* function)(), T * pobject)
        : _function(function), _pobject(pobject) { }
    };
public:
    template <typename T>
    static bool QueueWorkItem(void (T::*function)(),
                                  T * pobject, ULONG nFlags = WT_EXECUTEDEFAULT)
    {
        std::auto_ptr< ThreadParam<T> > p(new ThreadParam<T>(function, pobject) );
        WT_SET_MAX_THREADPOOL_THREADS(nFlags, MAX_THREADS);
        bool result = false;
        if (::QueueUserWorkItem(WorkerThreadProc<T>,
                                p.get(),
                                nFlags))
        {
            p.release();
            result = true;
        }
        return result;
    }

private:
    template <typename T>
    static DWORD WINAPI WorkerThreadProc(LPVOID pvParam)
    {
        std::auto_ptr< ThreadParam<T> > p(static_cast< ThreadParam<T>* >(pvParam));
        try {
            (p->_pobject->*p->_function)();
        }
        catch(...) {}
        return 0;
    }

    ThreadPool();
};
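
For completeness, queuing a member function of your own class onto the pool is a one-liner. Here is a minimal sketch; the Connection class and its Process() method are hypothetical:

// Hypothetical worker class: any member function taking no arguments
// (or a single ULONG_PTR) can be queued on the pool.
class Connection
{
public:
    void Process()
    {
        // ... handle one client request on a pool thread ...
    }
};

void QueueExample(Connection& conn)
{
    // Runs conn.Process() on a thread borrowed from the system thread pool.
    if (!ThreadPool::QueueWorkItem(&Connection::Process, &conn))
    {
        // Queuing failed; fall back to processing on the calling thread.
        conn.Process();
    }
}

The SocketServerImpl template, which uses this pool to service accepted TCP connections, is declared as follows:
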
template <typename T, size_t tBufferSize = 2048>
class SocketServerImpl
{
    typedef SocketServerImpl<T, tBufferSize> thisClass;
public:
    SocketServerImpl()
    : _pInterface(0)
    , _thread(0)
    {
    }

    void SetInterface(T* pInterface)
    {
        ::InterlockedExchangePointer(reinterpret_cast<void**>
					(&_pInterface), pInterface);
    }

    bool IsOpen() const;
    bool CreateSocket(LPCTSTR pszHost,
		LPCTSTR pszServiceName, int nFamily, int nType, UINT uOptions);
    void Close();
    DWORD Read(LPBYTE lpBuffer, DWORD dwSize,
		LPSOCKADDR lpAddrIn, DWORD dwTimeout);
    DWORD Write(const LPBYTE lpBuffer, DWORD dwCount,
		const LPSOCKADDR lpAddrIn, DWORD dwTimeout);
    bool Lock()
    {
        return _critSection.Lock();
    }

    bool Unlock()
    {
        return _critSection.Unlock();
    }

    bool CloseConnection(SOCKET sock);
    void CloseAllConnections();
    bool StartServer(LPCTSTR pszHost,
		LPCTSTR pszServiceName, int nFamily, int nType, UINT uOptions);
    void Run();
    void Terminate(DWORD dwTimeout);
    void OnConnection(ULONG_PTR s);

    static bool IsConnectionDropped(DWORD dwError);

protected:
    static DWORD WINAPI SocketServerProc(thisClass* _this);
    T*              _pInterface;
    HANDLE          _thread;
    ThreadSection   _critSection;
    CSocketHandle   _socket;
    SocketList      _sockets;
};

The server interface reports the following events:

class ISocketServerHandler
{
public:
    virtual void OnThreadBegin(CSocketHandle* ) {}
    virtual void OnThreadExit(CSocketHandle* )  {}
    virtual void OnThreadLoopEnter(CSocketHandle* ) {}
    virtual void OnThreadLoopLeave(CSocketHandle* ) {}
    virtual void OnAddConnection(CSocketHandle* , SOCKET ) {}
    virtual void OnRemoveConnection(CSocketHandle* , SOCKET ) {}
    virtual void OnDataReceived
	(CSocketHandle* , const BYTE* , DWORD , const SockAddrIn& ) {}
    virtual void OnConnectionFailure(CSocketHandle*, SOCKET) {}
    virtual void OnConnectionDropped(CSocketHandle* ) {}
    virtual void OnConnectionError(CSocketHandle* , DWORD ) {}
};

This interface is also optional, but I hope you will decide to use it as it makes the design cleaner.
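
As with the client, a minimal server setup takes only a few lines. The sketch below is hypothetical: the class name, listening address (NULL, assumed to mean any interface), and service value "8080" are placeholders, and the received data is merely noted in a comment rather than processed:

// Hypothetical server built on SocketServerImpl; placeholder values only.
class CMyServer : public ISocketServerHandler
{
    typedef SocketServerImpl<CMyServer> CSocketServer;
public:
    void Start()
    {
        m_SocketServer.SetInterface(this);
        // Listen for TCP connections on port 8080 (placeholder values).
        m_SocketServer.StartServer(NULL, _T("8080"), AF_INET, SOCK_STREAM, 0);
    }

    void Stop()
    {
        m_SocketServer.Terminate(5000L);
    }

    virtual void OnDataReceived(CSocketHandle* pSH, const BYTE* pbData,
                                DWORD dwCount, const SockAddrIn& addr)
    {
        // ... process (pbData, dwCount) received from addr ...
    }

protected:
    CSocketServer m_SocketServer;
};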

Asynchronous Sockets

Windows supports asynchronous sockets, and the communication class CSocketHandle makes them accessible to you as well; you will need to define SOCKHANDLE_USE_OVERLAPPED for your project. Asynchronous communication is a non-blocking mode and thus allows you to handle multiple requests in a single thread. You may also provide multiple read/write buffers to queue your I/O. Asynchronous sockets are a big subject that probably deserves an article of its own, but I hope you will consider the design that is currently supported. The functions CSocketHandle::ReadEx and CSocketHandle::WriteEx give you access to this mode. The last template, ASocketServerImpl, shows how to use the CSocketHandle class in asynchronous read mode. The main advantage is that, in TCP mode, one thread is used to handle all your connections.
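
To give a flavor of what overlapped (asynchronous) I/O looks like at the raw Win32 level, here is a minimal sketch using WSARecv with an event. This is plain Winsock code for illustration only, not the CSocketHandle::ReadEx implementation; link with ws2_32.lib:

// Minimal overlapped read: the call returns immediately and the event is
// signaled when data actually arrives, so one thread can service many sockets.
#include <winsock2.h>

bool OverlappedReadSample(SOCKET s, char* buffer, DWORD size, DWORD& received)
{
    WSABUF buf;
    buf.buf = buffer;
    buf.len = size;

    WSAOVERLAPPED ov = {0};
    ov.hEvent = ::WSACreateEvent();     // signaled on completion

    DWORD dwFlags = 0;
    int rc = ::WSARecv(s, &buf, 1, NULL, &dwFlags, &ov, NULL);
    if (rc == SOCKET_ERROR && ::WSAGetLastError() != WSA_IO_PENDING)
    {
        ::WSACloseEvent(ov.hEvent);
        return false;                   // immediate failure
    }

    // The thread is free to do other work here; wait (or poll) later.
    ::WSAWaitForMultipleEvents(1, &ov.hEvent, TRUE, WSA_INFINITE, FALSE);
    BOOL ok = ::WSAGetOverlappedResult(s, &ov, &received, FALSE, &dwFlags);
    ::WSACloseEvent(ov.hEvent);
    return ok == TRUE;
}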

Conclusion

In this article, I present the new improvements to the CSocketHandle class. I hope the new interface makes things easier for you. Of course, I'm always open to suggestions; feel free to post your questions and comments in the discussion thread.
Enjoy!

Reference

History

  • 02/12/2009 - Initial release (published as separate article)
  • 02/17/2009 - Updated Threadpool startup flag for Windows XP
  • 03/14/2009 - Fixed hang issue (99% CPU) in templates
  • 03/29/2009 - Asynchronous mode Server template (+ Support: WindowsCE, UNIX/Linux)
  • 04/05/2009 - Fixed resource leak in server templates
  • 08/07/2009 - Fixed Asynchronous mode build (use SOCKHANDLE_USE_OVERLAPPED)
  • 09/26/2009 - IPv6 Support (Windows and Linux)