Memory keeps growing because threads created with pthread_create were never detached

This post describes a subtle memory leak in which memory keeps growing because threads were never detached (or joined) after pthread_create. It walks through the diagnosis and the fix, and shows how to avoid this class of leak.


Yesterday I tracked down a subtle memory leak: threads created with pthread_create were never reclaimed after they finished, so memory kept growing.
The symptom: the program behaved normally for a short run, but after about 12 hours top showed its memory footprint had reached a startling 2 GB, which immediately suggested a memory leak.

My first suspicion was unmatched malloc/free or new/delete, i.e. memory allocated but never released, so I wrote a small module to trace malloc/free calls. It found no unreleased memory. I then wondered whether a free-then-malloc pattern was confusing the memory manager (it turned out that although free does not return memory to the OS immediately, back-to-back free and malloc calls do not upset the OS's memory management), but a quick test program showed that was not the cause either.
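For what it's worth, a tracing module of that kind can be as small as a pair of macro wrappers that count live allocations and log every call site. The sketch below is a hypothetical minimal version (the names trace_malloc, trace_free and g_live_allocs are mine, not the original module's):

    #include <stdio.h>
    #include <stdlib.h>

    /* Count of allocations that have not been freed yet.
     * Plain counter for brevity; a real tracer would protect it with a
     * mutex or use atomics in multithreaded code. */
    static long g_live_allocs = 0;

    static void *trace_malloc(size_t size, const char *file, int line)
    {
        void *p = malloc(size);          /* real malloc: the macros come later */
        if (p != NULL)
            g_live_allocs++;
        fprintf(stderr, "malloc(%zu) -> %p at %s:%d, live=%ld\n",
                size, p, file, line, g_live_allocs);
        return p;
    }

    static void trace_free(void *p, const char *file, int line)
    {
        if (p != NULL)
            g_live_allocs--;
        fprintf(stderr, "free(%p) at %s:%d, live=%ld\n",
                p, file, line, g_live_allocs);
        free(p);
    }

    /* Redirect subsequent malloc/free calls through the tracer. */
    #define malloc(sz) trace_malloc((sz), __FILE__, __LINE__)
    #define free(p)    trace_free((p), __FILE__, __LINE__)

If the live count keeps climbing, the log shows exactly which call sites are responsible; here it stayed flat, which is what ruled out heap allocations.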

Out of ideas, I fell back on a minimal-system approach and stubbed out the functional code and all heap allocations, and the leak was still there! Watching top closely, memory ticked up a little almost every time a thread was created; the growth was slow, but the threads were clearly the culprit. Looking at the code, I had taken a shortcut and written pthread_create(&thread, NULL, &thread_function, NULL); — the second argument never set the thread to be detached at exit, and nothing ever called pthread_join or pthread_detach to release the thread's resources after it finished!
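Distilled down, the leaking pattern looks roughly like the sketch below (a hypothetical reproduction, not the original program): every worker is created joinable, exits quickly, and is never joined or detached, so its descriptor and stack stay allocated and RES in top creeps upward.

    #include <pthread.h>
    #include <unistd.h>

    static void *thread_function(void *arg)
    {
        (void)arg;
        /* do a little work, then exit */
        return NULL;
    }

    int main(void)
    {
        pthread_t thread;

        for (;;) {
            /* Joinable thread that is never joined or detached: its
             * descriptor and stack stay allocated after it exits, so the
             * process's memory use grows with every iteration. */
            if (pthread_create(&thread, NULL, &thread_function, NULL) != 0)
                break;              /* eventually fails once resources run out */
            usleep(10 * 1000);      /* slow the loop down so top is readable */
        }
        return 0;
    }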


The Linux man page spells this out:
    When a joinable thread terminates, its memory resources (thread descriptor and stack) are not deallocated until another thread performs pthread_join on it. Therefore, pthread_join must be called once for each joinable thread created to avoid memory leaks.

In other words, if a joinable thread is never joined after it finishes, its resources are never released and they simply leak. A moment of laziness, a long tail of trouble.
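If you actually need the thread's result, the alternative to detaching is to join it, which releases the same resources. A minimal sketch, assuming the result fits in a pointer:

    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>

    static void *thread_function(void *arg)
    {
        (void)arg;
        return (void *)(intptr_t)42;    /* pass a small result back to the parent */
    }

    int main(void)
    {
        pthread_t thread;
        void *result = NULL;

        if (pthread_create(&thread, NULL, &thread_function, NULL) != 0)
            return 1;

        /* pthread_join both retrieves the return value and releases the
         * terminated thread's descriptor and stack. */
        pthread_join(thread, &result);
        printf("thread returned %ld\n", (long)(intptr_t)result);
        return 0;
    }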


The fix

 1 // Simplest fix: call pthread_detach inside the thread function so the thread frees itself when it exits
 2 pthread_detach(pthread_self());
 3 
 4 
 5 // Or set the PTHREAD_CREATE_DETACHED attribute before creating the thread
 6 pthread_attr_t attr;
 7 pthread_t thread;
 8 pthread_attr_init(&attr);
 9 pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
10 pthread_create(&thread, &attr, &thread_function, NULL);
11 pthread_attr_destroy(&attr);


The approach on line 2 is the simplest: add that single call inside the thread function (anywhere before it exits works) and the thread's resources are released when it finishes. Alternatively, set the detach attribute when creating the thread, as shown on lines 5-11; the memory is then freed as soon as the thread returns or calls pthread_exit.
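Put together, the self-detach variant might look like the sketch below (a hypothetical worker, not the original code). Calling pthread_detach(pthread_self()) at the top of the thread function means the thread cleans up after itself no matter where it returns:

    #include <pthread.h>

    static void *thread_function(void *arg)
    {
        /* Detach this thread from inside the thread function: once it
         * returns, its descriptor and stack are reclaimed automatically
         * and no pthread_join is ever required. */
        pthread_detach(pthread_self());

        (void)arg;
        /* ... actual work ... */
        return NULL;
    }

    int main(void)
    {
        pthread_t thread;

        if (pthread_create(&thread, NULL, &thread_function, NULL) != 0)
            return 1;

        /* Exit only the main thread; the detached worker finishes and
         * cleans up on its own. */
        pthread_exit(NULL);
    }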

Looking back, valgrind had in fact already flagged memory from pthread_create that was never released; I just hadn't paid attention. A leak like this only becomes visible after the program has run for a long time and the small per-thread losses have piled up, so valgrind's warnings really should not be brushed aside.
