Received memory warning.

Today, while the program was running, it produced the following messages:

2011-10-27 14:04:42.119 JingDuTianXia[6465:707] Received memory warning. Level=2

2011-10-27 14:04:42.122 JingDuTianXia[6465:707] applicationDidReceiveMemoryWarning

(gdb) continue

Program received signal:  “EXC_BAD_ACCESS”.

These memory-warning levels are defined in the following notification header:
http://www.opensource.apple.com/source/Libc/Libc-594.1.4/include/libkern/OSMemoryNotification.h

That's the header for the kernel code that generates memory warnings, and it declares the following typedef:

typedef enum {
    OSMemoryNotificationLevelAny      = -1,
    OSMemoryNotificationLevelNormal   =  0,
    OSMemoryNotificationLevelWarning  =  1,
    OSMemoryNotificationLevelUrgent   =  2,
    OSMemoryNotificationLevelCritical =  3
} OSMemoryNotificationLevel;

We obviously don't have the implementation of the key function, OSMemoryNotificationCurrentLevel(), which contains the logic that decides which warning to generate, but the enum above lists the values it can return.

Our job is to respond to the low-memory warnings we receive.

In the end, what matters is handling the memory warning itself, not the exact logic that produced it.


 

Reposted from: https://www.cnblogs.com/Piosa/archive/2011/10/27/2226594.html
