DataNode throws an out-of-memory (OOM) error

This post analyzes why a DataNode in a CDH cluster threw an out-of-memory error even though its actual memory usage was far below the configured limit. It records the error log in detail and walks through the fix: adjusting kernel parameters and ulimit settings so the DataNode runs stably.

Checking the DataNode's memory usage in CDH

(Screenshot: DataNode memory usage as shown in Cloudera Manager)

DataNode error log

2017-12-17 23:58:14,422 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1437036909-192.168.17.36-1509097205664:blk_1074725940_987917, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2017-12-17 23:58:31,425 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode is out of memory. Will retry in 30 seconds.
java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:714)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:154)
	at java.lang.Thread.run(Thread.java:745)
2017-12-17 23:59:01,426 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode is out of memory. Will retry in 30 seconds.
java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:714)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:154)
	at java.lang.Thread.run(Thread.java:745)
2017-12-17 23:59:05,520 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode is out of memory. Will retry in 30 seconds.
java.lang.OutOfMemoryError: unable to create new native thread
	at java.lang.Thread.start0(Native Method)
	at java.lang.Thread.start(Thread.java:714)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:154)
	at java.lang.Thread.run(Thread.java:745)
2017-12-17 23:59:31,429 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1437036909-

Why the error occurs

According to the CDH memory view above, the DataNode's peak memory usage was only 607.1 MB (in production, a 2 GB heap is plenty for a DN), so why does it still throw an OOM error? The key is the message `unable to create new native thread`: this is not heap exhaustion. The JVM failed to spawn a new OS-level thread, which points to operating-system limits (maximum threads/processes, open files) being set too low rather than the JVM heap configuration. Solution:

1. Raise the kernel-level thread and memory-map limits, then apply them:
echo "kernel.threads-max=196605" >> /etc/sysctl.conf
echo "kernel.pid_max=196605" >> /etc/sysctl.conf
echo "vm.max_map_count=393210" >> /etc/sysctl.conf
sysctl -p  
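A quick sanity check (a sketch, assuming a standard Linux `/proc` layout) to confirm the raised kernel limits are in effect after `sysctl -p`:

```shell
# Read back the kernel limits that govern native thread creation.
cat /proc/sys/kernel/threads-max   # system-wide thread cap
cat /proc/sys/kernel/pid_max       # largest assignable PID/TID
cat /proc/sys/vm/max_map_count     # memory-mapped regions per process
```

If any value still shows the old default, re-check /etc/sysctl.conf for typos and run `sysctl -p` again.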

2. Add the following entries to /etc/security/limits.conf (they take effect only for new login sessions):
* soft nofile 196605
* hard nofile 196605
* soft nproc 196605
* hard nproc 196605