hadoop 2.2.0 cluster error messages

This post covers a NoRouteToHostException hit during HDFS file operations and its fix (firewall configuration), and also records a container-allocation failure seen while importing data from MySQL into HDFS with Sqoop, together with the corresponding solution.
1. When putting a local file into HDFS with #hadoop fs -put in.txt /test, the following error appears:

hdfs.DFSClient: Exception in createBlockOutputStream java.net.NoRouteToHostException

Solution: shut down the firewall on all nodes. This is fine in non-secure mode; if you run in secure mode, configure the firewall rules manually instead (a sketch follows the commands below).

CentOS:
#service iptables save
#service iptables stop
#chkconfig iptables off

Ubuntu 12.04:
#sudo ufw disable

Fedora 20:
#sudo systemctl status firewalld.service
#sudo systemctl stop firewalld.service
#sudo systemctl disable firewalld.service
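
If you would rather keep the firewall up (for example in secure mode), open just the Hadoop ports instead. A minimal iptables sketch, assuming the stock Hadoop 2.2 default port numbers; check core-site.xml, hdfs-site.xml and yarn-site.xml for the values your cluster actually uses:

#iptables -I INPUT -p tcp --dport 8020 -j ACCEPT    (NameNode RPC)
#iptables -I INPUT -p tcp --dport 50070 -j ACCEPT   (NameNode web UI)
#iptables -I INPUT -p tcp --dport 50010 -j ACCEPT   (DataNode data transfer)
#iptables -I INPUT -p tcp --dport 50020 -j ACCEPT   (DataNode IPC)
#iptables -I INPUT -p tcp --dport 8042 -j ACCEPT    (NodeManager web UI)
#service iptables save

Note that YARN containers also use ephemeral ports (e.g. 40543 in the logs below), which is why simply disabling the firewall is the easier route on a test cluster.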


2. "Could only be replicated to ..." errors.

See: http://wiki.apache.org/hadoop/CouldOnlyBeReplicatedTo
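
A quick first check before working through that page is whether the NameNode sees any live DataNodes at all, and whether HDFS is still in safe mode (both commands are stock Hadoop 2.x CLI):

#hdfs dfsadmin -report
#hdfs dfsadmin -safemode get

Zero live DataNodes in the report usually traces back to the firewall issue from item 1, DataNodes that are out of disk space, or DataNodes that never started.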



3. When using Sqoop to import data from MySQL to HDFS,

sqoop>start job -j 3

the job fails with the following errors:

2014-03-20 02:31:14,695 INFO [RMCommunicator Allocator]
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator:
Cannot assign container Container: [ContainerId:
container_1395399417464_0002_01_000012, NodeId: f2.zhj:40543,
NodeHttpAddress: f2.zhj:8042, Resource: ,
Priority: 20, Token: Token { kind: ContainerToken,
service: 192.168.122.3:40543 }, ] for a map as either container memory
less than required 1024 or no pending map tasks - maps.isEmpty=true
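
The allocator message names two possible causes: the offered container had less memory than the 1024 MB a map task requires, or there were simply no pending maps. If the memory branch is the real one, these are the stock Hadoop 2.x properties to check (a sketch; the values are only illustrative):

yarn-site.xml:
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>2048</value> <!-- total memory a NodeManager offers to containers -->
</property>

mapred-site.xml:
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>1024</value> <!-- per-map container request; the "required 1024" above -->
</property>

A second warning appeared alongside it: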


2014-03-20 02:33:49,930 WARN [CommitterEvent Processor #2]
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter:
Could not delete hdfs://192.168.122.1:2014/test/actor/_temporary/1
/_temporary/attempt_1395399417464_0002_m_000002


All of the above errors went away after editing /etc/hosts on every node: comment out all lines that start with 127.0.0.1 or 127.0.1.1, so that each hostname resolves to the node's real address instead of loopback.
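
For reference, a minimal sketch of the resulting /etc/hosts; the 192.168.122.1 and 192.168.122.3 (f2.zhj) entries come from the logs above, while the master hostname is a hypothetical placeholder:

#127.0.0.1   localhost
#127.0.1.1   f2.zhj
192.168.122.1   master.zhj   # hypothetical NameNode/ResourceManager host
192.168.122.3   f2.zhj       # NodeManager host from the container log above

Hadoop daemons look up their own hostname to decide which address to bind and advertise, so a hostname that maps to 127.0.0.1 or 127.0.1.1 makes other nodes try to connect to their own loopback interface and fail.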