1. Only SecondaryNameNode started; the other daemons did not
root@Master:/usr/local/hadoop/hadoop-2.6.0/sbin# jps
6014 Jps
5774 SecondaryNameNode
root@Master:/usr/local/hadoop/hadoop-2.6.0/sbin#
************ Problem ***************
Only SecondaryNameNode was running; the other four daemons (NameNode, DataNode, ResourceManager, NodeManager) had not started. The cause turned out to be that the system had no network connectivity.
root@Master:/usr/local/hadoop/hadoop-2.6.0/sbin# ./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [Master]
Master: ssh: connect to host master port 22: Network is unreachable
localhost: starting datanode, logging to /usr/local/hadoop/hadoop-2.6.0/logs/hadoop-root-datanode-Master.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/hadoop-2.6.0/logs/hadoop-root-secondarynamenode-Master.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/hadoop-2.6.0/logs/yarn-root-resourcemanager-Master.out
localhost: starting nodemanager, logging to /usr/local/hadoop/hadoop-2.6.0/logs/yarn-root-nodemanager-Master.out
root@Master:/usr/local/hadoop/hadoop-2.6.0/sbin# ifconfig
eth0 Link encap:Ethernet HWaddr 00:0c:29:3f:50:ac
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:104 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:9650 (9.6 KB) TX bytes:0 (0.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:1997 errors:0 dropped:0 overruns:0 frame:0
TX packets:1997 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:366036 (366.0 KB) TX bytes:366036 (366.0 KB)
root@Master:/usr/local/hadoop/hadoop-2.6.0/sbin#
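The ifconfig output above shows the symptom: eth0 is UP but has no `inet addr` line, while only the loopback interface carries an address. A minimal sketch of checking this programmatically, assuming the iproute2 `ip` command is available (standard on modern Linux; the helper name is made up):

```shell
#!/bin/sh
# iface_has_ipv4: succeed if the given interface currently holds an IPv4 address.
iface_has_ipv4() {
    ip -4 addr show dev "$1" 2>/dev/null | grep -q 'inet '
}

# lo should always carry 127.0.0.1, so this prints the first message;
# on the broken VM above, eth0 would trigger the second.
if iface_has_ipv4 lo; then
    echo "lo has an IPv4 address"
fi
if ! iface_has_ipv4 eth0; then
    echo "eth0 has no IPv4 address; check the VM network settings"
fi
```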
******************** Solution ************************************
In the VMware window:
Edit -> Virtual Network Editor -> select NAT mode -> NAT Settings -> update the gateway IP (192.168.153.1)
After configuring, verify the IP address from a terminal:
ifconfig
Then update the corresponding IP entries in the hosts file.
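As an illustration of the hosts-file step, a sketch of replacing (or appending) a hostname mapping idempotently. The hostname `Master` comes from this setup; the address 192.168.153.128 and the helper name are illustrative assumptions, and the example deliberately works on a scratch copy rather than the real /etc/hosts:

```shell
#!/bin/sh
# set_host_entry FILE IP HOSTNAME
# Drop any existing line for HOSTNAME in FILE, then append the new mapping.
set_host_entry() {
    file=$1; ip=$2; name=$3
    grep -v -w "$name" "$file" > "$file.tmp" || true
    printf '%s\t%s\n' "$ip" "$name" >> "$file.tmp"
    mv "$file.tmp" "$file"
}

# Demonstration on a scratch file (address is illustrative, not from this setup):
printf '127.0.0.1\tlocalhost\n' > hosts.test
set_host_entry hosts.test 192.168.153.128 Master
grep Master hosts.test
```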
************************* After the fix *****************************
ifconfig now shows the expected IP address.
--- Restart the Hadoop services
Everything started except DataNode.
***************** Problem *************************
The clusterID in the NameNode's VERSION file does not match the clusterID in the DataNode's VERSION file.
***************** Solution *********************
Edit /usr/local/hadoop/hadoop-2.6.0/hdfs/data/current/VERSION
and set its clusterID to the value of clusterID in /usr/local/hadoop/hadoop-2.6.0/hdfs/name/current/VERSION. Restart and check: the DataNode is now up.
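The manual edit above can also be scripted. A sketch, assuming GNU sed and the VERSION paths used in this installation (the helper name is made up); the guard lets it no-op when the files are absent:

```shell
#!/bin/sh
# Copy the NameNode's clusterID into the DataNode's VERSION file so the
# DataNode registers with the NameNode again.
NAME_VERSION=/usr/local/hadoop/hadoop-2.6.0/hdfs/name/current/VERSION
DATA_VERSION=/usr/local/hadoop/hadoop-2.6.0/hdfs/data/current/VERSION

sync_cluster_id() {
    src=$1; dst=$2
    # Extract the value after "clusterID=" from the NameNode's VERSION file...
    cid=$(sed -n 's/^clusterID=//p' "$src")
    # ...and rewrite the DataNode's clusterID line in place (GNU sed -i).
    sed -i "s/^clusterID=.*/clusterID=$cid/" "$dst"
}

if [ -f "$NAME_VERSION" ] && [ -f "$DATA_VERSION" ]; then
    sync_cluster_id "$NAME_VERSION" "$DATA_VERSION"
fi
```

Run it with the daemons stopped, then restart HDFS and confirm with jps that the DataNode stays up.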