hadoop dfsadmin -report failed

This post is a detailed record of setting up a Hadoop cluster, covering the key steps: configuring passwordless SSH login, formatting HDFS, starting all of the Hadoop services, and checking the cluster status.
Connecting to 192.168.3.110:22...
Connection established.
To escape to local shell, press 'Ctrl+Alt+]'.

Last login: Tue Jul 19 00:25:20 2016 from 192.168.3.101
[root@cdh1 ~]# cat /etc/sysconfig/network
NETWORKING=yes
#HOSTNAME=centos6cdh1
HOSTNAME=cdh1
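
Before chasing the error below, it is worth confirming that the hostname cdh1 resolves to the machine's own address, since the NameNode URI in this setup is cdh1:9000. A quick check (the expected /etc/hosts entry is an assumption based on the addresses seen in this session):

```bash
grep cdh1 /etc/hosts   # expect a mapping like: 192.168.3.110  cdh1
hostname               # should print: cdh1
```
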
[root@cdh1 ~]# hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

report: Call From cdh1/192.168.3.110 to cdh1:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
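
This "Connection refused" means nothing is listening on the NameNode RPC port 9000; as the rest of the session shows, the NameNode daemon was simply never started. A minimal triage sketch, assuming the install path used later in this session:

```bash
jps                          # no NameNode in the list -> the daemon is not running
netstat -tlnp | grep 9000    # nothing listening on the RPC port
# fs.defaultFS (or the legacy fs.default.name) is where clients get host:port from:
grep -A1 'fs.defaultFS' /user/local/hadoop-2.6.0/etc/hadoop/core-site.xml
```
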
[root@cdh1 ~]# ps -e | grep ssh
 1541 ?        00:00:00 sshd
 2019 ?        00:00:00 sshd
[root@cdh1 ~]# ssh localhost
The authenticity of host 'localhost (127.0.0.1)' can't be established.
RSA key fingerprint is af:36:2d:e5:e2:04:e3:c5:e8:50:92:c8:f1:2d:97:09.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
root@localhost's password: 
Permission denied, please try again.
root@localhost's password: 
Permission denied, please try again.
root@localhost's password: 
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
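
Password login as root is rejected here, which is what motivates the key-based setup below. When the reason for an SSH failure is not obvious, verbose mode shows each authentication method as it is attempted:

```bash
ssh -v localhost   # -v lists each auth method as it is tried and why it fails; -vvv for more
```
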
[root@cdh1 ~]# ll
total 104
-rw-------. 1 root root  2241 Jul 19  2016 anaconda-ks.cfg
drwxr-xr-x. 2 root root  4096 Jul 18 23:03 Desktop
-rw-r--r--. 1 root root 67918 Jul 19  2016 install.log
-rw-r--r--. 1 root root 14677 Jul 19  2016 install.log.syslog
drwxr-xr-x. 2 root root  4096 Jul 18 23:03 Public
drwxr-xr-x. 2 root root  4096 Jul 18 23:03 Templates
[root@cdh1 ~]# mkdir sshlocal
[root@cdh1 ~]# cd sshlocal/
[root@cdh1 sshlocal]# LL
-bash: LL: command not found
[root@cdh1 sshlocal]# ll
total 0
[root@cdh1 sshlocal]# ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
Generating public/private dsa key pair.
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
The key fingerprint is:
8d:50:e3:aa:22:9e:11:0d:8a:4a:2f:36:56:85:85:95 root@cdh1
The key's randomart image is:
+--[ DSA 1024]----+
|    oo. o        |
|   .oE o .       |
| . . .. .        |
|o o .  o o       |
|oo o  . S .      |
|o +  .           |
|o*...            |
|+.=.             |
| o               |
+-----------------+
[root@cdh1 sshlocal]# ll
total 0
[root@cdh1 sshlocal]# cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[root@cdh1 sshlocal]# ll
total 0
[root@cdh1 sshlocal]# ls -l ~/.ssh/
total 16
-rw-r--r--. 1 root root 599 Jul 19 00:38 authorized_keys
-rw-------. 1 root root 668 Jul 19 00:37 id_dsa
-rw-r--r--. 1 root root 599 Jul 19 00:37 id_dsa.pub
-rw-r--r--. 1 root root 391 Jul 19 00:36 known_hosts
[root@cdh1 sshlocal]# ssh localhost
Last login: Tue Jul 19 00:33:36 2016 from 192.168.3.101
[root@cdh1 ~]# chmod 700 ~/.ssh
[root@cdh1 ~]# chmod 644 ~/.ssh/authorized_keys
[root@cdh1 ~]# ls -l ~/.ssh/
total 16
-rw-r--r--. 1 root root 599 Jul 19 00:38 authorized_keys
-rw-------. 1 root root 668 Jul 19 00:37 id_dsa
-rw-r--r--. 1 root root 599 Jul 19 00:37 id_dsa.pub
-rw-r--r--. 1 root root 391 Jul 19 00:36 known_hosts
[root@cdh1 ~]# exit
logout
Connection to localhost closed.
[root@cdh1 sshlocal]# ll
total 0
[root@cdh1 sshlocal]# chmod 700 ~/.ssh
[root@cdh1 sshlocal]# chmod 644 ~/.ssh/authorized_keys
[root@cdh1 sshlocal]# ls -l ~/.ssh/
total 16
-rw-r--r--. 1 root root 599 Jul 19 00:38 authorized_keys
-rw-------. 1 root root 668 Jul 19 00:37 id_dsa
-rw-r--r--. 1 root root 599 Jul 19 00:37 id_dsa.pub
-rw-r--r--. 1 root root 391 Jul 19 00:36 known_hosts
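
The passwordless-login setup above boils down to three steps; here they are consolidated (this session uses a DSA key, and sshd's StrictModes additionally requires that ~/.ssh is not group- or world-writable, hence the chmod):

```bash
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa          # key pair with an empty passphrase
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys   # authorize the key for this account
chmod 700 ~/.ssh && chmod 644 ~/.ssh/authorized_keys
ssh localhost true && echo "passwordless login OK"
```
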
[root@cdh1 sshlocal]# ls -a ~/
.                .bash_logout   .config  .esd_auth  .gnote           .gvfs            install.log.syslog  Public         sshlocal
..               .bash_profile  .cshrc   .gconf     .gnupg           .ICEauthority    .kde                .pulse         .tcshrc
anaconda-ks.cfg  .bashrc        .dbus    .gconfd    .gstreamer-0.10  .imsettings.log  .local              .pulse-cookie  Templates
.bash_history    .cache         Desktop  .gnome2    .gtk-bookmarks   install.log      .nautilus           .ssh           .viminfo
[root@cdh1 sshlocal]# ls -l ~/
total 108
-rw-------. 1 root root  2241 Jul 19  2016 anaconda-ks.cfg
drwxr-xr-x. 2 root root  4096 Jul 18 23:03 Desktop
-rw-r--r--. 1 root root 67918 Jul 19  2016 install.log
-rw-r--r--. 1 root root 14677 Jul 19  2016 install.log.syslog
drwxr-xr-x. 2 root root  4096 Jul 18 23:03 Public
drwxr-xr-x. 2 root root  4096 Jul 19 00:37 sshlocal
drwxr-xr-x. 2 root root  4096 Jul 18 23:03 Templates
[root@cdh1 sshlocal]# ls -la ~/
total 232
dr-xr-x---. 21 root root  4096 Jul 19 00:37 .
dr-xr-xr-x. 27 root root  4096 Jul 19 00:33 ..
-rw-------.  1 root root  2241 Jul 19  2016 anaconda-ks.cfg
-rw-------.  1 root root  2091 Jul 19 00:40 .bash_history
-rw-r--r--.  1 root root    18 May 20  2009 .bash_logout
-rw-r--r--.  1 root root   176 May 20  2009 .bash_profile
-rw-r--r--.  1 root root   176 Sep 23  2004 .bashrc
drwxr-xr-x.  2 root root  4096 Jul 18 23:09 .cache
drwx------.  6 root root  4096 Jul 19 00:08 .config
-rw-r--r--.  1 root root   100 Sep 23  2004 .cshrc
drwx------.  3 root root  4096 Jul 18 23:03 .dbus
drwxr-xr-x.  2 root root  4096 Jul 18 23:03 Desktop
-rw-------.  1 root root    16 Jul 18 23:04 .esd_auth
drwx------.  4 root root  4096 Jul 19 00:08 .gconf
drwx------.  2 root root  4096 Jul 19 00:31 .gconfd
drwx------.  5 root root  4096 Jul 18 23:06 .gnome2
drwxr-xr-x.  3 root root  4096 Jul 18 23:04 .gnote
drwx------.  2 root root  4096 Jul 19 00:08 .gnupg
drwxr-xr-x.  2 root root  4096 Jul 18 23:06 .gstreamer-0.10
-rw-r--r--.  1 root root   107 Jul 18 23:04 .gtk-bookmarks
drwx------.  2 root root  4096 Jul 18 23:03 .gvfs
-rw-------.  1 root root   620 Jul 19 00:08 .ICEauthority
-rw-r--r--.  1 root root   623 Jul 19 00:08 .imsettings.log
-rw-r--r--.  1 root root 67918 Jul 19  2016 install.log
-rw-r--r--.  1 root root 14677 Jul 19  2016 install.log.syslog
drwx------.  3 root root  4096 Jul 18 23:07 .kde
drwxr-xr-x.  3 root root  4096 Jul 18 23:03 .local
drwxr-xr-x.  2 root root  4096 Jul 18 23:04 .nautilus
drwxr-xr-x.  2 root root  4096 Jul 18 23:03 Public
drwx------.  2 root root  4096 Jul 18 23:04 .pulse
-rw-------.  1 root root   256 Jul 18 23:03 .pulse-cookie
drwx------.  2 root root  4096 Jul 19 00:38 .ssh
drwxr-xr-x.  2 root root  4096 Jul 19 00:37 sshlocal
-rw-r--r--.  1 root root   129 Dec  4  2004 .tcshrc
drwxr-xr-x.  2 root root  4096 Jul 18 23:03 Templates
-rw-------.  1 root root  5643 Jul 19 00:32 .viminfo
[root@cdh1 sshlocal]# 
[root@cdh1 sshlocal]# 
[root@cdh1 sshlocal]# hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

report: Call From cdh1/192.168.3.110 to cdh1:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
[root@cdh1 sshlocal]# hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

16/07/19 00:44:03 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = cdh1/192.168.3.110
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.6.0
STARTUP_MSG:   classpath = /user/local/hadoop-2.6.0/etc/hadoop:/user/local/hadoop-2.6.0/share/hadoop/common/lib/... (several hundred jar entries trimmed for readability)
STARTUP_MSG:   build = Unknown -r Unknown; compiled by 'root' on 2016-06-16T22:12Z
STARTUP_MSG:   java = 1.7.0_67
************************************************************/
16/07/19 00:44:03 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
16/07/19 00:44:03 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-63af8bd6-576e-4f96-9b43-51080e8fbf88
16/07/19 00:44:05 INFO namenode.FSNamesystem: No KeyProvider found.
16/07/19 00:44:05 INFO namenode.FSNamesystem: fsLock is fair:true
16/07/19 00:44:06 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
16/07/19 00:44:06 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
16/07/19 00:44:06 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
16/07/19 00:44:06 INFO blockmanagement.BlockManager: The block deletion will start around 2016 Jul 19 00:44:06
16/07/19 00:44:06 INFO util.GSet: Computing capacity for map BlocksMap
16/07/19 00:44:06 INFO util.GSet: VM type       = 32-bit
16/07/19 00:44:06 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
16/07/19 00:44:06 INFO util.GSet: capacity      = 2^22 = 4194304 entries
16/07/19 00:44:06 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
16/07/19 00:44:06 INFO blockmanagement.BlockManager: defaultReplication         = 1
16/07/19 00:44:06 INFO blockmanagement.BlockManager: maxReplication             = 512
16/07/19 00:44:06 INFO blockmanagement.BlockManager: minReplication             = 1
16/07/19 00:44:06 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
16/07/19 00:44:06 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
16/07/19 00:44:06 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
16/07/19 00:44:06 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
16/07/19 00:44:06 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
16/07/19 00:44:06 INFO namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
16/07/19 00:44:06 INFO namenode.FSNamesystem: supergroup          = supergroup
16/07/19 00:44:06 INFO namenode.FSNamesystem: isPermissionEnabled = true
16/07/19 00:44:06 INFO namenode.FSNamesystem: HA Enabled: false
16/07/19 00:44:06 INFO namenode.FSNamesystem: Append Enabled: true
16/07/19 00:44:07 INFO util.GSet: Computing capacity for map INodeMap
16/07/19 00:44:07 INFO util.GSet: VM type       = 32-bit
16/07/19 00:44:07 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
16/07/19 00:44:07 INFO util.GSet: capacity      = 2^21 = 2097152 entries
16/07/19 00:44:07 INFO namenode.NameNode: Caching file names occuring more than 10 times
16/07/19 00:44:07 INFO util.GSet: Computing capacity for map cachedBlocks
16/07/19 00:44:07 INFO util.GSet: VM type       = 32-bit
16/07/19 00:44:07 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
16/07/19 00:44:07 INFO util.GSet: capacity      = 2^19 = 524288 entries
16/07/19 00:44:07 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
16/07/19 00:44:07 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
16/07/19 00:44:07 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
16/07/19 00:44:07 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
16/07/19 00:44:07 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
16/07/19 00:44:07 INFO util.GSet: Computing capacity for map NameNodeRetryCache
16/07/19 00:44:07 INFO util.GSet: VM type       = 32-bit
16/07/19 00:44:07 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
16/07/19 00:44:07 INFO util.GSet: capacity      = 2^16 = 65536 entries
16/07/19 00:44:07 INFO namenode.NNConf: ACLs enabled? false
16/07/19 00:44:07 INFO namenode.NNConf: XAttrs enabled? true
16/07/19 00:44:07 INFO namenode.NNConf: Maximum size of an xattr: 16384
16/07/19 00:44:07 INFO namenode.FSImage: Allocated new BlockPoolId: BP-550776906-192.168.3.110-1468860247545
16/07/19 00:44:07 INFO common.Storage: Storage directory /home/hadoop/name has been successfully formatted.
16/07/19 00:44:07 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
16/07/19 00:44:07 INFO util.ExitUtil: Exiting with status 0
16/07/19 00:44:07 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at cdh1/192.168.3.110
************************************************************/
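
Note what the format did and did not do: it allocated a new BlockPoolId and wrote fresh metadata to dfs.namenode.name.dir (/home/hadoop/name here), but it starts no daemon, which is why the very next report still fails. Re-running the format later would also assign a new namespaceID that existing DataNodes will reject. To inspect the metadata just written:

```bash
cat /home/hadoop/name/current/VERSION   # namespaceID, clusterID, blockpoolID just assigned
```
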
[root@cdh1 sshlocal]# hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

report: Call From cdh1/192.168.3.110 to cdh1:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
[root@cdh1 sshlocal]# ssh cdh1
The authenticity of host 'cdh1 (192.168.3.110)' can't be established.
RSA key fingerprint is af:36:2d:e5:e2:04:e3:c5:e8:50:92:c8:f1:2d:97:09.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'cdh1,192.168.3.110' (RSA) to the list of known hosts.
Last login: Tue Jul 19 00:42:17 2016 from 192.168.3.101
[root@cdh1 ~]# hostname
cdh1
[root@cdh1 ~]# ssh cdh1
Last login: Tue Jul 19 00:47:24 2016 from cdh1
[root@cdh1 ~]# ssh localhost
Last login: Tue Jul 19 00:47:49 2016 from cdh1
[root@cdh1 ~]# jps
2418 Jps
[root@cdh1 ~]# jps
2430 Jps
[root@cdh1 ~]# start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [cdh1]
cdh1: starting namenode, logging to /user/local/hadoop-2.6.0/logs/hadoop-root-namenode-cdh1.out
localhost: starting datanode, logging to /user/local/hadoop-2.6.0/logs/hadoop-root-datanode-cdh1.out
Starting secondary namenodes [cdh1]
cdh1: starting secondarynamenode, logging to /user/local/hadoop-2.6.0/logs/hadoop-root-secondarynamenode-cdh1.out
starting yarn daemons
starting resourcemanager, logging to /user/local/hadoop-2.6.0/logs/yarn-root-resourcemanager-cdh1.out
localhost: starting nodemanager, logging to /user/local/hadoop-2.6.0/logs/yarn-root-nodemanager-cdh1.out
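
start-all.sh still works in 2.6.0 but is, as the banner says, deprecated; the equivalent two-step form (with $HADOOP_HOME pointing at /user/local/hadoop-2.6.0 in this setup) is:

```bash
$HADOOP_HOME/sbin/start-dfs.sh    # NameNode, DataNode(s), SecondaryNameNode
$HADOOP_HOME/sbin/start-yarn.sh   # ResourceManager, NodeManager(s)
```
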
[root@cdh1 ~]# jps
2556 NameNode
3084 NodeManager
2985 ResourceManager
3376 Jps
2828 SecondaryNameNode
2653 DataNode
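
jps now shows all five daemons expected on a single-node setup: NameNode, DataNode, SecondaryNameNode, ResourceManager and NodeManager. With the NameNode finally listening on port 9000, the report below succeeds. The non-deprecated form of the command is:

```bash
hdfs dfsadmin -report   # same report, without the DEPRECATED warning
```
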
[root@cdh1 ~]# hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

Configured Capacity: 29591990272 (27.56 GB)
Present Capacity: 13468934144 (12.54 GB)
DFS Remaining: 13468909568 (12.54 GB)
DFS Used: 24576 (24 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Live datanodes (1):

Name: 192.168.3.110:50010 (cdh1)
Hostname: cdh1
Decommission Status : Normal
Configured Capacity: 29591990272 (27.56 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 16123056128 (15.02 GB)
DFS Remaining: 13468909568 (12.54 GB)
DFS Used%: 0.00%
DFS Remaining%: 45.52%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Tue Jul 19 00:49:56 CST 2016
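
The report confirms one live DataNode (192.168.3.110:50010) with about 12.5 GB of DFS space remaining. The same status is visible in the NameNode web UI, which listens on port 50070 in Hadoop 2.x; a quick check (sketch):

```bash
curl -s http://cdh1:50070/dfshealth.html | head -n 5   # or open http://192.168.3.110:50070 in a browser
```
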


[root@cdh1 ~]# /etc/init.d/iptables status
Table: mangle
Chain PREROUTING (policy ACCEPT)
num  target     prot opt source               destination         

Chain INPUT (policy ACCEPT)
num  target     prot opt source               destination         

Chain FORWARD (policy ACCEPT)
num  target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
num  target     prot opt source               destination         

Chain POSTROUTING (policy ACCEPT)
num  target     prot opt source               destination         
1    CHECKSUM   udp  --  0.0.0.0/0            0.0.0.0/0           udp dpt:68 CHECKSUM fill 

Table: nat
Chain PREROUTING (policy ACCEPT)
num  target     prot opt source               destination         

Chain POSTROUTING (policy ACCEPT)
num  target     prot opt source               destination         
1    MASQUERADE  tcp  --  192.168.122.0/24    !192.168.122.0/24    masq ports: 1024-65535 
2    MASQUERADE  udp  --  192.168.122.0/24    !192.168.122.0/24    masq ports: 1024-65535 
3    MASQUERADE  all  --  192.168.122.0/24    !192.168.122.0/24    

Chain OUTPUT (policy ACCEPT)
num  target     prot opt source               destination         

Table: filter
Chain INPUT (policy ACCEPT)
num  target     prot opt source               destination         
1    ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0           udp dpt:53 
2    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:53 
3    ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0           udp dpt:67 
4    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:67 
5    ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           state RELATED,ESTABLISHED 
6    ACCEPT     icmp --  0.0.0.0/0            0.0.0.0/0           
7    ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           
8    ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           state NEW tcp dpt:22 
9    REJECT     all  --  0.0.0.0/0            0.0.0.0/0           reject-with icmp-host-prohibited 

Chain FORWARD (policy ACCEPT)
num  target     prot opt source               destination         
1    ACCEPT     all  --  0.0.0.0/0            192.168.122.0/24    state RELATED,ESTABLISHED 
2    ACCEPT     all  --  192.168.122.0/24     0.0.0.0/0           
3    ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           
4    REJECT     all  --  0.0.0.0/0            0.0.0.0/0           reject-with icmp-port-unreachable 
5    REJECT     all  --  0.0.0.0/0            0.0.0.0/0           reject-with icmp-port-unreachable 
6    ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           PHYSDEV match --physdev-is-bridged 
7    REJECT     all  --  0.0.0.0/0            0.0.0.0/0           reject-with icmp-host-prohibited 

Chain OUTPUT (policy ACCEPT)
num  target     prot opt source               destination         
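
One thing worth reading out of this dump: rule 7 in the filter INPUT chain accepts all traffic before the final REJECT, so iptables was never what refused the connections to port 9000; the daemon just wasn't running. To test a TCP port directly (assuming the telnet client is installed):

```bash
telnet cdh1 9000   # "Connected to cdh1" = daemon listening; "Connection refused" = nothing on the port
```
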

[root@cdh1 ~]# service iptables status
(output identical to the /etc/init.d/iptables status listing above)
[root@cdh1 ~]# /etc/init.d/iptables status
(output identical to the listing above)
[root@cdh1 ~]# ping www.baidu.com
ping: unknown host www.baidu.com
[root@cdh1 ~]# ping www.baidu.com
ping: unknown host www.baidu.com
[root@cdh1 ~]# ping www.baidu.com
ping: unknown host www.baidu.com
[root@cdh1 ~]# ping 192.168.3.101
PING 192.168.3.101 (192.168.3.101) 56(84) bytes of data.
64 bytes from 192.168.3.101: icmp_seq=1 ttl=128 time=0.464 ms
64 bytes from 192.168.3.101: icmp_seq=2 ttl=128 time=0.549 ms
64 bytes from 192.168.3.101: icmp_seq=3 ttl=128 time=0.446 ms
64 bytes from 192.168.3.101: icmp_seq=4 ttl=128 time=0.445 ms
64 bytes from 192.168.3.101: icmp_seq=5 ttl=128 time=0.397 ms
64 bytes from 192.168.3.101: icmp_seq=6 ttl=128 time=0.639 ms
^C
--- 192.168.3.101 ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5397ms
rtt min/avg/max/mdev = 0.397/0.490/0.639/0.080 ms
[root@cdh1 ~]# ping 192.168.3.101 -c 4
PING 192.168.3.101 (192.168.3.101) 56(84) bytes of data.
64 bytes from 192.168.3.101: icmp_seq=1 ttl=128 time=0.685 ms
64 bytes from 192.168.3.101: icmp_seq=2 ttl=128 time=0.590 ms
64 bytes from 192.168.3.101: icmp_seq=3 ttl=128 time=0.517 ms
64 bytes from 192.168.3.101: icmp_seq=4 ttl=128 time=0.425 ms

--- 192.168.3.101 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 0.425/0.554/0.685/0.096 ms
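
"unknown host" is a name-resolution failure, not a connectivity failure; the LAN pings above succeed. To tell the two apart (sketch):

```bash
ping -c 4 8.8.8.8        # raw IP connectivity, bypasses DNS entirely
nslookup www.baidu.com   # resolver test; fails while /etc/resolv.conf lists no nameserver
```
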
[root@cdh1 ~]# 
[root@cdh1 ~]# cat /etc/resolv.conf
# Generated by NetworkManager


# No nameservers found; try putting DNS servers into your
# ifcfg files in /etc/sysconfig/network-scripts like so:
#
# DNS1=xxx.xxx.xxx.xxx
# DNS2=xxx.xxx.xxx.xxx
# DOMAIN=lab.foo.com bar.foo.com
[root@cdh1 ~]# vim /etc/resolv.conf
[root@cdh1 ~]# cat /etc/resolv.conf
# Generated by NetworkManager


# No nameservers found; try putting DNS servers into your
# ifcfg files in /etc/sysconfig/network-scripts like so:
#
# DNS1=xxx.xxx.xxx.xxx
# DNS2=xxx.xxx.xxx.xxx
# DOMAIN=lab.foo.com bar.foo.com

nameserver 8.8.8.8
nameserver 114.114.114.114
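
One caveat: NetworkManager regenerates /etc/resolv.conf, so a hand edit like this can be lost on the next network restart. The persistent fix on CentOS 6 is the one the file's own comment suggests: putting the DNS servers into the interface's ifcfg file (sketch; the interface name eth0 is an assumption):

```bash
cat >> /etc/sysconfig/network-scripts/ifcfg-eth0 <<'EOF'
DNS1=8.8.8.8
DNS2=114.114.114.114
EOF
service network restart   # the ifcfg DNS entries are then written into resolv.conf
```
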
[root@cdh1 ~]# ping www.baidu.com
PING www.a.shifen.com (119.75.218.70) 56(84) bytes of data.
64 bytes from 119.75.218.70: icmp_seq=1 ttl=56 time=4.13 ms
64 bytes from 119.75.218.70: icmp_seq=2 ttl=56 time=3.43 ms
64 bytes from 119.75.218.70: icmp_seq=3 ttl=56 time=10.3 ms
64 bytes from 119.75.218.70: icmp_seq=4 ttl=56 time=27.3 ms
64 bytes from 119.75.218.70: icmp_seq=5 ttl=56 time=21.5 ms
64 bytes from 119.75.218.70: icmp_seq=6 ttl=56 time=6.50 ms
64 bytes from 119.75.218.70: icmp_seq=7 ttl=56 time=3.18 ms
64 bytes from 119.75.218.70: icmp_seq=8 ttl=56 time=6.07 ms
64 bytes from 119.75.218.70: icmp_seq=9 ttl=56 time=4.96 ms
64 bytes from 119.75.218.70: icmp_seq=10 ttl=56 time=6.33 ms
^C
--- www.a.shifen.com ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 18774ms
rtt min/avg/max/mdev = 3.186/9.378/27.333/7.871 ms
[root@cdh1 ~]# 

### Solving the Hadoop Hive error: MapRedTask failure and NameNode safe mode

For the Hive error `FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask` combined with a NameNode stuck in safe mode (resources low, deletes disallowed), work through the following steps.

---

#### 1. Leave NameNode safe mode

The NameNode enters safe mode at startup and stays there until enough block replicas have reported (`dfs.namenode.safemode.threshold-pct`, 0.999 by default, as seen in the format log above). It can also re-enter safe mode when free space on the metadata volume drops below the reserved threshold. While in safe mode, writes and deletes are rejected.

```bash
# Check safe mode status
hdfs dfsadmin -safemode get

# Force the NameNode out of safe mode
hdfs dfsadmin -safemode leave
```

---

#### 2. Check and free disk space

When a node is short on storage:

```bash
# Cluster-wide disk usage report
hdfs dfsadmin -report

# Empty the HDFS trash
hdfs dfs -expunge

# Re-read the include/exclude host lists after adding capacity
hdfs dfsadmin -refreshNodes
```

---

#### 3. Verify the Hadoop services

Make sure the key services are up:

```bash
# Start the YARN daemons
$HADOOP_HOME/sbin/start-yarn.sh

# Check the processes
jps | grep -E 'ResourceManager|NodeManager'
```

A healthy node shows `ResourceManager` and at least one `NodeManager`.

---

#### 4. Fix user impersonation permissions

If the error mentions user impersonation, add this to `core-site.xml`, replacing `{your_username}` with the user the Hive service runs as:

```xml
<property>
  <name>hadoop.proxyuser.{your_username}.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.{your_username}.groups</name>
  <value>*</value>
</property>
```

Then restart HDFS: `stop-dfs.sh && start-dfs.sh`

---

#### 5. Resolve NameNode metadata conflicts

If repeated formatting has left the namespaceIDs inconsistent (note: this wipes all HDFS data, so it is only suitable for development environments or rebuildable data):

1. Stop all services: `stop-all.sh`
2. Delete the data directories on every node:
   ```bash
   rm -rf /tmp/hadoop-*       # default temporary directory
   rm -rf $HADOOP_HOME/data   # custom data directory, if configured
   ```
3. Re-format the NameNode: `hdfs namenode -format`
4. Restart the cluster: `start-all.sh`

---

#### 6. Verify the fix

Run a test query in Hive:

```sql
CREATE TABLE test_safe_mode(id int);
INSERT INTO test_safe_mode VALUES (1);
SELECT COUNT(*) FROM test_safe_mode;  -- should return 1
```

Key indicators on the NameNode web UI (`http://<namenode-ip>:50070` on Hadoop 2.x; `:9870` on 3.x):

- Safe Mode Status: **OFF**
- Under Replicated Blocks: **0**