In a Hadoop pseudo-distributed deployment you must configure the NameNode (NN), DataNode (DN), and SecondaryNameNode (SNN). In real production you cannot start these components by IP address: the IP is very likely to change, and you would then have to update it across a large amount of configuration. So we change the NN, DN, and SNN startup address from localhost to the hostname hadoop001.
1. First, confirm the services currently start on localhost
[root@hadoop001 ~]# su - hadoop
[hadoop@hadoop001 ~]$ cd app/
[hadoop@hadoop001 app]$ cd hadoop/
[hadoop@hadoop001 hadoop]$ sbin/start-dfs.sh
19/07/04 20:57:06 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: starting namenode, logging to /home/hadoop/software/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-namenode-hadoop001.out
localhost: starting datanode, logging to /home/hadoop/software/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-datanode-hadoop001.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/software/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-secondarynamenode-hadoop001.out
19/07/04 20:57:32 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[hadoop@hadoop001 hadoop]$ jps
3778 SecondaryNameNode
3500 NameNode
3597 DataNode
3903 Jps
[hadoop@hadoop001 hadoop]$ netstat -nlp|grep 3500
# Checking the port from another user's account triggers the warning below;
# if a process shows up without its port number, retry the command as root.
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp 0 0 0.0.0.0:50070 0.0.0.0:* LISTEN 3500/java
tcp 0 0 127.0.0.1:9000 0.0.0.0:* LISTEN 3500/java
tcp 0 0 :::35001 :::* LISTEN -
Note in the netstat output above that the NN RPC port 9000 is bound to 127.0.0.1: with fs.defaultFS pointing at localhost, the NameNode is only reachable from the machine itself.
In production, never put IP addresses in the configuration files; configure hostnames instead, because the IP is liable to change. Set up the hostname-to-IP mappings in /etc/hosts beforehand.
In the hosts file, never delete or comment out the first two default lines; append a line mapping the internal IP to the hostname.
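For reference, a minimal /etc/hosts along those lines might look like this. The first two lines are the stock CentOS entries that must be kept; 192.168.1.10 is a made-up internal IP, substitute your own:

```
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.10   hadoop001
```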
2. NN, DN, and SNN configuration
(1) NN configuration
core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop001:9000</value>
    </property>
</configuration>
(2) DN configuration
[hadoop@hadoop001 hadoop]$ cat slaves
localhost
[hadoop@hadoop001 hadoop]$ vi slaves
[hadoop@hadoop001 hadoop]$ cat slaves
hadoop001
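The vi edit above only swaps a single word, so the same change can also be scripted, e.g. with sed. A sketch, where a temp file stands in for $HADOOP_HOME/etc/hadoop/slaves:

```shell
# Sketch: replace localhost with hadoop001 in a slaves file non-interactively.
# The temp file here stands in for $HADOOP_HOME/etc/hadoop/slaves.
slaves_file=$(mktemp)
echo "localhost" > "$slaves_file"                 # the shipped default
sed -i 's/^localhost$/hadoop001/' "$slaves_file"  # in-place substitution
cat "$slaves_file"                                # prints: hadoop001
rm -f "$slaves_file"
```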
(3) SNN configuration
[hadoop@hadoop001 hadoop]$ vi hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>hadoop001:50090</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.https-address</name>
        <value>hadoop001:50091</value>
    </property>
</configuration>
3. Start the HDFS processes as the hadoop user
Stop the running processes first (sbin/stop-dfs.sh), then start them again.
This time the HDFS startup output shows the hostname hadoop001 throughout, instead of localhost and 0.0.0.0.
[hadoop@hadoop001 hadoop]$ sbin/start-dfs.sh
19/07/04 22:02:14 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [hadoop001]
hadoop001: starting namenode, logging to /home/hadoop/software/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-namenode-hadoop001.out
hadoop001: starting datanode, logging to /home/hadoop/software/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-datanode-hadoop001.out
Starting secondary namenodes [hadoop001]
hadoop001: starting secondarynamenode, logging to /home/hadoop/software/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-secondarynamenode-hadoop001.out
19/07/04 22:02:36 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[hadoop@hadoop001 hadoop]$