Starting Hadoop with a hostname

This post covers the Hadoop component architecture (namenode, datanode, and secondary namenode) and stresses that in production environments the configuration normally uses hostnames rather than IPs. It walks through how to configure the hostname in the hosts file, including the different handling for intranet servers versus cloud servers, then shows the core-site and hdfs-site changes needed to start the daemons by hostname, and briefly touches on the difference between HTTP and HTTPS.


1. Hadoop components

  • namenode (nn): the name node, the master
  • datanode (dn): the data node, the worker that actually performs the data reads and writes
  • secondary namenode (snn): the second name node, the deputy
    Together these form a master/slave architecture (most big-data components are master/slave).

2. Starting Hadoop

[hadoop@hadoop001 hadoop]$ sbin/start-dfs.sh
19/07/03 21:02:04 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: starting namenode, logging to /home/hadoop/software/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-namenode-hadoop001.out
localhost: starting datanode, logging to /home/hadoop/software/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-datanode-hadoop001.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/software/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-secondarynamenode-hadoop001.out
19/07/03 21:03:12 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
  • Notice that the namenode and datanode started as localhost, while the secondarynamenode started as 0.0.0.0.
    In production, these parameters are never configured as IPs but as hostnames, with the corresponding hosts file set up in advance. Under normal circumstances a hostname never changes, while an IP can; with a hostname-based configuration, no matter how or why the IP changes later, only the hosts file needs updating.
  • When the machine is migrated and the IP has to change:
    a. Intranet server: configure the hosts file as 192.168.137.130 hadoop001 (do not delete the first two lines)
    b. Cloud server: configure the hosts file as <intranet IP> hadoop001 (a cloud server has two IPs: an intranet IP and a public IP)
    In particular, if an Alibaba Cloud server is configured with its public IP, services such as Kafka may fail to start, because on Alibaba Cloud the public IP is a floating address that is not bound to the machine's network interface.
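As a sketch, the hosts-file setup for case (a) looks like the following. The IP 192.168.137.130 comes from the example above; the /tmp scratch path is only for illustration, and on a real machine you would edit /etc/hosts itself:

```shell
# Illustrative /etc/hosts layout for an intranet server (scratch copy in /tmp;
# edit /etc/hosts itself on a real machine). The first two localhost lines
# are kept intact and the cluster entry is appended after them.
cat > /tmp/hosts.example <<'EOF'
127.0.0.1   localhost localhost.localdomain
::1         localhost localhost.localdomain
192.168.137.130 hadoop001
EOF

# Confirm the mapping exists before starting Hadoop:
grep -w hadoop001 /tmp/hosts.example
```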

3. Changing the parameters
Where do the parameters live? In core-site.xml and hdfs-site.xml.

  • nn
core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop001:9000</value>
    </property>
</configuration>
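A quick way to double-check the edit is to parse the value back out. This sketch works on a scratch copy of the fragment above; on a live cluster, `hdfs getconf -confKey fs.defaultFS` reports the effective value:

```shell
# Scratch copy of the core-site.xml fragment above.
cat > /tmp/core-site.xml <<'EOF'
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop001:9000</value>
    </property>
</configuration>
EOF

# Extract the configured value; it should carry the hostname, not localhost:
sed -n 's|.*<value>\(.*\)</value>.*|\1|p' /tmp/core-site.xml
```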
  • dn
[hadoop@hadoop001 hadoop]$ cat slaves 
localhost
[hadoop@hadoop001 hadoop]$ vi slaves 
[hadoop@hadoop001 hadoop]$ cat slaves 
hadoop001
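The same edit can be done non-interactively with sed instead of vi. A sketch on a scratch copy; on a real install the file would be $HADOOP_HOME/etc/hadoop/slaves:

```shell
# Scratch copy of the slaves file ($HADOOP_HOME/etc/hadoop/slaves on a
# real install), starting from the default localhost entry.
echo localhost > /tmp/slaves

# Replace the localhost entry with the hostname, as the vi session above does:
sed -i 's/^localhost$/hadoop001/' /tmp/slaves
cat /tmp/slaves
```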
  • snn
hdfs-site.xml
<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoop001:50090</value>
</property>
<property>
    <name>dfs.namenode.secondary.https-address</name>
    <value>hadoop001:50091</value>
</property>
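To confirm the edit took, you can grep a scratch copy of the fragment above and check that both secondary-namenode addresses now carry the hostname rather than 0.0.0.0 (on the running cluster, the SNN web UI should then answer at http://hadoop001:50090/):

```shell
# Scratch copy of the hdfs-site.xml fragment above.
cat > /tmp/hdfs-snn.xml <<'EOF'
<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoop001:50090</value>
</property>
<property>
    <name>dfs.namenode.secondary.https-address</name>
    <value>hadoop001:50091</value>
</property>
EOF

# Both values should match, i.e. the count is 2:
grep -c '<value>hadoop001:' /tmp/hdfs-snn.xml
```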

HTTP and HTTPS: HTTPS adds the secure layer provided by TLS (SSL) on top of HTTP; in short, HTTPS is the secure version of HTTP.

4. Starting with the hostname

[hadoop@hadoop001 hadoop]$ sbin/start-dfs.sh 
19/07/03 21:30:47 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [hadoop001]
hadoop001: starting namenode, logging to /home/hadoop/software/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-namenode-hadoop001.out
hadoop001: starting datanode, logging to /home/hadoop/software/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-datanode-hadoop001.out
Starting secondary namenodes [hadoop001]
hadoop001: starting secondarynamenode, logging to /home/hadoop/software/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-secondarynamenode-hadoop001.out
19/07/03 21:31:04 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
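As a final sanity check, save the start-dfs.sh output and confirm that no daemon is registered via localhost or 0.0.0.0 any more. The log lines below are an abbreviated copy of the transcript above:

```shell
# Abbreviated copy of the start-dfs.sh output above, saved for checking.
cat > /tmp/start-dfs.log <<'EOF'
Starting namenodes on [hadoop001]
hadoop001: starting namenode
hadoop001: starting datanode
Starting secondary namenodes [hadoop001]
hadoop001: starting secondarynamenode
EOF

# Fail loudly if any daemon still starts by localhost or 0.0.0.0:
if grep -Eq 'localhost|0\.0\.0\.0' /tmp/start-dfs.log; then
  echo "still using localhost/0.0.0.0 - recheck core-site.xml, slaves, hdfs-site.xml"
else
  echo "all daemons started via hostname"
fi
```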