Notes on installing the BigTop-built Hadoop 3.1.1 RPMs

This article records the process of installing the compiled RPM packages: decide which services go on which hosts, install them with yum, edit the corresponding configuration files, and finally start the services.

1. Preliminary preparation

This mainly covers host setup: setting the hostname, the IP/hostname mapping, disabling the firewall, and so on.

# set the hostname (run the matching command with hdp2 through hdp5 on the other hosts)
hostnamectl set-hostname hdp1

# IP to hostname mapping; append to /etc/hosts on every host
172.16.25.139 hdp1
172.16.25.140 hdp2
172.16.25.141 hdp3
172.16.25.142 hdp4
172.16.25.143 hdp5

# disable the firewall
systemctl stop firewalld
systemctl disable firewalld

2. Configure passwordless SSH login

Configure passwordless login on hdp1 and hdp2 respectively.

On hdp1:
cd ~
ssh-keygen -t rsa
cd .ssh
cat id_rsa.pub >> authorized_keys
scp ~/.ssh/authorized_keys hdp1:~/.ssh
scp ~/.ssh/authorized_keys hdp2:~/.ssh
scp ~/.ssh/authorized_keys hdp3:~/.ssh
scp ~/.ssh/authorized_keys hdp4:~/.ssh
scp ~/.ssh/authorized_keys hdp5:~/.ssh

On hdp2:
ssh-keygen -t rsa
cd ~/.ssh
cat id_rsa.pub >> authorized_keys
scp ~/.ssh/authorized_keys hdp1:~/.ssh
scp ~/.ssh/authorized_keys hdp2:~/.ssh
scp ~/.ssh/authorized_keys hdp3:~/.ssh
scp ~/.ssh/authorized_keys hdp4:~/.ssh
scp ~/.ssh/authorized_keys hdp5:~/.ssh
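
To check that the keys were distributed correctly, something like the following can be run from hdp1 and hdp2; each command should print the remote hostname without asking for a password:

# verify passwordless login to every node
for h in hdp1 hdp2 hdp3 hdp4 hdp5; do ssh $h hostname; done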

3. Configure the JDK

Configure this on every host.

Unpack the JDK archive to:
/opt/jdk1.8.0_131
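
For example, assuming the downloaded archive is named jdk-8u131-linux-x64.tar.gz (adjust the file name to whatever you actually have), the extraction could look like:

# unpack the JDK under /opt, producing /opt/jdk1.8.0_131
tar -zxf jdk-8u131-linux-x64.tar.gz -C /opt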

Configure the environment variables by appending the following to the end of /etc/profile:
export JAVA_HOME=/opt/jdk1.8.0_131
export PATH=$PATH:$JAVA_HOME/bin

Apply the environment variables:
source /etc/profile

4. Configure the yum repository, clean and rebuild the cache

Configure the yum repository

Place the RPM packages in the following directory:
/var/www/html
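
yum also needs repo metadata in that directory. A minimal sketch, assuming the httpd and createrepo packages are available on hdp1 and the BigTop output RPMs have already been copied into /var/www/html:

# install the web server and repo tooling, then generate repodata/ for the RPMs
yum install -y httpd createrepo
createrepo /var/www/html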

Configure the hdp repo:
cat /etc/yum.repos.d/hdp.repo
[hdp]
name=hadoop
baseurl=http://172.16.25.139
enabled=1
gpgcheck=0

Start the HTTP service:
service httpd start

yum clean all
yum makecache

5. All Hadoop-related RPM packages

 yum list | grep hadoop
hadoop.x86_64                              3.1.1-1.el7                 @hdp
hadoop-hdfs.x86_64                         3.1.1-1.el7                 @hdp
hadoop-hdfs-journalnode.x86_64             3.1.1-1.el7                 @hdp
hadoop-hdfs-namenode.x86_64                3.1.1-1.el7                 @hdp
hadoop-hdfs-zkfc.x86_64                    3.1.1-1.el7                 @hdp
hadoop-hdfs-datanode.x86_64                3.1.1-1.el7                 hdp
hadoop-hdfs-fuse.x86_64                    3.1.1-1.el7                 hdp
hadoop-hdfs-secondarynamenode.x86_64       3.1.1-1.el7                 hdp
hadoop-libhdfs.x86_64                      3.1.1-1.el7                 hdp
hadoop-libhdfs-devel.x86_64                3.1.1-1.el7                 hdp


hadoop-mapreduce.x86_64                    3.1.1-1.el7                 @hdp
hadoop-mapreduce-historyserver.x86_64      3.1.1-1.el7                 @hdp

hadoop-yarn.x86_64                         3.1.1-1.el7                 @hdp
hadoop-yarn-nodemanager.x86_64             3.1.1-1.el7                 @hdp
hadoop-yarn-resourcemanager.x86_64         3.1.1-1.el7                 @hdp
hadoop-yarn-proxyserver.x86_64             3.1.1-1.el7                 hdp
hadoop-yarn-timelineserver.x86_64          3.1.1-1.el7                 hdp

hadoop-client.x86_64                       3.1.1-1.el7                 hdp
hadoop-debuginfo.x86_64                    3.1.1-1.el7                 hdp
hadoop-doc.x86_64                          3.1.1-1.el7                 hdp

hadoop-conf-pseudo.x86_64                  3.1.1-1.el7                 hdp

6. Service allocation

hostname   ip              role
hdp1       172.16.25.139   name node
hdp2       172.16.25.140   name node
hdp3       172.16.25.141   data node
hdp4       172.16.25.142   data node
hdp5       172.16.25.143   data node

                  hdp1   hdp2   hdp3   hdp4   hdp5
NameNode           √      √
DataNode                         √      √      √
ResourceManager    √      √
NodeManager        √      √      √      √      √
Zookeeper          √      √      √
journalnode        √      √      √      √      √
zkfc               √      √

7. Install the services

hdp1

yum install zookeeper  -y
yum install hadoop-hdfs-namenode -y
yum install hadoop-yarn-resourcemanager -y
yum install hadoop-hdfs-zkfc -y
yum install hadoop-yarn-nodemanager -y 
yum install hadoop-hdfs-journalnode -y
yum install hadoop-mapreduce-historyserver -y
yum install hadoop-yarn-proxyserver -y    
yum install hadoop-yarn-timelineserver -y
yum install hadoop-hdfs-secondarynamenode -y

hdp2

yum install zookeeper  -y
yum install hadoop-hdfs-namenode -y
yum install hadoop-yarn-resourcemanager -y
yum install hadoop-hdfs-zkfc -y 
yum install hadoop-yarn-nodemanager -y 
yum install hadoop-hdfs-journalnode -y
yum install hadoop-mapreduce-historyserver -y
yum install hadoop-hdfs-secondarynamenode -y

hdp3

yum install zookeeper  -y
yum install hadoop-hdfs-datanode -y
yum install hadoop-yarn-nodemanager -y
yum install hadoop-hdfs-journalnode -y
yum install hadoop-mapreduce-historyserver -y

hdp4 and hdp5

yum install hadoop-hdfs-datanode -y
yum install hadoop-yarn-nodemanager -y
yum install hadoop-hdfs-journalnode -y
yum install hadoop-mapreduce-historyserver -y

8. Configure the services

zookeeper

Add the following to zoo.cfg:

dataLogDir=/var/lib/zookeeper/log

server.1=hdp1:2888:3888
server.2=hdp2:2888:3888
server.3=hdp3:2888:3888
In the data directory, create a myid file on each host whose content is that host's server id: 1, 2 and 3 respectively.

mkdir /var/lib/zookeeper/log
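
For the myid files mentioned above, a minimal example, assuming the ZooKeeper dataDir is /var/lib/zookeeper (the usual default in the BigTop packaging; use the dataDir from your zoo.cfg):

# on hdp1; write 2 on hdp2 and 3 on hdp3, matching the server.N entries in zoo.cfg
echo 1 > /var/lib/zookeeper/myid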

hadoop

hadoop-env.sh

## Add at the top of the file; size the JVM heap according to your own servers
export JAVA_HOME=/opt/jdk1.8.0_131
#export PATH=$PATH:$JAVA_HOME/bin
#export HADOOP_NAMENODE_OPTS=" -Xms1024m -Xmx1024m -XX:+UseParallelGC"
#export HADOOP_DATANODE_OPTS=" -Xms512m -Xmx512m"
export HADOOP_LOG_DIR=/opt/data/logs/hadoop

core-site.xml

<configuration>
    <!-- Set the HDFS nameservice to mycluster -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://mycluster</value>
    </property>

    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/data/hadoop/tmp</value>
    </property>

    <!-- ZooKeeper quorum address -->
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>hdp1:2181,hdp2:2181,hdp3:2181</value>
    </property>

    <!-- Timeout for Hadoop's ZooKeeper sessions -->
    <property>
        <name>ha.zookeeper.session-timeout.ms</name>
        <value>30000</value>
        <description>ms</description>
    </property>

    <property>
        <name>fs.trash.interval</name>
        <value>1440</value>
    </property>

</configuration>

hdfs-site.xml

<configuration>

<!-- Timeout for communication between the JournalNodes -->
<property>
    <name>dfs.qjournal.start-segment.timeout.ms</name>
    <value>60000</value>
</property>
    <!-- Set the HDFS nameservice to mycluster; this must match core-site.xml.
         dfs.ha.namenodes.[nameservice id] assigns a unique identifier to each NameNode
         in the nameservice: a comma-separated list of NameNode IDs that DataNodes use
         to recognize all the NameNodes. Here "mycluster" is the nameservice ID, and
         "nn1" and "nn2" are the NameNode identifiers (running on hdp1 and hdp2).
    -->
    <property>
        <name>dfs.nameservices</name>
        <value>mycluster</value>
    </property>
    <!-- mycluster has two NameNodes: nn1 (hdp1) and nn2 (hdp2) -->
    <property>
        <name>dfs.ha.namenodes.mycluster</name>
        <value>nn1,nn2</value>
    </property>
    <!-- RPC address of nn1 (hdp1) -->
    <property>
        <name>dfs.namenode.rpc-address.mycluster.nn1</name>
        <value>hdp1:8020</value>
    </property>
    <!-- RPC address of nn2 (hdp2) -->
    <property>
        <name>dfs.namenode.rpc-address.mycluster.nn2</name>
        <value>hdp2:8020</value>
    </property>
    <!-- HTTP address of nn1 (hdp1) -->
    <property>
        <name>dfs.namenode.http-address.mycluster.nn1</name>
        <value>hdp1:50070</value>
    </property>
    <!-- HTTP address of nn2 (hdp2) -->
    <property>
        <name>dfs.namenode.http-address.mycluster.nn2</name>
        <value>hdp2:50070</value>
    </property>
    <!-- Shared storage for the NameNode edits metadata, i.e. the JournalNode list.
         URL format: qjournal://host1:port1;host2:port2;host3:port3/journalId
         The journalId is recommended to be the nameservice name; the default port is 8485 -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
       <value>qjournal://hdp1:8485;hdp2:8485;hdp3:8485;hdp4:8485;hdp5:8485/mycluster</value>
    </property>
    <!-- Failover proxy provider used for automatic failover -->
    <property>
        <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <!-- Fencing methods; multiple methods are separated by newlines, one per line -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>
            sshfence
	    shell(/bin/true)
        </value>
    </property>
  <property>
     <name>dfs.permissions.enabled</name>
     <value>false</value>
  </property>
    <property>
        <name>dfs.support.append</name>
        <value>true</value>
    </property>
    <!-- sshfence requires passwordless SSH -->
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/root/.ssh/id_rsa</value>
    </property>
    <!-- Replication factor -->
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/opt/data/hadoop/hdfs/nn</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/opt/data/hadoop/hdfs/dn</value>
    </property>
    <!-- Local directory where the JournalNodes store their data -->
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/opt/data/hadoop/hdfs/jn</value>
    </property>
    <!-- Enable automatic NameNode failover -->
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <!-- Enable WebHDFS -->
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
    <!-- Connection timeout for the sshfence method -->
    <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>30000</value>
    </property>
    <property>
        <name>ha.failover-controller.cli-check.rpc-timeout.ms</name>
        <value>60000</value>
    </property>

</configuration>

mapred-site.xml

<configuration>
    <!-- Run MapReduce on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>

    <!-- MapReduce JobHistory server address -->
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>hdp1:10020</value>
    </property>
    <!-- Web UI address of the JobHistory server -->
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>hdp1:19888</value>
    </property>
    <property>
      <name>mapreduce.application.classpath</name>
      <value>
          /usr/lib/hadoop/etc/hadoop,
          /usr/lib/hadoop/*,
          /usr/lib/hadoop/lib/*,
          /usr/lib/hadoop-hdfs/*,
          /usr/lib/hadoop-hdfs/lib/*,
          /usr/lib/hadoop-mapreduce/*,
          /usr/lib/hadoop-mapreduce/lib/*,
          /usr/lib/hadoop-yarn/*,
          /usr/lib/hadoop-yarn/lib/*
      </value>
    </property>
</configuration>
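
If hard-coding the jar directories is not desirable, an alternative is to take the classpath reported by the hadoop command on a node where the client packages are installed and paste its output into mapreduce.application.classpath:

# print the classpath of the installed Hadoop; the output can be used as the value above
hadoop classpath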

yarn-site.xml

<configuration>

<!-- Site specific YARN configuration properties -->
    <!-- Enable ResourceManager HA -->
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <!-- ResourceManager cluster id -->
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>yrc</value>
    </property>
    <!-- Logical ids of the ResourceManagers -->
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>
    <!-- Hostnames of the two ResourceManagers -->
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>hdp1</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>hdp2</value>
    </property>
    <!-- ZooKeeper quorum address -->
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>hdp1:2181,hdp2:2181,hdp3:2181</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>86400</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address.rm1</name>
        <value>hdp1:8088</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address.rm2</name>
        <value>hdp2:8088</value>
    </property>
    <!-- Enable RM recovery -->
    <property>
        <name>yarn.resourcemanager.recovery.enabled</name>
        <value>true</value>
    </property>
    <!-- Store the ResourceManager state in the ZooKeeper cluster -->
    <property>
        <name>yarn.resourcemanager.store.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
    </property>
</configuration>

workers

hdp3
hdp4
hdp5
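
The same configuration has to be present on every node. A minimal sketch for pushing it out from hdp1, assuming the configuration directory is /etc/hadoop/conf (the usual location in the BigTop packaging) and the passwordless SSH from section 2:

# copy the Hadoop configuration to the other nodes
for h in hdp2 hdp3 hdp4 hdp5; do
    scp /etc/hadoop/conf/* $h:/etc/hadoop/conf/
done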

 

9. Create directories

On hdp1 through hdp5:

mkdir -p /opt/data/logs/hadoop
mkdir -p /opt/data/hadoop/hdfs/nn
mkdir -p /opt/data/hadoop/hdfs/dn
mkdir -p /opt/data/hadoop/hdfs/jn
mkdir -p /opt/data/hadoop/tmp
mkdir -p /opt/data/hadoop/tmp/yarn/timeline/

chown -R hdfs:hdfs /opt/data
chown -R yarn:yarn /opt/data/hadoop/tmp/yarn

export BIGTOP_DEFAULTS_DIR=/usr/lib
export HADOOP_PREFIX=/usr/lib/hadoop
export YARN_CONF_DIR=/usr/lib/hadoop-yarn
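
The directories and ownership changes above are needed on every host. One way to avoid repeating them by hand, assuming passwordless root SSH from section 2, is a small loop run from hdp1:

# create the data directories and fix ownership on every node
for h in hdp1 hdp2 hdp3 hdp4 hdp5; do
    ssh $h "mkdir -p /opt/data/logs/hadoop /opt/data/hadoop/hdfs/nn /opt/data/hadoop/hdfs/dn /opt/data/hadoop/hdfs/jn /opt/data/hadoop/tmp/yarn/timeline && chown -R hdfs:hdfs /opt/data && chown -R yarn:yarn /opt/data/hadoop/tmp/yarn"
done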

 

10. Start the services

zookeeper

Start ZooKeeper on hdp1 through hdp3 (see the sketch below).
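
The exact start command depends on which ZooKeeper packages the BigTop build produced; a hedged sketch, assuming the server init script or the scripts under /usr/lib/zookeeper are installed:

# on hdp1, hdp2 and hdp3
service zookeeper-server start
# or, using the scripts shipped with the zookeeper package:
/usr/lib/zookeeper/bin/zkServer.sh start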

hadoop

# start the JournalNodes
hdp1,hdp2,hdp3,hdp4,hdp5
/etc/init.d/hadoop-hdfs-journalnode start

# format the NameNode
hdp1
hadoop namenode -format


# format the ZKFC znode in ZooKeeper, then start zkfc
hdp1
hdfs zkfc -formatZK
/etc/init.d/hadoop-hdfs-zkfc start


# start HDFS; ZooKeeper elects which NameNode becomes active and which becomes standby
hdp1 and hdp2
/etc/init.d/hadoop-hdfs-namenode start
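
Two things worth noting here: the DataNodes on hdp3, hdp4 and hdp5 are started with the same init-script pattern, and the second NameNode is normally synchronized from the formatted one instead of being formatted again. A hedged sketch:

# on hdp2, before its NameNode is started for the first time: copy the formatted namespace
hdfs namenode -bootstrapStandby

# on hdp3, hdp4 and hdp5: start the DataNodes
/etc/init.d/hadoop-hdfs-datanode start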


# start YARN
# run this on either one of the two ResourceManagers (active or standby)
start-yarn.sh


# start the MapReduce JobHistory server
mr-jobhistory-daemon.sh start historyserver
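
Once everything is up, the HA state and the cluster can be checked with:

# which NameNode / ResourceManager is active and which is standby
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2
# DataNode report
hdfs dfsadmin -report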

 

 

 
