Fully Distributed Hadoop Setup: The Complete Process

Create one virtual machine and clone it twice, giving three VMs in total.

Configure the network: change the IPs of the second and third VMs to end in 12 and 13, so all three hosts are configured consistently.

1. Set a hostname on each of the three hosts:

hostnamectl set-hostname sunrenze1

(on hosts 2 and 3, use sunrenze2 and sunrenze3 respectively)
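Since each command has to be run on its own machine, it helps to see the IP-to-hostname mapping in one place. A small sketch that only prints which command belongs on which node:

```shell
# Print which set-hostname command belongs on which node.
# Nothing is executed remotely; this is just the mapping.
i=1
for ip in 192.168.1.11 192.168.1.12 192.168.1.13; do
    echo "$ip: hostnamectl set-hostname sunrenze$i"
    i=$((i + 1))
done
```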

2. Disable SELinux:

vi /etc/selinux/config

Set SELINUX=disabled in that file.
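The same edit can be done non-interactively with sed. A sketch that works on a throwaway copy of the file so it can be tried anywhere; on a real node the target is /etc/selinux/config (as root, with a reboot for the change to take effect):

```shell
# Flip SELINUX=... to disabled without opening an editor.
# CONF points at a scratch copy here, not the real config file.
CONF=/tmp/selinux-config.test
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$CONF"
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$CONF"
grep '^SELINUX=' "$CONF"
```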

3. Edit the hosts file:

vi /etc/hosts

192.168.1.11 sunrenze1

192.168.1.12 sunrenze2

192.168.1.13 sunrenze3

(three lines: one IP plus hostname per node)
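All three entries can also be appended in one step with a here-document. This sketch writes to a scratch file so it is safe to run anywhere; on the real nodes the target is /etc/hosts (as root):

```shell
# Append the three cluster entries in one shot.
# HOSTS points at a scratch file here, not the real /etc/hosts.
HOSTS=/tmp/hosts.test
cat >> "$HOSTS" <<'EOF'
192.168.1.11 sunrenze1
192.168.1.12 sunrenze2
192.168.1.13 sunrenze3
EOF
grep sunrenze "$HOSTS"
```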

4. Create the user:

useradd sunrenze

Set its password: passwd sunrenze (000000 in this walkthrough)

5. Grant the user sudo privileges:

vi /etc/sudoers

Press Esc, type :set nu and Enter to show line numbers, then :111 and Enter to jump to line 111 (where the root ALL=(ALL) ALL entry sits in this file). Add a matching line for the user, aligned with it:

sunrenze ALL=(ALL) ALL

6. On host 1, create two directories:

mkdir /opt/{software,module}

On hosts 2 and 3, create only module:

mkdir /opt/module

7. Give the user ownership of everything under /opt:

chown -R sunrenze:sunrenze /opt/*

8. Passwordless SSH

Switch to the regular user:

su - sunrenze

Generate a key pair:

ssh-keygen -t rsa

Copy the public key to each host:

ssh-copy-id sunrenze1

ssh-copy-id sunrenze2

ssh-copy-id sunrenze3
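The three ssh-copy-id calls can be driven from one host list. The sketch below only prints the commands so it is safe to run anywhere; drop the echo on host 1 to actually copy the key:

```shell
# Generate the three key-distribution commands from a single list.
# Printed rather than executed; remove "echo" to run them for real.
for host in sunrenze1 sunrenze2 sunrenze3; do
    echo "ssh-copy-id $host"
done
```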

9. On host 1, upload the two archives into /opt/software.

10. Extract both into /opt/module:

tar xf /opt/software/jdk-8u162-linux-x64.tar.gz -C /opt/module/
tar xf /opt/software/hadoop-3.3.1.tar -C /opt/module/

11. After extraction, create the environment file sunrenze.sh:

sudo vi /etc/profile.d/sunrenze.sh

with the following content:

export JAVA_HOME=/opt/module/jdk1.8.0_162
export PATH=$PATH:$JAVA_HOME/bin
export HADOOP_HOME=/opt/module/hadoop-3.3.1
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

12. Apply it:

source /etc/profile

13. Test:

java -version

hadoop version

The environment setup is now complete.

14. Change into Hadoop's configuration directory (/opt/module/hadoop-3.3.1/etc/hadoop) and edit the configuration files.

1.

vi hadoop-env.sh

export JAVA_HOME=/opt/module/jdk1.8.0_162
export HADOOP_MAPRED_HOME=/opt/module/hadoop-3.3.1

export HDFS_NAMENODE_USER=sunrenze
export HDFS_DATANODE_USER=sunrenze
export HDFS_SECONDARYNAMENODE_USER=sunrenze

export YARN_RESOURCEMANAGER_USER=sunrenze
export YARN_NODEMANAGER_USER=sunrenze
2. vi core-site.xml

<configuration>
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://sunrenze1:9000</value>
        </property>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/opt/module/hadoop-3.3.1/tmp</value>
        </property>
        <property>
                <name>hadoop.http.staticuser.user</name>
                <value>sunrenze</value>
        </property>
</configuration>

3. vi hdfs-site.xml

<configuration>
        <property>
                <name>dfs.namenode.http-address</name>
                <value>sunrenze1:9870</value>
        </property>
        <property>
                <name>dfs.namenode.secondary.http-address</name>
                <value>sunrenze3:9868</value>
        </property>
</configuration>

4. vi yarn-site.xml

<configuration>

<!-- Site specific YARN configuration properties -->
<property>
        <name>yarn.resourcemanager.hostname</name>
        <value>sunrenze2</value>
</property>
<property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
</property>
<property>
        <name>yarn.nodemanager.env-whitelist</name>
<value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_HOME,PATH,LANG,TZ</value>
</property>
</configuration>

5. vi mapred-site.xml

(the long mapreduce.application.classpath value below does not need to be typed by hand; running hadoop classpath on the node prints an equivalent list)

<configuration>
        <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
        </property>
        <property>
                <name>mapreduce.application.classpath</name>
                <value>/opt/module/hadoop-3.3.1/etc/hadoop:/opt/module/hadoop-3.3.1/share/hadoop/common/lib/*:/opt/module/hadoop-3.3.1/share/hadoop/common/*:/opt/module/hadoop-3.3.1/share/hadoop/hdfs:/opt/module/hadoop-3.3.1/share/hadoop/hdfs/lib/*:/opt/module/hadoop-3.3.1/share/hadoop/hdfs/*:/opt/module/hadoop-3.3.1/share/hadoop/mapreduce/*:/opt/module/hadoop-3.3.1/share/hadoop/yarn:/opt/module/hadoop-3.3.1/share/hadoop/yarn/lib/*:/opt/module/hadoop-3.3.1/share/hadoop/yarn/*</value>
        </property>
        <property>
              <name>mapreduce.jobhistory.address</name>
              <value>sunrenze1:10020</value>
        </property>

        <property>
              <name>mapreduce.jobhistory.webapp.address</name>
              <value>sunrenze1:19888</value>
        </property>
</configuration>

6. vi workers

sunrenze1

sunrenze2

sunrenze3

15. Distribute the files to hosts 2 and 3:

scp -r /opt/module/* sunrenze@sunrenze2:/opt/module/

scp -r /opt/module/* sunrenze@sunrenze3:/opt/module/

scp /etc/profile.d/sunrenze.sh root@sunrenze2:/etc/profile.d/

scp /etc/profile.d/sunrenze.sh root@sunrenze3:/etc/profile.d/
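The four copies follow one pattern per host, so they can be generated from a single loop. This sketch prints the commands instead of executing them, which makes the fan-out easy to check before running it for real on host 1:

```shell
# Generate the distribution commands for both payloads per target host.
# Printed rather than executed; remove "echo" to run them for real.
for host in sunrenze2 sunrenze3; do
    echo "scp -r /opt/module/* sunrenze@$host:/opt/module/"
    echo "scp /etc/profile.d/sunrenze.sh root@$host:/etc/profile.d/"
done
```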

16. Format the NameNode (on host 1):

hdfs namenode -format

17. Start the cluster (on host 1):

start-dfs.sh

start-yarn.sh

18. On host 1 only, start the history server:

mapred --daemon start historyserver

19. On host 2 only, start the ResourceManager:

yarn --daemon start resourcemanager

Everything is now running.

20. Test the web UIs:

192.168.1.11:9870 (NameNode)

192.168.1.11:19888 (JobHistory)

192.168.1.12:8088 (ResourceManager)

Remember to turn off the firewall on all nodes first:

sudo systemctl stop firewalld

sudo systemctl disable firewalld

21. Create a bin directory for scripts:

mkdir bin

22. Write the cluster control script:

vi bin/sunrenze.sh

#!/bin/bash

if [ $# -lt 1 ]
then
   echo "No argument given!"
   exit
fi

case $1 in
'start')
     echo "************************* starting namenode **************************"
     ssh sunrenze1 hdfs --daemon start namenode
     echo "************************* starting datanodes **************************"
     for host in sunrenze1 sunrenze2 sunrenze3
     do
         echo "***************** $host ***************"
         ssh $host hdfs --daemon start datanode
     done
     echo "************************* starting secondarynamenode **************************"
     ssh sunrenze3 hdfs --daemon start secondarynamenode
     echo "************************* starting resourcemanager **************************"
     ssh sunrenze2 yarn --daemon start resourcemanager
     echo "************************* starting nodemanagers **************************"
     for host in sunrenze1 sunrenze2 sunrenze3
     do
         echo "************** $host ****************"
         ssh $host yarn --daemon start nodemanager
     done
     echo "************************* starting historyserver **************************"
     ssh sunrenze1 mapred --daemon start historyserver
     ;;
'stop')
      echo "************************* stopping historyserver **************************"
      ssh sunrenze1 mapred --daemon stop historyserver
      echo "************************* stopping nodemanagers **************************"
      for host in sunrenze1 sunrenze2 sunrenze3
      do
          echo "********* $host ***********"
          ssh $host yarn --daemon stop nodemanager
      done
      echo "************************* stopping resourcemanager **************************"
      ssh sunrenze2 yarn --daemon stop resourcemanager
      echo "************************* stopping secondarynamenode **************************"
      ssh sunrenze3 hdfs --daemon stop secondarynamenode
      echo "************************* stopping datanodes **************************"
      for host in sunrenze1 sunrenze2 sunrenze3
      do
          ssh $host hdfs --daemon stop datanode
      done
      echo "************************* stopping namenode **************************"
      ssh sunrenze1 hdfs --daemon stop namenode
      ;;
'jps')
     echo "*************** checking services ****************"
     for host in sunrenze1 sunrenze2 sunrenze3
     do
         echo "*********** $host ****************"
         ssh $host jps
     done
     ;;
*)
     echo "****************** invalid argument ******************"
     ;;
esac

23. Review the script:

cat bin/sunrenze.sh

24. Make it executable:

chmod +x bin/sunrenze.sh

Then start the whole cluster with: ./bin/sunrenze.sh start
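One subtle point in the script above is the default branch of the case statement: it must be written as an unquoted *) to catch any unrecognized argument. With quotes, '*') only matches a literal asterisk, so typos would fall through silently. A minimal, locally runnable sketch of the dispatch logic (no ssh involved; function and messages are illustrative only):

```shell
# Minimal sketch of the script's argument dispatch.
# The default pattern is *) unquoted -- '*' (quoted) would only
# match a literal "*" argument and silently ignore typos.
dispatch() {
    case "$1" in
        start) echo "starting cluster" ;;
        stop)  echo "stopping cluster" ;;
        jps)   echo "listing java processes" ;;
        *)     echo "bad argument: $1" ;;
    esac
}
dispatch start
dispatch oops
```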

25. Set up time synchronization

Install ntp:

sudo yum install -y ntp

On host 1, edit the ntpd configuration:

sudo vi /etc/sysconfig/ntpd

SYNC_HWCLOCK=yes

Start the ntpd service:

sudo systemctl start ntpd

sudo systemctl status ntpd

Configuration on the other hosts

Host 2:

sudo crontab -e

Add:

*/1 * * * * /usr/sbin/ntpdate sunrenze1

Do the same on host 3.

Finally, verify:

date
————————————————
Copyright notice: This is an original article by CSDN blogger 「恋爱泽」, licensed under the CC 4.0 BY-SA license. Please include the original source link and this notice when reposting.
Original link: https://blog.youkuaiyun.com/2301_76767460/article/details/133078716
