-
Create three virtual machines:
Processor: 1
Memory: 2048 MB
Network adapter 1: shared network
Network adapter 2: Host-Only network
Tip: create the first VM as hadoop1; after the first login, power it off right away with shutdown now.
Then clone hadoop2 and hadoop3 from hadoop1.
Role assignment across the three nodes:

       hadoop1              hadoop2       hadoop3
hdfs   NameNode, DataNode   DataNode      SecondaryNameNode, DataNode
yarn   NodeManager          NodeManager   ResourceManager, NodeManager
-
Configure network settings
-
Edit the network configuration files:
vi /etc/sysconfig/network-scripts/ifcfg-eth0
vi /etc/sysconfig/network-scripts/ifcfg-eth1
In each, set ONBOOT=yes.
-
Restart the network service:
service network restart
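For the Host-Only adapter, a static address is what gives each node its fixed 10.37.129.x IP. A minimal sketch of ifcfg-eth1 for hadoop1 follows; the NETMASK value and the exact device name are assumptions that depend on your VM software, so check yours before copying.

```shell
# Sketch of /etc/sysconfig/network-scripts/ifcfg-eth1 on hadoop1.
# DEVICE must match the actual interface name; NETMASK is an assumed /24.
TYPE=Ethernet
DEVICE=eth1
BOOTPROTO=static
ONBOOT=yes
IPADDR=10.37.129.18
NETMASK=255.255.255.0
```

On hadoop2 and hadoop3, only IPADDR changes (10.37.129.19 and 10.37.129.20).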
-
-
Map IPs to hostnames:
vim /etc/hosts
10.37.129.18 hadoop1
10.37.129.19 hadoop2
10.37.129.20 hadoop3
-
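Since the same three mappings go into /etc/hosts on every node, the edit can be scripted. A small sketch, assuming a scratch file ./hosts_demo as a stand-in target (point HOSTS_FILE at /etc/hosts, as root, for the real run); the idempotence check is so re-running setup does not duplicate entries:

```shell
# Append the cluster hostname mappings, skipping any that already exist.
# HOSTS_FILE defaults to a scratch file for rehearsal (an assumption of
# this sketch); use HOSTS_FILE=/etc/hosts on the actual machines.
HOSTS_FILE="${HOSTS_FILE:-./hosts_demo}"
while read -r ip name; do
  # " name$" anchors on the hostname column so re-runs are no-ops
  grep -q " $name\$" "$HOSTS_FILE" 2>/dev/null || echo "$ip $name" >> "$HOSTS_FILE"
done <<'EOF'
10.37.129.18 hadoop1
10.37.129.19 hadoop2
10.37.129.20 hadoop3
EOF
cat "$HOSTS_FILE"
```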
Set up passwordless SSH login
-
Generate a key pair:
ssh-keygen
Press Enter at every prompt to accept the defaults.
-
View the generated private and public keys (id_rsa and id_rsa.pub):
ls ~/.ssh
-
Copy the public key to hadoop1, hadoop2 and hadoop3:
ssh-copy-id -i ./id_rsa.pub root@hadoop1
ssh-copy-id -i ./id_rsa.pub root@hadoop2
ssh-copy-id -i ./id_rsa.pub root@hadoop3
Note that hadoop1 itself also needs passwordless login configured, which is why it is included here.
-
Test that passwordless SSH login works:
ssh hadoop1
ssh hadoop2
ssh hadoop3
-
-
Disable the firewall:
systemctl status firewalld.service
systemctl stop firewalld.service
systemctl disable firewalld.service
-
Place the installation packages under /opt/softwares and extract them:
jdk-8u171-linux-x64.tar.gz
hadoop-2.9.2.tar.gz
tar -zxvf jdk-8u171-linux-x64.tar.gz
tar -zxvf hadoop-2.9.2.tar.gz
-
Configure ~/.bash_profile:
export JAVA_HOME=/opt/softwares/jdk1.8.0_171
export HADOOP_HOME=/opt/softwares/hadoop-2.9.2
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
Reload the profile, then verify:
source ~/.bash_profile
java -version
hadoop version
-
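The profile edit above must be repeated on every node, so it is worth making scriptable. A sketch that appends the exports only once, assuming a scratch file ./bash_profile_demo as the default target (set PROFILE=~/.bash_profile for the real run); the marker comment is an assumption of this sketch, used as the idempotence guard:

```shell
# Append the JDK/Hadoop environment exports to a profile file, but only
# if the marker is not already present, so re-runs do not grow PATH.
# PROFILE defaults to a scratch file for rehearsal (sketch assumption).
PROFILE="${PROFILE:-./bash_profile_demo}"
MARKER='# hadoop cluster environment'
if ! grep -qF "$MARKER" "$PROFILE" 2>/dev/null; then
  cat >> "$PROFILE" <<'EOF'
# hadoop cluster environment
export JAVA_HOME=/opt/softwares/jdk1.8.0_171
export HADOOP_HOME=/opt/softwares/hadoop-2.9.2
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
EOF
fi
```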
Configure /opt/softwares/hadoop-2.9.2/etc/hadoop/core-site.xml:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop1:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop/tmp/dir</value>
  </property>
</configuration>
-
Configure /opt/softwares/hadoop-2.9.2/etc/hadoop/hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoop3:50090</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
-
Configure /opt/softwares/hadoop-2.9.2/etc/hadoop/slaves:
hadoop1
hadoop2
hadoop3
-
Configure /opt/softwares/hadoop-2.9.2/etc/hadoop/mapred-site.xml:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
-
Configure /opt/softwares/hadoop-2.9.2/etc/hadoop/yarn-site.xml:
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop3</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
-
Configure the following scripts:
/opt/softwares/hadoop-2.9.2/etc/hadoop/hadoop-env.sh
/opt/softwares/hadoop-2.9.2/etc/hadoop/mapred-env.sh
/opt/softwares/hadoop-2.9.2/etc/hadoop/yarn-env.sh
In each of them, set:
export JAVA_HOME=/opt/softwares/jdk1.8.0_171
-
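All of the files edited above live on hadoop1, but every node needs identical copies before the cluster is started. A sketch of pushing them out with scp over the passwordless SSH configured earlier; the DRY_RUN switch is an assumption of this sketch (the default only prints the commands so the loop can be inspected first):

```shell
# Sync the Hadoop config directory from hadoop1 to the other nodes.
# DRY_RUN=1 (default, a sketch assumption) prints the scp commands
# instead of running them; set DRY_RUN=0 to copy for real.
CONF_DIR=/opt/softwares/hadoop-2.9.2/etc/hadoop
DRY_RUN="${DRY_RUN:-1}"
for h in hadoop2 hadoop3; do
  cmd="scp -r $CONF_DIR/ root@$h:$CONF_DIR/"
  if [ "$DRY_RUN" = "1" ]; then
    echo "$cmd"
  else
    $cmd
  fi
done
```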
Start HDFS on hadoop1 (on the very first start, format the NameNode beforehand with hdfs namenode -format):
/opt/softwares/hadoop-2.9.2/sbin/start-dfs.sh
Then visit:
http://10.37.129.18:50070/dfshealth.html#tab-overview
-
Start YARN on hadoop3:
/opt/softwares/hadoop-2.9.2/sbin/start-yarn.sh
Then visit:
http://10.37.129.20:8088/cluster
Creating a Hadoop cluster