hadoop-2.6.0-cdh5.16.1 Cluster Deployment Guide
Prerequisites
- Prepare the cluster machines:
192.168.113.101 master
192.168.113.102 slaver1
192.168.113.103 slaver2
- Set up passwordless SSH access between the nodes (see the sketch after this list)
- Set up the base Java environment (JDK) on every node
- Installation package: hadoop-2.6.0-cdh5.16.1.tar.gz
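A minimal sketch of these preparation steps, run as root on the master (the JDK path is an assumption; adjust it to your installation):
# Map the cluster IPs to hostnames in /etc/hosts on every node
cat >> /etc/hosts <<'EOF'
192.168.113.101 master
192.168.113.102 slaver1
192.168.113.103 slaver2
EOF
# Passwordless SSH: generate a key on the master, copy it to all nodes
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
ssh-copy-id root@master
ssh-copy-id root@slaver1
ssh-copy-id root@slaver2
# Java: point JAVA_HOME at an installed JDK and verify
export JAVA_HOME=/usr/java/jdk1.8.0_181   # example path, adjust to yours
export PATH=$JAVA_HOME/bin:$PATH
java -version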
Installation (upload the package, edit the configuration)
Upload the package to the target directory
cd /opt/
Extract it in the current directory
tar -xvf hadoop-2.6.0-cdh5.16.1.tar.gz
This produces the directory
hadoop-2.6.0-cdh5.16.1
Edit the configuration files
cd /opt/hadoop-2.6.0-cdh5.16.1/etc/hadoop/
Files that need to be modified:
hadoop-env.sh
yarn-env.sh
core-site.xml
hdfs-site.xml
mapred-site.xml
yarn-site.xml
slaves
hadoop-env.sh
export JAVA_HOME=/path/to/your/jdk/
yarn-env.sh
export JAVA_HOME=/path/to/your/jdk/
export HADOOP_YARN_USER=${HADOOP_YARN_USER:-yarn}
export YARN_CONF_DIR="/opt/hadoop-2.6.0-cdh5.16.1/etc/hadoop/"
core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/opt/hadoop-2.6.0-cdh5.16.1/hadoopData/tmpdir</value>
</property>
<property>
<name>hadoop.proxyuser.root.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.root.groups</name>
<value>*</value>
</property>
</configuration>
Note: create the data directory
mkdir -p /opt/hadoop-2.6.0-cdh5.16.1/hadoopData/tmpdir
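Once the cluster is up (see the startup steps later in this guide), fs.defaultFS is the URI clients use to reach HDFS, so the following two commands are equivalent:
hdfs dfs -ls hdfs://master:9000/
hdfs dfs -ls /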
hdfs-site.xml
<configuration>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>master:9001</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/opt/hadoop-2.6.0-cdh5.16.1/hadoopData/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/opt/hadoop-2.6.0-cdh5.16.1/hadoopData/dfs/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
<property>
<name>dfs.web.ugi</name>
<value>supergroup</value>
</property>
</configuration>
Note: create the data directories
mkdir -p /opt/hadoop-2.6.0-cdh5.16.1/hadoopData/dfs/name
mkdir -p /opt/hadoop-2.6.0-cdh5.16.1/hadoopData/dfs/data
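Note that this cluster has only two DataNodes (slaver1 and slaver2), so dfs.replication=3 can never be fully satisfied and HDFS will report under-replicated blocks; a value of 2 may be more appropriate here. After startup you can check:
hdfs fsck /    # see the "Under-replicated blocks" line in the summary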
mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>master:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>master:19888</value>
</property>
</configuration>
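The two jobhistory addresses above are served by the MapReduce JobHistory Server, which start-all.sh does not launch; once HDFS is up, start it separately on the master and browse http://master:19888/:
mr-jobhistory-daemon.sh start historyserver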
yarn-site.xml
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>master:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>master:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>master:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>master:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>master:8088</value>
</property>
</configuration>
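On small VMs the default NodeManager resource sizes can be too large; if containers later fail to allocate, you can optionally cap memory in yarn-site.xml. The property names are standard, but the 2048 MB values below are illustrative assumptions, not requirements:
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>2048</value>
</property>
<property>
<name>yarn.scheduler.maximum-allocation-mb</name>
<value>2048</value>
</property>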
slaves
Configure the DataNode hosts (data storage nodes); the startup scripts ssh to each line verbatim, so list one hostname per line:
slaver1
slaver2
masters
Historically this file listed the SecondaryNameNode host(s); it does not configure high availability. In Hadoop 2.x the start scripts take the SecondaryNameNode location from dfs.namenode.secondary.http-address (master:9001 above), so for consistency this file can simply contain:
master
Distribute the files and prepare to start
Configure the environment variables
vim /etc/profile
export HADOOP_HOME=/opt/hadoop-2.6.0-cdh5.16.1/
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
scp -r /etc/profile root@slaver1:/etc/profile
scp -r /etc/profile root@slaver2:/etc/profile
Apply the change on each node:
. /etc/profile
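Verify that the environment took effect on each node:
hadoop version    # should print Hadoop 2.6.0-cdh5.16.1
which hadoop      # should resolve to /opt/hadoop-2.6.0-cdh5.16.1/bin/hadoop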
Copy the Hadoop installation to the worker nodes
On the master node, run:
scp -r /opt/hadoop-2.6.0-cdh5.16.1 root@slaver1:/opt/hadoop-2.6.0-cdh5.16.1/
scp -r /opt/hadoop-2.6.0-cdh5.16.1 root@slaver2:/opt/hadoop-2.6.0-cdh5.16.1/
Format the NameNode on the master node
hdfs namenode -format
Tip: the message "successfully formatted" in the output indicates the format succeeded
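If the format succeeded, the NameNode metadata directory configured in hdfs-site.xml now contains a current/ subdirectory with a VERSION file and an initial fsimage:
ls /opt/hadoop-2.6.0-cdh5.16.1/hadoopData/dfs/name/current/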
Start Hadoop
start-all.sh
Check the running processes
jps
Master node:
NameNode
SecondaryNameNode
ResourceManager
Worker nodes:
DataNode
NodeManager
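As a final smoke test (assuming all of the processes above are running), confirm that both workers registered and run the bundled pi example; the exact location of the examples jar inside the CDH tarball may vary, hence the wildcard:
hdfs dfsadmin -report    # should list 2 live datanodes
yarn node -list          # should list 2 NodeManagers
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar pi 2 10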
Workaround when SSH runs on a non-standard port (not 22)
vim /opt/hadoop-2.6.0-cdh5.16.1/etc/hadoop/hadoop-env.sh
Append the following line (replace the port with your actual SSH port):
export HADOOP_SSH_OPTS="-p <your-port>"
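A quick way to confirm the option works before rerunning start-all.sh (port 2222 is just an example):
ssh -p 2222 slaver1 hostname    # should print slaver1 without prompting for a password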