Continued from the previous chapter: CentOS 7 Quick Configuration Manual, Part 1.
### JDK Section
yum erase -y java-1.8.0-openjdk java-1.8.0-openjdk-headless # this also removes mysql-connector-java and log4j
# yum install -y mysql-connector-java # do NOT install this package: it pulls openjdk back in
Upload the installation files:
jdk-8u131-linux-x64.tar.gz # a 32-bit JDK will not work here
mysql-connector-java-5.1.42.tar.gz
hadoop-2.6.0.tar.gz
Extract the JDK archive:
tar -xzf jdk-8u131-linux-x64.tar.gz
mv jdk1.8.0_131 /usr/lib/jvm
echo "export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_131" >>/etc/profile
echo "export JRE_HOME=\${JAVA_HOME}/jre" >>/etc/profile
echo "export CLASSPATH=.:\${JAVA_HOME}/lib:\${JRE_HOME}/lib" >>/etc/profile
echo "export PATH=\${JAVA_HOME}/bin:\$PATH" >>/etc/profile
### Hadoop Section
tar -xzf hadoop-2.6.0.tar.gz
mv hadoop-2.6.0 hadoop
mv hadoop /usr
useradd htjs
chown -R htjs:htjs /usr/hadoop
ls /usr/hadoop
su - htjs
mkdir -p ~/dfs/name
mkdir ~/dfs/data
mkdir ~/tmp
Seven configuration files are involved here (~ below stands for /usr/hadoop):
~/etc/hadoop/hadoop-env.sh
~/etc/hadoop/yarn-env.sh
~/etc/hadoop/slaves
~/etc/hadoop/core-site.xml
~/etc/hadoop/hdfs-site.xml
~/etc/hadoop/mapred-site.xml
~/etc/hadoop/yarn-site.xml
Configuration file 1: hadoop-env.sh
Set the JAVA_HOME value: export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_131
Configuration file 2: yarn-env.sh
Set the JAVA_HOME value: export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_131
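Both edits can also be scripted in the spirit of the /etc/profile appends above; a sketch, relying on a later export in these sourced files overriding the stock default (editing by hand as described works just as well):
cd /usr/hadoop/etc/hadoop
echo 'export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_131' >> hadoop-env.sh
echo 'export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_131' >> yarn-env.sh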
Configuration file 3: slaves (one worker host per line):
node2
node3
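If preferred, the file can be written in one step (same two hosts as above):
cat > /usr/hadoop/etc/hadoop/slaves <<EOF
node2
node3
EOF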
Configuration file 4: core-site.xml
Add the following inside <configuration>:
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://node1:8020</value>
</property>
<property>
  <name>io.file.buffer.size</name>
  <value>131072</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>file:/home/htjs/tmp</value>
  <description>A base for other temporary directories.</description>
</property>
<property>
  <name>hadoop.proxyuser.htjs.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.htjs.groups</name>
  <value>*</value>
</property>
Configuration file 5: hdfs-site.xml
Add the following inside <configuration>:
<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>node1:9001</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/home/htjs/dfs/name</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/home/htjs/dfs/data</value>
</property>
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>
(Adjust the host names and paths to your own environment. Note that with only two DataNodes listed in slaves, a dfs.replication of 3 cannot actually be met; 2 is the effective ceiling here.)
Configuration file 6: mapred-site.xml
First copy mapred-site.xml.template to mapred-site.xml, then add the following inside <configuration>:
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>node1:10020</value>
</property>
<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>node1:19888</value>
</property>
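The copy step above, as a command:
cd /usr/hadoop/etc/hadoop
cp mapred-site.xml.template mapred-site.xml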
Configuration file 7: yarn-site.xml
Add the following inside <configuration>:
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
  <name>yarn.resourcemanager.address</name>
  <value>node1:8032</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>node1:8030</value>
</property>
<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>node1:8031</value>
</property>
<property>
  <name>yarn.resourcemanager.admin.address</name>
  <value>node1:8033</value>
</property>
<property>
  <name>yarn.resourcemanager.webapp.address</name>
  <value>node1:8088</value>
</property>
vi /etc/profile
......
export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_131
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH
export HADOOP_HOME=/usr/hadoop
export PATH=${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin:$PATH
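A quick check that the Hadoop binaries are now on the PATH:
source /etc/profile
hadoop version # the first line should read Hadoop 2.6.0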
Shut down node1, remove the virtual DVD, and clone the VM; then configure node2's and node3's hostnames, IPs, /etc/hosts entries, and mutual SSH trust.
rm -rf hadoop-2.6.0.tar.gz jdk-8u131-linux-x64.tar.gz
umount /mnt/dvd
shutdown -h now
hostnamectl set-hostname node2 # 10.3.105.42
vi /etc/sysconfig/network-scripts/ifcfg-enp0s3
hostnamectl set-hostname node3 # 10.3.105.43
vi /etc/sysconfig/network-scripts/ifcfg-enp0s3
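In each clone's ifcfg-enp0s3, only the address needs to differ between nodes; the sketch below assumes Part 1 configured static addressing, with netmask and gateway left as already set (node2 shown):
BOOTPROTO=static
IPADDR=10.3.105.42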
scp ~/.ssh/authorized_keys node2:/root/.ssh/
scp ~/.ssh/authorized_keys node3:/root/.ssh/
scp ~/.ssh/id_rsa node2:/root/.ssh/
scp ~/.ssh/id_rsa node3:/root/.ssh/
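The commands above assume root's key pair was already generated on node1 (ssh-keygen) and its public key added to authorized_keys. A quick round trip verifies the mutual trust; none of these should prompt for a password:
for h in node1 node2 node3; do ssh $h hostname; done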
mkdir /home/htjs/.ssh
cp ~/.ssh/authorized_keys /home/htjs/.ssh/
cp ~/.ssh/id_rsa /home/htjs/.ssh
chown -R htjs:htjs /home/htjs/.ssh
scp /etc/profile node2:/etc/
scp /etc/profile node3:/etc/
Format the NameNode on node1 (as the htjs user, since /home/htjs/dfs belongs to it):
hdfs namenode -format
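A successful format logs a "successfully formatted" message and populates the metadata directory named by dfs.namenode.name.dir:
ls /home/htjs/dfs/name/current/ # expect VERSION and fsimage files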
Start Hadoop on node1:
$ start-dfs.sh (namenode, datanode, secondarynamenode)
$ start-yarn.sh (resourcemanager, nodemanager)
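jps on each node shows which daemons came up (roles follow from the configs above):
jps # node1: NameNode, SecondaryNameNode, ResourceManager; node2/node3: DataNode, NodeManager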
Stop Hadoop:
$ stop-yarn.sh
$ stop-dfs.sh
Verify Hadoop from a browser:
SecondaryNameNode http://10.3.105.41:9001/
ResourceManager http://10.3.105.41:8088/
NameNode http://10.3.105.41:50070/
MapReduce JobHistory Server http://10.3.105.41:19888/ (answers only after the history server is started: mr-jobhistory-daemon.sh start historyserver)
On the Windows client, edit c:\windows\system32\drivers\etc\hosts so the node names resolve:
10.3.105.41 node1
10.3.105.42 node2
10.3.105.43 node3
Hadoop smoke test:
# login 10.3.105.41
hdfs dfs -mkdir /ogg1
hdfs dfs -copyFromLocal README.txt /ogg1/
hdfs dfs -ls /ogg1
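As a further end-to-end check, the examples jar bundled with Hadoop 2.6.0 can run a wordcount over the uploaded file (/ogg1-out is an arbitrary output path and must not already exist):
hadoop jar /usr/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount /ogg1 /ogg1-out
hdfs dfs -cat /ogg1-out/part-r-00000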
Next chapter: Hive configuration.