1. Create a hadoop user
(1) Create the user:
sudo useradd -m hadoop -s /bin/bash
(2) Grant the hadoop user administrator privileges:
sudo adduser hadoop sudo
(3) Alternatively, edit the sudoers configuration file (always via visudo, never with cat or a plain editor):
sudo visudo, then add the line: hadoop ALL=(ALL:ALL) ALL
2. Install the JDK (covered earlier)
3. Install SSH and configure passwordless login
(1) Install the OpenSSH server:
sudo apt-get install openssh-server
(2) Log in once so that the ~/.ssh directory gets created:
ssh localhost
(3) Change into that directory:
cd ~/.ssh/
(4) Generate a key pair:
ssh-keygen -t rsa
(5) Authorize the public key:
cat id_rsa.pub >> authorized_keys
(6) Verify the passwordless login:
ssh localhost
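Steps (2)–(6) can be condensed into one sketch (paths are the standard OpenSSH defaults; passing -N "" creates a key with an empty passphrase, which is what makes the login passwordless):

```shell
# Recap of the passwordless-SSH setup above (assumes openssh-server is installed).
mkdir -p ~/.ssh && chmod 700 ~/.ssh
# Generate an RSA key pair with an empty passphrase, unless one already exists.
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# Authorize our own public key so ssh localhost needs no password.
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```

After this, ssh localhost should log in without prompting for a password.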
4. Install Hadoop
(1) Download the hadoop-2.6.0 tarball
(2) Extract it into the /opt directory:
sudo tar -zxvf ~/Downloads/hadoop-2.6.0.tar.gz -C /opt/
(3) Configure Hadoop's environment variables:
export HADOOP_HOME="/opt/hadoop-2.6.0"
export PATH="$JAVA_HOME/bin:$HADOOP_HOME/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:$PATH"
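To make these variables survive new shells, append them to ~/.bashrc. A minimal sketch, assuming the install locations used in this guide (the JAVA_HOME path matches the JDK referenced in step 9):

```shell
# Assumed install locations from this guide; adjust if yours differ.
export JAVA_HOME="/opt/jdk1.7.0_80"
export HADOOP_HOME="/opt/hadoop-2.6.0"
# Putting sbin on PATH as well lets start-all.sh / stop-all.sh run from anywhere.
export PATH="$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH"
echo "$PATH"
```

After adding these lines to ~/.bashrc, run source ~/.bashrc and check the setup with hadoop version.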
5. Configure Hadoop
(1) hadoop-env.sh
Set the Java environment variable: export JAVA_HOME=${JAVA_HOME}
(If the daemons fail to find Java, hard-code the JDK path here instead, e.g. /opt/jdk1.7.0_80.)
(2)core-site.xml
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/opt/hadoop-2.6.0/tmp</value>
</property>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
(3)hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/opt/hadoop-2.6.0/tmp/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/opt/hadoop-2.6.0/tmp/dfs/data</value>
</property>
</configuration>
(4) mapred-site.xml (if this file does not exist, copy it from mapred-site.xml.template)
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:9001</value>
</property>
</configuration>
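A typo in any of the *-site.xml files will make the daemons fail at startup, so it is worth confirming each edited file is well-formed XML first. A minimal sketch: it writes a sample core-site.xml to a temporary directory for demonstration; point CONF_DIR at /opt/hadoop-2.6.0/etc/hadoop to check your real files.

```shell
# Hypothetical demo directory; use /opt/hadoop-2.6.0/etc/hadoop for real checks.
CONF_DIR="$(mktemp -d)"
cat > "$CONF_DIR/core-site.xml" <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
EOF
# xml.dom.minidom exits non-zero on malformed XML, so the OK line only
# prints when the file parses cleanly.
python3 -c 'import sys, xml.dom.minidom as m; m.parse(sys.argv[1])' \
  "$CONF_DIR/core-site.xml" && echo "core-site.xml OK"
```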
6. Assign ownership of the JDK and Hadoop directories to the hadoop user (run from /opt):
sudo chown -R hadoop:hadoop hadoop-2.6.0/
sudo chown -R hadoop:hadoop jdk1.7.0_80/
7. Format the HDFS filesystem
/opt/hadoop-2.6.0$ bin/hdfs namenode -format
If formatting fails because files cannot be created, grant the hadoop user ownership of, and write permission on, the hadoop-2.6.0 directory:
sudo chown -R hadoop:hadoop hadoop-2.6.0/   (the syntax is user:group)
sudo chmod -R a+w hadoop-2.6.0/
8. Start the Hadoop daemons
/opt/hadoop-2.6.0$ sbin/start-all.sh
9. Check the running daemons with the jps command
/opt/jdk1.7.0_80$ bin/jps
If the DataNode process did not start, check the logs under /opt/hadoop-2.6.0/logs. If the error reported is an invalid-permissions message, fix it with:
cd /opt/hadoop-2.6.0/tmp/dfs/
sudo chmod 755 data
10. Stop the Hadoop daemons
/opt/hadoop-2.6.0$ sbin/stop-all.sh