Part 1: Installing a Hadoop Single-Node Cluster

This article walks through setting up a Hadoop pseudo-distributed environment on CentOS 6.8, covering JDK installation and configuration, the Hadoop core configuration files, formatting the filesystem, and starting each component.


# Lab 1: Setting Up a Hadoop Pseudo-Distributed Environment
## 1. Lab Environment

| OS | IP Address | Tool |
| --- | --- | --- |
| CentOS 6.8 (minimal install) | 192.168.1.63 | Xshell 5 |
## 2. Check the Lab Environment
[root@localhost ~]# cat /etc/issue
CentOS release 6.8 (Final)
Kernel \r on an \m
[root@localhost ~]# ifconfig | grep inet
      inet addr:192.168.1.63  Bcast:192.168.1.255  Mask:255.255.255.0
      inet6 addr: fe80::a00:27ff:fe0a:ce73/64 Scope:Link
      inet addr:127.0.0.1  Mask:255.0.0.0
      inet6 addr: ::1/128 Scope:Host

[root@localhost ~]#

## 3. Upload the Locally Downloaded Hadoop and JDK Packages with rz
[root@localhost ~]# yum install openssh-client* -y
[root@localhost ~]# yum install lrzsz* -y
Installed:
lrzsz.x86_64 0:0.12.20-27.1.el6
Complete!
[root@localhost ~]# rz
[root@localhost ~]# ls
anaconda-ks.cfg hadoop-2.7.3.tar.gz install.log install.log.syslog jdk-8u121-linux-x64.rpm
## 4. Install JDK 1.8 and Configure the Environment Variables
[root@localhost ~]# rpm -ivh jdk-8u121-linux-x64.rpm
[root@localhost java]# vim /etc/profile
export JAVA_HOME=/usr/java/jdk1.8.0_121/
export PATH=$PATH:$JAVA_HOME/bin
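
After editing /etc/profile, reload it in the current shell and confirm that the JDK is found; java -version should report 1.8.0_121:

source /etc/profile    # apply the new variables to the current shell
java -version          # should print java version "1.8.0_121"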
## 5. Extract Hadoop to /usr/local/bin/hadoop/
[root@localhost ~]# tar -xf hadoop-2.7.3.tar.gz -C /usr/local/bin/hadoop/
[root@localhost ~]# cd !$
cd /usr/local/bin/hadoop/
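
Note that tar -C only extracts into an existing directory; if /usr/local/bin/hadoop/ was not created beforehand, do so first:

mkdir -p /usr/local/bin/hadoop    # create the target directory before extracting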
## 6. Edit etc/hadoop/core-site.xml
[root@localhost hadoop]# cd hadoop-2.7.3/
[root@localhost hadoop-2.7.3]# vi etc/hadoop/core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
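
fs.defaultFS is the URI that clients and DataNodes use to reach the NameNode. A quick way to confirm Hadoop picks the value up (run from the hadoop-2.7.3 directory) is hdfs getconf:

bin/hdfs getconf -confKey fs.defaultFS    # should print hdfs://localhost:9000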

## 7. Edit etc/hadoop/hdfs-site.xml
[root@localhost hadoop-2.7.3]# vim etc/hadoop/hdfs-site.xml

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
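
dfs.replication is set to 1 because this pseudo-distributed cluster has only one DataNode, so HDFS cannot keep more than one replica of each block. The same check applies here:

bin/hdfs getconf -confKey dfs.replication    # should print 1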

## 8. Format the Filesystem
[root@localhost hadoop-2.7.3]# bin/hdfs namenode -format
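
With the default hadoop.tmp.dir (/tmp/hadoop-${user.name}), the formatted NameNode metadata lands under /tmp/hadoop-root/dfs/name; the paths below assume those defaults. A quick sanity check (keep in mind that /tmp may be cleared on reboot, which would require reformatting):

ls /tmp/hadoop-root/dfs/name/current/            # fsimage and VERSION should be present
cat /tmp/hadoop-root/dfs/name/current/VERSION    # records the clusterID assigned by the format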
## 9. Edit etc/hadoop/hadoop-env.sh
[root@localhost hadoop-2.7.3]# vim etc/hadoop/hadoop-env.sh
Hard-code JAVA_HOME in this file (see the example below) so the Hadoop scripts do not depend on the caller's shell environment.
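
That is, locate the export JAVA_HOME=${JAVA_HOME} line and replace it with the concrete path from step 4:

export JAVA_HOME=/usr/java/jdk1.8.0_121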
## 10. Start the NameNode and DataNode Daemons
[root@localhost hadoop-2.7.3]# sbin/start-dfs.sh
Starting namenodes on [localhost]
root@localhost's password:
localhost: starting namenode, logging to /usr/local/bin/hadoop/hadoop-2.7.3/logs/hadoop-root-namenode-localhost.localdomain.out
root@localhost's password:
localhost: starting datanode, logging to /usr/local/bin/hadoop/hadoop-2.7.3/logs/hadoop-root-datanode-localhost.localdomain.out
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
RSA key fingerprint is d2:fb:cd:1e:10:0d:ec:6c:ad:20:7c:cf:63:d8:01:18.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (RSA) to the list of known hosts.
root@0.0.0.0's password:
0.0.0.0: starting secondarynamenode, logging to /usr/local/bin/hadoop/hadoop-2.7.3/logs/hadoop-root-secondarynamenode-localhost.localdomain.out
[root@localhost hadoop-2.7.3]# jps
1952 DataNode
2242 Jps
2133 SecondaryNameNode
1863 NameNode

Note: jps is a tool bundled with the JDK that lists the PIDs of all running Java processes (it will come up again in JVM performance tuning).
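
For example, jps -l also prints the full main-class (or jar) name of each process, which makes the Hadoop daemons easier to tell apart:

jps -l    # e.g. org.apache.hadoop.hdfs.server.namenode.NameNode instead of just NameNode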
## 11. Configure Passwordless SSH Login
Without passwordless SSH, the start/stop scripts prompt for the root password for every daemon, as seen above.

[root@localhost hadoop-2.7.3]# ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa    # generate an RSA key pair with an empty passphrase
Generating public/private rsa key pair.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
c2:9e:3d:2f:89:b4:aa:70:2a:ac:8f:29:d8:d9:f7:b7 root@localhost.localdomain
The key's randomart image is:
(randomart image omitted)
[root@localhost hadoop-2.7.3]# cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[root@localhost hadoop-2.7.3]# chmod 0600 ~/.ssh/authorized_keys
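
To verify, ssh to localhost; it should now log in without asking for a password (the very first connection may still ask to confirm the host key):

ssh localhost    # should not prompt for a password
exit             # return to the original shell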
## 12. Stop and Restart to Verify the Configuration
[root@localhost hadoop-2.7.3]# sbin/stop-dfs.sh
Stopping namenodes on [localhost]
localhost: stopping namenode
localhost: stopping datanode
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
[root@localhost hadoop-2.7.3]# sbin/start-dfs.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/bin/hadoop/hadoop-2.7.3/logs/hadoop-root-namenode-localhost.localdomain.out
localhost: starting datanode, logging to /usr/local/bin/hadoop/hadoop-2.7.3/logs/hadoop-root-datanode-localhost.localdomain.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/bin/hadoop/hadoop-2.7.3/logs/hadoop-root-secondarynamenode-localhost.localdomain.out
[root@localhost hadoop-2.7.3]# jps
2897 SecondaryNameNode
2678 NameNode
3068 Jps
2767 DataNode
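
Notice that this time no password prompts appeared. As an optional smoke test, create a directory in HDFS and list the root (/user/root is just an example path):

bin/hdfs dfs -mkdir -p /user/root    # any path works here
bin/hdfs dfs -ls /                   # the new /user directory should be listed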
## 13. Access the NameNode in a Browser
http://192.168.1.63:50070/

(Screenshot: the NameNode web UI)
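
If the page does not load from another machine, the default CentOS 6 iptables rules are likely blocking port 50070. Check locally with curl first, then open the port (lab environment only; adjust the IP if yours differs):

curl -I http://192.168.1.63:50070/                  # quick local check of the web UI
iptables -I INPUT -p tcp --dport 50070 -j ACCEPT    # allow access from other hosts
service iptables save                               # persist the rule across reboots (CentOS 6)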
