Hadoop standalone testing and cluster node setup

This article walks through Hadoop standalone testing and the pseudo-distributed setup in detail: creating a user, installing and configuring, and testing the main HDFS and YARN modules (NameNode, DataNode, ResourceManager, NodeManager). After the standalone test, it moves step by step through the pseudo-distributed setup, including editing the configuration files, generating SSH keys, formatting, starting the services, and verifying in a browser. Finally, it covers building a distributed environment, including multi-node configuration, data synchronization, and a client test.


Hadoop technical principles:

HDFS main modules: NameNode, DataNode
YARN main modules: ResourceManager, NodeManager
HDFS main modules and how they work:

1) NameNode:

Role: the management node of the entire file system. It maintains the file system's directory tree, the metadata of every file and directory, and the block list for each file, and it handles client requests.
2) DataNode:

Role: provides the actual block storage, holding file data as blocks and serving read/write requests from clients; each DataNode periodically reports the blocks it holds to the NameNode.

3) SecondaryNameNode:

Role: keeps a backup image of the NameNode's metadata by periodically merging the edit log into the fsimage; it is not a hot standby and by itself does not provide HA (high availability).

Part 1. Hadoop standalone test

1. Create the user and install the software

[root@server1 ~]# ls
anaconda-ks.cfg  hadoop-3.0.3.tar.gz  jdk-8u181-linux-x64.tar.gz
[root@server1 ~]# useradd -u 1000 hadoop
[root@server1 ~]# passwd hadoop
Changing password for user hadoop.
New password: 
BAD PASSWORD: The password is shorter than 8 characters
Retype new password: 
passwd: all authentication tokens updated successfully.
[root@server1 ~]# mv hadoop-3.0.3.tar.gz jdk-8u181-linux-x64.tar.gz /home/hadoop/
[root@server1 ~]# su - hadoop
[hadoop@server1 ~]$ ls
hadoop-3.0.3.tar.gz  jdk-8u181-linux-x64.tar.gz
[hadoop@server1 ~]$ tar zxf jdk-8u181-linux-x64.tar.gz 
[hadoop@server1 ~]$ ls
hadoop-3.0.3.tar.gz  jdk1.8.0_181  jdk-8u181-linux-x64.tar.gz
[hadoop@server1 ~]$ ln -s jdk1.8.0_181/ java
[hadoop@server1 ~]$ tar zxf hadoop-3.0.3.tar.gz 
[hadoop@server1 ~]$ ls
hadoop-3.0.3         java          jdk-8u181-linux-x64.tar.gz
hadoop-3.0.3.tar.gz  jdk1.8.0_181
[hadoop@server1 ~]$ ln -s hadoop-3.0.3 hadoop
[hadoop@server1 ~]$ ls
hadoop        hadoop-3.0.3.tar.gz  jdk1.8.0_181
hadoop-3.0.3  java                 jdk-8u181-linux-x64.tar.gz

2. Configure environment variables

[hadoop@server1 hadoop]$ cd /home/hadoop/hadoop/etc/hadoop/
[hadoop@server1 hadoop]$ pwd
/home/hadoop/hadoop/etc/hadoop
[hadoop@server1 hadoop]$ vim hadoop-env.sh
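The contents of this edit are not shown above; the usual change in hadoop-env.sh is to point Hadoop at the JDK. A minimal sketch, assuming the java symlink created in step 1:

export JAVA_HOME=/home/hadoop/java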

[hadoop@server1 hadoop]$ cd
[hadoop@server1 ~]$ vim .bash_profile
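Likewise, the exact .bash_profile lines are not shown; a plausible sketch, which is what makes jps (shipped with the JDK) resolvable in the next step:

PATH=$PATH:$HOME/java/bin
export PATH

Adding $HOME/hadoop/bin as well is optional here, since the rest of the article always invokes bin/hadoop and bin/hdfs with relative paths.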

3. Test

[hadoop@server1 ~]$ source .bash_profile
[hadoop@server1 ~]$ jps
942 Jps
[hadoop@server1 ~]$ cd /home/hadoop/hadoop
[hadoop@server1 hadoop]$ pwd
/home/hadoop/hadoop
[hadoop@server1 hadoop]$ mkdir input
[hadoop@server1 hadoop]$ cp etc/hadoop/*.xml input/
[hadoop@server1 hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.3.jar grep input output 'dfs[a-z.]+'
[hadoop@server1 hadoop]$ cd output/
[hadoop@server1 output]$ ls
part-r-00000  _SUCCESS
[hadoop@server1 output]$ cat *
1	dfsadmin

Part 2. Pseudo-distributed
1. Edit the configuration files

[hadoop@server1 output]$ cd /home/hadoop/hadoop/etc/hadoop/
[hadoop@server1 hadoop]$ pwd
/home/hadoop/hadoop/etc/hadoop
[hadoop@server1 hadoop]$ vim core-site.xml
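The file contents are not reproduced here; a sketch of a typical pseudo-distributed core-site.xml (the localhost address is an assumption, consistent with the "Starting namenodes on [localhost]" output in step 3):

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>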


[hadoop@server1 hadoop]$ vim hdfs-site.xml 
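And a sketch of hdfs-site.xml for the single-node case, assuming a replication factor of 1 (the distributed section later raises it to 2):

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>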


2. Generate keys for passwordless SSH

[hadoop@server1 sbin]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
8e:f1:1d:4e:29:e2:56:eb:42:4f:79:f2:f3:89:42:69 hadoop@server1
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|                 |
|           .     |
|      o So+      |
|     ..BE*..     |
|     .+=++o      |
|     ...o o. .   |
|       ....oo    |
+-----------------+
[hadoop@server1 sbin]$ ssh-copy-id localhost
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
hadoop@localhost's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'localhost'"
and check to make sure that only the key(s) you wanted were added.
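A quick check that the passwordless login works before moving on (start-dfs.sh depends on it); a sketch:

[hadoop@server1 sbin]$ ssh localhost hostname   ## should print server1 with no password prompt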

3. Format and start the services

[hadoop@server1 hadoop]$ bin/hdfs namenode -format
[hadoop@server1 hadoop]$ pwd
/home/hadoop/hadoop
[hadoop@server1 sbin]$ ./start-dfs.sh 
Starting namenodes on [localhost]
Starting datanodes
Starting secondary namenodes [server1]
[hadoop@server1 sbin]$ jps
11060 NameNode
11159 DataNode
11341 SecondaryNameNode
11485 Jps

4. Check in the browser

http://172.25.11.1:9870
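In Hadoop 3.x the NameNode web UI listens on port 9870 (it was 50070 in 2.x). If a browser is not available, a rough equivalent check from the command line (a sketch):

[hadoop@server1 hadoop]$ bin/hdfs dfsadmin -report   ## shows capacity and the registered DataNode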

5. Test: create a directory and upload files

[hadoop@server1 hadoop]$ bin/hdfs dfs -mkdir -p /user/hadoop
[hadoop@server1 hadoop]$ bin/hdfs dfs -ls
[hadoop@server1 hadoop]$ bin/hdfs dfs -put input
[hadoop@server1 hadoop]$ bin/hdfs dfs -ls
Found 1 items
drwxr-xr-x   - hadoop supergroup          0 2019-05-19 01:47 input


6. Delete the local input and output directories and rerun the job

[hadoop@server1 hadoop]$ rm -fr input/ output/
[hadoop@server1 hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.3.jar grep input output 'dfs[a-z.]+'
[hadoop@server1 hadoop]$ ls
bin  etc  include  lib  libexec  LICENSE.txt  logs  NOTICE.txt  README.txt  sbin  share

This time the input and output directories do not appear in the current directory; they were written to the distributed file system instead, and can be seen in the web UI.

[hadoop@server1 hadoop]$ bin/hdfs dfs -cat output/*
1	dfsadmin
[hadoop@server1 hadoop]$ bin/hdfs dfs -get output  ## fetch the output directory from the distributed file system
[hadoop@server1 hadoop]$ cd output/
[hadoop@server1 output]$ ls
part-r-00000  _SUCCESS

Part 3. Distributed
1. Stop the services and clear the old data

[hadoop@server1 hadoop]$ sbin/stop-dfs.sh 
Stopping namenodes on [localhost]
Stopping datanodes
Stopping secondary namenodes [server1]
[hadoop@server1 hadoop]$ cd /tmp/
[hadoop@server1 tmp]$ ls
hadoop  hadoop-hadoop  hsperfdata_hadoop
[hadoop@server1 tmp]$ rm -fr *


2. Bring up two new virtual machines to serve as worker nodes
Create the hadoop user on each:

[root@server2 ~]# useradd -u 1000 hadoop
[root@server3 ~]# useradd -u 1000 hadoop

Install nfs-utils:

[root@server1 ~]# yum install -y nfs-utils
[root@server2 ~]# yum install -y nfs-utils
[root@server3 ~]# yum install -y nfs-utils

[root@server1 ~]# systemctl start rpcbind
[root@server2 ~]# systemctl start rpcbind
[root@server3 ~]# systemctl start rpcbind

3. Start and configure the NFS service on server1

[root@server1 ~]# systemctl start nfs-server
[root@server1 ~]# vim /etc/exports
/home/hadoop   *(rw,anonuid=1000,anongid=1000)

[root@server1 ~]# exportfs -rv
exporting *:/home/hadoop
[root@server1 ~]# showmount -e
Export list for server1:
/home/hadoop *


4. Mount on server2 and server3

[root@server2 ~]# vim /etc/hosts
[root@server2 ~]# mount 172.25.11.1:/home/hadoop /home/hadoop
[root@server2 ~]# df
Filesystem               1K-blocks    Used Available Use% Mounted on
/dev/sda3                 18351104 1092520  17258584   6% /
devtmpfs                    498480       0    498480   0% /dev
tmpfs                       508264       0    508264   0% /dev/shm
tmpfs                       508264    6736    501528   2% /run
tmpfs                       508264       0    508264   0% /sys/fs/cgroup
/dev/sda1                   508580  110596    397984  22% /boot
tmpfs                       101656       0    101656   0% /run/user/0
172.25.11.1:/home/hadoop  18351104 2790656  15560448  16% /home/hadoop


[root@server3 ~]# mount 172.25.11.1:/home/hadoop /home/hadoop
[root@server3 ~]# df
Filesystem               1K-blocks    Used Available Use% Mounted on
/dev/sda3                 18351104 1092468  17258636   6% /
devtmpfs                    498480       0    498480   0% /dev
tmpfs                       508264       0    508264   0% /dev/shm
tmpfs                       508264    6736    501528   2% /run
tmpfs                       508264       0    508264   0% /sys/fs/cgroup
/dev/sda1                   508580  110596    397984  22% /boot
tmpfs                       101656       0    101656   0% /run/user/0
172.25.11.1:/home/hadoop  18351104 2790656  15560448  16% /home/hadoop

Because the hadoop user's home directory (including ~/.ssh) is now shared over NFS, the hadoop user can log in directly from server1 to server2 and server3 without a password.
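A sketch of how to verify this, using the worker addresses configured below:

[hadoop@server1 ~]$ ssh 172.25.11.2 hostname   ## should return server2 with no password prompt
[hadoop@server1 ~]$ ssh 172.25.11.3 hostname   ## should return server3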

5. Re-edit the configuration files

[root@server1 hadoop]# pwd
/home/hadoop/hadoop/etc/hadoop
[root@server1 hadoop]# vim core-site.xml

<configuration>
   <property>
      <name>fs.defaultFS</name>
      <value>hdfs://172.25.11.1:9000</value>
  </property>
</configuration>


[root@server1 hadoop]# vim hdfs-site.xml

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value> ## set to 2 because there are now two DataNodes
    </property>
</configuration>


[root@server1 hadoop]# vim workers
[root@server1 hadoop]# cat workers 
172.25.11.2
172.25.11.3

Because /home/hadoop is shared over NFS, editing the file in one place makes it visible on every node:

[hadoop@server2 hadoop]$ cat workers 
172.25.11.2
172.25.11.3

[root@server3 ~]# cat /home/hadoop/hadoop/etc/hadoop/workers 
172.25.11.2
172.25.11.3

6. Format and start the services

[hadoop@server1 hadoop]$ pwd
/home/hadoop/hadoop
[hadoop@server1 hadoop]$ bin/hdfs namenode -format
[hadoop@server1 hadoop]$ sbin/start-dfs.sh 
Starting namenodes on [server1]
Starting datanodes
172.25.11.2: Warning: Permanently added '172.25.11.2' (ECDSA) to the list of known hosts.
172.25.11.3: Warning: Permanently added '172.25.11.3' (ECDSA) to the list of known hosts.
Starting secondary namenodes [server1]
[hadoop@server1 hadoop]$ vim /etc/hosts
[hadoop@server1 hadoop]$ jps
14000 SecondaryNameNode ## a SecondaryNameNode is also started
13814 NameNode
14153 Jps

The DataNode process can be seen on the worker nodes:

[hadoop@server2 ~]$ jps
10336 Jps
10273 DataNode
[hadoop@server3 ~]$ jps
1141 DataNode
1230 Jps

7. Test

[hadoop@server1 hadoop]$ bin/hdfs dfs -mkdir -p /user/hadoop
[hadoop@server1 hadoop]$ bin/hdfs dfs -mkdir input
[hadoop@server1 hadoop]$ bin/hdfs dfs -put etc/hadoop/*.xml input
[hadoop@server1 hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.3.jar grep input output 'dfs[a-z.]+'
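As in the pseudo-distributed run, the result can be read straight out of HDFS; the expected output is the same as before:

[hadoop@server1 hadoop]$ bin/hdfs dfs -cat output/*
1	dfsadmin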

8. The web UI now shows two DataNodes, and the data has been uploaded (the same can be checked from the command line, as sketched below)
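A sketch of the equivalent command-line check:

[hadoop@server1 hadoop]$ bin/hdfs dfsadmin -report | grep -E 'Live datanodes|Name:'   ## expect Live datanodes (2) and one Name: line per worker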

9. Use server4 to simulate a client

[root@server4 ~]# useradd -u  1000 hadoop
[root@server4 ~]# vim /etc/hosts
[root@server4 ~]# yum  install -y  nfs-utils

[root@server4 ~]# systemctl start rpcbind
[root@server4 ~]# mount 172.25.11.1:/home/hadoop /home/hadoop
[root@server4 ~]# su - hadoop
[hadoop@server4 ~]$ cd /home/hadoop/hadoop/etc/hadoop/
[hadoop@server4 hadoop]$ vim workers
[hadoop@server4 hadoop]$ cat workers
172.25.11.2
172.25.11.3
172.25.11.4
[hadoop@server4 hadoop]$ cd /home/hadoop/hadoop
[hadoop@server4 hadoop]$ sbin/hadoop-daemon.sh start datanode
[hadoop@server4 hadoop]$ jps
2609 Jps
2594 DataNode

Check in the browser: the node has been added successfully.

[hadoop@server4 hadoop]$ dd if=/dev/zero of=bigfile bs=1M count=500
500+0 records in
500+0 records out
524288000 bytes (524 MB) copied, 25.8653 s, 20.3 MB/s
[hadoop@server4 hadoop]$ bin/hdfs dfs -put bigfile
The web UI shows that bigfile has been uploaded successfully.
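To see how the 500 MB file was split into blocks and which DataNodes hold each replica, hdfs fsck can be used; a sketch (the path assumes the /user/hadoop home directory created earlier):

[hadoop@server4 hadoop]$ bin/hdfs fsck /user/hadoop/bigfile -files -blocks -locations   ## with the default 128 MB block size this should list 4 blocks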