Hadoop installation:
[root@server1 ~]# useradd hadoop
[root@server1 ~]# passwd hadoop
[root@server1 ~]# id hadoop
uid=500(hadoop) gid=500(hadoop) groups=500(hadoop)
[root@server1 ~]# su - hadoop
[hadoop@server1 ~]$ tar zxf jdk-7u79-linux-x64.tar.gz
[hadoop@server1 ~]$ tar zxf hadoop-2.7.3.tar.gz
[hadoop@server1 ~]$ ln -s jdk1.7.0_79 jdk
[hadoop@server1 ~]$ ln -s hadoop-2.7.3 hadoop
[hadoop@server1 ~]$ vim hadoop/etc/hadoop/hadoop-env.sh
25 export JAVA_HOME=/home/hadoop/jdk
[hadoop@server1 ~]$ mkdir hadoop/input
#Copy some data into the input directory for testing:
[hadoop@server1 ~]$ cp hadoop/etc/hadoop/* hadoop/input/
[hadoop@server1 ~]$ cd hadoop
#Grep strings beginning with dfs from the files in input into the output directory (the output directory is created automatically):
[hadoop@server1 hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar grep input output 'dfs[a-z.]+'
[hadoop@server1 hadoop]$ cat output/*
6 dfs.audit.logger
4 dfs.class
3 dfs.server.namenode.
2 dfs.period
2 dfs.audit.log.maxfilesize
2 dfs.audit.log.maxbackupindex
1 dfsmetrics.log
1 dfsadmin
1 dfs.servers
1 dfs.file
#Count the words in input and write the results to the output directory:
[hadoop@server1 hadoop]$ rm -rf output/
[hadoop@server1 hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount input output
[hadoop@server1 hadoop]$ cat output/*
...
Hadoop single-node cluster:
[hadoop@server1 hadoop]$ vim /home/hadoop/.bash_profile
PATH=$PATH:$HOME/bin:/home/hadoop/jdk/bin
[hadoop@server1 hadoop]$ source /home/hadoop/.bash_profile
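#A quick sanity check (optional) that the JDK is now on the PATH:
[hadoop@server1 hadoop]$ which java #should resolve to /home/hadoop/jdk/bin/java
[hadoop@server1 hadoop]$ java -version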
[hadoop@server1 hadoop]$ vim etc/hadoop/core-site.xml
19 <configuration>
20 <property>
21 <name>fs.defaultFS</name>
22 <value>hdfs://172.25.254.1:9000</value>
23 </property>
24 </configuration>
[hadoop@server1 hadoop]$ vim etc/hadoop/slaves
172.25.254.1
[hadoop@server1 hadoop]$ vim etc/hadoop/hdfs-site.xml
19 <configuration>
20 <property>
21 <name>dfs.replication</name>
22 <value>1</value>
23 </property>
24 </configuration>
#Set up passwordless SSH login by copying the key to 172.25.254.1:
[hadoop@server1 hadoop]$ ssh-keygen #press Enter at every prompt to accept the defaults
[hadoop@server1 hadoop]$ ssh-copy-id 172.25.254.1
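#Optionally verify that passwordless login works; this should print the hostname without asking for a password:
[hadoop@server1 hadoop]$ ssh 172.25.254.1 hostname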
#Format the NameNode storage:
[hadoop@server1 hadoop]$ bin/hdfs namenode -format
#Start the HDFS services:
[hadoop@server1 hadoop]$ sbin/start-dfs.sh
#Check the services running on server1:
[hadoop@server1 hadoop]$ jps
2202 Jps
2087 SecondaryNameNode
1904 DataNode #HDFS data storage service
1811 NameNode #HDFS master (metadata) service
#Create /user/hadoop/input on the Hadoop filesystem (note that this is not the same thing as the input directory under /home/hadoop/hadoop, which is only a local path; here an empty input directory is being created in HDFS)
[hadoop@server1 hadoop]$ bin/hdfs dfs -mkdir /user
[hadoop@server1 hadoop]$ bin/hdfs dfs -mkdir /user/hadoop
[hadoop@server1 hadoop]$ bin/hdfs dfs -put input input #upload the contents of the local input directory to the input directory in HDFS
[hadoop@server1 hadoop]$ bin/hdfs dfs -ls input #list the contents of the input directory in HDFS...
#Since this is a single-node cluster, the HDFS data is stored locally under /tmp/hadoop-hadoop/dfs/
[hadoop@server1 hadoop]$ rm -rf input/ output/ #delete the local input and output directories
#Count the words in input and write the results to the output directory in HDFS:
[hadoop@server1 hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount input output
[hadoop@server1 hadoop]$ ls #no output directory is created locally, but one has been created in HDFS
bin include libexec logs README.txt share
etc lib LICENSE.txt NOTICE.txt sbin
[hadoop@server1 hadoop]$ bin/hdfs dfs -ls output #list the output directory in HDFS
Found 2 items
-rw-r--r-- 1 hadoop supergroup 0 2018-03-06 20:36 output/_SUCCESS
-rw-r--r-- 1 hadoop supergroup 36184 2018-03-06 20:36 output/part-r-00000
#Hadoop provides a web interface; visiting 172.25.254.1:50070 gives a graphical view of the contents stored in the cluster:

[hadoop@server1 hadoop]$ bin/hdfs dfs -cat output/* #view the contents of output in HDFS
[hadoop@server1 hadoop]$ bin/hdfs dfs -get output #download the output directory from HDFS to the local machine
#Check the downloaded contents locally:
[hadoop@server1 hadoop]$ ls output/
part-r-00000 _SUCCESS
[hadoop@server1 hadoop]$ bin/hdfs dfs -rmr output #delete the output directory in HDFS
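#Note: -rmr is deprecated in Hadoop 2.x (it still works but prints a warning); the equivalent non-deprecated form is:
[hadoop@server1 hadoop]$ bin/hdfs dfs -rm -r output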

Distribute the data storage across two nodes (server2|3):
server1:
[hadoop@server1 hadoop]$ sbin/stop-dfs.sh
Stopping namenodes on [server1]
server1: stopping namenode
172.25.254.1: stopping datanode
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
[hadoop@server1 hadoop]$ logout
#Set up an NFS export and mount it at /home/hadoop on server2|3, so that any configuration change made on server1 appears identically on the other hosts (no need to redo passwordless SSH, the Java environment, and so on):
[root@server1 ~]# yum install -y nfs-utils
[root@server1 ~]# /etc/init.d/rpcbind start
[root@server1 ~]# vim /etc/exports
/home/hadoop *(rw,anonuid=500,anongid=500)
[root@server1 ~]# /etc/init.d/nfs start
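#A quick check (optional) that the export is active before mounting it from the other hosts:
[root@server1 ~]# exportfs -v #should list /home/hadoop with the rw,anonuid=500,anongid=500 options
[root@server1 ~]# showmount -e localhost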
server2|3:
[root@server2 ~]# yum install -y nfs-utils
[root@server2 ~]# /etc/init.d/rpcbind start
#Hadoop requires the hadoop user's uid and gid to be identical on all hosts:
[root@server2 ~]# useradd hadoop
[root@server2 ~]# id hadoop
uid=500(hadoop) gid=500(hadoop) groups=500(hadoop)
#The clocks on all three hosts must be kept in sync:
[root@server2 ~]# date
Tue Mar 6 21:39:00 CST 2018
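#If the clocks have drifted, one way to sync them is ntpdate (a sketch; 172.25.254.250 is a hypothetical NTP server, substitute a real one):
[root@server2 ~]# yum install -y ntpdate
[root@server2 ~]# ntpdate 172.25.254.250 #hypothetical NTP server address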
[root@server2 ~]# mount 172.25.254.1:/home/hadoop/ /home/hadoop/
[root@server2 ~]# su - hadoop
server1:
[root@server1 ~]# su - hadoop
[hadoop@server1 ~]$ cd /home/hadoop/hadoop
[hadoop@server1 hadoop]$ vim etc/hadoop/slaves
172.25.254.2
172.25.254.3
[hadoop@server1 hadoop]$ vim etc/hadoop/hdfs-site.xml
19 <configuration>
20 <property>
21 <name>dfs.replication</name>
22 <value>2</value>
23 </property>
24 </configuration>
#Delete the old data (Hadoop stores its data under /tmp on the datanode side):
[hadoop@server1 hadoop]$ rm -rf /tmp/*
[hadoop@server1 hadoop]$ bin/hdfs namenode -format
[hadoop@server1 hadoop]$ sbin/start-dfs.sh
Starting namenodes on [server1]
server1: starting namenode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-namenode-server1.out
172.25.254.3: starting datanode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-datanode-server3.out
172.25.254.2: starting datanode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-datanode-server2.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/hadoop-2.7.3/logs/hadoop-hadoop-secondarynamenode-server1.out
#It can be seen that the data nodes have moved to server2 and server3:
[hadoop@server1 hadoop]$ jps
4131 SecondaryNameNode
4245 Jps
3943 NameNode
[hadoop@server2 ~]$ jps
1269 DataNode
1342 Jps
[hadoop@server3 ~]$ jps
1254 DataNode
1327 Jps
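#To confirm from server1 that both datanodes have registered, the dfsadmin report can be used (it should show "Live datanodes (2)"):
[hadoop@server1 hadoop]$ bin/hdfs dfsadmin -report | grep -i 'live datanodes'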
Adding and removing data storage nodes:
Start the NodeManagers (YARN):
[hadoop@server1 hadoop]$ bin/hdfs dfs -mkdir /user
[hadoop@server1 hadoop]$ bin/hdfs dfs -mkdir /user/hadoop
[hadoop@server1 hadoop]$ bin/hdfs dfs -put etc/hadoop/ input
[hadoop@server1 hadoop]$ cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml
[hadoop@server1 hadoop]$ vim etc/hadoop/mapred-site.xml
19 <configuration>
20 <property>
21 <name>mapreduce.framework.name</name>
22 <value>yarn</value>
23 </property>
24 </configuration>
[hadoop@server1 hadoop]$ vim etc/hadoop/yarn-site.xml
15 <configuration>
16 <property>
17 <name>yarn.nodemanager.aux-services</name>
18 <value>mapreduce_shuffle</value>
19 </property>
20 </configuration>
[hadoop@server1 hadoop]$ sbin/start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/hadoop-2.7.3/logs/yarn-hadoop-resourcemanager-server1.out
172.25.254.2: starting nodemanager, logging to /home/hadoop/hadoop-2.7.3/logs/yarn-hadoop-nodemanager-server2.out
172.25.254.3: starting nodemanager, logging to /home/hadoop/hadoop-2.7.3/logs/yarn-hadoop-nodemanager-server3.out
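#A jps check (optional) after start-yarn.sh: ResourceManager should appear on server1 and NodeManager on server2|3:
[hadoop@server1 hadoop]$ jps #expect ResourceManager alongside NameNode and SecondaryNameNode
[hadoop@server2 ~]$ jps #expect NodeManager alongside DataNode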
#Create a 500MB test file named bigfile:
[hadoop@server1 hadoop]$ dd if=/dev/zero of=bigfile bs=1M count=500
[hadoop@server1 hadoop]$ bin/hdfs dfs -put bigfile #upload bigfile to HDFS


#(Web UI screenshots omitted.) Two things can be observed: 1. With replication set to 2, the two-datanode cluster stores one copy of bigfile on server2 and one on server3, so roughly 1000MB of cluster space is consumed in total.
#2. On each node bigfile is split into 4 blocks, because the default block size is 128MB (configurable); any large file is split into blocks of this size for storage.
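#The block layout can also be inspected from the command line instead of the web UI, using hdfs fsck:
[hadoop@server1 hadoop]$ bin/hdfs fsck /user/hadoop/bigfile -files -blocks -locations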
#Add one more host, server4:
[root@server4 ~]# groupadd -g 500 hadoop
[root@server4 ~]# useradd -u 500 -g 500 hadoop
[root@server4 ~]# yum install -y nfs-utils
[root@server4 ~]# /etc/init.d/rpcbind start
[root@server4 ~]# mount 172.25.254.1:/home/hadoop/ /home/hadoop/
[root@server4 ~]# su - hadoop
[hadoop@server4 ~]$ cd /home/hadoop/hadoop
[hadoop@server4 hadoop]$ vim etc/hadoop/slaves
172.25.254.2
172.25.254.3
172.25.254.4
[hadoop@server4 hadoop]$ sbin/hadoop-daemon.sh start datanode
[hadoop@server4 hadoop]$ jps
1202 Jps
1164 DataNode

#The new datanode has now been added successfully
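#Note that HDFS does not automatically redistribute existing blocks onto a newly added node; if a more even distribution is wanted, the balancer can be run (optional; threshold is a percentage):
[hadoop@server1 hadoop]$ sbin/start-balancer.sh -threshold 10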
#Remove a node (server3) and hand its data over to server4:
[hadoop@server1 hadoop]$ vim etc/hadoop/hdfs-site.xml
19 <configuration>
20 <property>
21 <name>dfs.replication</name>
22 <value>2</value>
23 </property>
24 <property>
25 <name>dfs.hosts.exclude</name>
26 <value>/home/hadoop/hadoop/etc/hadoop/hosts.exclude</value>
27 </property>
28 </configuration>
[hadoop@server1 hadoop]$ vim etc/hadoop/hosts.exclude #list the IPs of the datanodes to be decommissioned
172.25.254.3
[hadoop@server1 hadoop]$ bin/hdfs dfsadmin -refreshNodes
[hadoop@server1 hadoop]$ bin/hdfs dfsadmin -report
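#Decommissioning takes a while; a hedged check is to re-run the report until the entry for 172.25.254.3 shows "Decommissioned" before its datanode process is stopped:
[hadoop@server1 hadoop]$ bin/hdfs dfsadmin -report | grep -B2 'Decommission Status'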
server3:
[hadoop@server3 ~]$ cd /home/hadoop/hadoop
[hadoop@server3 hadoop]$ sbin/hadoop-daemon.sh stop datanode
stopping datanode
[hadoop@server3 hadoop]$ jps
1645 Jps

#(Screenshot omitted.) It can be seen that server3 has stopped serving and that server4 has taken over the 500MB of data that was on server3
[hadoop@server1 hadoop]$ vim etc/hadoop/slaves
172.25.254.2
172.25.254.4
Install ZooKeeper:
server1:
[hadoop@server1 hadoop]$ sbin/stop-all.sh
server1|2|3|4:
[hadoop@server1 hadoop]$ rm -rf /tmp/*
server5:
Add one more host, server5
Mount the NFS filesystem:
[root@server5 ~]# yum install nfs-utils -y
[root@server5 ~]# /etc/init.d/rpcbind start
[root@server5 ~]# useradd -u 500 hadoop
[root@server5 ~]# mount 172.25.254.1:/home/hadoop/ /home/hadoop/
[root@server5 ~]# su - hadoop
[hadoop@server1 ~]$ tar zxf zookeeper-3.4.9.tar.gz
[hadoop@server1 ~]$ cd zookeeper-3.4.9/conf
[hadoop@server1 conf]$ cp zoo_sample.cfg zoo.cfg
[hadoop@server1 conf]$ vim zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/tmp/zookeeper
clientPort=2181
server.1=172.25.254.2:2888:3888
server.2=172.25.254.3:2888:3888
server.3=172.25.254.4:2888:3888
server2:
[hadoop@server2 ~]$ mkdir /tmp/zookeeper
[hadoop@server2 ~]$ echo 1 > /tmp/zookeeper/myid
server3:
[hadoop@server3 ~]$ mkdir /tmp/zookeeper
[hadoop@server3 ~]$ echo 2 > /tmp/zookeeper/myid
server4:
[hadoop@server4 ~]$ mkdir /tmp/zookeeper
[hadoop@server4 ~]$ echo 3 > /tmp/zookeeper/myid
server2|3|4:
cd /home/hadoop/zookeeper-3.4.9
bin/zkServer.sh start
Verify on each datanode:
[hadoop@server2 zookeeper-3.4.9]$ bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/hadoop/zookeeper-3.4.9/bin/../conf/zoo.cfg
Mode: follower
[hadoop@server3 zookeeper-3.4.9]$ bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/hadoop/zookeeper-3.4.9/bin/../conf/zoo.cfg
Mode: follower
[hadoop@server4 zookeeper-3.4.9]$ bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/hadoop/zookeeper-3.4.9/bin/../conf/zoo.cfg
Mode: leader
[hadoop@server1 zookeeper-3.4.9]$ cd /home/hadoop/hadoop/etc/hadoop/
[hadoop@server1 hadoop]$ vim core-site.xml
19 <configuration>
20 <property>
21 <name>fs.defaultFS</name>
22 <value>hdfs://masters</value>
23 </property>
24 <property>
25 <name>ha.zookeeper.quorum</name>
26 <value>172.25.254.2:2181,172.25.254.3:2181,172.25.254.4:2181</value>
27 </property>
28 </configuration>
[hadoop@server1 hadoop]$ vim slaves
172.25.254.2
172.25.254.3
172.25.254.4
[hadoop@server1 hadoop]$ vim hdfs-site.xml
19 <configuration>
20 <property>
21 <name>dfs.replication</name>
22 <value>3</value>
23 </property>
24 <property>
25 <name>dfs.nameservices</name>
26 <value>masters</value>
27 </property>
28 <property>
29 <name>dfs.ha.namenodes.masters</name>
30 <value>h1,h2</value>
31 </property>
32 <property>
33 <name>dfs.namenode.rpc-address.masters.h1</name>
34 <value>172.25.254.1:9000</value>
35 </property>
36 <property>
37 <name>dfs.namenode.http-address.masters.h1</name>
38 <value>172.25.254.1:50070</value>
39 </property>
40 <property>
41 <name>dfs.namenode.rpc-address.masters.h2</name>
42 <value>172.25.254.5:9000</value>
43 </property>
44 <property>
45 <name>dfs.namenode.http-address.masters.h2</name>
46 <value>172.25.254.5:50070</value>
47 </property>
48 <property>
49 <name>dfs.namenode.shared.edits.dir</name>
50 <value>qjournal://172.25.254.2:8485;172.25.254.3:8485;172.25.254.4:8485/masters</value>
51 </property>
52 <property>
53 <name>dfs.journalnode.edits.dir</name>
54 <value>/tmp/journaldata</value>
55 </property>
56 <property>
57 <name>dfs.ha.automatic-failover.enabled</name>
58 <value>true</value>
59 </property>
60 <property>
61 <name>dfs.client.failover.proxy.provider.masters</name>
62 <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
63 </property>
64 <property>
65 <name>dfs.ha.fencing.methods</name>
66 <value>
67 sshfence
68 shell(/bin/true)
69 </value>
70 </property>
71 <property>
72 <name>dfs.ha.fencing.ssh.private-key-files</name>
73 <value>/home/hadoop/.ssh/id_rsa</value>
74 </property>
75 <property>
76 <name>dfs.ha.fencing.ssh.connect-timeout</name>
77 <value>30000</value>
78 </property>
79 </configuration>
server2|3|4:
cd ~/hadoop
sbin/hadoop-daemon.sh start journalnode
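#jps on each of server2|3|4 should now show a JournalNode process before the namenode is formatted:
jps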
server1:
[hadoop@server1 hadoop]$ bin/hdfs namenode -format
[hadoop@server1 hadoop]$ scp -r /tmp/hadoop-hadoop server5:/tmp
[hadoop@server1 hadoop]$ bin/hdfs zkfc -formatZK
[hadoop@server1 hadoop]$ sbin/start-dfs.sh
[hadoop@server1 hadoop]$ bin/hdfs dfs -mkdir /user
[hadoop@server1 hadoop]$ bin/hdfs dfs -mkdir /user/hadoop
[hadoop@server1 hadoop]$ bin/hdfs dfs -put etc/hadoop/ test
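#With automatic failover enabled, the active/standby roles of the two namenodes can be checked with haadmin (h1 and h2 are the namenode IDs defined in hdfs-site.xml; one should report active and the other standby):
[hadoop@server1 hadoop]$ bin/hdfs haadmin -getServiceState h1
[hadoop@server1 hadoop]$ bin/hdfs haadmin -getServiceState h2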
YARN high availability:
[hadoop@server1 hadoop]$ vim etc/hadoop/mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
[hadoop@server1 hadoop]$ vim etc/hadoop/yarn-site.xml
15 <configuration>
16 <property>
17 <name>yarn.nodemanager.aux-services</name>
18 <value>mapreduce_shuffle</value>
19 </property>
20 <property>
21 <name>yarn.resourcemanager.ha.enabled</name>
22 <value>true</value>
23 </property>
24 <property>
25 <name>yarn.resourcemanager.cluster-id</name>
26 <value>RM_CLUSTER</value>
27 </property>
28 <property>
29 <name>yarn.resourcemanager.ha.rm-ids</name>
30 <value>rm1,rm2</value>
31 </property>
32 <property>
33 <name>yarn.resourcemanager.hostname.rm1</name>
34 <value>172.25.254.1</value>
35 </property>
36 <property>
37 <name>yarn.resourcemanager.hostname.rm2</name>
38 <value>172.25.254.5</value>
39 </property>
40 <property>
41 <name>yarn.resourcemanager.recovery.enabled</name>
42 <value>true</value>
43 </property>
44 <property>
45 <name>yarn.resourcemanager.store.class</name>
46 <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
47 </property>
48 <property>
49 <name>yarn.resourcemanager.zk-address</name>
50 <value>172.25.254.2:2181,172.25.254.3:2181,172.25.254.4:2181</value>
51 </property>
52 </configuration>
[hadoop@server1 hadoop]$ sbin/start-yarn.sh
[hadoop@server5 hadoop]$ sbin/yarn-daemon.sh start resourcemanager
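#The ResourceManager HA state can be checked in the same way with rmadmin (rm1/rm2 as defined in yarn-site.xml):
[hadoop@server1 hadoop]$ bin/yarn rmadmin -getServiceState rm1
[hadoop@server1 hadoop]$ bin/yarn rmadmin -getServiceState rm2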
HBase:
[hadoop@server1 ~]$ tar zxf hbase-1.2.4-bin.tar.gz
[hadoop@server1 ~]$ cd hbase-1.2.4/conf/
[hadoop@server1 conf]$ vim hbase-env.sh
27 export JAVA_HOME=/home/hadoop/jdk
28 export HADOOP_HOME=/home/hadoop/hadoop
128 export HBASE_MANAGES_ZK=false
[hadoop@server1 conf]$ vim hbase-site.xml
23 <configuration>
24 <property>
25 <name>hbase.rootdir</name>
26 <value>hdfs://masters/hbase</value>
27 </property>
28 <property>
29 <name>hbase.cluster.distributed</name>
30 <value>true</value>
31 </property>
32 <property>
33 <name>hbase.zookeeper.quorum</name>
34 <value>172.25.254.2,172.25.254.3,172.25.254.4</value>
35 </property>
36 <property>
37 <name>dfs.replication</name>
38 <value>2</value>
39 </property>
40 <property>
41 <name>hbase.master</name>
42 <value>h1</value>
43 </property>
44 </configuration>
[hadoop@server1 conf]$ vim regionservers
172.25.254.2
172.25.254.3
172.25.254.4
[hadoop@server1 hbase-1.2.4]$ bin/start-hbase.sh
starting master, logging to /home/hadoop/hbase-1.2.4/bin/../logs/hbase-hadoop-master-server1.out
172.25.254.3: starting regionserver, logging to /home/hadoop/hbase-1.2.4/bin/../logs/hbase-hadoop-regionserver-server3.out
172.25.254.4: starting regionserver, logging to /home/hadoop/hbase-1.2.4/bin/../logs/hbase-hadoop-regionserver-server4.out
172.25.254.2: starting regionserver, logging to /home/hadoop/hbase-1.2.4/bin/../logs/hbase-hadoop-regionserver-server2.out
[hadoop@server1 hbase-1.2.4]$ jps
5858 HMaster
5963 Jps
3543 NameNode
3347 DFSZKFailoverController
5551 ResourceManager
server5:
[hadoop@server5 hbase-1.2.4]$ bin/hbase-daemon.sh start master
starting master, logging to /home/hadoop/hbase-1.2.4/bin/../logs/hbase-hadoop-master-server5.out
server1:
[hadoop@server1 hbase-1.2.4]$ bin/hbase shell
hbase(main):001:0> create 'test', 'cf'
0 row(s) in 9.5020 seconds
=> Hbase::Table - test
hbase(main):002:0> list 'test'
TABLE
test
1 row(s) in 0.1020 seconds
=> ["test"]
hbase(main):003:0> put 'test', 'row1', 'cf:a', 'value1'
0 row(s) in 0.3770 seconds
hbase(main):004:0> put 'test', 'row2', 'cf:b', 'value2'
0 row(s) in 0.0090 seconds
hbase(main):005:0> put 'test', 'row3', 'cf:c', 'value3'
0 row(s) in 0.0120 seconds
hbase(main):006:0> scan 'test'
ROW COLUMN+CELL
row1 column=cf:a, timestamp=1520843181549, value=value1
row2 column=cf:b, timestamp=1520843188744, value=value2
row3 column=cf:c, timestamp=1520843194493, value=value3
3 row(s) in 0.0970 seconds
hbase(main):007:0> quit
Test high availability:
[hadoop@server1 hadoop]$ bin/hdfs dfs -ls /
Found 2 items
drwxr-xr-x - hadoop supergroup 0 2018-03-12 15:49 /hbase
drwxr-xr-x - hadoop supergroup 0 2018-03-12 14:45 /user
[hadoop@server1 hadoop]$ jps
5858 HMaster
3543 NameNode
6574 Jps
3347 DFSZKFailoverController
5551 ResourceManager
[hadoop@server1 hadoop]$ kill 5858
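#Before restarting, one way to confirm that the backup HMaster on server5 has taken over as the active master (a hedged check; it may take a few seconds after the kill):
[hadoop@server1 hbase-1.2.4]$ echo "status" | bin/hbase shell #the active master reported should now be server5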
[hadoop@server1 hbase-1.2.4]$ bin/start-hbase.sh
[hadoop@server1 hbase-1.2.4]$ bin/hbase shell #run scan 'test' to verify the data is still there