Blog has moved to: http://www.micmiu.com
[size=medium]Hadoop is a distributed system infrastructure, a project under the Apache Foundation. It lets users develop distributed programs without knowing the low-level details of distribution, harnessing a cluster for high-speed computation and storage. Hadoop implements a distributed file system, the Hadoop Distributed File System (HDFS). HDFS is highly fault-tolerant, designed to run on low-cost hardware, and provides high-throughput access to application data, making it well suited to applications with very large data sets. HDFS relaxes certain POSIX requirements to allow streaming access to file system data.
Hadoop official site: [url=http://hadoop.apache.org/]http://hadoop.apache.org/[/url][/size]
[size=medium][color=red]This article walks through setting up a Hadoop test environment, covering the standalone and pseudo-distributed modes in turn, and describes the problems you may hit (and their fixes) when setting up on different operating systems (CentOS, Ubuntu). Every step below was tested successfully on both CentOS and Ubuntu. Outline:[/color]
[color=blue][list]
[*]Experiment environment
[*]Preparation
[*]Standalone demo
[*]Pseudo-distributed demo[/list][/color][/size]
[color=blue][size=large]I. Experiment Environment[/size][/color]
[list]
[*]Windows Vista
[*]VirtualBox + Ubuntu 10.10 (with OpenSSH installed and running)
[*]JDK version 1.6.0_20, installed at /opt/jdk1.6 (Hadoop requires JDK 1.6.x)
[*]hadoop-0.20.203.0rc1.tar.gz (the latest stable release at the time of writing)
[/list]
The examples below use the Ubuntu user michael:
[color=blue][size=large]II. Preparation[/size][/color]
First, extract hadoop-0.20.203.0rc1.tar.gz under /home/michael/ and rename the directory:
[quote]$ tar -zxvf hadoop-0.20.203.0rc1.tar.gz -C /home/michael/
$ mv hadoop-0.20.203.0 hadoop
[/quote]
Next, edit hadoop-env.sh and set JAVA_HOME. Find the following lines:
[quote]# The java implementation to use. Required.
# export JAVA_HOME=/usr/lib/j2sdk1.5-sun
[/quote]
and change them to:
[quote]# The java implementation to use. Required.
# Path to the JDK installed on this system
export JAVA_HOME=/opt/jdk1.6
[/quote]
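Before editing, it can help to confirm a JDK actually lives at the path you intend to use. A minimal sketch (the check_jdk helper is hypothetical, and /opt/jdk1.6 is this article's install path, not anything Hadoop mandates):

```shell
# Sanity-check a candidate JAVA_HOME before writing it into hadoop-env.sh.
# check_jdk is a hypothetical helper; adjust /opt/jdk1.6 to your system.
check_jdk() {
  if [ -x "$1/bin/java" ]; then
    echo "ok"       # an executable java binary exists under $1/bin
  else
    echo "missing"  # no JDK at that path; fix the path before editing
  fi
}
check_jdk /opt/jdk1.6
```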
[color=blue][size=large]III. Standalone Operation[/size][/color]
The commands involved are:
[quote]$ cd /home/michael/hadoop
$ mkdir input
$ cp conf/*.xml input
$ bin/hadoop jar hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
$ cat output/*
[/quote]
The full session log follows:
[quote]michael@michael-VirtualBox:~/hadoop$ [color=red]mkdir input[/color]
michael@michael-VirtualBox:~/hadoop$ [color=red]cp conf/*.xml input[/color]
michael@michael-VirtualBox:~/hadoop$ [color=red]bin/hadoop jar hadoop-examples-0.20.203.0.jar grep input output 'dfs[a-z.]+'[/color]
11/07/16 10:06:48 INFO mapred.FileInputFormat: Total input paths to process : 6
11/07/16 10:06:48 INFO mapred.JobClient: Running job: job_local_0001
11/07/16 10:06:48 INFO mapred.MapTask: numReduceTasks: 1
11/07/16 10:06:48 INFO mapred.MapTask: io.sort.mb = 100
11/07/16 10:06:49 INFO mapred.MapTask: data buffer = 79691776/99614720
11/07/16 10:06:49 INFO mapred.MapTask: record buffer = 262144/327680
11/07/16 10:06:49 INFO mapred.MapTask: Starting flush of map output
11/07/16 10:06:49 INFO mapred.JobClient: map 0% reduce 0%
11/07/16 10:06:49 INFO mapred.Task: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
11/07/16 10:06:51 INFO mapred.LocalJobRunner: file:/home/michael/hadoop/input/capacity-scheduler.xml:0+7457
11/07/16 10:06:51 INFO mapred.Task: Task 'attempt_local_0001_m_000000_0' done.
11/07/16 10:06:51 INFO mapred.MapTask: numReduceTasks: 1
11/07/16 10:06:51 INFO mapred.MapTask: io.sort.mb = 100
11/07/16 10:06:51 INFO mapred.MapTask: data buffer = 79691776/99614720
11/07/16 10:06:51 INFO mapred.MapTask: record buffer = 262144/327680
11/07/16 10:06:51 INFO mapred.MapTask: Starting flush of map output
11/07/16 10:06:52 INFO mapred.MapTask: Finished spill 0
11/07/16 10:06:52 INFO mapred.Task: Task:attempt_local_0001_m_000001_0 is done. And is in the process of commiting
11/07/16 10:06:52 INFO mapred.JobClient: map 100% reduce 0%
11/07/16 10:06:54 INFO mapred.LocalJobRunner: file:/home/michael/hadoop/input/hadoop-policy.xml:0+4644
11/07/16 10:06:54 INFO mapred.LocalJobRunner: file:/home/michael/hadoop/input/hadoop-policy.xml:0+4644
11/07/16 10:06:54 INFO mapred.Task: Task 'attempt_local_0001_m_000001_0' done.
11/07/16 10:06:54 INFO mapred.MapTask: numReduceTasks: 1
11/07/16 10:06:54 INFO mapred.MapTask: io.sort.mb = 100
11/07/16 10:06:55 INFO mapred.MapTask: data buffer = 79691776/99614720
11/07/16 10:06:55 INFO mapred.MapTask: record buffer = 262144/327680
11/07/16 10:06:55 INFO mapred.MapTask: Starting flush of map output
11/07/16 10:06:55 INFO mapred.Task: Task:attempt_local_0001_m_000002_0 is done. And is in the process of commiting
11/07/16 10:06:57 INFO mapred.LocalJobRunner: file:/home/michael/hadoop/input/mapred-queue-acls.xml:0+2033
11/07/16 10:06:57 INFO mapred.LocalJobRunner: file:/home/michael/hadoop/input/mapred-queue-acls.xml:0+2033
11/07/16 10:06:57 INFO mapred.Task: Task 'attempt_local_0001_m_000002_0' done.
11/07/16 10:06:57 INFO mapred.MapTask: numReduceTasks: 1
11/07/16 10:06:57 INFO mapred.MapTask: io.sort.mb = 100
11/07/16 10:06:58 INFO mapred.MapTask: data buffer = 79691776/99614720
11/07/16 10:06:58 INFO mapred.MapTask: record buffer = 262144/327680
11/07/16 10:06:58 INFO mapred.MapTask: Starting flush of map output
11/07/16 10:06:58 INFO mapred.Task: Task:attempt_local_0001_m_000003_0 is done. And is in the process of commiting
11/07/16 10:07:00 INFO mapred.LocalJobRunner: file:/home/michael/hadoop/input/mapred-site.xml:0+178
11/07/16 10:07:00 INFO mapred.LocalJobRunner: file:/home/michael/hadoop/input/mapred-site.xml:0+178
11/07/16 10:07:00 INFO mapred.Task: Task 'attempt_local_0001_m_000003_0' done.
11/07/16 10:07:00 INFO mapred.MapTask: numReduceTasks: 1
11/07/16 10:07:00 INFO mapred.MapTask: io.sort.mb = 100
11/07/16 10:07:01 INFO mapred.MapTask: data buffer = 79691776/99614720
11/07/16 10:07:01 INFO mapred.MapTask: record buffer = 262144/327680
11/07/16 10:07:01 INFO mapred.MapTask: Starting flush of map output
11/07/16 10:07:01 INFO mapred.Task: Task:attempt_local_0001_m_000004_0 is done. And is in the process of commiting
11/07/16 10:07:04 INFO mapred.LocalJobRunner: file:/home/michael/hadoop/input/core-site.xml:0+178
11/07/16 10:07:04 INFO mapred.LocalJobRunner: file:/home/michael/hadoop/input/core-site.xml:0+178
11/07/16 10:07:04 INFO mapred.Task: Task 'attempt_local_0001_m_000004_0' done.
11/07/16 10:07:04 INFO mapred.MapTask: numReduceTasks: 1
11/07/16 10:07:04 INFO mapred.MapTask: io.sort.mb = 100
11/07/16 10:07:04 INFO mapred.MapTask: data buffer = 79691776/99614720
11/07/16 10:07:04 INFO mapred.MapTask: record buffer = 262144/327680
11/07/16 10:07:04 INFO mapred.MapTask: Starting flush of map output
11/07/16 10:07:04 INFO mapred.Task: Task:attempt_local_0001_m_000005_0 is done. And is in the process of commiting
11/07/16 10:07:07 INFO mapred.LocalJobRunner: file:/home/michael/hadoop/input/hdfs-site.xml:0+178
11/07/16 10:07:07 INFO mapred.Task: Task 'attempt_local_0001_m_000005_0' done.
11/07/16 10:07:07 INFO mapred.LocalJobRunner:
11/07/16 10:07:07 INFO mapred.Merger: Merging 6 sorted segments
11/07/16 10:07:07 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 21 bytes
11/07/16 10:07:07 INFO mapred.LocalJobRunner:
11/07/16 10:07:07 INFO mapred.Task: Task:attempt_local_0001_r_000000_0 is done. And is in the process of commiting
11/07/16 10:07:07 INFO mapred.LocalJobRunner:
11/07/16 10:07:07 INFO mapred.Task: Task attempt_local_0001_r_000000_0 is allowed to commit now
11/07/16 10:07:07 INFO mapred.FileOutputCommitter: Saved output of task 'attempt_local_0001_r_000000_0' to file:/home/michael/hadoop/grep-temp-1267281521
11/07/16 10:07:10 INFO mapred.LocalJobRunner: reduce > reduce
11/07/16 10:07:10 INFO mapred.Task: Task 'attempt_local_0001_r_000000_0' done.
11/07/16 10:07:10 INFO mapred.JobClient: map 100% reduce 100%
11/07/16 10:07:10 INFO mapred.JobClient: Job complete: job_local_0001
11/07/16 10:07:10 INFO mapred.JobClient: Counters: 17
11/07/16 10:07:10 INFO mapred.JobClient: File Input Format Counters
11/07/16 10:07:10 INFO mapred.JobClient: Bytes Read=14668
11/07/16 10:07:10 INFO mapred.JobClient: File Output Format Counters
11/07/16 10:07:10 INFO mapred.JobClient: Bytes Written=123
11/07/16 10:07:10 INFO mapred.JobClient: FileSystemCounters
11/07/16 10:07:10 INFO mapred.JobClient: FILE_BYTES_READ=1106074
11/07/16 10:07:10 INFO mapred.JobClient: FILE_BYTES_WRITTEN=1231779
11/07/16 10:07:10 INFO mapred.JobClient: Map-Reduce Framework
11/07/16 10:07:10 INFO mapred.JobClient: Map output materialized bytes=55
11/07/16 10:07:10 INFO mapred.JobClient: Map input records=357
11/07/16 10:07:10 INFO mapred.JobClient: Reduce shuffle bytes=0
11/07/16 10:07:10 INFO mapred.JobClient: Spilled Records=2
11/07/16 10:07:10 INFO mapred.JobClient: Map output bytes=17
11/07/16 10:07:10 INFO mapred.JobClient: Map input bytes=14668
11/07/16 10:07:10 INFO mapred.JobClient: SPLIT_RAW_BYTES=611
11/07/16 10:07:10 INFO mapred.JobClient: Combine input records=1
11/07/16 10:07:10 INFO mapred.JobClient: Reduce input records=1
11/07/16 10:07:10 INFO mapred.JobClient: Reduce input groups=1
11/07/16 10:07:10 INFO mapred.JobClient: Combine output records=1
11/07/16 10:07:10 INFO mapred.JobClient: Reduce output records=1
11/07/16 10:07:10 INFO mapred.JobClient: Map output records=1
11/07/16 10:07:10 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
11/07/16 10:07:10 INFO mapred.FileInputFormat: Total input paths to process : 1
11/07/16 10:07:10 INFO mapred.JobClient: Running job: job_local_0002
11/07/16 10:07:10 INFO mapred.MapTask: numReduceTasks: 1
11/07/16 10:07:10 INFO mapred.MapTask: io.sort.mb = 100
11/07/16 10:07:11 INFO mapred.MapTask: data buffer = 79691776/99614720
11/07/16 10:07:11 INFO mapred.MapTask: record buffer = 262144/327680
11/07/16 10:07:11 INFO mapred.MapTask: Starting flush of map output
11/07/16 10:07:11 INFO mapred.MapTask: Finished spill 0
11/07/16 10:07:11 INFO mapred.Task: Task:attempt_local_0002_m_000000_0 is done. And is in the process of commiting
11/07/16 10:07:11 INFO mapred.JobClient: map 0% reduce 0%
11/07/16 10:07:13 INFO mapred.LocalJobRunner: file:/home/michael/hadoop/grep-temp-1267281521/part-00000:0+111
11/07/16 10:07:13 INFO mapred.LocalJobRunner: file:/home/michael/hadoop/grep-temp-1267281521/part-00000:0+111
11/07/16 10:07:13 INFO mapred.Task: Task 'attempt_local_0002_m_000000_0' done.
11/07/16 10:07:13 INFO mapred.LocalJobRunner:
11/07/16 10:07:13 INFO mapred.Merger: Merging 1 sorted segments
11/07/16 10:07:13 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 21 bytes
11/07/16 10:07:13 INFO mapred.LocalJobRunner:
11/07/16 10:07:13 INFO mapred.Task: Task:attempt_local_0002_r_000000_0 is done. And is in the process of commiting
11/07/16 10:07:13 INFO mapred.LocalJobRunner:
11/07/16 10:07:13 INFO mapred.Task: Task attempt_local_0002_r_000000_0 is allowed to commit now
11/07/16 10:07:13 INFO mapred.FileOutputCommitter: Saved output of task 'attempt_local_0002_r_000000_0' to file:/home/michael/hadoop/output
11/07/16 10:07:14 INFO mapred.JobClient: map 100% reduce 0%
11/07/16 10:07:16 INFO mapred.LocalJobRunner: reduce > reduce
11/07/16 10:07:16 INFO mapred.Task: Task 'attempt_local_0002_r_000000_0' done.
11/07/16 10:07:17 INFO mapred.JobClient: map 100% reduce 100%
11/07/16 10:07:17 INFO mapred.JobClient: Job complete: job_local_0002
11/07/16 10:07:17 INFO mapred.JobClient: Counters: 17
11/07/16 10:07:17 INFO mapred.JobClient: File Input Format Counters
11/07/16 10:07:17 INFO mapred.JobClient: Bytes Read=123
11/07/16 10:07:17 INFO mapred.JobClient: File Output Format Counters
11/07/16 10:07:17 INFO mapred.JobClient: Bytes Written=23
11/07/16 10:07:17 INFO mapred.JobClient: FileSystemCounters
11/07/16 10:07:17 INFO mapred.JobClient: FILE_BYTES_READ=606737
11/07/16 10:07:17 INFO mapred.JobClient: FILE_BYTES_WRITTEN=700981
11/07/16 10:07:17 INFO mapred.JobClient: Map-Reduce Framework
11/07/16 10:07:17 INFO mapred.JobClient: Map output materialized bytes=25
11/07/16 10:07:17 INFO mapred.JobClient: Map input records=1
11/07/16 10:07:17 INFO mapred.JobClient: Reduce shuffle bytes=0
11/07/16 10:07:17 INFO mapred.JobClient: Spilled Records=2
11/07/16 10:07:17 INFO mapred.JobClient: Map output bytes=17
11/07/16 10:07:17 INFO mapred.JobClient: Map input bytes=25
11/07/16 10:07:17 INFO mapred.JobClient: SPLIT_RAW_BYTES=110
11/07/16 10:07:17 INFO mapred.JobClient: Combine input records=0
11/07/16 10:07:17 INFO mapred.JobClient: Reduce input records=1
11/07/16 10:07:17 INFO mapred.JobClient: Reduce input groups=1
11/07/16 10:07:17 INFO mapred.JobClient: Combine output records=0
11/07/16 10:07:17 INFO mapred.JobClient: Reduce output records=1
11/07/16 10:07:17 INFO mapred.JobClient: Map output records=1
michael@michael-VirtualBox:~/hadoop$
michael@michael-VirtualBox:~/hadoop$[color=red] cat output/* [/color]
[color=blue]1 dfsadmin[/color]
michael@michael-VirtualBox:~/hadoop$ [/quote]
This completes the standalone demo.
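A note on what the example job actually does: it is MapReduce's distributed grep, extracting every match of the regular expression dfs[a-z.]+ from the input XML files and counting occurrences, which is why the output above is the single line "1 dfsadmin". Ordinary grep shows the same pattern at work:

```shell
# The same regex the example job applies, run through plain grep:
# it pulls "dfs"-prefixed names out of Hadoop config XML.
echo '<name>dfs.replication</name>' | grep -Eo 'dfs[a-z.]+'
# -> dfs.replication
```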
[color=blue][size=large]IV. Pseudo-Distributed Operation[/size][/color]
[size=medium]1. Edit the configuration files:[/size]
conf/core-site.xml:
conf/hdfs-site.xml:
conf/mapred-site.xml:
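The snippets themselves are not shown above; the standard single-node (pseudo-distributed) settings for these three files, as given in the Hadoop 0.20 Single Node Setup documentation, look like this (the localhost ports 9000/9001 are the documentation's defaults, adjust if yours differ):

```xml
<!-- conf/core-site.xml -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

<!-- conf/hdfs-site.xml : replication of 1, since there is only one DataNode -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

<!-- conf/mapred-site.xml -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
```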
[size=medium]2. Set up passwordless SSH login[/size]
Note: on CentOS, see [url=http://sjsky.iteye.com/blog/1123184]Configuring passwordless OpenSSH login on Linux (CentOS)[/url] for detailed instructions.
[size=medium]3. Test:[/size]
[color=blue]The basic commands involved (each also appears in the full session log below) are:[/color]
[quote]$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
$ bin/hadoop namenode -format
$ bin/start-all.sh
$ bin/hadoop fs -put conf input
$ bin/hadoop jar hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
$ cat output/output/*
[/quote]
The steps above ran successfully on CentOS 5 but failed on Ubuntu 10.10: the command bin/hadoop fs -put conf input aborted with an error like [color=red]"could only be replicated to 0 nodes, instead of 1"[/color]. This error has several possible causes (see [url=http://sjsky.iteye.com/blog/1124545]http://sjsky.iteye.com/blog/1124545[/url]); here the culprit was that hadoop.tmp.dir defaults to /tmp/hadoop-${user.name}, and on Ubuntu the filesystem mounted at /tmp is often of a type this Hadoop release does not support.
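To check which filesystem type backs /tmp on your own machine (the claim above is that on Ubuntu it is often a type Hadoop cannot use), GNU df can report it:

```shell
# Print the filesystem type of whatever is mounted at /tmp.
# The Type column shows it (e.g. ext3, ext4, tmpfs).
df -T /tmp
```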
The fix is to point hadoop.tmp.dir at a different directory by redefining it in conf/core-site.xml.
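The exact snippet is not shown here, but judging from the format log below (the name directory ends up at /home/michael/hadooptmp/hadoop-michael/dfs/name), the property was presumably set roughly like this (a reconstruction, not the author's verbatim file):

```xml
<!-- Hypothetical reconstruction: move Hadoop's working directory off /tmp.
     The path matches the storage directory reported by `namenode -format`
     in the log below. -->
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/michael/hadooptmp/hadoop-${user.name}</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```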
Re-running the test after this change succeeds.
The full details of the test session:
[quote]michael@michael-VirtualBox:~/hadoop$ [color=red]ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa [/color]
Generating public/private dsa key pair.
Your identification has been saved in /home/michael/.ssh/id_dsa.
Your public key has been saved in /home/michael/.ssh/id_dsa.pub.
The key fingerprint is:
2a:47:e3:3a:c8:80:ab:97:d1:c6:68:54:9a:45:9f:59 michael@michael-VirtualBox
The key's randomart image is:
+--[ DSA 1024]----+
| .. E |
| o. + |
| = + |
| + |
|.. + o S |
|o + +o o |
| = =. + |
|. = .+ |
|o. .. |
+-----------------+
michael@michael-VirtualBox:~/hadoop$ [color=red]cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys [/color]
michael@michael-VirtualBox:~/hadoop$ ssh localhost
Linux michael-VirtualBox 2.6.35-22-generic #33-Ubuntu SMP Sun Sep 19 20:34:50 UTC 2010 i686 GNU/Linux
Ubuntu 10.10
[color=red]Welcome to Ubuntu![/color]
* Documentation: https://help.ubuntu.com/
71 packages can be updated.
71 updates are security updates.
New release 'natty' available.
Run 'do-release-upgrade' to upgrade to it.
Last login: Wed Jul 15 15:56:17 2011 from shnap.local
michael@michael-VirtualBox:~$ exit
logout
Connection to localhost closed.
michael@michael-VirtualBox:~/hadoop$ [color=red]bin/hadoop namenode -format[/color]
11/07/16 12:43:45 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = michael-VirtualBox/127.0.1.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 0.20.203.0
STARTUP_MSG: build = http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-security-203 -r 1099333; compiled by 'oom' on Wed May 4 07:57:50 PDT 2011
************************************************************/
11/07/16 12:43:46 INFO util.GSet: VM type = 32-bit
11/07/16 12:43:46 INFO util.GSet: 2% max memory = 19.33375 MB
11/07/16 12:43:46 INFO util.GSet: capacity = 2^22 = 4194304 entries
11/07/16 12:43:46 INFO util.GSet: recommended=4194304, actual=4194304
11/07/16 12:43:46 INFO namenode.FSNamesystem: fsOwner=michael
11/07/16 12:43:46 INFO namenode.FSNamesystem: supergroup=supergroup
11/07/16 12:43:46 INFO namenode.FSNamesystem: isPermissionEnabled=true
11/07/16 12:43:46 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
11/07/16 12:43:46 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
11/07/16 12:43:46 INFO namenode.NameNode: Caching file names occuring more than 10 times
11/07/16 12:43:47 INFO common.Storage: Image file of size 113 saved in 0 seconds.
11/07/16 12:43:47 INFO common.Storage: Storage directory /home/michael/hadooptmp/hadoop-michael/dfs/name has been successfully formatted.
11/07/16 12:43:47 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at michael-VirtualBox/127.0.1.1
************************************************************/
michael@michael-VirtualBox:~/hadoop$[color=red] bin/start-all.sh [/color]
starting namenode, logging to /home/michael/hadoop/bin/../logs/hadoop-michael-namenode-michael-VirtualBox.out
localhost: starting datanode, logging to /home/michael/hadoop/bin/../logs/hadoop-michael-datanode-michael-VirtualBox.out
localhost: starting secondarynamenode, logging to /home/michael/hadoop/bin/../logs/hadoop-michael-secondarynamenode-michael-VirtualBox.out
starting jobtracker, logging to /home/michael/hadoop/bin/../logs/hadoop-michael-jobtracker-michael-VirtualBox.out
localhost: starting tasktracker, logging to /home/michael/hadoop/bin/../logs/hadoop-michael-tasktracker-michael-VirtualBox.out
michael@michael-VirtualBox:~/hadoop$ jps
[color=red]7948 SecondaryNameNode
8033 JobTracker
8887 Jps
7627 NameNode
7781 DataNode
8190 TaskTracker[/color]
michael@michael-VirtualBox:~/hadoop$ [color=red]bin/hadoop fs -put conf input[/color]
michael@michael-VirtualBox:~/hadoop$ [color=red]bin/hadoop jar hadoop-examples-*.jar grep input output 'dfs[a-z.]+'[/color]
11/07/16 12:46:21 INFO mapred.FileInputFormat: Total input paths to process : 15
11/07/16 12:46:21 INFO mapred.JobClient: Running job: job_201107161244_0001
11/07/16 12:46:22 INFO mapred.JobClient: map 0% reduce 0%
11/07/16 12:47:09 INFO mapred.JobClient: map 13% reduce 0%
11/07/16 12:47:33 INFO mapred.JobClient: map 26% reduce 0%
11/07/16 12:47:45 INFO mapred.JobClient: map 26% reduce 8%
11/07/16 12:47:54 INFO mapred.JobClient: map 40% reduce 8%
11/07/16 12:48:07 INFO mapred.JobClient: map 53% reduce 13%
11/07/16 12:48:16 INFO mapred.JobClient: map 53% reduce 17%
11/07/16 12:48:24 INFO mapred.JobClient: map 66% reduce 17%
11/07/16 12:48:36 INFO mapred.JobClient: map 80% reduce 22%
11/07/16 12:48:42 INFO mapred.JobClient: map 80% reduce 26%
11/07/16 12:48:45 INFO mapred.JobClient: map 93% reduce 26%
11/07/16 12:48:53 INFO mapred.JobClient: map 100% reduce 26%
11/07/16 12:48:58 INFO mapred.JobClient: map 100% reduce 33%
11/07/16 12:49:07 INFO mapred.JobClient: map 100% reduce 100%
11/07/16 12:49:14 INFO mapred.JobClient: Job complete: job_201107161244_0001
11/07/16 12:49:15 INFO mapred.JobClient: Counters: 26
11/07/16 12:49:15 INFO mapred.JobClient: Job Counters
11/07/16 12:49:15 INFO mapred.JobClient: Launched reduce tasks=1
11/07/16 12:49:15 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=255488
11/07/16 12:49:15 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
11/07/16 12:49:15 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
11/07/16 12:49:15 INFO mapred.JobClient: Launched map tasks=15
11/07/16 12:49:15 INFO mapred.JobClient: Data-local map tasks=15
11/07/16 12:49:15 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=115656
11/07/16 12:49:15 INFO mapred.JobClient: File Input Format Counters
11/07/16 12:49:15 INFO mapred.JobClient: Bytes Read=25623
11/07/16 12:49:15 INFO mapred.JobClient: File Output Format Counters
11/07/16 12:49:15 INFO mapred.JobClient: Bytes Written=180
11/07/16 12:49:15 INFO mapred.JobClient: FileSystemCounters
11/07/16 12:49:15 INFO mapred.JobClient: FILE_BYTES_READ=82
11/07/16 12:49:15 INFO mapred.JobClient: HDFS_BYTES_READ=27281
11/07/16 12:49:15 INFO mapred.JobClient: FILE_BYTES_WRITTEN=342206
11/07/16 12:49:15 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=180
11/07/16 12:49:15 INFO mapred.JobClient: Map-Reduce Framework
11/07/16 12:49:15 INFO mapred.JobClient: Map output materialized bytes=166
11/07/16 12:49:15 INFO mapred.JobClient: Map input records=716
11/07/16 12:49:15 INFO mapred.JobClient: Reduce shuffle bytes=166
11/07/16 12:49:15 INFO mapred.JobClient: Spilled Records=6
11/07/16 12:49:15 INFO mapred.JobClient: Map output bytes=70
11/07/16 12:49:15 INFO mapred.JobClient: Map input bytes=25623
11/07/16 12:49:15 INFO mapred.JobClient: Combine input records=3
11/07/16 12:49:15 INFO mapred.JobClient: SPLIT_RAW_BYTES=1658
11/07/16 12:49:15 INFO mapred.JobClient: Reduce input records=3
11/07/16 12:49:15 INFO mapred.JobClient: Reduce input groups=3
11/07/16 12:49:15 INFO mapred.JobClient: Combine output records=3
11/07/16 12:49:15 INFO mapred.JobClient: Reduce output records=3
11/07/16 12:49:15 INFO mapred.JobClient: Map output records=3
11/07/16 12:49:16 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
11/07/16 12:49:17 INFO mapred.FileInputFormat: Total input paths to process : 1
11/07/16 12:49:18 INFO mapred.JobClient: Running job: job_201107161244_0002
11/07/16 12:49:19 INFO mapred.JobClient: map 0% reduce 0%
11/07/16 12:49:40 INFO mapred.JobClient: map 100% reduce 0%
11/07/16 12:49:55 INFO mapred.JobClient: map 100% reduce 100%
11/07/16 12:50:00 INFO mapred.JobClient: Job complete: job_201107161244_0002
11/07/16 12:50:00 INFO mapred.JobClient: Counters: 26
11/07/16 12:50:00 INFO mapred.JobClient: Job Counters
11/07/16 12:50:00 INFO mapred.JobClient: Launched reduce tasks=1
11/07/16 12:50:00 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=16946
11/07/16 12:50:00 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
11/07/16 12:50:00 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
11/07/16 12:50:00 INFO mapred.JobClient: Launched map tasks=1
11/07/16 12:50:00 INFO mapred.JobClient: Data-local map tasks=1
11/07/16 12:50:00 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=14357
11/07/16 12:50:00 INFO mapred.JobClient: File Input Format Counters
11/07/16 12:50:00 INFO mapred.JobClient: Bytes Read=180
11/07/16 12:50:00 INFO mapred.JobClient: File Output Format Counters
11/07/16 12:50:00 INFO mapred.JobClient: Bytes Written=52
11/07/16 12:50:00 INFO mapred.JobClient: FileSystemCounters
11/07/16 12:50:00 INFO mapred.JobClient: FILE_BYTES_READ=82
11/07/16 12:50:00 INFO mapred.JobClient: HDFS_BYTES_READ=298
11/07/16 12:50:00 INFO mapred.JobClient: FILE_BYTES_WRITTEN=41947
11/07/16 12:50:00 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=52
11/07/16 12:50:00 INFO mapred.JobClient: Map-Reduce Framework
11/07/16 12:50:00 INFO mapred.JobClient: Map output materialized bytes=82
11/07/16 12:50:00 INFO mapred.JobClient: Map input records=3
11/07/16 12:50:00 INFO mapred.JobClient: Reduce shuffle bytes=82
11/07/16 12:50:00 INFO mapred.JobClient: Spilled Records=6
11/07/16 12:50:00 INFO mapred.JobClient: Map output bytes=70
11/07/16 12:50:00 INFO mapred.JobClient: Map input bytes=94
11/07/16 12:50:00 INFO mapred.JobClient: Combine input records=0
11/07/16 12:50:00 INFO mapred.JobClient: SPLIT_RAW_BYTES=118
11/07/16 12:50:00 INFO mapred.JobClient: Reduce input records=3
11/07/16 12:50:00 INFO mapred.JobClient: Reduce input groups=1
11/07/16 12:50:00 INFO mapred.JobClient: Combine output records=0
11/07/16 12:50:00 INFO mapred.JobClient: Reduce output records=3
11/07/16 12:50:00 INFO mapred.JobClient: Map output records=3
michael@michael-VirtualBox:~/hadoop$
michael@michael-VirtualBox:~/hadoop$ [color=red]cat output/output/*[/color]
[color=blue]cat: output/output/_logs: Is a directory
1 dfs.replication
1 dfs.server.namenode.
1 dfsadmin[/color]
michael@michael-VirtualBox:~/hadoop$ [/quote]
This completes the pseudo-distributed demo.
When reposting, please credit the source: Michael's blog @ [url=http://sjsky.iteye.com]http://sjsky.iteye.com[/url]
11/07/16 10:07:17 INFO mapred.JobClient: File Input Format Counters
11/07/16 10:07:17 INFO mapred.JobClient: Bytes Read=123
11/07/16 10:07:17 INFO mapred.JobClient: File Output Format Counters
11/07/16 10:07:17 INFO mapred.JobClient: Bytes Written=23
11/07/16 10:07:17 INFO mapred.JobClient: FileSystemCounters
11/07/16 10:07:17 INFO mapred.JobClient: FILE_BYTES_READ=606737
11/07/16 10:07:17 INFO mapred.JobClient: FILE_BYTES_WRITTEN=700981
11/07/16 10:07:17 INFO mapred.JobClient: Map-Reduce Framework
11/07/16 10:07:17 INFO mapred.JobClient: Map output materialized bytes=25
11/07/16 10:07:17 INFO mapred.JobClient: Map input records=1
11/07/16 10:07:17 INFO mapred.JobClient: Reduce shuffle bytes=0
11/07/16 10:07:17 INFO mapred.JobClient: Spilled Records=2
11/07/16 10:07:17 INFO mapred.JobClient: Map output bytes=17
11/07/16 10:07:17 INFO mapred.JobClient: Map input bytes=25
11/07/16 10:07:17 INFO mapred.JobClient: SPLIT_RAW_BYTES=110
11/07/16 10:07:17 INFO mapred.JobClient: Combine input records=0
11/07/16 10:07:17 INFO mapred.JobClient: Reduce input records=1
11/07/16 10:07:17 INFO mapred.JobClient: Reduce input groups=1
11/07/16 10:07:17 INFO mapred.JobClient: Combine output records=0
11/07/16 10:07:17 INFO mapred.JobClient: Reduce output records=1
11/07/16 10:07:17 INFO mapred.JobClient: Map output records=1
michael@michael-VirtualBox:~/hadoop$
michael@michael-VirtualBox:~/hadoop$[color=red] cat output/* [/color]
[color=blue]1 dfsadmin[/color]
michael@michael-VirtualBox:~/hadoop$ [/quote]
This completes the standalone demo.
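Conceptually, the `grep` example job just counts occurrences of strings matching `dfs[a-z.]+` across the input files. A plain-shell sketch of the same computation (the sample file and its contents are made up for illustration; the real job of course runs as MapReduce):

```shell
# Plain-shell sketch of what the Hadoop grep example computes:
# count occurrences of strings matching 'dfs[a-z.]+'.
WORK="$(mktemp -d)"
printf 'run dfsadmin to check the cluster\n' > "$WORK/sample.xml"
# -h: no filenames, -o: print only the matched part, -E: extended regex
grep -hoE 'dfs[a-z.]+' "$WORK"/*.xml | sort | uniq -c | awk '{print $1"\t"$2}'
# -> 1	dfsadmin
```

The output mirrors the `1 dfsadmin` line that `cat output/*` printed above, minus the MapReduce machinery.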
[color=blue][size=large]IV. Pseudo-Distributed Operation[/size][/color]
[size=medium]1. Edit the configuration files:[/size]
conf/core-site.xml:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
conf/hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
conf/mapred-site.xml:
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
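For convenience, the three files above can also be generated in one pass. This is only a sketch: `HADOOP_CONF` and the `mktemp` directory are illustrative stand-ins for the `conf/` directory of the Hadoop distribution.

```shell
# Sketch: generate the three pseudo-distributed config files in one pass.
# HADOOP_CONF / mktemp are stand-ins for the real Hadoop conf/ directory.
HADOOP_CONF="$(mktemp -d)"

cat > "$HADOOP_CONF/core-site.xml" <<'EOF'
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
EOF

cat > "$HADOOP_CONF/hdfs-site.xml" <<'EOF'
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
EOF

cat > "$HADOOP_CONF/mapred-site.xml" <<'EOF'
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
EOF

# Quick sanity check that the values landed where expected
grep -H '<value>' "$HADOOP_CONF"/*.xml
```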
[size=medium]2. Set up passwordless SSH login[/size]
$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
P.S. On CentOS, see [url=http://sjsky.iteye.com/blog/1123184]Linux (CentOS): configuring passwordless OpenSSH login[/url] for the detailed SSH setup.
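Passwordless login also depends on file permissions: with its default `StrictModes`, sshd ignores `authorized_keys` when `~/.ssh` or the key file is group- or world-writable. A minimal sketch of the expected modes, using a temp directory as a stand-in for `~/.ssh` (GNU `stat` assumed):

```shell
# Sketch: the permissions sshd expects; a temp dir stands in for ~/.ssh.
SSH_DIR="$(mktemp -d)"
touch "$SSH_DIR/authorized_keys"
chmod 700 "$SSH_DIR"                  # ~/.ssh itself: owner-only
chmod 600 "$SSH_DIR/authorized_keys"  # key file: owner read/write only
# GNU stat prints the octal mode; expect 700 and 600
stat -c '%a' "$SSH_DIR" "$SSH_DIR/authorized_keys"
```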
[size=medium]3. Test:[/size]
[color=blue]The basic commands for this step are as follows:[/color]
#Format a new distributed-filesystem:
$ bin/hadoop namenode -format
#Start the hadoop daemons:
$ bin/start-all.sh
#Copy the input files into the distributed filesystem:
$ bin/hadoop fs -put conf input
#Run some of the examples provided:
$ bin/hadoop jar hadoop-examples-*.jar grep input output 'dfs[a-z.]+'
#Copy the output files from the distributed filesystem to the local filesytem and examine them:
$ bin/hadoop fs -get output output
$ cat output/output/*
The steps above ran without problems on CentOS 5, but failed on Ubuntu 10.10: the command bin/hadoop fs -put conf input aborted with an error along the lines of [color=red]"could only be replicated to 0 nodes, instead of 1"[/color]. This error has several possible causes (see [url=http://sjsky.iteye.com/blog/1124545]http://sjsky.iteye.com/blog/1124545[/url]); here the cause was that hadoop.tmp.dir defaults to /tmp/hadoop-${user.name}, and on Ubuntu the filesystem mounted at /tmp is often of a type Hadoop does not support.
The fix is to point hadoop.tmp.dir somewhere else by editing conf/core-site.xml as follows (Hadoop expands ${user.name} to the current user at runtime, so here the directory becomes /home/michael/hadooptmp/hadoop-michael):
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/michael/hadooptmp/hadoop-${user.name}</value>
    <description>A base for other temporary directories.</description>
  </property>
</configuration>
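Before reformatting the NameNode, it can help to pre-create the new base directory and confirm it is writable and sitting on a regular filesystem. A sketch (the `mktemp` directory stands in for the real /home/michael/hadooptmp path configured above):

```shell
# Sketch: check the new hadoop.tmp.dir base before 'bin/hadoop namenode -format'.
# mktemp stands in for the real /home/michael/hadooptmp path configured above.
HADOOP_TMP="$(mktemp -d)"
mkdir -p "$HADOOP_TMP"
[ -d "$HADOOP_TMP" ] && [ -w "$HADOOP_TMP" ] && echo "hadoop.tmp.dir base OK"
# Show the filesystem type; on Ubuntu, /tmp is sometimes a type Hadoop 0.20 cannot use
df -T "$HADOOP_TMP" | awk 'NR==2 {print "filesystem type:", $2}'
```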
Running the test again after this change succeeded.
The full console output of the test run is shown below:
[quote]michael@michael-VirtualBox:~/hadoop$ [color=red]ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa [/color]
Generating public/private dsa key pair.
Your identification has been saved in /home/michael/.ssh/id_dsa.
Your public key has been saved in /home/michael/.ssh/id_dsa.pub.
The key fingerprint is:
2a:47:e3:3a:c8:80:ab:97:d1:c6:68:54:9a:45:9f:59 michael@michael-VirtualBox
The key's randomart image is:
+--[ DSA 1024]----+
| .. E |
| o. + |
| = + |
| + |
|.. + o S |
|o + +o o |
| = =. + |
|. = .+ |
|o. .. |
+-----------------+
michael@michael-VirtualBox:~/hadoop$ [color=red]cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys [/color]
michael@michael-VirtualBox:~/hadoop$ ssh localhost
Linux michael-VirtualBox 2.6.35-22-generic #33-Ubuntu SMP Sun Sep 19 20:34:50 UTC 2010 i686 GNU/Linux
Ubuntu 10.10
[color=red]Welcome to Ubuntu![/color]
* Documentation: https://help.ubuntu.com/
71 packages can be updated.
71 updates are security updates.
New release 'natty' available.
Run 'do-release-upgrade' to upgrade to it.
Last login: Wed Jul 15 15:56:17 2011 from shnap.local
michael@michael-VirtualBox:~$ exit
logout
Connection to localhost closed.
michael@michael-VirtualBox:~/hadoop$ [color=red]bin/hadoop namenode -format[/color]
11/07/16 12:43:45 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = michael-VirtualBox/127.0.1.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 0.20.203.0
STARTUP_MSG: build = http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-security-203 -r 1099333; compiled by 'oom' on Wed May 4 07:57:50 PDT 2011
************************************************************/
11/07/16 12:43:46 INFO util.GSet: VM type = 32-bit
11/07/16 12:43:46 INFO util.GSet: 2% max memory = 19.33375 MB
11/07/16 12:43:46 INFO util.GSet: capacity = 2^22 = 4194304 entries
11/07/16 12:43:46 INFO util.GSet: recommended=4194304, actual=4194304
11/07/16 12:43:46 INFO namenode.FSNamesystem: fsOwner=michael
11/07/16 12:43:46 INFO namenode.FSNamesystem: supergroup=supergroup
11/07/16 12:43:46 INFO namenode.FSNamesystem: isPermissionEnabled=true
11/07/16 12:43:46 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
11/07/16 12:43:46 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
11/07/16 12:43:46 INFO namenode.NameNode: Caching file names occuring more than 10 times
11/07/16 12:43:47 INFO common.Storage: Image file of size 113 saved in 0 seconds.
11/07/16 12:43:47 INFO common.Storage: Storage directory /home/michael/hadooptmp/hadoop-michael/dfs/name has been successfully formatted.
11/07/16 12:43:47 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at michael-VirtualBox/127.0.1.1
************************************************************/
michael@michael-VirtualBox:~/hadoop$[color=red] bin/start-all.sh [/color]
starting namenode, logging to /home/michael/hadoop/bin/../logs/hadoop-michael-namenode-michael-VirtualBox.out
localhost: starting datanode, logging to /home/michael/hadoop/bin/../logs/hadoop-michael-datanode-michael-VirtualBox.out
localhost: starting secondarynamenode, logging to /home/michael/hadoop/bin/../logs/hadoop-michael-secondarynamenode-michael-VirtualBox.out
starting jobtracker, logging to /home/michael/hadoop/bin/../logs/hadoop-michael-jobtracker-michael-VirtualBox.out
localhost: starting tasktracker, logging to /home/michael/hadoop/bin/../logs/hadoop-michael-tasktracker-michael-VirtualBox.out
michael@michael-VirtualBox:~/hadoop$ jps
[color=red]7948 SecondaryNameNode
8033 JobTracker
8887 Jps
7627 NameNode
7781 DataNode
8190 TaskTracker[/color]
michael@michael-VirtualBox:~/hadoop$ [color=red]bin/hadoop fs -put conf input[/color]
michael@michael-VirtualBox:~/hadoop$ [color=red]bin/hadoop jar hadoop-examples-*.jar grep input output 'dfs[a-z.]+'[/color]
11/07/16 12:46:21 INFO mapred.FileInputFormat: Total input paths to process : 15
11/07/16 12:46:21 INFO mapred.JobClient: Running job: job_201107161244_0001
11/07/16 12:46:22 INFO mapred.JobClient: map 0% reduce 0%
11/07/16 12:47:09 INFO mapred.JobClient: map 13% reduce 0%
11/07/16 12:47:33 INFO mapred.JobClient: map 26% reduce 0%
11/07/16 12:47:45 INFO mapred.JobClient: map 26% reduce 8%
11/07/16 12:47:54 INFO mapred.JobClient: map 40% reduce 8%
11/07/16 12:48:07 INFO mapred.JobClient: map 53% reduce 13%
11/07/16 12:48:16 INFO mapred.JobClient: map 53% reduce 17%
11/07/16 12:48:24 INFO mapred.JobClient: map 66% reduce 17%
11/07/16 12:48:36 INFO mapred.JobClient: map 80% reduce 22%
11/07/16 12:48:42 INFO mapred.JobClient: map 80% reduce 26%
11/07/16 12:48:45 INFO mapred.JobClient: map 93% reduce 26%
11/07/16 12:48:53 INFO mapred.JobClient: map 100% reduce 26%
11/07/16 12:48:58 INFO mapred.JobClient: map 100% reduce 33%
11/07/16 12:49:07 INFO mapred.JobClient: map 100% reduce 100%
11/07/16 12:49:14 INFO mapred.JobClient: Job complete: job_201107161244_0001
11/07/16 12:49:15 INFO mapred.JobClient: Counters: 26
11/07/16 12:49:15 INFO mapred.JobClient: Job Counters
11/07/16 12:49:15 INFO mapred.JobClient: Launched reduce tasks=1
11/07/16 12:49:15 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=255488
11/07/16 12:49:15 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
11/07/16 12:49:15 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
11/07/16 12:49:15 INFO mapred.JobClient: Launched map tasks=15
11/07/16 12:49:15 INFO mapred.JobClient: Data-local map tasks=15
11/07/16 12:49:15 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=115656
11/07/16 12:49:15 INFO mapred.JobClient: File Input Format Counters
11/07/16 12:49:15 INFO mapred.JobClient: Bytes Read=25623
11/07/16 12:49:15 INFO mapred.JobClient: File Output Format Counters
11/07/16 12:49:15 INFO mapred.JobClient: Bytes Written=180
11/07/16 12:49:15 INFO mapred.JobClient: FileSystemCounters
11/07/16 12:49:15 INFO mapred.JobClient: FILE_BYTES_READ=82
11/07/16 12:49:15 INFO mapred.JobClient: HDFS_BYTES_READ=27281
11/07/16 12:49:15 INFO mapred.JobClient: FILE_BYTES_WRITTEN=342206
11/07/16 12:49:15 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=180
11/07/16 12:49:15 INFO mapred.JobClient: Map-Reduce Framework
11/07/16 12:49:15 INFO mapred.JobClient: Map output materialized bytes=166
11/07/16 12:49:15 INFO mapred.JobClient: Map input records=716
11/07/16 12:49:15 INFO mapred.JobClient: Reduce shuffle bytes=166
11/07/16 12:49:15 INFO mapred.JobClient: Spilled Records=6
11/07/16 12:49:15 INFO mapred.JobClient: Map output bytes=70
11/07/16 12:49:15 INFO mapred.JobClient: Map input bytes=25623
11/07/16 12:49:15 INFO mapred.JobClient: Combine input records=3
11/07/16 12:49:15 INFO mapred.JobClient: SPLIT_RAW_BYTES=1658
11/07/16 12:49:15 INFO mapred.JobClient: Reduce input records=3
11/07/16 12:49:15 INFO mapred.JobClient: Reduce input groups=3
11/07/16 12:49:15 INFO mapred.JobClient: Combine output records=3
11/07/16 12:49:15 INFO mapred.JobClient: Reduce output records=3
11/07/16 12:49:15 INFO mapred.JobClient: Map output records=3
11/07/16 12:49:16 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
11/07/16 12:49:17 INFO mapred.FileInputFormat: Total input paths to process : 1
11/07/16 12:49:18 INFO mapred.JobClient: Running job: job_201107161244_0002
11/07/16 12:49:19 INFO mapred.JobClient: map 0% reduce 0%
11/07/16 12:49:40 INFO mapred.JobClient: map 100% reduce 0%
11/07/16 12:49:55 INFO mapred.JobClient: map 100% reduce 100%
11/07/16 12:50:00 INFO mapred.JobClient: Job complete: job_201107161244_0002
11/07/16 12:50:00 INFO mapred.JobClient: Counters: 26
11/07/16 12:50:00 INFO mapred.JobClient: Job Counters
11/07/16 12:50:00 INFO mapred.JobClient: Launched reduce tasks=1
11/07/16 12:50:00 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=16946
11/07/16 12:50:00 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
11/07/16 12:50:00 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
11/07/16 12:50:00 INFO mapred.JobClient: Launched map tasks=1
11/07/16 12:50:00 INFO mapred.JobClient: Data-local map tasks=1
11/07/16 12:50:00 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=14357
11/07/16 12:50:00 INFO mapred.JobClient: File Input Format Counters
11/07/16 12:50:00 INFO mapred.JobClient: Bytes Read=180
11/07/16 12:50:00 INFO mapred.JobClient: File Output Format Counters
11/07/16 12:50:00 INFO mapred.JobClient: Bytes Written=52
11/07/16 12:50:00 INFO mapred.JobClient: FileSystemCounters
11/07/16 12:50:00 INFO mapred.JobClient: FILE_BYTES_READ=82
11/07/16 12:50:00 INFO mapred.JobClient: HDFS_BYTES_READ=298
11/07/16 12:50:00 INFO mapred.JobClient: FILE_BYTES_WRITTEN=41947
11/07/16 12:50:00 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=52
11/07/16 12:50:00 INFO mapred.JobClient: Map-Reduce Framework
11/07/16 12:50:00 INFO mapred.JobClient: Map output materialized bytes=82
11/07/16 12:50:00 INFO mapred.JobClient: Map input records=3
11/07/16 12:50:00 INFO mapred.JobClient: Reduce shuffle bytes=82
11/07/16 12:50:00 INFO mapred.JobClient: Spilled Records=6
11/07/16 12:50:00 INFO mapred.JobClient: Map output bytes=70
11/07/16 12:50:00 INFO mapred.JobClient: Map input bytes=94
11/07/16 12:50:00 INFO mapred.JobClient: Combine input records=0
11/07/16 12:50:00 INFO mapred.JobClient: SPLIT_RAW_BYTES=118
11/07/16 12:50:00 INFO mapred.JobClient: Reduce input records=3
11/07/16 12:50:00 INFO mapred.JobClient: Reduce input groups=1
11/07/16 12:50:00 INFO mapred.JobClient: Combine output records=0
11/07/16 12:50:00 INFO mapred.JobClient: Reduce output records=3
11/07/16 12:50:00 INFO mapred.JobClient: Map output records=3
michael@michael-VirtualBox:~/hadoop$
michael@michael-VirtualBox:~/hadoop$ [color=red]cat output/output/*[/color]
[color=blue]cat: output/output/_logs: Is a directory
1 dfs.replication
1 dfs.server.namenode.
1 dfsadmin[/color]
michael@michael-VirtualBox:~/hadoop$ [/quote]
This completes the pseudo-distributed demo.
When reposting, please credit: Michael's blog @ [url=http://sjsky.iteye.com]http://sjsky.iteye.com[/url]