Hadoop Study, Day 0 — Setting Up the Distributed Cluster (Translation)

This post walks through setting up a Hadoop cluster, covering software installation and the key configuration files, and notes which settings were actually changed to get the cluster running well.


Prerequisites

Required Software

  1. Java™ 1.6.x, preferably from Sun, must be installed. (At least Java 1.6; Sun's JDK is preferred over OpenJDK.)
  2. ssh must be installed and sshd must be running in order to use the Hadoop scripts that manage remote Hadoop daemons. (ssh is generally already present on Linux; sshd has to be started because the scripts manage the remote Hadoop daemons over ssh.)

The cluster machines therefore need ssh, and you should set up mutual trust between them so that SSH works without a password (in practice you need this anyway when scp-ing the downloaded distribution to the other machines; otherwise you type the password every single time). See: http://lvdccyb.iteye.com/blog/1163686
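A minimal sketch of the passwordless-SSH setup (the user name and host names below are placeholders for your own nodes):

# On the master node, generate an RSA key pair (accept the defaults, empty passphrase)
$ ssh-keygen -t rsa
# Copy the public key to every slave; repeat for each host in the cluster
$ ssh-copy-id hadoop@slave-node-1
# If ssh-copy-id is unavailable, append ~/.ssh/id_rsa.pub to ~/.ssh/authorized_keys on the slave instead
# Verify: this should now log in without prompting for a password
$ ssh hadoop@slave-node-1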

Installation

Typically one machine in the cluster is designated as the NameNode and another machine as the JobTracker, exclusively. These are the masters. The rest of the machines in the cluster act as both DataNode and TaskTracker. These are the slaves.

The root of the distribution is referred to as HADOOP_HOME. All machines in the cluster usually have the same HADOOP_HOME path.

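As a concrete illustration, the start scripts described in the Hadoop Startup section below read the slave list from ${HADOOP_CONF_DIR}/slaves, one hostname per line (the host names here are placeholders):

# conf/slaves — one slave hostname per line
slave-node-1
slave-node-2
slave-node-3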

 

Configuration

 

Configuring the Hadoop Daemons

This section deals with important parameters to be specified in the following: 
conf/core-site.xml:

 

Parameter: fs.default.name
Value: URI of NameNode.
Notes: hdfs://hostname/
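For illustration, a minimal conf/core-site.xml might look like the following (the hostname and port are placeholders for your own NameNode):

<configuration>
  <property>
    <name>fs.default.name</name>
    <!-- URI of the NameNode; replace host and port with your own -->
    <value>hdfs://namenode-host:9000</value>
  </property>
</configuration>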

 


conf/hdfs-site.xml:

 

Parameter: dfs.name.dir
Value: Path on the local filesystem where the NameNode stores the namespace and transaction logs persistently.
Notes: If this is a comma-delimited list of directories, the name table is replicated in all of the directories, for redundancy.

Parameter: dfs.data.dir
Value: Comma-separated list of paths on the local filesystem of a DataNode where it should store its blocks.
Notes: If this is a comma-delimited list of directories, data will be stored in all named directories, typically on different devices.
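A minimal conf/hdfs-site.xml using the two parameters above could look like this sketch (the directories are placeholders and should point at real local paths on your machines):

<configuration>
  <property>
    <name>dfs.name.dir</name>
    <!-- where the NameNode keeps the namespace and transaction logs -->
    <value>/data/hadoop/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <!-- comma-separated list; each DataNode stores blocks in all of these directories -->
    <value>/data1/hadoop/data,/data2/hadoop/data</value>
  </property>
</configuration>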


conf/mapred-site.xml:

 

Parameter: mapred.job.tracker
Value: Host or IP and port of JobTracker.
Notes: host:port pair.

Parameter: mapred.system.dir
Value: Path on the HDFS where the MapReduce framework stores system files, e.g. /hadoop/mapred/system/.
Notes: This is in the default filesystem (HDFS) and must be accessible from both the server and client machines.

Parameter: mapred.local.dir
Value: Comma-separated list of paths on the local filesystem where temporary MapReduce data is written.
Notes: Multiple paths help spread disk i/o.

Parameter: mapred.tasktracker.{map|reduce}.tasks.maximum
Value: The maximum number of MapReduce tasks that are run simultaneously on a given TaskTracker, individually.
Notes: Defaults to 2 (2 maps and 2 reduces), but vary it depending on your hardware.

Parameter: dfs.hosts / dfs.hosts.exclude
Value: List of permitted/excluded DataNodes.
Notes: If necessary, use these files to control the list of allowable DataNodes.

Parameter: mapred.hosts / mapred.hosts.exclude
Value: List of permitted/excluded TaskTrackers.
Notes: If necessary, use these files to control the list of allowable TaskTrackers.

Parameter: mapred.queue.names
Value: Comma-separated list of queues to which jobs can be submitted.
Notes: The MapReduce system always supports at least one queue with the name default. Hence, this parameter's value should always contain the string default. Some job schedulers supported in Hadoop, like the Capacity Scheduler, support multiple queues. If such a scheduler is being used, the list of configured queue names must be specified here. Once queues are defined, users can submit jobs to a queue using the property mapred.job.queue.name in the job configuration. There could be a separate configuration file, managed by the scheduler, for configuring properties of these queues. Refer to the documentation of the scheduler for details.

Parameter: mapred.acls.enabled
Value: Boolean, specifying whether checks for queue ACLs and job ACLs are to be done for authorizing users for queue operations and job operations.
Notes: If true, queue ACLs are checked while submitting and administering jobs, and job ACLs are checked for authorizing view and modification of jobs. Queue ACLs are specified using configuration parameters of the form mapred.queue.queue-name.acl-name, defined below under mapred-queue-acls.xml. Job ACLs are described at Job Authorization.
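As a sketch, a small conf/mapred-site.xml covering the first few parameters could look like this (the host, port, and task counts are placeholders to be tuned for your cluster):

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <!-- host:port of the JobTracker -->
    <value>jobtracker-host:9001</value>
  </property>
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>2</value>
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>2</value>
  </property>
</configuration>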


conf/mapred-queue-acls.xml

 

Parameter: mapred.queue.queue-name.acl-submit-job
Value: List of users and groups that can submit jobs to the specified queue-name.
Notes: The users and groups are both comma-separated lists of names. The two lists are separated by a blank. Example: user1,user2 group1,group2. If you wish to define only a list of groups, provide a blank at the beginning of the value.

Parameter: mapred.queue.queue-name.acl-administer-jobs
Value: List of users and groups that can view job details, change the priority of, or kill jobs that have been submitted to the specified queue-name.
Notes: The users and groups are both comma-separated lists of names. The two lists are separated by a blank. Example: user1,user2 group1,group2. If you wish to define only a list of groups, provide a blank at the beginning of the value. Note that the owner of a job can always change the priority or kill his/her own job, irrespective of the ACLs.
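For example, to restrict the built-in default queue, conf/mapred-queue-acls.xml could contain something like the following (user and group names are placeholders):

<configuration>
  <property>
    <name>mapred.queue.default.acl-submit-job</name>
    <!-- users first, then a blank, then groups -->
    <value>user1,user2 group1,group2</value>
  </property>
  <property>
    <name>mapred.queue.default.acl-administer-jobs</name>
    <value>user1 group1</value>
  </property>
</configuration>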

Typically all the above parameters are marked as final to ensure that they cannot be overridden by user applications.
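Marking a property as final is done with the <final> element inside the property definition; for example (reusing the mapred.system.dir value from the table above):

<property>
  <name>mapred.system.dir</name>
  <value>/hadoop/mapred/system/</value>
  <final>true</final>
</property>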

The full list above is long, so only a few items were actually set. The final configuration is as follows:

conf/hdfs-site.xml

<configuration>
  <property>
    <name>dfs.support.append</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.datanode.max.xcievers</name>
    <value>4096</value>
  </property>
  <property>
    <name>dfs.block.size</name>
    <value>134217728</value>
  </property>
  <property>
    <name>dfs.namenode.handler.count</name>
    <value>40</value>
  </property>
</configuration>

conf/hadoop-env.sh

Only JAVA_HOME was changed here.
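For example (the JDK path below is a placeholder; point it at wherever your Sun JDK 1.6 is installed):

# conf/hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.6.0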

conf/mapred-site.xml

 

 

<configuration>
  <property>
    <name>mapred.reduce.parallel.copies</name>
    <value>20</value>
  </property>
</configuration>

 

 

Hadoop Startup

 

To start a Hadoop cluster you will need to start both the HDFS and Map/Reduce cluster.

Format a new distributed filesystem:
$ bin/hadoop namenode -format

Start the HDFS with the following command, run on the designated NameNode:
$ bin/start-dfs.sh

The bin/start-dfs.sh script also consults the ${HADOOP_CONF_DIR}/slaves file on the NameNode and starts the DataNode daemon on all the listed slaves.

Start Map-Reduce with the following command, run on the designated JobTracker:
$ bin/start-mapred.sh

The bin/start-mapred.sh script also consults the ${HADOOP_CONF_DIR}/slaves file on the JobTracker and starts the TaskTracker daemon on all the listed slaves.
