From Hadoop 0.21 to Hadoop 1.0.3

This article describes the problems encountered when upgrading from Hadoop 0.21 to 1.0.3 and how to resolve them, focusing on the HDFS and MapReduce configuration changes required to get the services running again.


1. HDFS

In 0.21, hdfs-site.xml was configured as follows:

<property>
<name>dfs.data.dir</name>
<value>file:///home/hadoop/hdfs/data/</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>file:///home/hadoop/hdfs/name/</value>
</property>

This configuration works fine in 0.21, where Hadoop resolves the path from the file:// URI. Under 1.0.3, however, the same configuration prevents HDFS from starting; the datanode log shows:

2012-06-05 15:07:02,669 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /opt/hadoop-1.0.3/file:/home/hadoop/hdfs/data does not exist.
2012-06-05 15:07:02,670 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory file:/home/hadoop/hdfs/data does not exist.
2012-06-05 15:07:02,771 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: All specified directories are not accessible or do not exist.
	at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:139)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:385)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:299)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1582)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1521)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1539)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1665)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1682)

2012-06-05 15:07:02,771 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at ericzhang/10.2.18.170

Clearly the datanode path is wrong: the full path should be /home/hadoop/hdfs/data, not the mangled /opt/hadoop-1.0.3/file:/home/hadoop/hdfs/data shown in the first log line, where 1.0.3 treats the file:// URI as a relative path. Change the configuration to:

<configuration>
<property>
<name>dfs.data.dir</name>
<value>/home/hadoop/hdfs/data/</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>/home/hadoop/hdfs/name/</value>
</property>

</configuration>
With this change, HDFS starts correctly.
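As a quick pre-flight check, the storage directories named in hdfs-site.xml must exist and be accessible before the daemons start. A minimal sketch (the HDFS_ROOT default below is illustrative; the article's cluster uses /home/hadoop/hdfs):

```shell
#!/bin/sh
# Sketch only: create the HDFS storage directories before starting
# the daemons. HDFS_ROOT defaults to a local directory here for
# illustration; on the article's cluster it would be /home/hadoop/hdfs.
HDFS_ROOT=${HDFS_ROOT:-./hdfs-local}
mkdir -p "$HDFS_ROOT/data" "$HDFS_ROOT/name"

# Hadoop 1.0.3 treats dfs.data.dir / dfs.name.dir as plain filesystem
# paths, so keep the file:// scheme out of hdfs-site.xml.
for d in "$HDFS_ROOT/data" "$HDFS_ROOT/name"; do
  [ -d "$d" ] || { echo "missing: $d" >&2; exit 1; }
done
echo "storage directories ready"
```

Running this before `start-dfs.sh` avoids the "Storage directory ... does not exist" error above when the directories were never created.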


2. MapReduce configuration

Reusing the 0.21 configuration file under 1.0.3 likewise prevents the JobTracker from starting; it reports:

2012-06-05 15:28:32,898 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source QueueMetrics,q=default registered.
2012-06-05 15:28:33,304 FATAL org.apache.hadoop.mapred.JobTracker: java.lang.IllegalArgumentException: Does not contain a valid host:port authority: local
	at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:162)
	at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:128)
	at org.apache.hadoop.mapred.JobTracker.getAddress(JobTracker.java:2560)
	at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2200)
	at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2192)
	at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2186)
	at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:300)
	at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:291)
	at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4978)

The cause is a renamed property: 1.0.3 does not recognize the 0.21 name, falls back to the default value "local", and then fails to parse it as a host:port pair. The 0.21 name is:

<property>
<name>mapreduce.jobtracker.address</name>
<value>10.2.78.170:10001</value>
</property>
while the 1.0.3 configuration is:

<property>
<name>mapred.job.tracker</name>
<value>10.2.78.170:10001</value>
</property>
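A minimal mapred-site.xml for 1.0.3 can be generated and sanity-checked like this (a sketch; the file path and the grep check are illustrative, and the jobtracker address is the one used in this article):

```shell
#!/bin/sh
# Sketch: write a minimal mapred-site.xml for Hadoop 1.0.3 and verify
# the renamed property is present. Substitute your own host:port for
# the article's 10.2.78.170:10001.
cat > mapred-site.xml <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>10.2.78.170:10001</value>
  </property>
</configuration>
EOF

# If mapred.job.tracker is missing, 1.0.3 falls back to the default
# value "local", which fails the host:port parse in the log above.
grep -q '<name>mapred.job.tracker</name>' mapred-site.xml \
  && echo "mapred.job.tracker configured"
```

The same check against a 0.21-style file (which only sets mapreduce.jobtracker.address) would fail, which is a quick way to catch this during an upgrade.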


Keep these changes in mind when upgrading from 0.21 to 1.0.3. With them in place, 1.0.3 runs successfully.
