Simple MapReduce with Python (1)

This post walks through implementing a MapReduce job in Python and running it on a Hadoop cluster, covering the source code, HDFS directory creation, file upload, and job submission.


All of the steps below assume a Hadoop cluster is already deployed and healthy.
Python source
mapper.py

#!/usr/bin/env python

import sys

# input comes from STDIN (standard input)
for line in sys.stdin:
    line = line.strip()
    words = line.split()
    for word in words:
        # emit one tab-separated "word<TAB>1" pair per word
        print('%s\t%s' % (word, 1))
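To see what the mapper emits, you can feed it a line by hand. The snippet below is a small sketch that wraps the same logic in a function and simulates STDIN with io.StringIO (the sample text and the helper name run_mapper are invented here for illustration):

```python
import io

def run_mapper(stream):
    # Same logic as mapper.py: one "word<TAB>1" pair per word.
    out = []
    for line in stream:
        for word in line.strip().split():
            out.append('%s\t%s' % (word, 1))
    return out

print(run_mapper(io.StringIO('foo bar foo\n')))
# → ['foo\t1', 'bar\t1', 'foo\t1']
```

Note the mapper does no counting at all; duplicate pairs for the same word are expected and are summed later by the reducer.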


reducer.py
#!/usr/bin/env python

from operator import itemgetter
import sys

word2count = {}

# input comes from STDIN
for line in sys.stdin:
    line = line.strip()

    word, count = line.split('\t', 1)
    try:
        count = int(count)
        word2count[word] = word2count.get(word, 0) + count
    except ValueError:
        # count was not a number, so silently
        # ignore/discard this line
        pass

# sort by word, then print the totals
sorted_word2count = sorted(word2count.items(), key=itemgetter(0))

for word, count in sorted_word2count:
    print('%s\t%s' % (word, count))
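Because Hadoop Streaming sorts the mapper output by key before it reaches the reducer, the reducer does not actually need to buffer every word in a dict. A common memory-friendlier variant (a sketch, not from the original post; reduce_sorted is a hypothetical name) streams through the sorted input one key group at a time with itertools.groupby:

```python
from itertools import groupby

def reduce_sorted(lines):
    # Assumes lines are already sorted by key, as Hadoop Streaming
    # guarantees for reducer input; only one group is held in memory.
    result = []
    pairs = (line.strip().split('\t', 1) for line in lines)
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        result.append((word, sum(int(count) for _, count in group)))
    return result

print(reduce_sorted(['bar\t1\n', 'foo\t1\n', 'foo\t1\n']))
# → [('bar', 1), ('foo', 2)]
```

The dict-based reducer above works fine for small vocabularies; the groupby form matters when the set of distinct words is too large to hold in memory.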


Save both scripts under /home/src, then cd into that directory.
Create a test directory on HDFS:
hadoop fs -ls /user/hdfs
hadoop fs -mkdir /user/hdfs/test

Copy the test files from local disk to HDFS:
hadoop fs -copyFromLocal /home/src/*.txt /user/hdfs/test/

Run the MapReduce job with hadoop-streaming.jar:
hadoop jar /usr/lib/hadoop-mapreduce/hadoop-streaming.jar -mapper mapper.py -reducer reducer.py -file mapper.py -file reducer.py -input /user/hdfs/test/* -output /user/hdfs/test/reducer
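Conceptually, the streaming job is just mapper → sort (the shuffle Hadoop performs between phases) → reducer. A minimal local sketch of the same dataflow, with a made-up two-line input, looks like this:

```python
def word_count(lines):
    # map phase: emit "word<TAB>1" pairs, as mapper.py does
    mapped = ['%s\t1' % w for line in lines for w in line.strip().split()]
    # shuffle phase: Hadoop sorts mapper output by key before reducing
    mapped.sort()
    # reduce phase: sum the counts per word, as reducer.py does
    counts = {}
    for pair in mapped:
        word, count = pair.split('\t', 1)
        counts[word] = counts.get(word, 0) + int(count)
    return sorted(counts.items())

print(word_count(['hello world\n', 'hello hadoop\n']))
# → [('hadoop', 1), ('hello', 2), ('world', 1)]
```

The same dry run can be done on the command line with Unix pipes (cat input | mapper | sort | reducer), which is a handy way to debug streaming scripts before submitting them to the cluster.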
Job output:
......
14/11/26 12:54:52 INFO mapreduce.Job: map 0% reduce 0%
14/11/26 12:54:59 INFO mapreduce.Job: map 100% reduce 0%
14/11/26 12:55:04 INFO mapreduce.Job: map 100% reduce 100%
14/11/26 12:55:04 INFO mapreduce.Job: Job job_1415798121952_0179 completed successfully
......
14/11/26 12:55:04 INFO streaming.StreamJob: Output directory: /user/hdfs/test/reducer
......
List the result files:
hadoop fs -ls /user/hdfs/test
......
drwxr-xr-x - root Hadoop 0 2014-11-26 12:55 /user/hdfs/test/reducer
......