Big Data, MapReduce, Hadoop, and Spark with Python

This post presents a minimal MapReduce framework implemented in Python. By defining Mapper and Reducer classes, it walks an input file through the map, combine, and reduce steps, and shows how to run the whole pipeline.


The book is quite good and short, and it sets out to connect Python with the big-data stack.

I will read through it once first; I plan to translate the document afterwards.

Let's start with a toy simulation of MapReduce.

mapper.py

class Mapper:
    def map(self, data):
        # Count how many times each word appears across the input lines.
        returnval = []
        counts = {}
        for line in data:
            words = line.split()
            for w in words:
                counts[w] = counts.get(w, 0) + 1
        # Emit the counts as a flat list of (word, count) tuples.
        for w, c in counts.items():
            returnval.append((w, c))
        print("Mapper result:")
        print(returnval)
        return returnval
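
A quick way to see what the Mapper produces is to call it directly on a couple of in-memory lines. The snippet below is a hypothetical sketch; the sample strings and the expected-output comment are illustrative, not taken from the book.

from mapper import Mapper

# Hypothetical sanity check: two short lines instead of a real input file.
mapper = Mapper()
tuples = mapper.map(["hello world", "hello mapreduce"])
# Expected (modulo dict ordering): [("hello", 2), ("world", 1), ("mapreduce", 1)]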
    

 

reducer.py

class Reducer:
    def reduce(self, d):
        # For each word, sum its grouped counts and format "word<TAB>total".
        returnval = []
        for k, v in d.items():
            returnval.append("%s\t%s" % (k, sum(v)))
        print("Reducer result:")
        print(returnval)
        return returnval
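
Likewise, the Reducer can be exercised on its own with an already-grouped dictionary. Again a hypothetical sketch, with made-up counts:

from reducer import Reducer

# Hypothetical grouped input, shaped like what the combine step produces.
reducer = Reducer()
output = reducer.reduce({"hello": [1, 1], "world": [1]})
# Expected: ["hello\t2", "world\t1"]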

 

main.py

from mapper import Mapper
from reducer import Reducer

class JobRunner:
    def run(self, mapper_cls, reducer_cls, data):
        # map: turn the input lines into (word, count) tuples
        mapper = mapper_cls()
        tuples = mapper.map(data)

        # combine: group the values emitted for the same key
        combined = {}
        for k, v in tuples:
            if k not in combined:
                combined[k] = []
            combined[k].append(v)
        print("combined result:")
        print(combined)

        # reduce: collapse each key's grouped values into one output line
        reducer = reducer_cls()
        output = reducer.reduce(combined)

        # do something with output
        for line in output:
            print(line)

runner = JobRunner()
with open("input.txt") as data:
    runner.run(Mapper, Reducer, data)
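
To run the whole pipeline, put an input.txt next to main.py. With a hypothetical two-line file like the one below (not from the original post), `python main.py` prints the intermediate mapper and combiner results and ends with one tab-separated word count per line, in whatever order the dictionary yields them.

input.txt (hypothetical example)

hello world
hello mapreduce

Expected final output lines:

hello	2
world	1
mapreduce	1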

Reposted from: https://www.cnblogs.com/aguncn/p/6049752.html
