Word Frequency Counting with Spark

This article walks through installing Apache Spark in a Hadoop environment and using it from Python and Scala, covering installation and configuration, basic operations, and a simple data-processing workflow.

Installing Spark

  1. Download Spark 1.5.2 Pre-built for Hadoop 2.6 from http://spark.apache.org/downloads.html. Java and Scala must be installed beforehand.
  2. Place the Spark directory under /opt/spark-hadoop. Running ./spark-shell should open a Scala shell, and running ./python/pyspark should open a Python shell; if both start, the installation succeeded.
  3. Copy the pyspark package from the python directory into the Python installation directory /usr/local/lib/python2.7/dist-packages, so that the pyspark library can be imported in programs.
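As an alternative to copying pyspark into dist-packages in step 3, the environment can simply point at the Spark installation. A minimal sketch, assuming Spark lives in /opt/spark-hadoop as above (adjust the path to your own layout):

```shell
# Point the shell at the Spark installation from step 2
export SPARK_HOME=/opt/spark-hadoop
# Make the bundled pyspark package importable without copying it
export PYTHONPATH="$SPARK_HOME/python:$PYTHONPATH"
```

Adding these lines to ~/.bashrc makes the setting permanent for new shells.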

Testing

#!/usr/bin/python
# -*- coding:utf-8 -*-

from pyspark import SparkConf, SparkContext
import os

# Path of the Spark installation from the steps above
os.environ["SPARK_HOME"] = "/opt/spark-hadoop"

APP_NAME = "TopKeyword"

if __name__ == "__main__":

    logFile = "./README.md"
    # "local" runs Spark on a single machine, without a cluster
    sc = SparkContext("local", "Simple App")
    # cache() keeps the RDD in memory, since it is read twice below
    logData = sc.textFile(logFile).cache()

    # Count the lines containing the letters 'a' and 'b', respectively
    numAs = logData.filter(lambda s: 'a' in s).count()
    numBs = logData.filter(lambda s: 'b' in s).count()

    print("Lines with a: %i, lines with b: %i" % (numAs, numBs))
    sc.stop()

Output

Lines with a: 3, lines with b: 2
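Without a Spark installation, the two filter-and-count operations above can be checked with plain Python. A small sketch over an in-memory list of lines (made-up data standing in for the RDD read from README.md):

```python
# Stand-in for the RDD of lines read from README.md (made-up data)
lines = [
    "Apache Spark is a fast engine",
    "built on top of Hadoop",
    "word count example",
]

# Equivalent of logData.filter(lambda s: 'a' in s).count()
num_as = sum(1 for s in lines if 'a' in s)
num_bs = sum(1 for s in lines if 'b' in s)

print("Lines with a: %i, lines with b: %i" % (num_as, num_bs))
```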

Word Frequency Count

#!/usr/bin/python
# -*- coding:utf-8 -*-

from pyspark import SparkConf, SparkContext
import jieba  # Chinese word segmentation, used by divide_word()
import os
import sys

reload(sys)
sys.setdefaultencoding("utf-8")

os.environ["SPARK_HOME"] = "/opt/spark-hadoop"


def divide_word():
    """Segment each question title with jieba and append the
    space-separated words to question_word.txt."""
    word_txt = open('question_word.txt', 'a')

    with open('question_title.txt', 'r') as question_txt:
        question = question_txt.readline()
        while question:
            seg_list = jieba.cut(question, cut_all=False)
            line = " ".join(seg_list)
            word_txt.write(line)
            question = question_txt.readline()
    word_txt.close()


def word_count():
    sc = SparkContext("local", "WordCount")
    text_file = sc.textFile("./question_word.txt").cache()
    # Classic MapReduce word count: split each line into words,
    # map every word to (word, 1), then sum the counts per word
    counts = text_file.flatMap(lambda line: line.split(" ")) \
             .map(lambda word: (word, 1)) \
             .reduceByKey(lambda a, b: a + b)
    # Note: saveAsTextFile creates a directory of part files
    counts.saveAsTextFile("./wordcount_result.txt")
    sc.stop()

if __name__ == "__main__":
    # Run divide_word() first if question_word.txt has not been generated yet
    word_count()
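The flatMap / map / reduceByKey pipeline above can be mimicked in plain Python to see exactly what it computes. A minimal sketch with made-up input lines (no Spark required):

```python
from collections import defaultdict

# Hypothetical input, standing in for the lines of question_word.txt
lines = ["spark word count", "word count example"]

# flatMap: split every line into one flat list of words
words = [w for line in lines for w in line.split(" ")]

# map + reduceByKey: emit (word, 1) pairs and sum the 1s per word
counts = defaultdict(int)
for w in words:
    counts[w] += 1

print(dict(counts))
```

Each dictionary entry corresponds to one (word, count) pair that Spark would write into the wordcount_result.txt output directory.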

References

http://spark.apache.org/examples.html

http://dongxicheng.org/framework-on-yarn/spark-scala-writing-application/

Reposted from: https://my.oschina.net/lvyi/blog/543590
