Reading and Writing SequenceFiles with PySpark

The complete code is as follows:

# -*- coding: utf-8 -*-
# @Author: appleyuchi
# @Date:   2018-07-19 14:59:02
# @Last Modified by:   appleyuchi
# @Last Modified time: 2018-07-20 14:59:51
import subprocess

from pyspark import SparkConf, SparkContext

conf = SparkConf().setMaster("local").setAppName("My App")
sc = SparkContext(conf=conf)

def g(x):
    print(x)


print("-----------------Example 5-20: the code in the book is wrong; it mistakenly uses Scala-----------------")
print("-----------------First, serialize: write the SequenceFile-----------------")
rdd = sc.parallelize(["2,Fitness", "3,Footwear", "4,Apparel"])
# saveAsSequenceFile fails if the output directory already exists, so remove it first
subprocess.call(["rm", "-r", "testSeq"], shell=False)
rdd.map(lambda x: tuple(x.split(",", 1))).saveAsSequenceFile("testSeq")
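The map step above splits each record on the first comma only (`maxsplit=1`), so a value that itself contains commas stays intact. A standalone sketch of that behavior, without Spark (the third record is a hypothetical example added here):

```python
# str.split(",", 1) splits at most once, yielding a (key, value) pair
records = ["2,Fitness", "3,Footwear", "4,Apparel,Extra"]  # last record is hypothetical
pairs = [tuple(x.split(",", 1)) for x in records]
print(pairs[0])  # ('2', 'Fitness')
print(pairs[2])  # the comma inside the value is preserved: ('4', 'Apparel,Extra')
```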
subprocess.call(["rm", "-r", "testSeqNone"], shell=False)
rdd.map(lambda x: (None, x)).saveAsSequenceFile("testSeqNone")  # a None key keeps the whole string as the value
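Shelling out to `rm` works on Linux, but a portable alternative for clearing the output directories is `shutil.rmtree` with `ignore_errors=True`, which is a no-op when the directory is absent (a sketch, not from the original post):

```python
import shutil

# remove the output directories if they exist; do nothing otherwise
shutil.rmtree("testSeq", ignore_errors=True)
shutil.rmtree("testSeqNone", ignore_errors=True)
```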

print("-----------------Then deserialize: read the SequenceFile back-----------------")
Text = "org.apache.hadoop.io.Text"
print(sc.sequenceFile("./testSeq/part-00000", Text, Text).values().first())
print("------------------------------------")
result = sc.sequenceFile("./testSeqNone/part-00000", Text, Text).values()
print(type(result))
result.foreach(g)  # foreach returns None; g prints each value on the worker
print(sc.sequenceFile("./testSeqNone/part-00000", Text, Text).values().first())
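Because the second write used `None` as the key, `.values()` recovers each original string unchanged. A driver-side sketch of that round-trip shape, without Spark (the pair data mirrors the RDD above):

```python
# what the (None, x) pairs look like after reading the SequenceFile back
pairs = [(None, "2,Fitness"), (None, "3,Footwear"), (None, "4,Apparel")]
values = [v for _, v in pairs]  # the driver-side analogue of rdd.values().collect()
print(values[0])  # 2,Fitness
```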

 
