Introductory RDD Programming Practice

I. Experiment Objectives

(1) Become familiar with Spark's basic RDD operations and key-value pair operations;

(2) Become familiar with using RDD programming to solve concrete, practical problems.

II. Experiment Content and Requirements

1. Interactive programming in spark-shell

The dataset contains grades from a university's computer science department, formatted as follows:

Tom,DataBase,80

Tom,Algorithm,50

Tom,DataStructure,60

Jim,DataBase,90

Jim,Algorithm,60

Jim,DataStructure,80

……

Based on the given experiment data, write code in spark-shell to compute the following (adjust the file paths to your environment):

(1) How many students are in the department?
val lines = sc.textFile("file:///usr/local/spark/sparksqldata/data1.txt")

val par = lines.map(row=>row.split(",")(0))    

val distinct_par = par.distinct()  //deduplicate student names

distinct_par.count  //count the distinct students

(2) How many courses does the department offer?
val lines = sc.textFile("file:///usr/local/spark/sparksqldata/data1.txt")

val par = lines.map(row=>row.split(",")(1))

val distinct_par = par.distinct()

distinct_par.count

(3) What is Tom's average grade across all his courses?
val lines = sc.textFile("file:///usr/local/spark/sparksqldata/data1.txt")

val pare = lines.filter(row=>row.split(",")(0)=="Tom")

pare.foreach(println)

Tom,DataBase,26

Tom,Algorithm,12

Tom,OperatingSystem,16

Tom,Python,40

Tom,Software,60

pare.map(row=>(row.split(",")(0),row.split(",")(2).toInt)).mapValues(x=>(x,1)).reduceByKey((x,y) => (x._1+y._1,x._2 + y._2)).mapValues(x => (x._1 / x._2)).collect()

//res9: Array[(String, Int)] = Array((Tom,30))
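
Note that x._1 / x._2 in the pipeline above is integer division, so the average is truncated (30 here). A minimal sketch of the same computation that keeps the average as a Double, assuming the pare RDD defined above:

pare.map(row=>(row.split(",")(0),row.split(",")(2).toInt)).mapValues(x=>(x.toDouble,1)).reduceByKey((x,y) => (x._1+y._1,x._2+y._2)).mapValues(x => x._1/x._2).collect()

//expected to give Array((Tom,30.8)) for the records printed above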

(4) How many courses has each student taken?
val lines = sc.textFile("file:///usr/local/spark/sparksqldata/data1.txt")

 val pare = lines.map(row=>(row.split(",")(0),row.split(",")(1)))

pare.mapValues(x => (x,1)).reduceByKey((x,y) => (" ",x._2 + y._2)).mapValues(x => x._2).foreach(println)
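
The reduceByKey above keeps a placeholder string in the first tuple slot and only accumulates the count in the second. An equivalent, slightly simpler sketch (assuming the same lines RDD): map every record to (student, 1) and sum the ones.

lines.map(row=>(row.split(",")(0),1)).reduceByKey(_+_).foreach(println)  //(student, number of courses taken)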

(5) How many students have taken the DataBase course?
val lines = sc.textFile("file:///usr/local/spark/sparksqldata/data1.txt")

val pare = lines.filter(row=>row.split(",")(1)=="DataBase")

pare.count

res1: Long = 126

(6) What is the average grade of each course?
val lines = sc.textFile("file:///usr/local/spark/sparksqldata/data1.txt")

val pare = lines.map(row=>(row.split(",")(1),row.split(",")(2).toInt))

pare.mapValues(x=>(x,1)).reduceByKey((x,y) => (x._1+y._1,x._2 + y._2)).mapValues(x => (x._1 / x._2)).collect()

res0: Array[(String, Int)] = Array((Python,57), (OperatingSystem,54), (CLanguage,50), (Software,50), (Algorithm,48), (DataStructure,47), (DataBase,50), (ComputerNetwork,51))
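
As in question (3), the division here is on Int, so each average is truncated. A sketch of the same per-course average using combineByKey with a Double sum instead (assuming the same lines RDD):

lines.map(row=>(row.split(",")(1),row.split(",")(2).toInt)).combineByKey(
  (v: Int) => (v.toDouble, 1),                                        //create the (sum, count) pair
  (acc: (Double, Int), v: Int) => (acc._1 + v, acc._2 + 1),           //fold one score into (sum, count)
  (a: (Double, Int), b: (Double, Int)) => (a._1 + b._1, a._2 + b._2)  //merge partial (sum, count) pairs
).mapValues(x => x._1 / x._2).collect()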

(7) Use an accumulator to count how many students have taken the DataBase course.

val lines = sc.textFile("file:///usr/local/spark/sparksqldata/data1.txt")

val pare = lines.filter(row=>row.split(",")(1)=="DataBase").map(row=>(row.split(",")(1),1))

val accum = sc.longAccumulator("My Accumulator")

pare.values.foreach(x => accum.add(x))

accum.value

res19: Long = 126
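
The accumulator is only updated when an action (foreach here) actually runs. A minimal variant of the same idea that adds to the accumulator while scanning lines directly, without building the intermediate pair RDD (assuming the same lines RDD):

val dbAccum = sc.longAccumulator("DataBase enrollments")
lines.foreach(row => if (row.split(",")(1)=="DataBase") dbAccum.add(1))
dbAccum.value  //should match the filter-and-count result above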

2. Writing a standalone application for data deduplication

Given two input files A and B, write a standalone Spark application that merges the two files, removes the duplicate entries, and produces a new file C. A sample of the input files and the output file is provided for reference.

Reference steps for the experiment:

(1) Assuming the current directory is /home/hadoop/spark/mycode/remdup, create a new directory with mkdir -p src/main/scala, then create a file remdup.scala under /home/hadoop/spark/mycode/remdup/src/main/scala and copy the following code into it:
import org.apache.spark.SparkContext

import org.apache.spark.SparkContext._

import org.apache.spark.SparkConf

import org.apache.spark.HashPartitioner



object RemDup {

    def main(args: Array[String]) {

        val conf = new SparkConf().setAppName("RemDup")

        val sc = new SparkContext(conf)

        val dataFile = "file:///home/charles/data"  // directory containing the input files; adjust to your environment

        val data = sc.textFile(dataFile,2)

        // keep non-empty lines, pair each line with a dummy value, move everything into one
        // partition, deduplicate via groupByKey, sort, and keep only the keys (the lines)
        val res = data.filter(_.trim().length>0).map(line=>(line.trim,"")).partitionBy(new HashPartitioner(1)).groupByKey().sortByKey().keys

        res.saveAsTextFile("result")

    }

}
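
The core of RemDup could also be written with union and distinct, which expresses the merge-then-deduplicate requirement more directly. A minimal sketch, assuming an existing SparkContext sc and hypothetical file names A.txt and B.txt for the two input files (adjust the paths to your environment):

val a = sc.textFile("file:///home/charles/data/A.txt")  //hypothetical path for input file A
val b = sc.textFile("file:///home/charles/data/B.txt")  //hypothetical path for input file B
a.union(b).map(_.trim).filter(_.nonEmpty).distinct().sortBy(line => line, true, 1).saveAsTextFile("result-distinct")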

(2) In the directory /home/hadoop/spark/mycode/remdup, create a file simple.sbt and copy the following code into it (make sure the version numbers match the packages you have installed):
name := "Simple Project"
version := "1.0"
scalaVersion := "2.12.15"
libraryDependencies += "org.apache.spark" %% "spark-core" % "3.2.0"

(3) In the directory /home/hadoop/spark/mycode/remdup, run the following command to package the program:
$ sudo /home/hadoop/sbt/sbt package

(4) Finally, in the directory /home/hadoop/spark/mycode/remdup, run the following command to submit the program:
$ /home/hadoop/spark/bin/spark-submit --class "RemDup" /home/hadoop/spark/mycode/remdup/target/scala-2.12/simple-project_2.12-1.0.jar

(5) The result file can then be found in the directory /home/hadoop/spark/mycode/remdup/result.

3. Writing a standalone application to compute average grades

Each input file contains the grades of the class in one subject; each line has two fields, the student's name and that student's grade. Write a standalone Spark application that computes every student's average grade across all subjects and writes the results to a new file. A sample of the input files and the output file is provided for reference.

Reference steps for the experiment:

(1) Assuming the current directory is /home/hadoop/spark/mycode/avgscore, create a new directory with mkdir -p src/main/scala, then create a file avgscore.scala under /home/hadoop/spark/mycode/avgscore/src/main/scala and copy the following code into it:
import org.apache.spark.SparkContext

import org.apache.spark.SparkContext._

import org.apache.spark.SparkConf

import org.apache.spark.HashPartitioner



object AvgScore {

    def main(args: Array[String]) {

        val conf = new SparkConf().setAppName("AvgScore")

        val sc = new SparkContext(conf)

        val dataFile = "file:///home/charles/data"  // directory containing the input grade files; adjust to your environment

        val data = sc.textFile(dataFile,3)



       // keep non-empty lines, split each into (name, score), gather every student's scores
       // with groupByKey, then compute the per-student average in the map below
       val res = data.filter(_.trim().length>0).map(line=>(line.split(" ")(0).trim(),line.split(" ")(1).trim().toInt)).partitionBy(new HashPartitioner(1)).groupByKey().map(x => {

       var n = 0

       var sum = 0.0

       for(i <- x._2){

        sum = sum + i

        n = n +1

       }

       val avg = sum/n

       val format = f"$avg%1.2f".toDouble

       (x._1,format)

     })

       res.saveAsTextFile("result")

    }

}
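
The res computation inside main could equivalently be written with aggregateByKey, which accumulates a (sum, count) pair per student without materializing each student's full list of scores the way groupByKey does. A minimal sketch under the same assumptions as the code above (a drop-in replacement for the val res = ... line, still followed by res.saveAsTextFile("result")):

val res = data.filter(_.trim().length>0).map(line=>(line.split(" ")(0).trim(),line.split(" ")(1).trim().toInt)).aggregateByKey((0.0, 0))(
  (acc, v) => (acc._1 + v, acc._2 + 1),   //fold one score into the (sum, count) pair
  (a, b) => (a._1 + b._1, a._2 + b._2)    //merge partial (sum, count) pairs
).mapValues(x => f"${x._1 / x._2}%1.2f".toDouble)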

(2) In the directory /home/hadoop/spark/mycode/avgscore, create a file simple.sbt and copy the following code into it:
name := "Simple Project"
version := "1.0"
scalaVersion := "2.12.15"
libraryDependencies += "org.apache.spark" %% "spark-core" % "3.2.0"

(3) In the directory /home/hadoop/spark/mycode/avgscore, run the following command to package the program:
$ sudo /home/hadoop/sbt/sbt package

(4) Finally, in the directory /home/hadoop/spark/mycode/avgscore, run the following command to submit the program:
$ /home/hadoop/spark/bin/spark-submit --class "AvgScore" /home/hadoop/spark/mycode/avgscore/target/scala-2.12/simple-project_2.12-1.0.jar

(5) The result file can then be found in the directory /home/hadoop/spark/mycode/avgscore/result.
