Log data: the first field is the name, the second field the gender, the third field the online time (in minutes)
ZhangSan,male,20
Lisi,female,80
WangWu,female,60
WangMing,female,70
ZhangHua,male,50
LiHong,female,60
YinHui,male,50
LiMing,female,90
ZhuLucy,female,60
Spark Core (Scala) example
Goal: collect the information of female netizens whose total online time exceeds 2 hours, and print it
// Imports needed by the snippet (Spark 1.x RDD API)
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.rdd.RDD
// Configure the Spark application name
val conf = new SparkConf().setAppName("FemaleInfoCollection")
// Initialize the SparkContext
val sc = new SparkContext(conf)
// Build an RDD from the input path passed in as args(0)
val textRDD = sc.textFile(args(0))
// Keep only the records that contain "female"
val textFemaleRDD = textRDD.filter(_.contains("female"))
// Sum the online time per female netizen
val femaleOnlineRDD: RDD[(String, Int)] = textFemaleRDD.map { line =>
  val t = line.split(",")
  (t(0), t(2).toInt)
}.reduceByKey(_ + _)
// Keep the netizens whose total time exceeds 2 hours (120 minutes) and print them
val result = femaleOnlineRDD.filter(line => line._2 > 120)
result.foreach(println)
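The statements above reference args(0), so in a complete program they would live inside a main method; args receives the command-line arguments listed after the application JAR in spark-submit, which makes args(0) the input path. A minimal skeleton (the object name is illustrative, not from the original):

object FemaleInfoCollection {
  def main(args: Array[String]): Unit = {
    // ... the snippet above goes here, reading the input from args(0) ...
  }
}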
Spark SQL example
Goal: filter out the female netizens whose online time exceeds 2 hours, and print them
// Define the table schema, used later to map the text data to a DataFrame.
// Q: what does `case class` mean? A: see the note right after the definition.
case class FemaleInfo(Name: String, Gender: String, StayTime: Int)
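On the case-class question: a case class is a plain data class for which the Scala compiler generates a factory apply method (no `new` needed), field accessors, equals/hashCode/toString, and pattern-matching support; Spark SQL reflects on its constructor fields (Name, Gender, StayTime) to derive the DataFrame column schema. A standalone sketch of what the compiler provides:

val row = FemaleInfo("LiHong", "female", 60)  // apply method: no `new` required
row match {
  case FemaleInfo(name, _, time) => println(s"$name was online for $time minutes")
}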
// Define the entry point.
// Q: what is the parameter args: Array[String] and how is it used?
// A: it holds the command-line arguments passed to the program, as in the
// skeleton shown after the first example; args(0) below is the input path.
def main(args: Array[String]): Unit = {
// Configure the Spark application name
val sparkConf = new SparkConf().setAppName("Female info")
// Initialize the SparkContext
val sc = new SparkContext(sparkConf)
// Initialize the SQLContext
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
// Bring the implicit conversions into scope.
// Q: does an import only bring in names, or does it execute something?
// A: an import executes no code; it only makes the implicit conversions defined in
// sqlContext.implicits (e.g. the one that adds .toDF to RDDs of case classes) visible here.
import sqlContext.implicits._
// Build an RDD from the input path args(0)
val textRDD = sc.textFile(args(0))
// Split every line of the RDD on ","
val mapRDD = textRDD.map(_.split(","))
// Convert the RDD to a DataFrame through the implicit conversion, then register a temp table
val dataFrame = mapRDD.map(p => FemaleInfo(p(0), p(1), p(2).trim.toInt)).toDF()
dataFrame.registerTempTable("FemaleInfoTable")
// Use SQL to select the female online-time records, summing the time per name
val femaleTimeInfo = sqlContext.sql("select Name, sum(StayTime) as StayTime from FemaleInfoTable where Gender = 'female' group by Name")
// Keep the female netizens whose total time exceeds 2 hours (120 minutes) and print them
femaleTimeInfo.filter("StayTime > 120").collect().foreach(println)
sc.stop()
}
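For comparison, the same query can be written with the DataFrame API instead of an SQL string. A sketch reusing the dataFrame value registered above (column names follow the FemaleInfo fields; this alternative is illustrative, not part of the original sample):

import org.apache.spark.sql.functions.sum

// Same aggregation as the SQL: female records only, total time per name
val femaleTimeInfoDF = dataFrame
  .filter(dataFrame("Gender") === "female")
  .groupBy("Name")
  .agg(sum("StayTime").as("StayTime"))
// Same threshold as above: more than 120 minutes
femaleTimeInfoDF.filter(femaleTimeInfoDF("StayTime") > 120).collect().foreach(println)

The DataFrame version trades the familiarity of SQL for compile-time checking of method names and easier composition in Scala code; both run through the same Catalyst optimizer, so performance is equivalent.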