Creating an empty DataFrame
var ret = spark.emptyDataFrame
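As a quick sanity check, here is a minimal self-contained sketch (the local-mode session and the object name are illustrative assumptions, not from the article): `emptyDataFrame` yields a DataFrame with zero columns and zero rows.

```scala
import org.apache.spark.sql.SparkSession

object EmptyDataFrameDemo {
  def main(args: Array[String]): Unit = {
    // Local session purely for illustration; master/appName are made up here.
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("empty-df-demo")
      .getOrCreate()

    val ret = spark.emptyDataFrame
    assert(ret.schema.isEmpty)   // StructType is a Seq[StructField]; no columns
    assert(ret.count() == 0L)    // and no rows
    spark.stop()
  }
}
```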
1. Implicit conversion with toDF
1) Using a case class
Import the implicit conversions (this is the Spark 1.x SQLContext style; on Spark 2.x use spark.implicits._):
import sqlContext.implicits._
Define the case class at top level (outside the enclosing method), then create an RDD of it:
case class Person(name: String, age: Int)
val rdd: RDD[Person] = sc.parallelize(Array(
Person("fanghailiang", 29),
Person("sunyu", 28),
Person("jiaolu", 26),
Person("dingzelin", 31)
))
Convert it to a DataFrame:
val df: DataFrame = rdd.toDF()
2) Converting an RDD of two-element tuples directly to a DataFrame (column names are passed to toDF):
val rdd2: RDD[(String, Int)] = sc.parallelize(Array(
("fanghailiang", 29),
("sunyu", 28),
("jiaolu", 26),
("dingzelin", 31)
))
val df2: DataFrame = rdd2.toDF("name2", "age3")
2. Via RowRDD + schema
1) Build the RowRDD:
import org.apache.spark.sql.Row
val rowRdd: RDD[Row] = sc.parallelize(Array(
("fanghailiang", 29),
("sunyu", 28),
("jiaolu", 26),
("dingzelin", 31)
)).map { case (name, age) => Row(name, age) }
2) Build the schema:
import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}
val schema: StructType = StructType(Array(
StructField("name", StringType, nullable = false),
StructField("age", IntegerType, nullable = false)
))
3) Create the DataFrame:
val df3: DataFrame = sqlContext.createDataFrame(rowRdd, schema)
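The three steps above can be assembled into one runnable sketch (a local-mode SparkSession and the object name are assumptions for illustration; on Spark 2.x, createDataFrame lives on SparkSession rather than SQLContext):

```scala
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{DataFrame, Row, SparkSession}
import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

object RowRddSchemaDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("row-rdd-schema-demo")
      .getOrCreate()
    val sc = spark.sparkContext

    // 1) Build the RDD[Row]
    val rowRdd: RDD[Row] = sc.parallelize(Seq(
      ("fanghailiang", 29), ("sunyu", 28), ("jiaolu", 26), ("dingzelin", 31)
    )).map { case (name, age) => Row(name, age) }

    // 2) Declare the schema explicitly
    val schema = StructType(Array(
      StructField("name", StringType, nullable = false),
      StructField("age", IntegerType, nullable = false)
    ))

    // 3) Combine RDD[Row] and schema into a DataFrame
    val df3: DataFrame = spark.createDataFrame(rowRdd, schema)
    df3.printSchema()
    df3.show()
    assert(df3.count() == 4L)
    spark.stop()
  }
}
```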
This article showed how to create a DataFrame in Spark: via implicit conversion (calling toDF on an RDD of case-class instances or of tuples) and by combining an RDD[Row] with an explicit schema, focusing on the practical use of the two conversion approaches and the importance of the schema.