An RDD does not store data itself. It is translated into Tasks on the Driver and dispatched to Executors, which process the data spread across the machines of the cluster.
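In other words, an RDD is lazy: transformations only record lineage on the Driver, and Tasks are only generated and shipped to Executors when an action runs. A minimal sketch (the file path here is hypothetical):

val lines = sc.textFile("file:///tmp/data.txt")   // returns immediately; nothing is read yet
val lengths = lines.map(_.length)                 // still lazy: only the lineage is recorded
lengths.count                                     // action: Tasks are generated and sent to Executors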
Three ways to create an RDD
1. Read a file: sc.textFile("hdfs://node:9000/wc")
2. Parallelize a Scala collection on the Driver: sc.parallelize(arr)
3. Apply a Transformation to an existing RDD, which produces a new RDD
Reading files
1. Read a local file:
val inputFile = sc.textFile("file:///home/cla/spark.txt")
2. Read an HDFS file:
val inputFile = sc.textFile("hdfs://localhost:9002/input/")
Saving files
saveAsTextFile()
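For example, a sketch that writes an RDD back out as text. The output path is hypothetical; saveAsTextFile creates a directory of part files (part-00000, part-00001, ...) and fails if the path already exists:

val nums = sc.parallelize(Array(1, 2, 3, 4, 5))
nums.saveAsTextFile("file:///tmp/rdd-output")   // one part file per partition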
Converting a Scala collection to an RDD
parallelize
scala> val arr = Array(1,2,3,4,5)
arr: Array[Int] = Array(1, 2, 3, 4, 5)
scala> val rdd = sc.parallelize(arr)
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[2] at parallelize at <console>:26
scala> val rdd2 = rdd.map(_ * 10)
rdd2: org.apache.spark.rdd.RDD[Int] = MapPartitionsRDD[3] at map at <console>:25
scala> rdd2.collect
res1: Array[Int] = Array(10, 20, 30, 40, 50)
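parallelize also takes an optional numSlices argument that controls how many partitions the resulting RDD gets; a sketch continuing the same session (RDD ids and console numbers will vary):

scala> val rdd3 = sc.parallelize(arr, 4)
rdd3: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[4] at parallelize at <console>:26

scala> rdd3.partitions.length
res2: Int = 4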
makeRDD
sc.makeRDD(List(1,2,3,4,5))
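makeRDD behaves the same as parallelize; its simple overload just delegates to it (there is also an overload that takes preferred locations for each element). A quick sketch (output is illustrative):

scala> val rdd4 = sc.makeRDD(List(1,2,3,4,5))
rdd4: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[5] at makeRDD at <console>:24

scala> rdd4.collect
res3: Array[Int] = Array(1, 2, 3, 4, 5)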
Reference:
https://blog.youkuaiyun.com/helloxiaozhe/article/details/78480108