When loading an RDD from source data, there are two choices that look similar.
sc.textFile
SparkContext's textFile method, i.e., sc.textFile in Spark Shell, creates an RDD with each line as an element. If there are 10 files in the movies folder, 10 partitions will be created. You can verify the number of partitions by inspecting the RDD's partitions.
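A minimal Spark Shell sketch of this check; the movies folder path is an assumption based on the HDFS example used later in this article:

```scala
// In spark-shell, sc is the pre-created SparkContext.
// Load every file in the folder; each line becomes one RDD element.
val moviesRdd = sc.textFile("hdfs://m1.zettabytes.com:9000/user/hduser/movies")

// One partition per file here, so this prints 10 for 10 files.
println(moviesRdd.partitions.size)
```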
sc.wholeTextFiles
SparkContext's wholeTextFiles method, i.e., sc.wholeTextFiles in Spark Shell, creates a PairRDD whose key is the file name with its path. It's a full path like "hdfs://m1.zettabytes.com:9000/user/hduser/movies/movie1.txt". The value is the whole content of the file as a String. Here the number of partitions will be 1 or more, depending on how many executor cores you have.
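The contrast with sc.textFile can be sketched in the same Spark Shell session; again the folder path is assumed from the article's example:

```scala
// Each element is (fullFilePath, entireFileContents) -- one pair per file.
val moviesPairRdd = sc.wholeTextFiles("hdfs://m1.zettabytes.com:9000/user/hduser/movies")

// Keys are full paths, e.g. hdfs://m1.zettabytes.com:9000/user/hduser/movies/movie1.txt
moviesPairRdd.keys.collect().foreach(println)

// Unlike textFile, this is typically far fewer than one partition per file.
println(moviesPairRdd.partitions.size)
```

Because each value holds an entire file in memory, wholeTextFiles suits many small files rather than a few large ones.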
This article covered two different ways to load data as an RDD in Spark: sc.textFile and sc.wholeTextFiles. The former makes each line of a file an element of the RDD, while the latter makes each file a key-value pair, with the file path as the key and the entire file content as the value.