While reading, I noticed that what a book says is not necessarily correct.
For example:
scala> val line = sc.textFile("hdfs://Spark:9000/user/root/README.md")
15/03/19 20:03:04 INFO MemoryStore: ensureFreeSpace(202004) called with curMem=744765, maxMem=280248975
15/03/19 20:03:04 INFO MemoryStore: Block broadcast_4 stored as values in memory (estimated size 197.3 KB, free 266.4 MB)
15/03/19 20:03:04 INFO MemoryStore: ensureFreeSpace(16322) called with curMem=946769, maxMem=280248975
15/03/19 20:03:04 INFO MemoryStore: Block broadcast_4_piece0 stored as bytes in memory (estimated size 15.9 KB, free 266.3 MB)
15/03/19 20:03:04 INFO BlockManagerInfo: Added broadcast_4_piece0 in memory on a261.datanode.hadoop.qingdao.youku:46352 (size: 15.9 KB, free: 267.2 MB)
15/03/19 20:03:04 INFO BlockManagerMaster: Updated info of block broadcast_4_piece0
line: org.apache.spark.rdd.RDD[String] = hdfs://Spark:9000/user/root/README.md MappedRDD[11] at textFile at <console>:16
scala> val linenum = line.filter(x => x.contains("spark"))
linenum: org.apache.spark.rdd.RDD[String] = FilteredRDD[12] at filter at <console>:18

Finally, what does executing linenum.count return?
What it actually computes is the number of lines in which the string "spark" appears. A single line may contain "spark" multiple times, so this must not be mistaken for the total number of times "spark" occurs in the whole file.
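The difference can be sketched locally with plain Scala collections standing in for the RDD (no Spark cluster needed; the sample lines are made up for illustration), since `Seq` offers the same `filter`, `count`, and `flatMap` operations used above:

```scala
// Hypothetical sample data: plain Scala Seq stands in for the RDD.
val lines = Seq(
  "spark is fast, spark is general",  // contains "spark" twice
  "hadoop mapreduce",
  "learning spark"
)

// What line.filter(x => x.contains("spark")).count computes:
// the number of LINES containing "spark".
val lineCount = lines.count(_.contains("spark"))  // 2

// To count total OCCURRENCES, split each line into words first,
// then count the matching words.
val wordCount = lines.flatMap(_.split("\\W+")).count(_ == "spark")  // 3
```

In Spark the second computation looks the same, just on the RDD: `line.flatMap(_.split("\\W+")).filter(_ == "spark").count`.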