Scala: Word Count Program and Parallel Computation

This post shows how to implement a word count program in Scala, walking through each step: splitting lines into words, mapping each word to a (word, 1) pair, grouping by key, and counting. It also covers sorting the result by converting the Map to a List and sorting by value, looks at parallel collections and how fold and reduce behave on them, and briefly touches on file I/O: writing a file, reading from the console, and reading a file.


Word Count

Steps:
1. Start from the list of lines.
2. Flatten the lines into a list of individual words.
3. Map each word to a (word, 1) pair.
4. Group the pairs by key (the word).
5. Count the number of pairs in each group.
6. Print the result.
Supplement:
Sorting:
1. Convert the Map to a List.
2. Sort the List by each element's second component (the count).
3. Print the result.

scala> val lines = List("hadoop hdfs mr hive","hdfs hive hbase storm kafka","hive hbase storm kafka spark")
lines: List[String] = List(hadoop hdfs mr hive, hdfs hive hbase storm kafka, hive hbase storm kafka spark)

scala> lines.flatMap(_.split(" "))
res28: List[String] = List(hadoop, hdfs, mr, hive, hdfs, hive, hbase, storm, kafka, hive, hbase, storm, kafka, spark)

scala> lines.flatMap(_.split(" ")).map(x => (x,1))
res29: List[(String, Int)] = List((hadoop,1), (hdfs,1), (mr,1), (hive,1), (hdfs,1), (hive,1), (hbase,1), (storm,1), (kafka,1), (hive,1), (hbase,1), (storm,1), (kafka,1), (spark,1))

scala> lines.flatMap(_.split(" ")).map(x => (x,1)).groupBy(x => x._1)
res30: scala.collection.immutable.Map[String,List[(String, Int)]] = Map(storm -> List((storm,1), (storm,1)), kafka -> List((kafka,1), (kafka,1)), hadoop -> List((hadoop,1)), spark -> List((spark,1)), hive -> List((hive,1), (hive,1), (hive,1)), mr -> List((mr,1)), hbase -> List((hbase,1), (hbase,1)), hdfs -> List((hdfs,1), (hdfs,1)))

scala> lines.flatMap(_.split(" ")).map(x => (x,1)).groupBy(x => x._1).map(x =>(x._1,x._2.size))
res31: scala.collection.immutable.Map[String,Int] = Map(storm -> 2, kafka -> 2, hadoop -> 1, spark -> 1, hive -> 3, mr -> 1, hbase -> 2, hdfs -> 2)

scala> lines.flatMap(_.split(" ")).map(x => (x,1)).groupBy(x => x._1).map(x =>(x._1,x._2.size)).foreach(println)
(storm,2)
(kafka,2)
(hadoop,1)
(spark,1)
(hive,3)
(mr,1)
(hbase,2)
(hdfs,2)

scala> lines.flatMap(_.split(" ")).map(x => (x,1)).groupBy(x => x._1).map(x =>(x._1,x._2.size)).toList
res34: List[(String, Int)] = List((storm,2), (kafka,2), (hadoop,1), (spark,1), (hive,3), (mr,1), (hbase,2), (hdfs,2))

scala> lines.flatMap(_.split(" ")).map(x => (x,1)).groupBy(x => x._1).map(x =>(x._1,x._2.size)).toList.sortBy(_._2)
res35: List[(String, Int)] = List((hadoop,1), (spark,1), (mr,1), (storm,2), (kafka,2), (hbase,2), (hdfs,2), (hive,3))

scala> lines.flatMap(_.split(" ")).map(x => (x,1)).groupBy(x => x._1).map(x =>(x._1,x._2.size)).toList.sortBy(_._2).foreach(println)
(hadoop,1)
(spark,1)
(mr,1)
(storm,2)
(kafka,2)
(hbase,2)
(hdfs,2)
(hive,3)
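
Putting all the steps together, here is a minimal standalone sketch of the same pipeline (the object name WordCount is illustrative; the sample data is the same as in the REPL session above):

object WordCount {
  def main(args: Array[String]): Unit = {
    val lines = List("hadoop hdfs mr hive", "hdfs hive hbase storm kafka", "hive hbase storm kafka spark")

    lines
      .flatMap(_.split(" "))                             // 2. split every line into words
      .map(word => (word, 1))                            // 3. pair each word with 1
      .groupBy(_._1)                                     // 4. group the pairs by word
      .map { case (word, pairs) => (word, pairs.size) }  // 5. count each group
      .toList
      .sortBy(_._2)                                      // sort by count, ascending
      .foreach(println)                                  // 6. print the result
  }
}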

Parallel Computation

On a plain collection, reduce delegates to reduceLeft and processes the elements strictly from left to right. fold takes a start (zero) value; on a parallel collection (.par) the elements are split into chunks that are processed in parallel and then combined, and fold applies its start value once per chunk, so a non-neutral start value gives a different result than the sequential version.

scala> val a = Array(1,2,3,4,5,6)
a: Array[Int] = Array(1, 2, 3, 4, 5, 6)

scala> a.sum
res41: Int = 21

scala> a.reduce(_+_)  // reduce delegates to reduceLeft, operating from left to right
res42: Int = 21

scala> a.reduce(_-_)
res43: Int = -19


scala> a.par // convert to a parallel collection
res44: scala.collection.parallel.mutable.ParArray[Int] = ParArray(1, 2, 3, 4, 5, 6)

scala> a.par.reduce(_+_) // the collection is split into chunks, computed in parallel, then the results are combined
res45: Int = 21

scala> a.fold(10)(_+_)  // start from 10 and add up every element of a; the first _ is the start value or the accumulated result so far
res46: Int = 31

scala> a.par.fold(10)(_+_) // after parallelizing the result can differ: each chunk starts from the value 10
res47: Int = 51

scala> a.par.fold(0)(_+_)
res49: Int = 21
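
The same idea carries over to the word count above. Below is a sketch using aggregate on a parallel collection (assuming Scala 2.11/2.12, where .par is available without the separate scala-parallel-collections module): the first function builds a chunk-local count map, the second merges the maps produced by different chunks. Because the zero value (the empty map) is neutral and the merge is associative, the result is the same no matter how the collection is split.

val lines = List("hadoop hdfs mr hive", "hdfs hive hbase storm kafka", "hive hbase storm kafka spark")

val counts = lines.par
  .flatMap(_.split(" "))
  .aggregate(Map.empty[String, Int])(
    // seqop: add one word to a chunk-local count map
    (acc, word) => acc + (word -> (acc.getOrElse(word, 0) + 1)),
    // combop: merge the count maps produced by different chunks
    (m1, m2) => m2.foldLeft(m1) { case (acc, (word, n)) =>
      acc + (word -> (acc.getOrElse(word, 0) + n))
    }
  )

// counts holds the same totals as the sequential version above
// (storm -> 2, kafka -> 2, hadoop -> 1, spark -> 1, hive -> 3, mr -> 1, hbase -> 2, hdfs -> 2)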

File I/O

1. Writing a file

import java.io.PrintWriter

object Flatten {
  def main(args: Array[String]): Unit = {
    val writer = new PrintWriter("test.txt")
    writer.write("hello world")
    writer.close()
  }
}
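
One caveat with the example above: if write throws, the file handle is never released. A slightly more defensive variant (same behaviour otherwise) closes the writer in a finally block:

import java.io.PrintWriter

object Flatten {
  def main(args: Array[String]): Unit = {
    val writer = new PrintWriter("test.txt")
    try {
      writer.write("hello world")
    } finally {
      writer.close()  // always release the file handle, even if write fails
    }
  }
}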

2. Reading from the console

import scala.io.StdIn

object Flatten {
  def main(args: Array[String]): Unit = {
    println("Please enter your name:")
    val line = StdIn.readLine()
    println("The name you entered is: " + line)
  }
}

3. Reading from a file

import scala.io.Source

object Flatten {
  def main(args: Array[String]): Unit = {
    Source.fromFile("test.txt").getLines().foreach(println)
  }
}
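
As a closing sketch, the file reading above can be combined with the word count from the first section (the object name FileWordCount is illustrative; test.txt is the file written earlier, but any text file works):

import scala.io.Source

object FileWordCount {
  def main(args: Array[String]): Unit = {
    val source = Source.fromFile("test.txt")
    try {
      source.getLines().toList
        .flatMap(_.split(" "))
        .map(word => (word, 1))
        .groupBy(_._1)
        .map { case (word, pairs) => (word, pairs.size) }
        .toList
        .sortBy(_._2)
        .foreach(println)
    } finally {
      source.close()  // release the underlying file handle
    }
  }
}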

 
