1: Install the JDK and Scala (scala.msi).
2: Install an SSH client (PuTTY).
3: Install IntelliJ IDEA.
4: FILE -> SETTINGS -> PLUGINS -> install the SCALA plugin.
5: FILE -> PROJECT STRUCTURE -> LIBRARIES -> "+" -> JAVA -> add SPARK-ASSEMBLY-*-HADOOP*.JAR.
6: NEW PROJECT -> add the JDK -> add the SCALA SDK.
7: BUILD -> BUILD ARTIFACTS -> JAR; output: "D:\SPARK DISTRIBUTE\1j\out\artifacts\1j_jar".
8: Upload the jar over SSH (PuTTY), copy the input from the Linux filesystem into HDFS, then submit:
"nbu@hadoop201:~/spark-1.2.0$ bin/spark-submit --master spark://10.22.66.201:7077 --class scj /home/nbu/jhq/1j.jar"
(--master is "yarn-cluster" for YARN or "spark://10.22.66.201:7077" for the standalone master; --class names the Scala object, here scj.)
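
For reference, steps 5-7 can also be expressed as an sbt build instead of hand-wiring the assembly jar in IDEA. A minimal build.sbt sketch (hypothetical, assuming Spark 1.2.0 on Scala 2.10):

    // build.sbt
    name := "1j"
    scalaVersion := "2.10.4"
    // "provided": the cluster's spark-assembly jar supplies these classes at run time,
    // so they are compiled against but not packaged into 1j.jar.
    libraryDependencies ++= Seq(
      "org.apache.spark" %% "spark-core"      % "1.2.0" % "provided",
      "org.apache.spark" %% "spark-streaming" % "1.2.0" % "provided"
    )
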
NOTE:
1: Map the hostname to its IP (e.g. hadoop201 -> 10.22.66.201 in /etc/hosts).
2: Log in to 10.22.66.201 as user nbu, password 123456.
9: su - (switch to the root login shell when permissions require it).
10: Streaming data on HDFS: input files must be moved from elsewhere on HDFS into the monitored input directory (not written into it in place), or the stream will not pick them up; see the sketch below.
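
A minimal Spark Streaming sketch of that input-directory pattern (hypothetical object name and path; assumes the spark-streaming dependency is on the classpath):

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object HdfsStreamSketch {
      def main(args: Array[String]): Unit = {
        val ssc = new StreamingContext(new SparkConf().setAppName("HdfsStreamSketch"), Seconds(10))
        // textFileStream only sees files that appear atomically in the directory:
        // write them elsewhere on HDFS first, then "hadoop fs -mv" them into /jhq/input/.
        val lines = ssc.textFileStream("hdfs:///jhq/input/")
        lines.count().print()
        ssc.start()
        ssc.awaitTermination()
      }
    }
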
import org.apache.spark.{SparkConf, SparkContext}

object CountTriangle {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf().setAppName("CountTriangle")
    val sc = new SparkContext(sparkConf)

    // Read every file under /jhq/ on HDFS; lines hold space-separated "src;dst" edges.
    val window = sc.textFile("/jhq/")

    // Key each edge by its destination and concatenate the sources per destination.
    val edgevv = window
      .flatMap(_.split(" "))
      .map { e => val p = e.split(";"); (p(1), p(0)) }
      .reduceByKey((x, y) => x + " " + y)
      .collect()

    // Guard against empty input before inspecting the first key.
    if (edgevv.nonEmpty && edgevv(0)._1.startsWith("0")) {
      // TODO: build the undirected neighbor lists and count triangles
      // (left unfinished here).
    }

    sc.stop()
  }
}
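
The job above stops at the grouped edge lists. As one possible continuation, a sketch of the classic smallest-vertex neighbor-list triangle count (hypothetical object name TriangleSketch; assumes undirected "src;dst" edges under /jhq/):

    import org.apache.spark.{SparkConf, SparkContext}

    object TriangleSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("TriangleSketch"))
        // Canonical undirected edges: keep each pair once, smaller endpoint first.
        val edges = sc.textFile("/jhq/")
          .flatMap(_.split(" "))
          .map { e => val p = e.split(";"); if (p(0) < p(1)) (p(0), p(1)) else (p(1), p(0)) }
          .distinct()
        // For each vertex, emit every pair of its larger neighbors: a "wedge".
        val wedges = edges.groupByKey().flatMap { case (_, nbrs) =>
          val ns = nbrs.toArray.sorted
          for (i <- ns.indices; j <- i + 1 until ns.length) yield ((ns(i), ns(j)), 1)
        }
        // A wedge closes into a triangle exactly when its endpoints are themselves
        // an edge; each triangle is then counted once, from its smallest vertex.
        val triangles = wedges.join(edges.map((_, 1))).count()
        println(s"triangles = $triangles")
        sc.stop()
      }
    }

It would be submitted exactly as in step 8, with --class TriangleSketch.
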
This post walks through a big-data processing task with Scala and Spark: environment setup, configuration, building and packaging the jar, and submitting the job to the cluster. A concrete example shows how to read data from HDFS and process streaming data with a SparkContext.