Importing the Spark package
First download the Spark jar: on the download page, choose Spark release 1.2.0 and package type Pre-built for Hadoop 2.4, download spark-1.2.0-bin-hadoop2.4.tgz, and unpack it; the file you need is spark-assembly-1.2.0-hadoop2.4.0.jar under lib/.
Then, in the project, go to "File" -> "Project Structure" -> "Libraries", click "+", choose "Java", locate spark-assembly-1.2.0-hadoop2.4.0.jar, and add it. After this you can write Spark programs (Scala or Java) in the project; a quick sanity check follows below.
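A minimal sketch to verify the jar is on the classpath (the class name VersionCheck and the local master URL are illustrative, not part of the original setup):

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    public class VersionCheck {
        public static void main(String[] args) {
            // If the assembly jar was imported correctly, this compiles
            // and prints the Spark version (here, 1.2.0).
            SparkConf conf = new SparkConf().setMaster("local").setAppName("VersionCheck");
            JavaSparkContext sc = new JavaSparkContext(conf);
            System.out.println("Spark version: " + sc.version());
            sc.stop();
        }
    }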
Exporting the program
To run a program on Spark, export it as a jar and upload the jar to the Spark cluster. Exporting takes two steps (a submit example follows the steps):
1) Go to "File" -> "Project Structure" -> "Artifacts", click "+", choose "JAR" -> "From modules with dependencies...", select the Module and Main Class in the Create Jar from Modules dialog, and click "OK"; then set the exported jar's name, output path, and output file, and click "OK";
2) Go to "Build" -> "Build Artifacts", select the jar, and build; the jar is generated in the output directory.
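On the cluster, the exported jar is typically run with spark-submit, which ships with Spark 1.2. A sketch, assuming the jar is named wordcount.jar and the master runs at spark://master:7077 (both names are placeholders):

    bin/spark-submit --class testMain --master spark://master:7077 wordcount.jar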
Code
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;
import scala.Tuple2;

import java.util.Arrays;

public class testMain {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf();
        conf.setMaster("local[4]"); // run locally, simulating a cluster with 4 threads
        conf.setAppName("WCApp");
        JavaSparkContext context = new JavaSparkContext(conf);

        JavaRDD<String> rdd = context.textFile("D:/in.txt");

        // Split each line into words. In the Spark 1.x Java API,
        // FlatMapFunction.call returns an Iterable, not an Iterator.
        JavaRDD<String> words = rdd.flatMap(new FlatMapFunction<String, String>() {
            public Iterable<String> call(String s) throws Exception {
                return Arrays.asList(s.split(" "));
            }
        });

        // Map each word to (word, 1), then sum the counts per word.
        JavaPairRDD<String, Integer> counts = words.mapToPair(new PairFunction<String, String, Integer>() {
            public Tuple2<String, Integer> call(String s) throws Exception {
                return new Tuple2<String, Integer>(s, 1);
            }
        }).reduceByKey(new Function2<Integer, Integer, Integer>() {
            public Integer call(Integer a, Integer b) throws Exception {
                return a + b;
            }
        });

        // Note: saveAsTextFile creates a directory named d:/out.txt
        // containing part-NNNNN files, not a single text file.
        counts.saveAsTextFile("d:/out.txt");
        context.stop();
    }
}
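As a worked example (with hypothetical input): if D:/in.txt contains the two lines "hello spark" and "hello world", the part files under d:/out.txt will hold the pairs (spark,1), (world,1), and (hello,2).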
Demo:
Link: https://pan.baidu.com/s/1VuryytIDHharn7z2-E9Itg
Extraction code: i7kp