Filtering example (Python version)
>>> lines = sc.textFile("../README.md")
>>> pythonLines = lines.filter(lambda line:"Python" in line)
>>> pythonLines.first()
u'high-level APIs in Scala, Java, Python, and R, and an optimized engine that'
Filtering example (Scala version)
scala> val lines = sc.textFile("../README.md")
scala> val pythonLines = lines.filter(line => line.contains("Python"))
scala> pythonLines.first()
res1: String = high-level APIs in Scala, Java, Python, and R, and an optimized engine that
Creating a standalone application to run on Spark
First, create a Maven project and add the following dependency to pom.xml:
<dependency> <!-- Spark dependency -->
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-sql_2.12</artifactId>
  <version>2.4.3</version>
  <scope>provided</scope>
</dependency>
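The jar built later is ./target/sparkdemo-1.0.0.jar, which implies project coordinates along these lines; this is a minimal sketch, and the groupId is an assumption chosen to match the com.spark.demo package used below:
<groupId>com.spark.demo</groupId>
<artifactId>sparkdemo</artifactId>
<version>1.0.0</version>
<packaging>jar</packaging>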
Then create a class with a main method:
package com.spark.demo;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.SparkSession;

public class SimpleApp {
  public static void main(String[] args) {
    // Use the path passed on the command line, or fall back to a local README.md
    String logFile = args.length > 0 ? args[0] : "./README.md";
    SparkSession spark = SparkSession.builder().appName("Simple Application").getOrCreate();
    Dataset<String> logData = spark.read().textFile(logFile).cache();
    long total = logData.count();
    System.out.println("Total lines: " + total);
    spark.stop();
  }
}
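The same "Python" filter from the shell examples can also be expressed inside main() with the Dataset API. A minimal sketch, assuming it is added after logData is defined (it needs one extra import, org.apache.spark.api.java.function.FilterFunction; the explicit cast resolves the ambiguity among the overloaded filter methods when using a Java lambda):
// Count the lines that contain "Python", mirroring the shell examples above
long pythonCount = logData.filter(
        (FilterFunction<String>) line -> line.contains("Python")).count();
System.out.println("Lines containing Python: " + pythonCount);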
From the project root, run:
mvn package
Then submit the application with spark-submit:
spark-submit \
--class com.spark.demo.SimpleApp \
--master local \
./target/sparkdemo-1.0.0.jar \
./README.md
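With --master local, Spark runs the job in a single local worker thread. To use more cores, the master URL can be changed, for example (local[4] here is just an illustrative choice; local[*] would use all available cores):
spark-submit \
--class com.spark.demo.SimpleApp \
--master local[4] \
./target/sparkdemo-1.0.0.jar \
./README.md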