Scala Installation
I. Download Scala from the official website: scala-2.12.8.tgz
https://www.scala-lang.org/download/
1. Transfer the archive from Windows to the current directory on Linux
In SecureCRT, open an SFTP session via [File] → [Connect SFTP Session]
put C:/Users/l/Documents/scala-2.12.8.tgz
2. Install
Extract the archive into the target directory /opt/module (/opt is a system directory; the module directory beneath it was created manually)
tar -zxvf scala-2.12.8.tgz -C /opt/module
3. The extracted directory is scala-2.12.8; rename it to scala
mv scala-2.12.8 scala
4. Configure environment variables
Add the Scala installation path to /etc/profile, then use the source command to apply the configuration.
vi /etc/profile
export SCALA_HOME=/opt/module/scala
export PATH=$SCALA_HOME/bin:$PATH
Save the file and apply it immediately: source /etc/profile
Verify the installation with: scala -version
Start the interpreter: scala
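To confirm the interpreter works, you can evaluate a few expressions; a minimal sketch of a session:
scala> 1 + 2
res0: Int = 3
scala> val greeting = "Hello, Scala"
greeting: String = Hello, Scala
scala> greeting.length
res1: Int = 12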
Exit the Scala interpreter with the command :quit, or simply press Ctrl+D.
Spark Installation
I. Download Spark from the official website: spark-2.4.2-bin-hadoop2.7.tgz
https://www.apache.org/dyn/closer.lua/spark/spark-2.4.2/spark-2.4.2-bin-hadoop2.7.tgz
1. Transfer the archive from Windows to the current directory on Linux
In SecureCRT, open an SFTP session via [File] → [Connect SFTP Session]
put C:/Users/l/Documents/spark-2.4.2-bin-hadoop2.7.tgz
2. Install
Extract the archive into the target directory /opt/module (/opt is a system directory; the module directory beneath it was created manually)
tar -zxvf spark-2.4.2-bin-hadoop2.7.tgz -C /opt/module
3. The extracted directory is spark-2.4.2-bin-hadoop2.7; rename it to spark
mv spark-2.4.2-bin-hadoop2.7 spark
4. Configure environment variables
vi /etc/profile
export SPARK_HOME=/opt/module/spark
export PATH=$SPARK_HOME/bin:$PATH
Save the file and apply it immediately:
source /etc/profile
5. In /opt/module/spark/conf, copy spark-env.sh.template, rename the copy to spark-env.sh, and append the following settings at the end of the file
export JAVA_HOME=/opt/module/jdk1.8.0_121
export SCALA_HOME=/opt/module/scala
export HADOOP_HOME=/opt/module/hadoop-2.7.3
export HADOOP_CONF_DIR=/opt/module/hadoop-2.7.3/etc/hadoop
export SPARK_MASTER_IP=192.168.153.128
export SPARK_MASTER_PORT=7077
Start Spark
① Start the Hadoop environment first: start-all.sh
② Start the Spark environment
Go to /opt/module/spark/sbin and run start-all.sh:
/opt/module/spark/sbin/start-all.sh
[Note] If running start-all.sh starts the Hadoop daemons again (Hadoop's script of the same name is found first on the PATH), run the script explicitly from the current working directory: ./start-all.sh
Check the running processes with jps.
Two new processes should appear: Master and Worker.
View the Spark web UI at http://bigdata128:8080/
The page shows the Spark master port as 7077.
③ Start the Spark Shell
This mode is used for interactive programming. Enter the bin directory and run: spark-shell
Writing code in the Spark Shell
Read a local file
val textFile = sc.textFile("file:///opt/module/spark/mycode/wordcount/
mywordcount.txt")
textFile.first()
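Other RDD actions can be tried on the same file in the shell; for example (a sketch, still assuming the mywordcount.txt file above):
textFile.count()  // total number of lines in the file
textFile.filter(line => line.contains("spark")).count()  // number of lines containing "spark"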
Read an HDFS file
val textFile = sc.textFile("hdfs:///user/input/f1.txt")
val wordCount = textFile.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey((a, b) => a + b)
wordCount.collect()
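The counts can also be sorted by frequency and written back to HDFS; a sketch (the output directory /user/output/wordcount is hypothetical and must not already exist):
wordCount.sortBy(_._2, ascending = false).saveAsTextFile("hdfs:///user/output/wordcount")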
Write a Spark application in Scala
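The word count from the shell session can be turned into a standalone application; a minimal sketch, assuming spark-core 2.4.2 on the classpath and the same HDFS input file as above (package it as a jar and run it with spark-submit, which supplies the master URL):
import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    // In a standalone application the SparkContext is created explicitly
    // (the shell provides it automatically as sc).
    val conf = new SparkConf().setAppName("WordCount")
    val sc = new SparkContext(conf)
    val wordCount = sc.textFile("hdfs:///user/input/f1.txt")
      .flatMap(line => line.split(" "))
      .map(word => (word, 1))
      .reduceByKey((a, b) => a + b)
    wordCount.collect().foreach(println)
    sc.stop()
  }
}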
Write a Spark application in Java
Create a Maven project in Eclipse
Configure pom.xml
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>cn.spark</groupId>
<artifactId>SparkTest</artifactId>
<version>0.0.1-SNAPSHOT</version>
<packaging>jar</packaging>
<name>SparkTest</name>
<url>http://maven.apache.org</url>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>
<dependencies>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>3.8.1</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.12</artifactId>
<version>2.4.2</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-sql_2.12</artifactId>
<version>2.4.2</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-hive_2.12</artifactId>
<version>2.4.2</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming_2.12</artifactId>
<version>2.4.2</version>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-client</artifactId>
<version>2.7.3</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming-kafka-0-10_2.12</artifactId>
<version>2.4.2</version>
</dependency>
</dependencies>
<build>
<sourceDirectory>src/main/java</sourceDirectory>
<testSourceDirectory>src/main/test</testSourceDirectory>
<plugins>
<plugin>
<artifactId>maven-assembly-plugin</artifactId>
<configuration>
<descriptorRefs>
<descriptorRef>jar-with-dependencies</descriptorRef>
</descriptorRefs>
<archive>
<manifest>
<mainClass></mainClass>
</manifest>
</archive>
</configuration>
<executions>
<execution>
<id>make-assembly</id>
<phase>package</phase>
<goals>
<goal>single</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>exec-maven-plugin</artifactId>
<version>1.2.1</version>
<executions>
<execution>
<goals>
<goal>exec</goal>
</goals>
</execution>
</executions>
<configuration>
<executable>java</executable>
<includeProjectDependencies>true</includeProjectDependencies>
<includePluginDependencies>false</includePluginDependencies>
<classpathScope>compile</classpathScope>
<mainClass>cn.spark.sparktest.App</mainClass>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<configuration>
<source>1.8</source>
<target>1.8</target>
</configuration>
</plugin>
</plugins>
</build>
</project>
Press Ctrl+S to save; a dialog appears while Maven downloads the dependencies (this can be slow, just wait). After saving, the Maven Dependencies library is generated automatically.
Write the Spark program