Compiling Spark 2.4.2 and Integrating with cdh5.7.0

This post walks through compiling the Spark 2.4.2 source for a production environment and integrating it with cdh5.7.0: the version requirements, installation of the build dependencies (JDK, Maven, Scala), the compilation itself, deployment of the resulting build, and a walkthrough of the deployed directory layout.


In production it is common to modify the Spark source or to rebuild it against a different Hadoop version. Below are the detailed steps I followed to compile the Spark 2.4.2 source, built against Hadoop 2.6.0-cdh5.7.0.

1. Version requirements

JDK 1.8 or later, Maven 3.5.4 or later, and Scala 2.11.12 or later are required.

2. Installing the build dependencies

2.1 Installing the JDK

Installing the JDK is straightforward and is not covered here; my earlier posts record the steps, and there are plenty of tutorials online.

[hadoop@hadoop001 root]$ java -version          
java version "1.8.0_45"
Java(TM) SE Runtime Environment (build 1.8.0_45-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)
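For reference, a minimal sketch of the JDK environment variables; the install path is an assumption that matches the runtime path reported by mvn -version in section 2.2:

# Assumed install path (matches the runtime shown by mvn -version below)
export JAVA_HOME=/usr/java/jdk1.8.0_45
export PATH=$JAVA_HOME/bin:$PATH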
2.2 Installing Maven

Installing Maven is equally simple (see my earlier posts for the detailed steps). Add the following to your environment variables:

export MAVEN_OPTS="-Xmx2g -XX:ReservedCodeCacheSize=512m"

# Check the version
[hadoop@hadoop001 root]$ mvn -version
/home/hadoop/app/apache-maven-3.6.1/bin/mvn: line 71: cd: /root: Permission denied
Apache Maven 3.6.1 (d66c9c0b3152b2e69ee9bac180bb8fcc8e6af555; 2019-04-04T15:00:29-04:00)
Maven home: /home/hadoop/app/apache-maven-3.6.1
Java version: 1.8.0_45, vendor: Oracle Corporation, runtime: /usr/java/jdk1.8.0_45/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "3.10.0-862.el7.x86_64", arch: "amd64", family: "unix"
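For completeness, a minimal sketch of the full Maven environment configuration, assuming the install path reported above:

# Paths match the Maven home shown by mvn -version; adjust to your layout
export MAVEN_HOME=/home/hadoop/app/apache-maven-3.6.1
export PATH=$MAVEN_HOME/bin:$PATH
export MAVEN_OPTS="-Xmx2g -XX:ReservedCodeCacheSize=512m"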
2.3 Installing Scala

For Scala, download the package from the official site, extract it, and configure the environment variables.
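A minimal sketch of those steps, assuming the 2.11.8 tarball (the version shown in the REPL banner below) and the ~/app layout used throughout this post:

# Extract the official tarball and wire up the environment
tar -xzvf scala-2.11.8.tgz -C ~/app/
export SCALA_HOME=/home/hadoop/app/scala-2.11.8
export PATH=$SCALA_HOME/bin:$PATH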

[hadoop@hadoop001 root]$ scala
Welcome to Scala 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_45).
Type in expressions for evaluation. Or try :help.

scala> 

3. Compiling Spark

This Spark 2.4.2 build targets hadoop-2.6.0-cdh5.7.0, the same Hadoop version used in my earlier posts.

3.1 Extract the Spark 2.4.2 source package

The Spark 2.4.2 source packages are all on the official archive at https://archive.apache.org/dist/spark/spark-2.4.2/; choose the .tgz source package.
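A sketch of fetching and extracting it; the source directory follows the layout shown in the listings below:

# Download the source tarball from the Apache archive and extract it
cd /home/hadoop/app/source
wget https://archive.apache.org/dist/spark/spark-2.4.2/spark-2.4.2.tgz
tar -xzvf spark-2.4.2.tgz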

# After extraction:
[hadoop@hadoop001 source]$ cd spark-2.4.2/
[hadoop@hadoop001 spark-2.4.2]$ ll
total 180
-rw-r--r--  1 hadoop hadoop   2298 Apr 18 19:05 appveyor.yml
drwxr-xr-x  3 hadoop hadoop     46 Apr 18 19:05 assembly
drwxr-xr-x  2 hadoop hadoop   4096 Apr 18 19:05 bin
drwxr-xr-x  2 hadoop hadoop     79 Apr 18 19:05 build
drwxr-xr-x  9 hadoop hadoop    126 Apr 18 19:05 common
drwxr-xr-x  2 hadoop hadoop    230 Apr 18 19:05 conf
-rw-r--r--  1 hadoop hadoop    995 Apr 18 19:05 CONTRIBUTING.md
3.2 Configure the pom file

<!-- We are building against the CDH build of Hadoop, so the Cloudera repository must be added (in the <repositories> section of the top-level pom.xml) -->
<repository>
	<id>cloudera</id>
	<name>cloudera repository</name>
	<url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
</repository>
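Optionally (my own suggestion, not part of the original steps), you can confirm that the CDH artifact resolves before kicking off the long build:

# Sanity check: pull the CDH hadoop-client artifact through the Cloudera repo
mvn dependency:get \
  -Dartifact=org.apache.hadoop:hadoop-client:2.6.0-cdh5.7.0 \
  -DremoteRepositories=https://repository.cloudera.com/artifactory/cloudera-repos/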
3.3 Modify the make-distribution.sh file

Builds can be slow in mainland China; this step speeds things up. The script normally launches several extra Maven runs (mvn help:evaluate) just to discover the version numbers, so hardcoding them skips those invocations.

[hadoop@hadoop001 dev]$ pwd
/home/hadoop/app/source/spark-2.4.2/dev
[hadoop@hadoop001 dev]$ vim make-distribution.sh 

Comment out the following configuration:
#VERSION=$("$MVN" help:evaluate -Dexpression=project.version $@ 2>/dev/null | grep -v "INFO" | tail -n 1)
#SCALA_VERSION=$("$MVN" help:evaluate -Dexpression=scala.binary.version $@ 2>/dev/null\
#	| grep -v "INFO"\
#	| tail -n 1)
#SPARK_HADOOP_VERSION=$("$MVN" help:evaluate -Dexpression=hadoop.version $@ 2>/dev/null\
#	| grep -v "INFO"\
#	| tail -n 1)
#SPARK_HIVE=$("$MVN" help:evaluate -Dexpression=project.activeProfiles -pl sql/hive $@ 2>/dev/null\
#	| grep -v "INFO"\
#	| fgrep --count "<id>hive</id>";\
#	# Reset exit status to 0, otherwise the script stops here if the last grep finds nothing\
#	# because we use "set -o pipefail"
#	echo -n)
and add the following below the commented-out block:
VERSION=2.4.2
SCALA_VERSION=2.11
SPARK_HADOOP_VERSION=2.6.0-cdh5.7.0
SPARK_HIVE=1
3.4 Start the build

Run the following command. The first build takes quite a while; try not to run other services on the machine during compilation.

[hadoop@hadoop001 spark-2.4.2]$ pwd
/home/hadoop/app/source/spark-2.4.2
[hadoop@hadoop001 spark-2.4.2]$ ./dev/make-distribution.sh --name 2.6.0-cdh5.7.0 --tgz  -Pyarn -Phive -Phive-thriftserver  -Phadoop-2.6 -Dhadoop.version=2.6.0-cdh5.7.0

An overview of the build command parameters:

1. --name sets the suffix of the resulting "spark-2.4.2-<suffix>" package name; using the Hadoop version keeps the naming self-explanatory.

2. --tgz packages the build as a tar.gz archive; required here.

3. -Pyarn builds in YARN support.

4. -Phive -Phive-thriftserver build in Hive support and the Hive Thrift server.

5. -Phadoop-2.6 -Dhadoop.version=2.6.0-cdh5.7.0 build against this specific Hadoop version (see the alternative invocation after this list).
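make-distribution.sh ultimately drives the bundled Maven with these same profiles. If you only need the compiled jars without a distributable tarball, the standard Spark build documentation suggests an equivalent direct invocation along these lines:

# Build without packaging a tarball (sketch; same profiles as above)
./build/mvn -Pyarn -Phive -Phive-thriftserver -Phadoop-2.6 \
  -Dhadoop.version=2.6.0-cdh5.7.0 -DskipTests clean package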

# Build succeeded
[INFO] Reactor Summary for Spark Project Parent POM 2.4.2:
[INFO] 
[INFO] Spark Project Parent POM ........................... SUCCESS [01:06 min]
[INFO] Spark Project Tags ................................. SUCCESS [ 35.581 s]
[INFO] Spark Project Sketch ............................... SUCCESS [  4.615 s]
[INFO] Spark Project Local DB ............................. SUCCESS [ 14.749 s]
[INFO] Spark Project Networking ........................... SUCCESS [ 24.583 s]
[INFO] Spark Project Shuffle Streaming Service ............ SUCCESS [ 12.118 s]
[INFO] Spark Project Unsafe ............................... SUCCESS [ 23.133 s]
[INFO] Spark Project Launcher ............................. SUCCESS [01:08 min]
[INFO] Spark Project Core ................................. SUCCESS [03:47 min]
[INFO] Spark Project ML Local Library ..................... SUCCESS [ 27.879 s]
[INFO] Spark Project GraphX ............................... SUCCESS [ 31.081 s]
[INFO] Spark Project Streaming ............................ SUCCESS [01:00 min]
[INFO] Spark Project Catalyst ............................. SUCCESS [02:28 min]
[INFO] Spark Project SQL .................................. SUCCESS [03:50 min]
[INFO] Spark Project ML Library ........................... SUCCESS [02:30 min]
[INFO] Spark Project Tools ................................ SUCCESS [ 33.308 s]
[INFO] Spark Project Hive ................................. SUCCESS [01:14 min]
[INFO] Spark Project REPL ................................. SUCCESS [  6.911 s]
[INFO] Spark Project YARN Shuffle Service ................. SUCCESS [ 38.826 s]
[INFO] Spark Project YARN ................................. SUCCESS [ 41.720 s]
[INFO] Spark Project Hive Thrift Server ................... SUCCESS [ 29.028 s]
[INFO] Spark Project Assembly ............................. SUCCESS [  4.070 s]
[INFO] Spark Integration for Kafka 0.10 ................... SUCCESS [ 16.721 s]
[INFO] Kafka 0.10+ Source for Structured Streaming ........ SUCCESS [ 23.062 s]
[INFO] Spark Project Examples ............................. SUCCESS [ 21.896 s]
[INFO] Spark Integration for Kafka 0.10 Assembly .......... SUCCESS [  6.971 s]
[INFO] Spark Avro ......................................... SUCCESS [ 17.929 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  15:54 min (Wall Clock)
[INFO] Finished at: 2019-05-04T04:24:54-04:00
[INFO] ------------------------------------------------------------------------

4. Deploying the compiled build

4.1 Extract and add environment variables
# Extract
[hadoop@hadoop001 spark-2.4.2]$ cp spark-2.4.2-bin-2.6.0-cdh5.7.0.tgz ~/software/
[hadoop@hadoop001 spark-2.4.2]$ cd ~/software/
[hadoop@hadoop001 software]$ tar -xzvf spark-2.4.2-bin-2.6.0-cdh5.7.0.tgz -C ~/app/
# Add a symlink for convenience
[hadoop@hadoop001 software]$ cd ~/app
[hadoop@hadoop001 app]$ ln -s spark-2.4.2-bin-2.6.0-cdh5.7.0/ ~/app/spark
# Add the environment variables (e.g. to ~/.bash_profile)
export SPARK_HOME=/home/hadoop/app/spark
export PATH=$SPARK_HOME/bin:$SPARK_HOME/sbin:$PATH
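Assuming the variables were added to ~/.bash_profile, reload it and confirm the new install is picked up:

# Reload the profile and verify spark-shell resolves to the new install
source ~/.bash_profile
which spark-shell    # expect /home/hadoop/app/spark/bin/spark-shell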
4.2 Run Spark to test it
[hadoop@hadoop001 spark]$ spark-shell --master local[2]
19/05/04 04:45:26 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://hadoop001:4040
Spark context available as 'sc' (master = local[2], app id = local-1556959533652).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.4.2
      /_/
         
Using Scala version 2.11.12 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_45)
Type in expressions to have them evaluated.
Type :help for more information.

scala> 
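Beyond the REPL, the bundled examples make a quick smoke test; SparkPi runs locally and exercises the full job pipeline:

# Run the bundled SparkPi example as a smoke test (10 partitions)
$SPARK_HOME/bin/run-example SparkPi 10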
4.3 Directory layout after extraction

bin: client-side scripts such as beeline; the files ending in .cmd (Windows versions) can be deleted.

conf: configuration file templates; copy and modify them before use (see the sketch after this list).

data: sample data for testing.

examples: example code; it is very well written and highly recommended for study.

jars: all the jar packages in one place. Unlike 1.x, which bundled everything into just a few jars, 2.x splits them out (a better practice).

The LICENSE, licenses, NOTICE, python, README.md, and RELEASE files and folders can all be deleted.

sbin: server-side scripts, such as the cluster start/stop commands.

yarn: YARN-related jars.
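As noted under conf above, the templates ship alongside the build and are meant to be copied before editing:

# Copy the templates before customizing
cd $SPARK_HOME/conf
cp spark-env.sh.template spark-env.sh
cp spark-defaults.conf.template spark-defaults.conf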
