Install Java and Python
> sudo apt-get install openjdk-8-jdk
> sudo vim /etc/environment
# Add the export line to the global environment file
> export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
# To apply the change without rebooting, source the file directly
> source /etc/environment
> sudo apt-get install python python3
Install Hadoop
> wget http://apache.fayea.com/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz
> tar -zxf hadoop-2.7.3.tar.gz
> cd hadoop-2.7.3
> sudo mkdir input
> sudo chmod -R 777 input
> cp etc/hadoop/*.xml input/
> sudo ./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar grep input output 'dfs[a-z.]+'
> cat output/*
Install Apache Spark
> wget http://mirrors.tuna.tsinghua.edu.cn/apache/spark/spark-2.1.1/spark-2.1.1-bin-hadoop2.7.tgz
> tar -zxf spark-2.1.1-bin-hadoop2.7.tgz
> cd spark-2.1.1-bin-hadoop2.7
Link Hadoop with Spark
> sudo vim /etc/environment
export LD_LIBRARY_PATH=/vagrant/hadoop-2.7.3/lib/native/:$LD_LIBRARY_PATH
Install PySpark via pip
> cd /vagrant/spark-2.1.1-bin-hadoop2.7/python
> pip install -e .
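As a quick sanity check (a minimal sketch, assuming the editable install above succeeded), pyspark should now be importable from a plain Python interpreter:

import pyspark
print(pyspark.__version__)  # expected to print 2.1.1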
Run a quick test
> ./bin/spark-shell --master local[2]
# Python version of the Spark API
> ./bin/pyspark --master local[2]
Note: the --master option specifies the master URL for a distributed cluster; local[N] runs locally with N worker threads.
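The same master URL can also be set from code rather than on the command line. A minimal sketch in Python (the app name local-test is arbitrary), assuming the pip-installed pyspark from the previous step:

from pyspark import SparkConf, SparkContext

conf = SparkConf().setMaster("local[2]").setAppName("local-test")
sc = SparkContext(conf=conf)
# trivial job to confirm the two local worker threads are usable
print(sc.parallelize(range(100)).sum())  # 4950
sc.stop()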
Spark Standalone Cluster Deploy
> sudo ./bin/spark-submit --master spark://192.168.33.67:7077 --executor-memory 4G --deploy-mode client --packages org.apache.spark:spark-streaming-kafka-0-8_2.11:2.1.1 /vagrant/stream_kafka.py
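The contents of stream_kafka.py are not shown here; as a rough sketch only, a minimal Spark Streaming consumer matching the kafka-0-8 package above might look like the following (the broker localhost:9092 and topic test-topic are placeholder assumptions, not values from this setup):

from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils

# The master URL is supplied by spark-submit (--master), so it is not set here
sc = SparkContext(appName="KafkaWordCount")
ssc = StreamingContext(sc, 10)  # 10-second micro-batches

# Direct (receiver-less) stream; placeholder broker and topic
stream = KafkaUtils.createDirectStream(
    ssc, ["test-topic"], {"metadata.broker.list": "localhost:9092"})

# Messages arrive as (key, value) pairs; count words in the values
counts = (stream.map(lambda kv: kv[1])
                .flatMap(lambda line: line.split(" "))
                .map(lambda word: (word, 1))
                .reduceByKey(lambda a, b: a + b))
counts.pprint()

ssc.start()
ssc.awaitTermination()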
Spark Standalone client deploy mode and cluster deploy mode:
Client:
- Driver runs on the machine that launched spark-submit (here, the Master node) inside a dedicated process. This means it has all available resources at its disposal to execute work.
- Driver opens up a dedicated Netty HTTP server and distributes the JAR files specified to all Worker nodes (big advantage).
- Because the Master node has dedicated resources of its own, you don't need to "spend" worker resources for the Driver program.
- If the driver process dies, you need an external monitoring system to restart it.
Cluster:
- Driver runs on one of the cluster's Worker nodes. The worker is chosen by the Master leader.
- Driver runs as a dedicated, standalone process inside the Worker.
- The driver program takes up at least 1 core and a dedicated amount of memory from one of the workers (this can be configured).
- The driver program can be monitored from the Master node using the --supervise flag and restarted automatically if it dies.
- When working in Cluster mode, all JARs related to the execution of your application need to be publicly available to all the workers. This means you can either manually place them in a shared place or in a folder for each of the workers.
This article walks through installing and configuring Java and Python on Ubuntu, then setting up Hadoop and Spark step by step, covering environment variables, dependency installation, and launch tests. It also explains how to install PySpark via pip and how to run Spark applications in the different deploy modes.