A Brief Analysis of pyspark Internals

This post looks at the internals of pyspark: how pyspark is initialized, how a SparkContext is created, and how Scala classes and methods running in the JVM are invoked. By walking through the source code, it shows how pyspark can call complex algorithms implemented in Scala via reflection, and discusses what to watch out for when using the API across different Spark versions.



Introduction to pyspark

pyspark is the Python API that Spark officially provides, and pyspark is also the name of a program shipped with Spark.
Typing the pyspark command in a terminal opens a Python shell in which a SparkConf and a SparkContext have already been initialized by default.
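For example, inside that shell you can use the pre-created context directly. The short sketch below assumes a default local installation; the printed SparkContext line is only illustrative, and the job simply sums the squares of 0 through 9:

>>> sc
<SparkContext master=local[*] appName=PySparkShell>
>>> sc.parallelize(range(10)).map(lambda x: x * x).sum()
285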
When writing a Spark application as a .py file, you can bring the module in with import pyspark and set Spark's launch parameters through SparkConf. Note, however, that if you have only installed Spark itself, running the .py file directly with the python command will not find the pyspark module. You can either install the module with a package manager such as pip, or submit the .py file as a job directly with pyspark (no longer supported in newer versions) or with spark-submit.
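As a concrete illustration, a minimal standalone application might look like the sketch below. The file name minimal_app.py, the app name, and the local[2] master URL are assumptions for the example, not details from the original post:

# minimal_app.py -- a minimal sketch of a standalone PySpark application
from pyspark import SparkConf, SparkContext

# Set Spark's launch parameters through SparkConf before creating the context
conf = SparkConf().setAppName("MinimalApp").setMaster("local[2]")
sc = SparkContext(conf=conf)

# A trivial job: count the even numbers in 0..99
even_count = sc.parallelize(range(100)).filter(lambda x: x % 2 == 0).count()
print("even numbers:", even_count)

sc.stop()

Running this with python minimal_app.py only works when the pyspark module is importable (for example, installed via pip); otherwise the script has to be handed to spark-submit, as described in the next section.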

pyspark program

This refers to bin/pyspark in the Spark distribution (see the GitHub link).
In fact, pyspark does little more than parse the command-line arguments and perform some Python-side setup before handing off to spark-submit:

exec "${SPARK_HOME}"/bin/spark-submit pyspark-shell-main --name "PySparkShell" "$@"

In somewhat newer versions such as Spark 2.2, running a .py script file with pyspark is no longer supported; every Spark job should be submitted with spark-submit.
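For instance, the minimal_app.py sketch from the previous section would be launched along the following lines (assuming spark-submit is on the PATH; the master URL is just an example):

spark-submit --master local[2] minimal_app.py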
