【pySpark Tutorial】Introduction & Prerequisites (1)

This article walks through installing and configuring a Python Spark virtual environment on Windows, covering the required hardware, the software packages to install, and basic VM operations. It is aimed at beginners who want to get started quickly.


Installing a Python Spark Virtual Environment on Windows

This post is part of the 【pySpark Tutorial】 series.

It is a set of study notes from Berkeley's public Python Spark course (see the original course).

Given my limited ability, some mistakes are inevitable; corrections and criticism are welcome.

For more related posts, see: http://blog.youkuaiyun.com/cyh24/article/category/6092916

If you repost this article, please include a link to it: http://blog.youkuaiyun.com/cyh_24/article/details/50644959


In this series, we will cover the following topics:

  1. Introduction to Apache Spark
  2. Data Management
    • Semi-Structured Data
    • Structured Data
    • Lab 2: analyzing web server logs with Spark
  3. Data Analysis and Machine Learning
    • Data processing
    • Data analysis
    • Machine learning
    • Lab 3: text analysis and entity resolution
    • Lab 4: an introduction to machine learning with Spark

To give everyone the same setup, the development environment for this course runs inside a Virtual Machine (VM). You only need to install two software packages, VirtualBox and Vagrant, and then download and install the specified VM image. This article walks you through downloading and installing them step by step.

Note: everything you need to download adds up to less than 1 GB.

Hardware and Software Prerequisites

To run this software, your machine needs to meet the following minimum requirements.

MINIMUM HARDWARE REQUIREMENTS

  • Free disk space: 3.5 GB
  • RAM: 2.5 GB (4+ GB preferred)
  • Processor: Any recent Intel or AMD multicore processor should be sufficient.

SUPPORTED OPERATING SYSTEMS

  • 64-bit (preferred) Windows 7 or later
  • 64-bit (preferred) Mac OS X 10.9.5 or later
  • 64-bit (preferred) Linux (CentOS 6 or later, or Ubuntu 14.04 or later)
  • 32-bit Windows 7 or later
  • 32-bit Linux (CentOS 6 or later, or Ubuntu 14.04 or later)

Installing the Required Software Packages

You need to install the following two software packages:

  • VirtualBox
  • Vagrant

Both installers are straightforward and rarely cause problems. If the Vagrant installer fails with the message "Installation Directory must be on a local hard drive.", it is actually a permissions issue: simply rerun the installer with administrator privileges.

Installing the VM Image

  1. First, create a folder (for example: c:\users\marco\myvagrant).
  2. Download this file into the folder you just created and unzip it.
  3. From the unzipped folder, copy Vagrantfile into the folder you created.
  4. Open a command prompt (cmd), change into the folder you created, and run:
    vagrant up --provider=virtualbox
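If you prefer to script the launch, the same command can be driven from Python. This is just an illustrative sketch: the helper name and the use of subprocess are my own, and the folder path is the example from step 1 — note the provider flag takes two dashes.

```python
def vagrant_cmd(action, provider=None):
    """Build a vagrant command line as a list of arguments.

    The provider flag is spelled with two dashes: --provider=virtualbox.
    """
    cmd = ["vagrant", action]
    if provider:
        cmd.append("--provider=" + provider)
    return cmd

# The command from step 4, ready to pass to subprocess.run(..., cwd=...) from
# the folder that holds the Vagrantfile (e.g. c:\users\marco\myvagrant):
print(" ".join(vagrant_cmd("up", "virtualbox")))  # vagrant up --provider=virtualbox
```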

Basic Commands for Working with the VM

  1. To start a VM, run this from the command prompt: vagrant up
  2. To stop a VM, run: vagrant halt
  3. To delete the VM, run: vagrant destroy
  4. Once the VM is running, you can open the IPython notebook in a browser at http://localhost:8001/.
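You can check from the host whether anything is actually listening on port 8001 before opening the browser. A minimal sketch (the helper name and timeout are my own; the port comes from the instructions above):

```python
import socket

def notebook_reachable(host="127.0.0.1", port=8001, timeout=2.0):
    """Return True if something is listening on host:port, e.g. the VM's
    IPython notebook after `vagrant up`; False otherwise."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if notebook_reachable():
    print("Notebook is up at http://localhost:8001/")
else:
    print("Nothing listening on port 8001 -- is the VM running?")
```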

Running Your First Notebook

Test whether your environment is fully set up by running your first notebook.

  1. If the VM is not running yet, start it with the command above.
  2. Open the IPython notebook by visiting http://localhost:8001/ or http://127.0.0.1:8001/.
  3. On the Jupyter page, click the upload button and upload "lab0_student.ipynb" from the files you downloaded earlier; this is the Spark IPython notebook file.
  4. Click it to open and view the notebook.


With that, the prerequisites are complete!
