What Is Apache Spark?
Apache Spark is a cluster computing platform designed to be fast and general-purpose. One of the main features Spark offers for speed is the ability to run computations in memory.
Spark is designed to be highly accessible, offering simple APIs in Python, Java, Scala, and SQL, and rich built-in libraries. It also integrates closely with other Big Data tools. In particular, Spark can run in Hadoop clusters and access any Hadoop data source, including Cassandra.
Spark Core
Spark Core is home to the API that defines resilient distributed datasets (RDDs), which are Spark’s main programming abstraction. RDDs represent a collection of items distributed across many compute nodes that can be manipulated in parallel.
Spark SQL
Spark SQL is Spark’s package for working with structured data. It allows querying data via SQL as well as the Apache Hive variant of SQL, called the Hive Query Language (HQL).
Spark Streaming
Spark Streaming is a Spark component that enables processing of live streams of data.
MLlib
Spark comes with a library containing common machine learning (ML) functionality, called MLlib. MLlib provides multiple types of machine learning algorithms.
GraphX
GraphX is a library for manipulating graphs (e.g., a social network’s friend graph) and performing graph-parallel computations.
Cluster Managers
Under the hood, Spark is designed to efficiently scale up from one to many thousands of compute nodes. To achieve this while maximizing flexibility, Spark can run over a variety of cluster managers, including Hadoop YARN, Apache Mesos, and a simple cluster manager included in Spark itself called the Standalone Scheduler.
In summary: Apache Spark is a fast, general-purpose cluster computing platform that supports in-memory computation to speed up processing. It offers APIs in Python, Java, Scala, and SQL, and integrates closely with big data tools such as Hadoop. Spark comprises several components: Spark Core, which defines RDDs; Spark SQL for SQL queries; Spark Streaming for processing live data streams; MLlib for machine learning; and GraphX for graph processing.