Spark SQL supports the most commonly used features of HiveQL. However, different kinds of HiveQL statements are executed in different ways:
- DDL statements (e.g. `CREATE TABLE`, `DROP TABLE`, etc.) and commands (e.g. `SET <key> = <value>`, `ADD FILE`, `ADD JAR`, etc.). In most cases, Spark SQL simply delegates these statements to Hive, as they don't need to issue any distributed jobs and don't rely on the computation engine (Spark, MR, or Tez).
- `SELECT` queries, `CREATE TABLE ... AS SELECT ...` statements, and insertions. These statements are executed using Spark as the execution engine.
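The dispatch rule above can be sketched as a small classifier. This is a hypothetical illustration (not actual Spark source code): given a HiveQL statement, it decides whether the statement would be delegated to Hive (DDL and commands) or executed by Spark (queries, CTAS, and inserts). Note that `CREATE TABLE ... AS SELECT ...` goes to Spark even though it starts like a DDL statement.

```python
# Hypothetical sketch of the dispatch described above; the function and
# the prefix lists are illustrative, not part of Spark's API.

DELEGATED_TO_HIVE = ("CREATE TABLE", "DROP TABLE", "ALTER TABLE",
                     "SET", "ADD FILE", "ADD JAR")
RUN_ON_SPARK = ("SELECT", "INSERT")

def execution_path(statement: str) -> str:
    s = statement.strip().upper()
    # CTAS contains "AS SELECT", so it runs on Spark even though it
    # begins with CREATE TABLE.
    if s.startswith("CREATE TABLE") and " AS SELECT" in s:
        return "spark"
    if s.startswith(RUN_ON_SPARK):
        return "spark"
    if s.startswith(DELEGATED_TO_HIVE):
        return "hive"
    return "unknown"

print(execution_path("SELECT * FROM logs"))          # spark
print(execution_path("CREATE TABLE t AS SELECT 1"))  # spark
print(execution_path("DROP TABLE t"))                # hive
print(execution_path("ADD JAR /tmp/udfs.jar"))       # hive
```

The real dispatch happens inside Spark SQL's parser and analyzer, but the two-way split (pass-through to Hive vs. execution on Spark) is the same.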
The Hive classes packaged in the assembly jar are used to provide entry points to Hive features, for example:
1. The HiveQL parser
2. Talking to the Hive metastore to execute DDL statements
3. Accessing UDFs/UDAFs/UDTFs
As for the differences between Hive on Spark and Spark SQL's Hive support, please refer to this article by Reynold: https://databricks.com/blog/2014/07/01/shark-spark-sql-hive-on-spark-and-the-future-of-sql-on-spark.html

This article discusses the main differences between Spark SQL and HiveQL, including how different HiveQL statements (such as DDL statements, SELECT queries, and table creation) are handled. It also describes how Spark SQL leverages Hive functionality, such as parsing HiveQL, performing metadata operations, and invoking UDFs.