Table partitioning is a common optimization technique; Hive, for example, provides partitioned tables as a core feature. In a partitioned table, data for different partitions is stored in separate directories, and the values of the partition columns are encoded in the directory names. The Parquet data source in Spark SQL can automatically infer partition information from these directory names. For example, if population data is stored in a partitioned table with gender and country as the partition columns, the directory structure might look like this:
tableName
  |- gender=male
  |    |- country=US
  |    |    ...
  |    |- country=CN
  |    |    ...
  |- gender=female
  |    |- country=US
  |    |    ...
  |    |- country=CN
  |    |    ...
If /tableName is passed to SQLContext.read.parquet() or SQLContext.read.load(), Spark SQL automatically infers the partition information from the directory structure: the partition columns are gender and country. Even though the data files themselves contain only two columns, name and age, calling printSchema() on the DataFrame returned by Spark SQL prints four columns: name, age, country, and gender. This is the automatic partition discovery feature.
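A short Scala sketch of that behavior (the /tableName root, the object name, and the cluster URL are illustrative assumptions, not part of the example below):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object partitiondiscoverysketch {
  def main(args: Array[String]) {
    val sc = new SparkContext(new SparkConf().setAppName("partitiondiscoverysketch"))
    val sqlContext = new SQLContext(sc)

    // Read the table root rather than a single partition directory;
    // Spark SQL discovers gender and country from the subdirectory names.
    val df = sqlContext.read.parquet("hdfs://master:9000/tableName")
    df.printSchema()  // name and age from the files, plus country and gender

    // A filter on a partition column only scans the matching directories
    // (everything under gender=male), which is the payoff of partitioning.
    df.filter(df("gender") === "male").show()

    sc.stop()
  }
}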
In addition, the data types of the partition columns are inferred automatically. Currently, Spark SQL supports automatic inference of numeric and string types only. Sometimes a user may not want Spark SQL to infer the partition columns' types. A single configuration property controls this: spark.sql.sources.partitionColumnTypeInference.enabled. It defaults to true, meaning partition column types are inferred automatically; setting it to false disables the inference. When automatic type inference is disabled, all partition columns uniformly default to the string type.
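To turn the inference off, set the property on the SQLContext before reading (a minimal sketch; sqlContext is assumed to be set up as in the examples below):

// Disable partition column type inference for this SQLContext.
sqlContext.setConf("spark.sql.sources.partitionColumnTypeInference.enabled", "false")
// Every partition column, even a numeric-looking one such as year=2018
// in a path, is now read as a string column.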
Create the directory on HDFS and upload the Parquet file into it (the partition directories must exist before the upload, hence the mkdir -p):
[root@master Software]# hadoop fs -mkdir -p /test/users/gender=male/country=US
[root@master Software]# hadoop fs -put users.parquet /test/users/gender=male/country=US/
Code:
Java version:
package cn.spark.study.core;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;

public class parquettest2 {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("parquettest2");
        JavaSparkContext sc = new JavaSparkContext(conf);
        SQLContext sqlcontext = new SQLContext(sc);

        // Read the Parquet file; the gender and country partition columns
        // are inferred from the gender=male/country=US directories in the path.
        DataFrame df = sqlcontext.read().parquet(
                "hdfs://master:9000/test/users/gender=male/country=US/users.parquet");
        df.printSchema();
        df.show();

        sc.close();
    }
}
Scala version:
package com.spark.study.core

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.sql.SQLContext

object parquettest2 {
  def main(args: Array[String]) {
    val conf = new SparkConf()
      .setAppName("parquettest2")
    val sc = new SparkContext(conf)
    val sqlcontext = new SQLContext(sc)

    // Read the Parquet file; gender and country are inferred from the
    // gender=male/country=US directories in the path.
    val df = sqlcontext.read.parquet(
      "hdfs://master:9000/test/users/gender=male/country=US/users.parquet")
    df.printSchema()
    df.show()

    sc.stop()
  }
}
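Either version can then be packaged into a jar and submitted to the cluster; a sketch of the submit command (the jar name and master URL are assumptions):

spark-submit \
  --class com.spark.study.core.parquettest2 \
  --master spark://master:7077 \
  parquettest2.jar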
Output snippet:
root
|-- name: string (nullable = false)
|-- favorite_color: string (nullable = true)
|-- favorite_numbers: array (nullable = false)
| |-- element: integer (containsNull = false)
|-- gender: string (nullable = true)
|-- country: string (nullable = true)
+------+--------------+----------------+------+-------+
|  name|favorite_color|favorite_numbers|gender|country|
+------+--------------+----------------+------+-------+
|Alyssa|          null|  [3, 9, 15, 20]|  male|     US|
|   Ben|           red|              []|  male|     US|
+------+--------------+----------------+------+-------+
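Note that the code above passes a path inside the partition directories, and Spark still reports gender and country in the schema. Whether this works depends on the Spark version; from Spark 1.6 onward the table root can be made explicit with the basePath option (a sketch reusing the paths above):

val df = sqlcontext.read
  .option("basePath", "hdfs://master:9000/test/users")
  .parquet("hdfs://master:9000/test/users/gender=male/country=US/users.parquet")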