Spark SQL usage examples
1. Creating a DataFrame
val rdd = sc.makeRDD(List((1,"zhang",19,"bj",1000000), (2,"wang",29,"sh",100), (3,"li",49,"sz",999)));
val df = rdd.toDF("id","name","age","addr","salary");
df.show()
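Note: toDF on an RDD (and the $"col" column syntax used below) relies on the SQLContext implicits. The spark-shell imports them automatically; in a standalone program you would add something along these lines (a minimal sketch for the Spark 1.x-style API used in these notes):
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext.implicits._   // enables rdd.toDF(...) and the $"colName" syntax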
(1) Query
df.select("id","name").show();
(2) Conditional query
df.select("id","id","id",“name”).where("name"==="b").show()(3)排序查询orderBy("name" === "b").show()
(3) Sort query
orderBy("colName")               ascending order
orderBy($"colName".desc)         descending order
orderBy($"col1", $"col2".desc)   sort on two columns
df.select("id","name").orderBy($"name".desc).show
df.select("id","name").sort($"name".desc).show
(4) Group-by query
groupBy("colName", ...).max("colName")   maximum
groupBy("colName", ...).min("colName")   minimum
groupBy("colName", ...).avg("colName")   average
groupBy("colName", ...).sum("colName")   sum
groupBy("colName", ...).count()          row count
groupBy("colName", ...).agg(...)         combines several aggregations in one call (see the sketch after the example below)
val rdd = sc.makeRDD(List((1,"a","bj"),(2,"b","sh"),(3,"c","gz"),(4,"d","bj"),(5,"e","gz")));
val df = rdd.toDF("id","name","addr");
df.groupBy("addr").count().show()
(5) Join query
val dept=sc.parallelize(List((100,"财务部"),(200,"研发部"))).toDF("deptid","deptname")
val emp=sc.parallelize(List((1,100,"张财务"),(2,100,"李会计"),(3,200,"王艳发"))).toDF("id","did","name")
dept.join(emp,"name".desc).show(4)分组查询groupBy("列名",...).max(列名)求最大值groupBy("列名",...).min(列名)求最小值groupBy("列名",...).avg(列名)求平均值groupBy("列名",...).sum(列名)求和groupBy("列名",...).count()求个数groupBy("列名",...).agg可以将多个方法进行聚合valrdd=sc.makeRDD(List((1,"a","bj"),(2,"b","sh"),(3,"c","gz"),(4,"d","bj"),(5,"e","gz")));valdf=rdd.toDF("id","name","addr");df.groupBy("addr").count().show()(5)连接查询valdept=sc.parallelize(List((100,"财务部"),(200,"研发部"))).toDF("deptid","deptname")valemp=sc.parallelize(List((1,100,"张财务"),(2,100,"李会计"),(3,200,"王艳发"))).toDF("id","did","name")dept.join(emp,“deptid” === "did").showdept.join(emp,"did").show
dept.join(emp,"did").showdept.join(emp,“deptid” === "did","left").showdept.join(emp,"did","left").show
dept.join(emp,"did","left").showdept.join(emp,“deptid” === "did","right").show(6)执行运算valdf=sc.makeRDD(List(1,2,3,4,5)).toDF("num");df.select("did","right").show
(6) Performing computations
val df = sc.makeRDD(List(1,2,3,4,5)).toDF("num");
df.select("did","right").show(6)执行运算valdf=sc.makeRDD(List(1,2,3,4,5)).toDF("num");df.select(“num” * 100).show
(7) Working with arrays
val df = sc.makeRDD(List(("zhang",Array("bj","sh")),("li",Array("sz","gz")))).toDF("name","addrs")
df.selectExpr("name","addrs[0]").show
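The same element access can also be written with the Column API instead of a SQL expression string; a minimal sketch (the alias is just for readability):
df.select($"name", $"addrs"(0).as("first_addr")).show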
(8) Working with structs
{"name":"陈晨","address":{"city":"西安","street":"南二环甲字1号"}}
{"name":"娜娜","address":{"city":"西安","street":"南二环甲字2号"}}
val df = sqlContext.read.json("file:///root/work/users.json")
df.select("name","address.street").show
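printSchema is handy for checking how the nested JSON was mapped to a struct before selecting fields from it:
df.printSchema   // shows address as a struct containing the city and street fields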
(9) Miscellaneous
df.count // get the total number of records
val row = df.first() // get the first record
val value = row.getString(1) // get the value of the given column in that row
df.collect // collect all rows of this DataFrame into an Array; under the hood this simply calls collect on the DataFrame's underlying RDD
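A minimal self-contained sketch of these calls, using a small DataFrame like the earlier examples:
val df = sc.makeRDD(List((1,"a","bj"),(2,"b","sh"))).toDF("id","name","addr")
println(df.count)                                         // 2
val row = df.first()                                      // [1,a,bj]
val name = row.getString(1)                               // "a"
val rows: Array[org.apache.spark.sql.Row] = df.collect()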
2. Using SQL directly to operate on DataFrames
(0) Creating tables
Create a temporary table - the table is dropped when the session ends
df.registerTempTable("tabName")
In spark-2.2.1 this has changed to the following methods:
createTempView, createOrReplaceTempView, createGlobalTempView, createOrReplaceGlobalTempView
Create a persistent table - the table is not dropped when the session ends
df.saveAsTable("tabName")
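For reference, the rough Spark 2.x equivalents of the two calls above are (a minimal sketch):
df.createOrReplaceTempView("tabName")    // temporary view, dropped with the session
df.write.saveAsTable("tabName")          // persistent table, kept after the session ends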
(1) Query
val sqlContext = new org.apache.spark.sql.SQLContext(sc);
val df = sc.makeRDD(List((1,"a","bj"),(2,"b","sh"),(3,"c","gz"),(4,"d","bj"),(5,"e","gz"))).toDF("id","name","addr");
df.registerTempTable("stu");
sqlContext.sql("select * from stu").show()
(2) Conditional query
val df = sc.makeRDD(List((1,"a","bj"),(2,"b","sh"),(3,"c","gz"),(4,"d","bj"),(5,"e","gz"))).toDF("id","name","addr");
df.registerTempTable("stu");
sqlContext.sql("select * from stu where addr = 'bj'").show()
(3) Sort query
val sqlContext = new org.apache.spark.sql.SQLContext(sc);
val df = sc.makeRDD(List((1,"a","bj"),(2,"b","sh"),(3,"c","gz"),(4,"d","bj"),(5,"e","gz"))).toDF("id","name","addr");
df.registerTempTable("stu");
sqlContext.sql("select * from stu order by addr").show()
(4) Group-by query
val sqlContext = new org.apache.spark.sql.SQLContext(sc);
val df = sc.makeRDD(List((1,"a","bj"),(2,"b","sh"),(3,"c","gz"),(4,"d","bj"),(5,"e","gz"))).toDF("id","name","addr");
df.registerTempTable("stu");
sqlContext.sql("select addr,count(*) from stu group by addr").show()
(5) Join query
val sqlContext = new org.apache.spark.sql.SQLContext(sc);
val dept=sc.parallelize(List((100,"财务部"),(200,"研发部"))).toDF("deptid","deptname")
val emp=sc.parallelize(List((1,100,"张财务"),(2,100,"李会计"),(3,200,"王艳发"))).toDF("id","did","name")
dept.registerTempTable("deptTab");
emp.registerTempTable("empTab");
sqlContext.sql("select deptname,name from deptTab inner join empTab on deptTab.deptid = empTab.did").show()
(6) Performing computations
val sqlContext = new org.apache.spark.sql.SQLContext(sc);
val df = sc.makeRDD(List(1,2,3,4,5)).toDF("num");
df.registerTempTable("tabx")
sqlContext.sql("select num * 100 from tabx").show();
(7) Paged query
val sqlContext = new org.apache.spark.sql.SQLContext(sc);
val df = sc.makeRDD(List(1,2,3,4,5)).toDF("num");
df.registerTempTable("tabx")
sqlContext.sql("select * from tabx limit 3").show();
(8) Listing tables
val sqlContext = new org.apache.spark.sql.SQLContext(sc);
val df1 = sc.makeRDD(List(1,2,3,4,5)).toDF("num");
df1.registerTempTable("tabx1")
val df2 = sc.makeRDD(List(1,2,3,4,5)).toDF("num");
df2.saveAsTable("tabx2")
sqlContext.sql("show tables").show
(9) Hive-style operations
val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
hiveContext.sql("CREATE TABLE IF NOT EXISTS zzz (key INT, value STRING) row format delimited fields terminated by '|'")
hiveContext.sql("LOAD DATA LOCAL INPATH 'file:///root/work/hdata.txt' INTO TABLE zzz")
val df5 = hiveContext.sql("select key,value from zzz")
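df5 is an ordinary DataFrame, so all of the operations shown earlier apply to it, for example:
df5.show
df5.groupBy("key").count().show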