Subqueries Need a Table Alias

This post looks at why a subquery in a SQL query must be given a table alias. An example shows that a subquery without an alias fails with a parse error, while the same query runs fine once an alias is added. The reason is that a subquery after FROM is treated as a temporary table, so it needs a name.


Let's start with an example.
Without an alias:

hive> select user_id,type,cnt,content,
    > row_number() over(partition by user_id,type order by cnt desc) as rn,
    > SUBSTR(FROM_UNIXTIME(UNIX_TIMESTAMP()),1,10) ,dt from(
    > select user_id,'visit_ip' as type,sum(pv) as cnt,visit_ip as content,dt from qfbap_dwd.dwd_user_pc_pv group by user_id,visit_ip,dt
    > union all
    > select user_id,'cookie_id' as type,sum(pv) as cnt,cookie_id as content,dt from qfbap_dwd.dwd_user_pc_pv group by user_id,cookie_id,dt
    > union all
    > select user_id,'browser_name' as type,sum(pv) as cnt,browser_name as content,dt from qfbap_dwd.dwd_user_pc_pv group by browser_name,user_id,dt
    > union all
    > select user_id,'visit_os' as type,sum(pv) as cnt,visit_os as content,dt from qfbap_dwd.dwd_user_pc_pv group by user_id,visit_os,dt
    > );  
NoViableAltException(-1@[207:51: ( KW_AS )?])
	at org.antlr.runtime.DFA.noViableAlt(DFA.java:158)
	at org.antlr.runtime.DFA.predict(DFA.java:144)
	at org.apache.hadoop.hive.ql.parse.HiveParser_FromClauseParser.subQuerySource(HiveParser_FromClauseParser.java:5475)
	at org.apache.hadoop.hive.ql.parse.HiveParser_FromClauseParser.fromSource(HiveParser_FromClauseParser.java:3931)
	at org.apache.hadoop.hive.ql.parse.HiveParser_FromClauseParser.joinSource(HiveParser_FromClauseParser.java:1798)
	at org.apache.hadoop.hive.ql.parse.HiveParser_FromClauseParser.fromClause(HiveParser_FromClauseParser.java:1456)
	at org.apache.hadoop.hive.ql.parse.HiveParser.fromClause(HiveParser.java:40285)
	at org.apache.hadoop.hive.ql.parse.HiveParser.singleSelectStatement(HiveParser.java:38146)
	at org.apache.hadoop.hive.ql.parse.HiveParser.selectStatement(HiveParser.java:37831)
	at org.apache.hadoop.hive.ql.parse.HiveParser.regularBody(HiveParser.java:37768)
	at org.apache.hadoop.hive.ql.parse.HiveParser.queryStatementExpressionBody(HiveParser.java:36975)
	at org.apache.hadoop.hive.ql.parse.HiveParser.queryStatementExpression(HiveParser.java:36851)
	at org.apache.hadoop.hive.ql.parse.HiveParser.execStatement(HiveParser.java:1332)
	at org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:1030)
	at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:199)
	at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:166)
	at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:417)
	at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:335)
	at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1026)
	at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1091)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:962)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:952)
	at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:269)
	at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:221)
	at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:431)
	at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:800)
	at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:694)
	at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:633)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
FAILED: ParseException line 11:1 cannot recognize input near '<EOF>' '<EOF>' '<EOF>' in subquery source

Hive says the parse error is at line 11, but looking over the query nothing seems wrong...
Now with an alias added at the end:

hive> select user_id,type,cnt,content,
    > row_number() over(partition by user_id,type order by cnt desc) as rn,
    > SUBSTR(FROM_UNIXTIME(UNIX_TIMESTAMP()),1,10) ,dt from(
    > select user_id,'visit_ip' as type,sum(pv) as cnt,visit_ip as content,dt from qfbap_dwd.dwd_user_pc_pv group by user_id,visit_ip,dt
    > union all
    > select user_id,'cookie_id' as type,sum(pv) as cnt,cookie_id as content,dt from qfbap_dwd.dwd_user_pc_pv group by user_id,cookie_id,dt
    > union all
    > select user_id,'browser_name' as type,sum(pv) as cnt,browser_name as content,dt from qfbap_dwd.dwd_user_pc_pv group by browser_name,user_id,dt
    > union all
    > select user_id,'visit_os' as type,sum(pv) as cnt,visit_os as content,dt from qfbap_dwd.dwd_user_pc_pv group by user_id,visit_os,dt
    > )t1 limit 10;
Total jobs = 5
Launching Job 1 out of 5
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1545674752744_0068, Tracking URL = http://mini03:8088/proxy/application_1545674752744_0068/
Kill Command = /home/hadoop/install/hadoop-2.5.0-cdh5.3.6/bin/hadoop job  -kill job_1545674752744_0068

Just adding the table alias t1 makes it run.
So why is that?
Because the query after FROM is used as a table, it has to be given a name. You can even see this in the parser trace above: the `( KW_AS )?` in the NoViableAltException is the grammar expecting an (optional) AS followed by an identifier after the closing parenthesis of the subquery.
That's just how the grammar works!
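
A minimal sketch of the same rule, stripped of the blog's tables (the values are made up, and it assumes a Hive version that allows SELECT without FROM):

-- Fails to parse: the derived table in FROM has no name.
-- select x from (select 1 as x union all select 2 as x);

-- Works: the subquery is given the alias t, so the outer query can treat it as a table.
select t.x
from (select 1 as x union all select 2 as x) t;

-- The optional AS keyword (the "( KW_AS )?" from the parser trace) also works:
select t.x
from (select 1 as x union all select 2 as x) as t;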
