A tip users may find handy: to check a Hive table's detailed metadata, such as its InputFormat, you can use desc extended tablename; (by run)
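For example, against the test table used below (desc formatted is an equivalent, more readable variant):

-- print detailed metadata, including the table's InputFormat
desc extended test_virtual_columns;
-- same information laid out as a table
desc formatted test_virtual_columns;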
Normally, when you pull results with a SELECT statement in Hive, there is no way to tell which file a given row came from, or where exactly it sits inside that file. Hive accounts for this with virtual columns: three predefined columns that can be referenced in any query:
1. INPUT__FILE__NAME: the full path of the file the map task read the row from.
2. BLOCK__OFFSET__INSIDE__FILE: for RCFile or block-compressed SequenceFile, this is the block's offset in the file, i.e. the byte offset of the first byte of the current block; for TextFile, it is the byte offset of the first byte of the current row.
3. ROW__OFFSET__INSIDE__BLOCK: the row number inside the block for RCFile and SequenceFile; always 0 for TextFile.
Note: to make ROW__OFFSET__INSIDE__BLOCK available, you must first run
set hive.exec.rowoffset=true;
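Putting the setting and the three columns together, a minimal query template looks like this (your_table and col are placeholders, not a real table):

-- enable the row-offset virtual column for this session
set hive.exec.rowoffset=true;
-- select the three virtual columns next to an ordinary column
select col, INPUT__FILE__NAME, BLOCK__OFFSET__INSIDE__FILE, ROW__OFFSET__INSIDE__BLOCK
from your_table;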
Tests:
1. table: test_virtual_columns
InputFormat: org.apache.hadoop.mapred.TextInputFormat
query:
select a, INPUT__FILE__NAME, BLOCK__OFFSET__INSIDE__FILE, ROW__OFFSET__INSIDE__BLOCK from test_virtual_columns;
result:
- qweqwe hdfs://10.2.6.102/user/hive/warehouse/tmp.db/test_virtual_columns/t3.txt 0 0
- dfdf hdfs://10.2.6.102/user/hive/warehouse/tmp.db/test_virtual_columns/t3.txt 7 0
- sdafsafsaf hdfs://10.2.6.102/user/hive/warehouse/tmp.db/test_virtual_columns/t3.txt 12 0
- dfdffd hdfs://10.2.6.102/user/hive/warehouse/tmp.db/test_virtual_columns/t3.txt 23 0
- dsf hdfs://10.2.6.102/user/hive/warehouse/tmp.db/test_virtual_columns/t3.txt 30 0
- 1 hdfs://10.2.6.102/user/hive/warehouse/tmp.db/test_virtual_columns/t1.txt 0 0
- 2 hdfs://10.2.6.102/user/hive/warehouse/tmp.db/test_virtual_columns/t1.txt 2 0
- 3 hdfs://10.2.6.102/user/hive/warehouse/tmp.db/test_virtual_columns/t1.txt 4 0
- 4 hdfs://10.2.6.102/user/hive/warehouse/tmp.db/test_virtual_columns/t1.txt 6 0
- 5 hdfs://10.2.6.102/user/hive/warehouse/tmp.db/test_virtual_columns/t1.txt 8 0
- 6 hdfs://10.2.6.102/user/hive/warehouse/tmp.db/test_virtual_columns/t1.txt 10 0
- 7 hdfs://10.2.6.102/user/hive/warehouse/tmp.db/test_virtual_columns/t1.txt 12 0
- 8 hdfs://10.2.6.102/user/hive/warehouse/tmp.db/test_virtual_columns/t2.txt 0 0
- 9 hdfs://10.2.6.102/user/hive/warehouse/tmp.db/test_virtual_columns/t2.txt 2 0
- 10 hdfs://10.2.6.102/user/hive/warehouse/tmp.db/test_virtual_columns/t2.txt 4 0
- 11 hdfs://10.2.6.102/user/hive/warehouse/tmp.db/test_virtual_columns/t2.txt 7 0
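Reading the TextFile offsets above: 0, 7, 12, ... in t3.txt are the byte positions where each row begins (previous offset + previous row length + 1 for the newline). Since virtual columns can also appear in the where clause, a single suspect row can be pulled back directly, e.g. the row starting at byte 12 of t3.txt (a sketch against the table above):

-- fetch exactly one row by its file and byte offset
select a, INPUT__FILE__NAME, BLOCK__OFFSET__INSIDE__FILE
from test_virtual_columns
where INPUT__FILE__NAME like '%/t3.txt'
  and BLOCK__OFFSET__INSIDE__FILE = 12;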
2. table: nginx
InputFormat: org.apache.hadoop.hive.ql.io.RCFileInputFormat
query:
select hostname, INPUT__FILE__NAME, BLOCK__OFFSET__INSIDE__FILE, ROW__OFFSET__INSIDE__BLOCK from nginx where dt='2013-09-01' limit 10;
result:
- 10.1.2.162 hdfs://10.2.6.102/share/data/log/nginx_rcfile/2013-09-01/000000_0 537155468 0
- 10.1.2.162 hdfs://10.2.6.102/share/data/log/nginx_rcfile/2013-09-01/000000_0 537155468 1
- 10.1.2.162 hdfs://10.2.6.102/share/data/log/nginx_rcfile/2013-09-01/000000_0 537155468 2
- 10.1.2.162 hdfs://10.2.6.102/share/data/log/nginx_rcfile/2013-09-01/000000_0 537155468 3
- 10.1.2.162 hdfs://10.2.6.102/share/data/log/nginx_rcfile/2013-09-01/000000_0 537155468 4
- 10.1.2.162 hdfs://10.2.6.102/share/data/log/nginx_rcfile/2013-09-01/000000_0 537155468 5
- 10.1.2.162 hdfs://10.2.6.102/share/data/log/nginx_rcfile/2013-09-01/000000_0 537155468 6
- 10.1.2.162 hdfs://10.2.6.102/share/data/log/nginx_rcfile/2013-09-01/000000_0 537155468 7
- 10.1.2.162 hdfs://10.2.6.102/share/data/log/nginx_rcfile/2013-09-01/000000_0 537155468 8
- 10.1.2.162 hdfs://10.2.6.102/share/data/log/nginx_rcfile/2013-09-01/000000_0 537155468 9
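Note how all ten rows share the same block offset (537155468) while ROW__OFFSET__INSIDE__BLOCK counts up from 0 to 9: they all live in the same RCFile block. The pair of offsets identifies a single row, so a specific row can be re-fetched like this (a sketch against the nginx table above):

-- drill into one row of that block
select hostname
from nginx
where dt='2013-09-01'
  and BLOCK__OFFSET__INSIDE__FILE = 537155468
  and ROW__OFFSET__INSIDE__BLOCK = 3;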
Whenever you hit dirty data or anomalous results, selecting these three values lets you pinpoint the original source file and the exact position of the problem row, which is very convenient.
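As a concrete dirty-data pattern (a sketch, assuming column a is expected to be numeric): a cast in the predicate surfaces the bad rows, and the virtual columns report where each one came from.

-- locate rows whose value does not parse as an int
select a, INPUT__FILE__NAME, BLOCK__OFFSET__INSIDE__FILE, ROW__OFFSET__INSIDE__BLOCK
from test_virtual_columns
where cast(a as int) is null;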