1. Create the following three tables in Hive: student, course, and sc.
create table student(Sno int,Sname string,Sex string,Sage int,Sdept string) row format delimited fields terminated by ',' stored as textfile;
create table course(Cno int,Cname string) row format delimited fields terminated by ',' stored as textfile;
create table sc(Sno int,Cno int,Grade int) row format delimited fields terminated by ',' stored as textfile;
Data:
student.txt:
95001,AA,F,20,CS
95002,BB,M,19,IS
95003,CC,M,22,MA
95004,DD,F,19,IS
95005,EE,F,18,MA
95006,FF,F,23,CS
95007,GG,M,19,MA
95008,HH,M,18,CS
95009,II,M,18,MA
95010,JJ,F,19,CS
95011,KK,F,18,MA
95012,LL,M,20,CS
95013,MM,F,21,CS
95014,NN,M,19,CS
95015,OO,F,18,MA
95016,PP,F,21,MA
95017,QQ,M,18,IS
95018,RR,M,19,IS
95019,SS,M,19,IS
95020,TT,F,21,IS
95021,UU,F,17,MA
95022,VV,F,20,MA
sc.txt:
95001,1,81
95001,2,85
95001,3,88
95001,4,70
95002,2,90
95002,3,80
95002,4,71
95002,5,60
95003,1,82
95003,3,90
95003,5,100
95004,1,80
95004,2,92
95004,4,91
95004,5,70
95005,1,70
95005,2,92
95005,3,99
95005,6,87
95006,1,72
95006,2,62
95006,3,100
95006,4,59
95006,5,60
95006,6,98
95007,3,68
95007,4,91
95007,5,94
95007,6,78
95008,1,98
95008,3,89
95008,6,91
95009,2,81
95009,4,89
95009,6,100
95010,2,98
95010,5,90
95010,6,80
95011,1,81
95011,2,91
95011,3,81
95011,4,86
95012,1,81
95012,3,78
95012,4,85
95012,6,98
95013,1,98
95013,2,58
95013,4,88
95013,5,93
95014,1,91
95014,2,100
95014,4,98
95015,1,91
95015,3,59
95015,4,100
95015,6,95
95016,1,92
95016,2,99
95016,4,82
95017,4,82
95017,5,100
95017,6,58
95018,1,95
95018,2,100
95018,3,67
95018,4,78
95019,1,77
95019,2,90
95019,3,91
95019,4,67
95019,5,87
95020,1,66
95020,2,99
95020,5,93
95021,2,93
95021,5,91
95021,6,99
95022,3,69
95022,4,93
95022,5,82
95022,6,100
course.txt:
1,hadoop
2,hive
3,ruby
4,java
5,python
6,php
load data local inpath '/home/hadoop/bigdata/hadoop-1.0.4/testdata/student.txt' overwrite into table student;
load data local inpath '/home/hadoop/bigdata/hadoop-1.0.4/testdata/sc.txt' overwrite into table sc;
load data local inpath '/home/hadoop/bigdata/hadoop-1.0.4/testdata/course.txt' overwrite into table course;
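After loading, a quick sanity check confirms the data arrived (a minimal sketch against the tables created above; sc.txt listed here has 82 rows and course.txt has 6):

```sql
-- Row counts should match the source files.
select count(*) from sc;
select count(*) from course;
-- Spot-check a few rows of sc.
select * from sc limit 5;
```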
1, order by:
①, performs a global sort: the combined output of all reducers is fully ordered
②, when hive.mapred.mode=strict, a limit clause is mandatory, to bound the amount of data a single reducer must process
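For example, a global top-N over the sc table (a sketch using the column names from the DDL above):

```sql
-- Global sort: one totally ordered result across all reducers.
-- Under hive.mapred.mode=strict, omitting the limit raises an error.
select Sno, Cno, Grade from sc order by Grade desc limit 10;
```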
2, sort by: when there are multiple reducers, only each individual reducer's output is sorted; there is no global order
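To see the difference from order by, force several reducers (a hedged sketch; the reducer count 3 is arbitrary):

```sql
-- Each of the 3 reducers emits its own slice sorted by Grade,
-- but the concatenated output is not globally ordered.
set mapred.reduce.tasks = 3;
select Sno, Cno, Grade from sc sort by Grade desc;
```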
3, distribute by
Rows that share the same value of the distribute by expression are sent to the same reducer
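Combined with sort by, this gives per-group ordering (a sketch against the sc table above):

```sql
-- All rows for a given Sno land on the same reducer;
-- sort by then orders the rows within each reducer.
select Sno, Cno, Grade from sc distribute by Sno sort by Grade desc;
```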
4, cluster by
In addition to distributing rows like distribute by, it also sorts on that column (ascending only). Hence cluster by is commonly described as distribute by + sort by.
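The shorthand equivalence can be checked directly (a sketch on the sc table above):

```sql
-- cluster by Sno is equivalent to:
--   distribute by Sno sort by Sno
-- Note: cluster by only sorts ascending; it accepts no desc modifier.
select Sno, Cno, Grade from sc cluster by Sno;
```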
Reference links:
①,https://cwiki.apache.org/confluence/display/Hive/LanguageManual+SortBy
②,http://blog.youkuaiyun.com/kntao/article/details/7828154
③, http://blog.sina.com.cn/s/blog_9f48885501017aib.html