1、Create/Drop/Alter/Use Database
Create Database
CREATE (DATABASE|SCHEMA) [IF NOT EXISTS] database_name
[COMMENT database_comment]
[LOCATION hdfs_path]
[WITH DBPROPERTIES (property_name=property_value, ...)];
e.g. >> create database one comment 'my first hive warehouse';
Drop Database
DROP (DATABASE|SCHEMA) [IF EXISTS] database_name [RESTRICT|CASCADE];
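The default is RESTRICT, which fails if the database still contains tables; CASCADE drops the contained tables as well. A minimal sketch, reusing the database created above:
>> drop database if exists one cascade;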
Alter Database
ALTER (DATABASE|SCHEMA) database_name SET DBPROPERTIES (property_name=property_value, ...); -- (Note: SCHEMA added in Hive 0.14.0)
ALTER (DATABASE|SCHEMA) database_name SET OWNER [USER|ROLE] user_or_role; -- (Note: Hive 0.13.0 and later; SCHEMA added in Hive 0.14.0)
ALTER (DATABASE|SCHEMA) database_name SET LOCATION hdfs_path; -- (Note: Hive 2.2.1, 2.4.0 and later)
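For example, a sketch that sets a property on the database created earlier (the property name here is made up for illustration):
>> alter database one set dbproperties ('edited-by'='lyh');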
Use Database
USE database_name;
USE DEFAULT;
2、Create/Drop/Truncate Table
primitive_type
: TINYINT (1-byte signed integer)
| SMALLINT (2-byte signed integer)
| INT (4-byte signed integer)
| BIGINT (8-byte signed integer)
| BOOLEAN (TRUE/FALSE)
| FLOAT (single-precision floating point)
| DOUBLE (double-precision floating point)
| DOUBLE PRECISION
| STRING
| BINARY (byte array)
| TIMESTAMP
| DECIMAL
| DECIMAL(precision, scale)
| DATE
| VARCHAR
| CHAR
data_type
: primitive_type
| array_type e.g. array('lyh','yy'); the second element is accessed as array_name[1]
| map_type e.g. map('name1','lyh','name2','yy'); map_name['name1'] returns 'lyh'
| struct_type e.g. struct<first:string,second:string>; for s = ('lyh','yy'), s.first returns 'lyh'
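A minimal sketch putting the three complex types together (the table and column names here are made up for illustration):
create table complex_demo (
  names  array<string>,
  scores map<string,int>,
  addr   struct<city:string,street:string>
)
row format delimited fields terminated by ','
collection items terminated by '|'
map keys terminated by ':';
-- names[1] is the second array element, scores['math'] a map lookup, addr.city a struct field
select names[1], scores['math'], addr.city from complex_demo;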
CREATE [TEMPORARY] [EXTERNAL] TABLE [IF NOT EXISTS] [db_name.]table_name -- (Note: TEMPORARY available in Hive 0.14.0 and later)
[(col_name data_type [COMMENT col_comment], ... [constraint_specification])]
[COMMENT table_comment]
[PARTITIONED BY (col_name data_type [COMMENT col_comment], ...)]
[CLUSTERED BY (col_name, col_name, ...) [SORTED BY (col_name [ASC|DESC], ...)] INTO num_buckets BUCKETS]
[SKEWED BY (col_name, col_name, ...) -- (Note: Available in Hive 0.10.0 and later)]
ON ((col_value, col_value, ...), (col_value, col_value, ...), ...)
[STORED AS DIRECTORIES]
[
[ROW FORMAT row_format]
[STORED AS file_format]
| STORED BY 'storage.handler.class.name' [WITH SERDEPROPERTIES (...)] -- (Note: Available in Hive 0.6.0 and later)
]
[LOCATION hdfs_path]
[TBLPROPERTIES (property_name=property_value, ...)] -- (Note: Available in Hive 0.6.0 and later)
[AS select_statement]; -- (Note: Available in Hive 0.5.0 and later; not supported for external tables)
>> create table one.t1 (name string comment 'user name',age int comment 'user age') comment '1111';
>> create table two.t2 (name string comment 'user name',age int comment 'user age') comment '2222';
>> Even while using database one, the second statement can create a table in database two by qualifying the table name with the database name.
>> create table one.t3 (name string comment 'user name',age int comment 'user age') comment '3333' tblproperties ('chuangjianzhe'='wo');
>> In most cases TBLPROPERTIES is simply a set of key/value pairs that attach extra documentation to a table.
>> Use describe extended tableName to get detailed information about a table.
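For example, against the t3 table created above:
>> describe extended one.t3;
The Detailed Table Information section of the output includes the tblproperties set at creation time.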
CREATE [TEMPORARY] [EXTERNAL] TABLE [IF NOT EXISTS] [db_name.]table_name
LIKE existing_table_or_view_name [LOCATION hdfs_path];
>> create table t4liket1 like t1;
>> This creates a table t4liket1 with the same structure as t1 (schema only; no data is copied).
Create an external table that reads all of the data under /hiveData in HDFS (fields separated by commas):
create external table users (name string,age int,height int,weight float) comment 'my users table' row format delimited fields terminated by ',' location '/hiveData';
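For reference, a file under /hiveData would then hold one comma-delimited record per line; the rows below are made-up samples:
tom,25,180,70.5
jerry,23,175,65.0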
3、Loading Data into Hive
Hive has no row-level insert/update/delete; the only ways to get data in are bulk loading or placing files at the location a table expects.
LOAD DATA [LOCAL] INPATH 'filepath' [OVERWRITE] INTO TABLE tablename [PARTITION (partcol1=val1, partcol2=val2 ...)]
>> load data local inpath '/root/user3.txt' overwrite into table users;
>> Loads data from a local directory into Hive.
>> The users table is an external table whose data lives under /hiveData in HDFS; if the files in that directory are deleted, the table's data is gone as well.
>> When an internal (managed) table is dropped, both its metadata and its data are deleted.
>> When an external table is dropped, only its metadata is deleted; the data remains.
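A quick way to see the difference, using the tables above:
>> drop table users;   -- external: only the metastore entry goes away; the files under /hiveData remain
>> drop table one.t1;  -- managed (internal): the metadata and the data in the warehouse directory are both removed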
4、Queries
4.1 The Hive Version of the WordCount Program
1、Write a text file wc.txt:
hello java
hello c
hello c++
hello python
hello php
hello java
hello c
hello c++
hello python
hello php
2、Put the file into HDFS: hadoop fs -put ./wc.txt /hiveData
3、Create a table: create table wc (line string)
4、Load the data into the table: load data inpath '/hiveData/wc.txt' into table wc
5、Run the WordCount program using Hive's SQL-like language:
>> select word,count(word) as t from (select explode(split(line,' ')) as word from wc) w group by word
>> The split(line,' ') function splits the line column on spaces into an array (Hive's built-in array type).
>> The explode(array) function turns each element of an array into a separate row.
>> The outer query then groups by each word and counts its occurrences.
6、When you execute this statement you will see MapReduce job progress output.
That is MapReduce at work: Hive's job is to translate the SQL-like language into MapReduce programs (though it is not limited to MapReduce) and run them to query the data.
4.2 The limit Clause
select line from wc limit 2 // returns the first two rows
select line from wc limit 1,2 // returns 2 rows starting from (and including) the second row -- (Note: the offset form is available in Hive 2.0.0 and later)
4.3 The case ... when ... then ... else ... end clause, similar to an if expression
select case line when 'hello java' then 'A' else line end from wc
5、Other queries are very similar to standard SQL, so they are not listed one by one here.