Today I continued studying Sqoop. I first reviewed the earlier material, then covered: MySQL-to-Hive, MySQL-to-Hive summary, MySQL-to-HBase, MySQL-to-HBase summary, MySQL-to-Hive internals, HDFS/Hive-to-MySQL, integrating Hive with HBase, recompiling hive-hbase-handler-1.2.1.jar, a Hive-HBase case study, HBase-to-MySQL, scripts, common commands, shared arguments, and the per-command arguments for import, export/codegen, create-hive-table, eval, import-all-tables, job, list-databases, list-tables, merge, and metastore.
To summarize:
1. MySQL to Hive
Start HDFS.
Start YARN.
Approach 1: create the table first, then import the data.
Approach 2: import the data first, then create the table.
Approach 3: import the data and have the table created automatically.
1) Sqoop's conf directory needs a copy of Hive's hive-site.xml configuration file.
2) Hive's metastore must be backed by MySQL rather than the embedded Derby database (see the hive-site.xml sketch at the end of this section).
3) The Sqoop command:
[alex@hadoop102 sqoop-1.4.7.bin__hadoop-2.6.0]$ bin/sqoop import \
--connect jdbc:mysql://hadoop103:3306/company \
--username root \
--password 000000 \
--table staff \
--num-mappers 1 \
--fields-terminated-by "\t" \
--hive-import \
--hive-overwrite \
--hive-table staff_hive
DDL for creating the Hive table manually (approaches 1 and 2):
create table staff_hive(id int, name string, sex string) row format delimited fields terminated by '\t';
create table staff_hive1(id int, name string, sex string) row format delimited fields terminated by '\t';
[alex@hadoop102 sqoop-1.4.7.bin__hadoop-2.6.0]$ bin/sqoop import \
--connect jdbc:mysql://hadoop103:3306/company \
--username root \
--password 000000 \
--table staff \
--num-mappers 1 \
--fields-terminated-by "\t" \
--hive-import \
--hive-overwrite \
--hive-table staff_hive
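For reference, a minimal hive-site.xml sketch for pointing the metastore at MySQL; the database name metastore and the credentials below are assumptions, so adjust them to your environment:
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <!-- assumed metastore database name; createDatabaseIfNotExist lets Hive initialize it -->
    <value>jdbc:mysql://hadoop103:3306/metastore?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <!-- assumed password, matching the one used in the commands above -->
    <value>000000</value>
  </property>
</configuration>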
2. MySQL to HBase
Step 1: start the services (a hedged command sketch follows this list).
1) Start HDFS, because HBase stores its data on HDFS.
2) Start YARN, because Sqoop commands are translated into MapReduce jobs that run on YARN.
3) Start ZooKeeper, because this HBase deployment uses an external ZooKeeper.
4) Start MySQL.
5) Start HBase.
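The corresponding start-up commands, as a sketch; it assumes the scripts are on the PATH and that MySQL runs as a system service:
$ start-dfs.sh               # HDFS
$ start-yarn.sh              # YARN
$ zkServer.sh start          # ZooKeeper (on each quorum node)
$ sudo service mysqld start  # MySQL
$ start-hbase.sh             # HBase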
Step 2: prepare the MySQL data.
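A hedged sketch of that preparation; the book table schema is inferred from the --columns list in the command below, and the sample rows are placeholders:
mysql> create database if not exists db_library;
mysql> use db_library;
mysql> create table book(id int primary key auto_increment, name varchar(255), price varchar(255));
mysql> insert into book(name, price) values('Lucene', '100.00'), ('Hadoop', '120.00');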
Step 3: run the Sqoop command.
[victor@node1 conf]$ bin/sqoop import \
--connect jdbc:mysql://hadoop103:3306/db_library \
--username root \
--password 000000 \
--table book \
--columns "id,name,price" \
--column-family "info" \
--hbase-create-table \
--hbase-row-key "id" \
--hbase-table "hbase_book" \
--num-mappers 1 \
--split-by id
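To spot-check the import, scan the target table from the HBase shell:
hbase(main):001:0> scan 'hbase_book'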
[alex@hadoop102 sqoop-1.4.7]$ bin/sqoop import \
--connect jdbc:mysql://hadoop103:3306/company \
--username root \
--password 000000 \
--table staff \
--columns "id,name,sex" \
--column-family "info" \
--hbase-create-table \
--hbase-row-key "id" \
--hbase-table "hbase_staff" \
--num-mappers 1 \
--split-by id
[alex@hadoop102 sqoop-1.4.7]$ bin/sqoop import \
--connect jdbc:mysql://hadoop103:3306/company \
--username root \
--password 000000 \
--table staff \
--columns "id,name,sex" \
--column-family "info" \
--hbase-row-key "id" \
--hbase-table "mk1" \
--num-mappers 1 \
--split-by id
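Note that this last command omits --hbase-create-table, so the target table must already exist; with Sqoop 1.4.x against HBase 1.x the auto-create option often fails anyway, which makes a manual create in the HBase shell the safer route (table and column-family names match the command above):
hbase(main):001:0> create 'mk1', 'info'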
3. Hive/HDFS to MySQL
[victor@node1 sqoop-1.4.7]$ bin/sqoop export \
--connect jdbc:mysql://hadoop103:3306/company \
--username root \
--password 000000 \
--table staff \
--num-mappers 1 \
--export-dir /user/hive/warehouse/staff_hive3 \
--input-fields-terminated-by "\t"
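To confirm the export landed, query the target table (a quick sketch):
mysql> select * from company.staff limit 5;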
Fixing garbled Chinese characters (mojibake):
[alex@hadoop102 sqoop-1.4.7]$ bin/sqoop export \
--connect "jdbc:mysql://hadoop103:3306/company?characterEncoding=UTF-8" \
--username root \
--password 000000 \
--table staff \
--num-mappers 1 \
--export-dir /user/hive/warehouse/staff_hive3 \
--input-fields-terminated-by "\t"
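If garbled characters persist, the table on the MySQL side must also use a UTF-8 character set; a hedged sketch that converts it in place (check the current charset first with show create table staff):
mysql> alter table staff convert to character set utf8;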