1. Sqoop Installation
Sqoop (CDH) download address: http://archive.cloudera.com/cdh5/cdh/5
I. Download and environment configuration
wget http://archive.cloudera.com/cdh5/cdh/5/sqoop-1.4.6-cdh5.7.0.tar.gz
tar -zxvf sqoop-1.4.6-cdh5.7.0.tar.gz -C ~/app
[hadoop@hadoop001 conf]$ vim ~/.bash_profile
export SQOOP_HOME=/home/hadoop/app/sqoop-1.4.6
export PATH=$SQOOP_HOME/bin:$PATH
[hadoop@hadoop001 conf]$ source ~/.bash_profile
II. Sqoop configuration
[hadoop@hadoop001 bin]$ cp sqoop-env-template.sh sqoop-env.sh
[hadoop@hadoop001 bin]$ vim sqoop-env.sh
# Set the absolute paths of Hadoop and Hive
# Set path to where bin/hadoop is available
export HADOOP_COMMON_HOME=/home/hadoop/app/hadoop-2.8.1
# Set path to where hadoop-*-core.jar is available
export HADOOP_MAPRED_HOME=/home/hadoop/app/hadoop-2.8.1
# Set the path to where bin/hbase is available
#export HBASE_HOME=
# Set the path to where bin/hive is available
export HIVE_HOME=/home/hadoop/app/hive-1.1.0
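Sqoop also needs a MySQL JDBC driver on its classpath, so copy the driver jar into $SQOOP_HOME/lib. A minimal sketch, assuming the connector jar has already been downloaded into the current directory (the jar name and version below are assumptions, use whatever you have locally):

# Jar name/version is an assumption -- any MySQL Connector/J jar you have will do
cp mysql-connector-java-5.1.47.jar $SQOOP_HOME/lib/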
III. Start and verify
[hadoop@hadoop001 bin]$ sqoop
Warning: /home/hadoop/app/sqoop-1.4.6/../hbase does not exist! HBase imports will fail.
Please set $HBASE_HOME to the root of your HBase installation.
Warning: /home/hadoop/app/sqoop-1.4.6/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /home/hadoop/app/sqoop-1.4.6/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /home/hadoop/app/sqoop-1.4.6/../zookeeper does not exist! Accumulo imports will fail.
Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
Try 'sqoop help' for usage.
# These warnings can be ignored
IV. Command help
[hadoop@hadoop001 bin]$ sqoop help
Available commands:
codegen Generate code to interact with database records
create-hive-table Import a table definition into Hive
eval Evaluate a SQL statement and display the results
export Export an HDFS directory to a database table
help List available commands
import Import a table from a database to HDFS
import-all-tables Import tables from a database to HDFS
import-mainframe Import datasets from a mainframe server to HDFS
job Work with saved jobs
list-databases List available databases on a server
list-tables List available tables in a database
merge Merge results of incremental imports
metastore Run a standalone Sqoop metastore
version Display version information
[hadoop@hadoop001 bin]$ sqoop help export
[hadoop@hadoop001 bin]$ sqoop help import
2. Sqoop Export
I. Export from HDFS to MySQL
1) View the HDFS file data
[hadoop@hadoop001 ~]$ hadoop fs -cat /user/hive/warehouse/hive_data2.db/emp/emp*
7369 SMITH CLERK 7902 1980-12-17 800.00 20
7499 ALLEN SALESMAN 7698 1981-2-20 1600.00 300.00 30
7521 WARD SALESMAN 7698 1981-2-22 1250.00 500.00 30
7566 JONES MANAGER 7839 1981-4-2 2975.00 20
7654 MARTIN SALESMAN 7698 1981-9-28 1250.00 1400.00 30
7698 BLAKE MANAGER 7839 1981-5-1 2850.00 30
7782 CLARK MANAGER 7839 1981-6-9 2450.00 10
7788 SCOTT ANALYST 7566 1987-4-19 3000.00 20
7839 KING PRESIDENT 1981-11-17 5000.00 10
7844 TURNER SALESMAN 7698 1981-9-8 1500.00 0.00 30
7876 ADAMS CLERK 7788 1987-5-23 1100.00 20
7900 JAMES CLERK 7698 1981-12-3 950.00 30
7902 FORD ANALYST 7566 1981-12-3 3000.00 20
7934 MILLER CLERK 7782 1982-1-23 1300.00 10
8888 HIVE PROGRAM 7839 1988-1-23 10300.00
2) Create the emp table in the MySQL test database

create table emp
(empno int,
ename varchar(255),
job varchar(255),
mgr int,
hiredate varchar(255),
salary double,
comm double,
deptno int);
3) Export from HDFS into the MySQL emp table
sqoop export \
--connect jdbc:mysql://localhost:3306/test \
--username root --password root \
--mapreduce-job-name FromHDFSToMySQL1 \
--table emp \
-m 3 \
--export-dir /user/hive/warehouse/hive_data2.db/emp/emp* \
--fields-terminated-by '\t'
Parameter breakdown:
# Database connection URL
--connect jdbc:mysql://localhost:3306/test
# Username
--username root
# Password
--password root
# MapReduce job name
--mapreduce-job-name FromHDFSToMySQL1
# Target table
--table emp
# Number of parallel MapTasks (default 4)
-m 3
# HDFS source path
--export-dir /user/hive/warehouse/hive_data2.db/emp/emp*
# Field delimiter (Hive defaults to \001 if not specified; set it according to the actual data)
--fields-terminated-by '\t'
# Check the emp table:
mysql> select * from emp;
+-------+--------+-----------+------+------------+--------+------+--------+
| empno | ename  | job       | mgr  | hiredate   | salary | comm | deptno |
+-------+--------+-----------+------+------------+--------+------+--------+
| 7369  | SMITH  | CLERK     | 7902 | 1980-12-17 | 800    | NULL | 20     |
| 7499  | ALLEN  | SALESMAN  | 7698 | 1981-2-20  | 1600   | 300  | 30     |
| 7521  | WARD   | SALESMAN  | 7698 | 1981-2-22  | 1250   | 500  | 30     |
| 7566  | JONES  | MANAGER   | 7839 | 1981-4-2   | 2975   | NULL | 20     |
| 7654  | MARTIN | SALESMAN  | 7698 | 1981-9-28  | 1250   | 1400 | 30     |
| 7698  | BLAKE  | MANAGER   | 7839 | 1981-5-1   | 2850   | NULL | 30     |
| 7782  | CLARK  | MANAGER   | 7839 | 1981-6-9   | 2450   | NULL | 10     |
| 7788  | SCOTT  | ANALYST   | 7566 | 1987-4-19  | 3000   | NULL | 20     |
| 7839  | KING   | PRESIDENT | NULL | 1981-11-17 | 5000   | NULL | 10     |
| 7844  | TURNER | SALESMAN  | 7698 | 1981-9-8   | 1500   | 0    | 30     |
| 7876  | ADAMS  | CLERK     | 7788 | 1987-5-23  | 1100   | NULL | 20     |
| 7900  | JAMES  | CLERK     | 7698 | 1981-12-3  | 950    | NULL | 30     |
| 7902  | FORD   | ANALYST   | 7566 | 1981-12-3  | 3000   | NULL | 20     |
| 7934  | MILLER | CLERK     | 7782 | 1982-1-23  | 1300   | NULL | 10     |
| 8888  | HIVE   | PROGRAM   | 7839 | 1988-1-23  | 10300  | NULL | NULL   |
+-------+--------+-----------+------+------------+--------+------+--------+
15 rows in set (0.00 sec)
Error encountered:
Exception in thread "main" java.lang.NoClassDefFoundError: org/json/JSONObject

Solution:
Copy java-json.jar into the $SQOOP_HOME/lib directory.

java-json.jar download address:
http://www.java2s.com/Code/Jar/j/Downloadjavajsonjar.htm
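A minimal sketch of that fix, assuming the jar was downloaded into the current directory:

cp java-json.jar $SQOOP_HOME/lib/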
4) Export from HDFS into a MySQL table (selected columns and null handling)
sqoop export \
--connect jdbc:mysql://localhost:3306/test \
--username root --password root \
--mapreduce-job-name FromHDFSToMySQL1 \
--table emp_2 \
-m 3 \
--export-dir /user/hive/warehouse/hive_data2.db/emp/emp* \
--columns 'EMPNO,ENAME,JOB' \
--fields-terminated-by '\t' \
--null-string '' --null-non-string '0'
New parameters:
# Specify the columns to export
--columns 'EMPNO,ENAME,JOB'
# How NULL is handled for string columns and for non-string columns
--null-string '' --null-non-string '0'
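The DDL for emp_2 is not shown in the original; a minimal sketch, assuming it mirrors the emp schema and has no primary key (consistent with the primary-key error that appears in the import section later):

create table emp_2
(empno int,
ename varchar(255),
job varchar(255),
mgr int,
hiredate varchar(255),
salary double,
comm double,
deptno int);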
mysql> select * from emp_2;
+-------+--------+-----------+------+----------+--------+------+--------+
| empno | ename  | job       | mgr  | hiredate | salary | comm | deptno |
+-------+--------+-----------+------+----------+--------+------+--------+
| 7876  | ADAMS  | CLERK     | NULL | NULL     | NULL   | NULL | NULL   |
| 7900  | JAMES  | CLERK     | NULL | NULL     | NULL   | NULL | NULL   |
| 7902  | FORD   | ANALYST   | NULL | NULL     | NULL   | NULL | NULL   |
| 7934  | MILLER | CLERK     | NULL | NULL     | NULL   | NULL | NULL   |
| 8888  | HIVE   | PROGRAM   | NULL | NULL     | NULL   | NULL | NULL   |
| 7369  | SMITH  | CLERK     | NULL | NULL     | NULL   | NULL | NULL   |
| 7499  | ALLEN  | SALESMAN  | NULL | NULL     | NULL   | NULL | NULL   |
| 7521  | WARD   | SALESMAN  | NULL | NULL     | NULL   | NULL | NULL   |
| 7566  | JONES  | MANAGER   | NULL | NULL     | NULL   | NULL | NULL   |
| 7654  | MARTIN | SALESMAN  | NULL | NULL     | NULL   | NULL | NULL   |
| 7698  | BLAKE  | MANAGER   | NULL | NULL     | NULL   | NULL | NULL   |
| 7782  | CLARK  | MANAGER   | NULL | NULL     | NULL   | NULL | NULL   |
| 7788  | SCOTT  | ANALYST   | NULL | NULL     | NULL   | NULL | NULL   |
| 7839  | KING   | PRESIDENT | NULL | NULL     | NULL   | NULL | NULL   |
| 7844  | TURNER | SALESMAN  | NULL | NULL     | NULL   | NULL | NULL   |
3. Sqoop Import
I. Import from MySQL to HDFS
# List the databases on the MySQL server
sqoop list-databases \
--connect jdbc:mysql://localhost:3306 \
--username root --password root
Result:
information_schema
hive_data
mysql
performance_schema
test
# List all tables in the test database
sqoop list-tables \
--connect jdbc:mysql://localhost:3306/test \
--username root

Result:
emp
emp_2
# Plain import
[hadoop@hadoop001 bin]$ sqoop import \
> --connect jdbc:mysql://localhost:3306/test \
> --username root --password root \
> --table emp_2

# Check the imported files
[hadoop@hadoop001 sqoop-1.4.6]$ hadoop fs -cat emp_2/part*
7369,SMITH,CLERK,null,null,null,null,null
7499,ALLEN,SALESMAN,null,null,null,null,null
7521,WARD,SALESMAN,null,null,null,null,null
7566,JONES,MANAGER,null,null,null,null,null
Analysis:
By default the data lands in HDFS under /user/hadoop/<table name>/<files>
The default MapReduce job name is the table name
The default number of parallel MapTasks (-m) is 4
The default field delimiter is a comma
A quick check of the default target directory is sketched below.
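A minimal sketch of verifying those defaults on HDFS; the path simply follows the /user/<user>/<table> convention described above:

# List what Sqoop wrote under the default target directory
hadoop fs -ls /user/hadoop/emp_2
# Import output files are named part-m-*
hadoop fs -cat /user/hadoop/emp_2/part-m-* | head -5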
# Import with selected columns and a WHERE condition
[hadoop@hadoop001 bin]$ sqoop import \
> --connect jdbc:mysql://localhost:3306/test \
> --username root --password root \
> --table emp -m 5 \
> --mapreduce-job-name FromMySQLToHDFS5 \
> --delete-target-dir \
> --columns "EMPNO,ENAME,JOB,SALARY,COMM" \
> --target-dir EMP_COLUMN_WHERE \
> --fields-terminated-by '\t' \
> --null-string '' --null-non-string '0' \
> --where 'SALARY>3000'

# Check the imported files
[hadoop@hadoop001 sqoop-1.4.6]$ hadoop fs -cat EMP_COLUMN_WHERE/part*
18/06/13 10:20:48 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
7839 KING PRESIDENT 5000.0 0
8888 HIVE PROGRAM 10300.0 0
Analysis:
Default HDFS path is /user/hadoop/<table name>/<files>; --target-dir sets the target directory
Default MapReduce job name is the table name; --mapreduce-job-name sets the job name
Default number of parallel MapTasks is 4; -m changes the MapTask parallelism
Default field delimiter is a comma; --fields-terminated-by changes the field delimiter
--columns selects the columns to import
--null-string / --null-non-string control how NULL is written for string and non-string columns
--where adds a filter condition
# Import the result set of a free-form query
[hadoop@hadoop001 bin]$ sqoop import \
> --connect jdbc:mysql://localhost:3306/test \
> --username root --password root \
> -m 2 \
> --mapreduce-job-name FromMySQLToHDFS \
> --delete-target-dir \
> --target-dir EMP_COLUMN_QUERY \
> --fields-terminated-by '\t' \
> --null-string '' --null-non-string '0' \
> --query "SELECT * FROM emp WHERE EMPNO>=7900 AND \$CONDITIONS" \
> --split-by 'EMPNO'
# Check the imported files
[hadoop@hadoop001 ~]$ hadoop fs -text EMP_COLUMN_QUERY/part*
7900 JAMES CLERK 7698 1981-12-3 950.0 0 30
7902 FORD ANALYST 7566 1981-12-3 3000.0 0 20
7934 MILLER CLERK 7782 1982-1-23 1300.0 0 10
8888 HIVE PROGRAM 7839 1988-1-23 10300.0 0 0
Analysis:
When --query is used, --table must not be specified, because the query may join several tables.
With --query in single quotes, $CONDITIONS needs no escape character; in double quotes it must be written as \$CONDITIONS, as in the example above. The single-quote form is sketched below.
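For comparison, a sketch of the same import with the query in single quotes, where $CONDITIONS is written literally (everything else mirrors the example above):

sqoop import \
--connect jdbc:mysql://localhost:3306/test \
--username root --password root \
-m 2 \
--mapreduce-job-name FromMySQLToHDFS \
--delete-target-dir \
--target-dir EMP_COLUMN_QUERY \
--fields-terminated-by '\t' \
--null-string '' --null-non-string '0' \
--query 'SELECT * FROM emp WHERE EMPNO>=7900 AND $CONDITIONS' \
--split-by 'EMPNO'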
Error:
Error during import: No primary key could be found for table emp_2. Please specify one with --split-by or perform a sequential import with '-m 1'.

Solutions:
1. Add a primary key to the emp_2 table, or
2. Add the --split-by parameter to name the split column, e.g. --split-by empno, or
3. Set -m to 1
How it works:
Sqoop uses --split-by to choose the split column and -m to set the number of mappers. From these two parameters it generates m WHERE clauses, and each mapper queries its own slice of the table.
The way the range is split depends on the type of the split column. For example, if a table has 100 rows, its id column is an int, we specify --split-by id, and we leave the mapper count at the default of 4, Sqoop first fetches MIN() and MAX() of the split column and then divides that range among the mappers, giving four slices: (1-25), (26-50), (51-75), (76-100).
Different splitters are used depending on the MIN/MAX type; Date, Text, Float, Integer, Boolean, NText, BigDecimal and others are supported.
So if the table being imported has no primary key, either set -m to 1 or set --split-by. With -m 1 only a single mapper runs, which avoids the error but gives up parallel loading. (Note: when -m is greater than 1 and there is no primary key, --split-by must be set.)
Even when the split-by column is an int, if its values are not evenly distributed the mappers get unbalanced loads: some mappers may be very busy while others have almost nothing to process. A worked sketch of the splitting follows.
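A sketch of the splitting for the emp_2 case above. The boundary query shows what Sqoop conceptually issues before splitting; the target directory name is made up for the demo:

# Boundary query Sqoop runs internally (conceptually):
#   SELECT MIN(empno), MAX(empno) FROM emp_2
# With -m 4, the [min, max] interval is cut into 4 roughly equal sub-ranges and each
# mapper is handed one WHERE clause such as: WHERE empno >= <lo> AND empno < <hi>
sqoop import \
--connect jdbc:mysql://localhost:3306/test \
--username root --password root \
--table emp_2 -m 4 \
--split-by empno \
--delete-target-dir \
--target-dir EMP_SPLIT_DEMO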
II. Import from MySQL to Hive
# Import into Hive
sqoop import \
--connect jdbc:mysql://localhost:3306/test \
--username root --password root \
--table emp -m 2 \
--mapreduce-job-name FromMySQLToHive0 \
--delete-target-dir \
--create-hive-table \
--hive-database hive_data2 \
--hive-table emp_sqoop111 \
--hive-import
# Check the data in the imported Hive table
hive> select * from emp_sqoop111;
OK
7369 SMITH CLERK 7902 1980-12-17 800.0 NULL 20
7499 ALLEN SALESMAN 7698 1981-2-20 1600.0 300.0 30
Analysis:
When the table is created automatically with --create-hive-table, the resulting column types are often not what we expect, so in production the Hive table is usually created first and the data is then imported into it; a sketch of that approach follows.
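A minimal sketch of pre-creating the target table with explicit column types before importing. The database and table names follow the example above; the column types are an assumption based on the MySQL emp schema shown earlier:

create table hive_data2.emp_sqoop111
(empno int, ename string, job string, mgr int, hiredate string, salary double, comm double, deptno int)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t';

The sqoop import command above would then be run without --create-hive-table.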
# Import into a Hive table with a partition

Step 1:
# Create the partitioned table and the partition column in Hive
create table emp_partition
(empno int, ename string, job string, mgr int, hiredate string, salary double, comm double, deptno int)
partitioned by (pt string)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t';
Step 2:
# Specify the Hive database, the target table, and the default partition value
sqoop import \
--connect jdbc:mysql://localhost:3306/test \
--username root --password root \
--table emp -m 2 \
--mapreduce-job-name FromMySQLToHive1 \
--delete-target-dir \
--target-dir EMP_PARTITION \
--hive-database hive_data2 \
--hive-table emp_partition \
--hive-import \
--hive-partition-key 'pt' \
--hive-partition-value '2018-06-19' \
--fields-terminated-by '\t' \
--hive-overwrite
Step 3:
# Check the imported rows and the partition value
7369 SMITH CLERK 7902 1980-12-17 800.0 NULL 20 2018-06-19
7499 ALLEN SALESMAN 7698 1981-2-20 1600.0 300.0 30 2018-06-19
7521 WARD SALESMAN 7698 1981-2-22 1250.0 500.0 30 2018-06-19
7566 JONES MANAGER 7839 1981-4-2 2975.0 NULL 20 2018-06-19
7654 MARTIN SALESMAN 7698 1981-9-28 1250.0 1400.0 30 2018-06-19
Analysis:
--hive-database         the Hive database to import into
--hive-table            the Hive table to import into
--hive-import           required for any import into Hive
--hive-partition-key    the partition column
--hive-partition-value  the partition value to use
--fields-terminated-by  the field delimiter
--hive-overwrite        overwrite the existing table data
A quick partition check is sketched below.
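To confirm that the partition was created, a quick check in the Hive CLI; the pt value shown is the one expected from the example above:

hive> use hive_data2;
hive> show partitions emp_partition;
pt=2018-06-19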
Error:
ERROR hive.HiveConfig: Could not load org.apache.hadoop.hive.conf.HiveConf. Make sure HIVE_CONF_DIR is set correctly.
Solution: copy the required Hive jars into $SQOOP_HOME/lib
cp hive-common-1.1.0-cdh5.7.0.jar $SQOOP_HOME/lib
cp hive-shims* $SQOOP_HOME/lib
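The source directory for those jars is not shown above; assuming they come from the local Hive installation, the full commands would look roughly like:

# Assumed source path: the jars ship with Hive under $HIVE_HOME/lib
cp $HIVE_HOME/lib/hive-common-1.1.0-cdh5.7.0.jar $SQOOP_HOME/lib/
cp $HIVE_HOME/lib/hive-shims*.jar $SQOOP_HOME/lib/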
Reposted from: http://blog.itpub.net/31441024/viewspace-2156087/