Sqoop Usage

This article shows how to use Sqoop to migrate data between databases: importing from relational databases such as Oracle and MySQL into Hive and HBase, and exporting from Hive back to MySQL. It walks through concrete sqoop commands for listing databases and tables, running SQL statements, creating Hive tables, and importing/exporting data.


  1. list-databases: list the databases
    sqoop list-databases --connect jdbc:oracle:thin:@172.21.202.4:1521:TJZHCSDEV --username tjzhcs --password tjzhcs

  2. list-tables: list the tables in a database
    sqoop list-tables --connect jdbc:oracle:thin:@172.21.202.4:1521:TJZHCSDEV --username tjzhcs --password tjzhcs

  3. eval: run a SQL statement against the relational database and print the result to the console. This lets you verify that the SQL you plan to use with a tool like import is correct before actually running the import.
    sqoop eval --connect jdbc:oracle:thin:@172.21.202.4:1521:TJZHCSDEV --username tjzhcs --password tjzhcs -e "select * from b_code"
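Beyond previewing SELECT results, eval is handy for checking how many rows a long-running import would move. A sketch reusing the same connection details (requires a reachable Oracle instance and its JDBC driver on Sqoop's classpath):

```shell
# Check the row count before committing to a full import
# (connection details reuse the example above).
sqoop eval \
  --connect jdbc:oracle:thin:@172.21.202.4:1521:TJZHCSDEV \
  --username tjzhcs --password tjzhcs \
  -e "select count(*) from b_code"
```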

  4. create-hive-table: generate a Hive table whose schema matches the relational table
    sqoop create-hive-table --connect jdbc:oracle:thin:@172.21.202.4:1521:TJZHCSDEV --username tjzhcs --password tjzhcs --table B_CODE --hive-table h_b_code (note: when importing from Oracle, the table name must be UPPERCASE!)

  5. Import an Oracle table into Hive
    sqoop import --connect jdbc:oracle:thin:@172.21.202.4:1521:TJZHCSDEV --username tjzhcs --password tjzhcs --table B_CODE --hive-table h_b_code --hive-import
    Hive arguments:
    Argument                     Description
    --hive-home <dir>            Override $HIVE_HOME
    --hive-import                Import tables into Hive (uses Hive's default delimiters if none are set)
    --hive-overwrite             Overwrite existing data in the Hive table
    --create-hive-table          If set, the job fails if the target Hive table already exists (false by default)
    --hive-table <table-name>    Sets the table name to use when importing into Hive
    --hive-drop-import-delims    Drops \n, \r, and \01 from string fields when importing into Hive
    --hive-delims-replacement    Replaces \n, \r, and \01 in string fields with a user-defined string when importing into Hive
    --hive-partition-key         Name of the Hive field to partition the import on
    --hive-partition-value <v>   String value that serves as the partition value for this import job
    --map-column-hive <map>      Override the default mapping from SQL type to Hive type for the configured columns
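Several of these arguments are typically combined in one command. A sketch that re-imports B_CODE, overwriting the Hive table, stripping delimiter characters from string fields, and writing into a partition (the load_date partition key and its value are hypothetical, not from the original example):

```shell
# Sketch: full re-import of B_CODE into a partitioned Hive table.
# --hive-drop-import-delims protects against stray \n/\r/\01 in string columns.
sqoop import \
  --connect jdbc:oracle:thin:@172.21.202.4:1521:TJZHCSDEV \
  --username tjzhcs --password tjzhcs \
  --table B_CODE \
  --hive-import \
  --hive-table h_b_code \
  --hive-overwrite \
  --hive-drop-import-delims \
  --hive-partition-key load_date \
  --hive-partition-value 20240101
```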

  6. sqoop list-databases --connect jdbc:mysql://172.21.80.123:3306/ --username root --password 1234

sqoop import --connect jdbc:mysql://172.21.80.123:3306/hadoop --username root --password 1234 --table lsemp --hbase-table lsemp --hbase-create-table --hbase-row-key empno --column-family info
HBase arguments:

Argument    Description
--column-family <family>    Sets the target column family for the import
--hbase-create-table    If specified, create missing HBase tables
--hbase-row-key <col>   Specifies which input column to use as the row key; if the input table has a composite key, <col> must be a comma-separated list of the composite key attributes
--hbase-table <table-name>  Specifies an HBase table to use as the target instead of HDFS
--hbase-bulkload    Enables bulk loading
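Building on the table above, here is a sketch of an import that uses a composite row key and bulk loading (the ename column is an assumption about the lsemp table, not confirmed by the original example):

```shell
# Sketch: composite row key (empno,ename) plus HFile bulk loading.
sqoop import \
  --connect jdbc:mysql://172.21.80.123:3306/hadoop \
  --username root --password 1234 \
  --table lsemp \
  --hbase-table lsemp \
  --hbase-create-table \
  --hbase-row-key empno,ename \
  --column-family info \
  --hbase-bulkload
```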
MySQL-related commands
1. Run a SQL statement
sqoop eval --connect jdbc:mysql://localhost:3306/hive --username root --password 123456 -e 'show tables'
2. List databases
sqoop list-databases --connect jdbc:mysql://localhost:3306/ --username root --password 123456
3. Import MySQL data into Hive
sqoop import --connect jdbc:mysql://localhost:3306/test --username root --password 123456 --table student --hive-table student  --hive-import --hive-database test
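By default Sqoop runs 4 parallel map tasks and splits the input on the table's primary key. If the table has no primary key, or you want different parallelism, --split-by and -m control this. A sketch (the id column is an assumption about the student table):

```shell
# Sketch: import with explicit split column and 2 map tasks.
sqoop import \
  --connect jdbc:mysql://localhost:3306/test \
  --username root --password 123456 \
  --table student \
  --hive-import --hive-database test --hive-table student \
  --split-by id \
  -m 2
```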
4. Export Hive data to MySQL (\001, Ctrl-A, is Hive's default field delimiter)
sqoop export --connect jdbc:mysql://localhost:3306/test --username root --password 123456 --table word_count --export-dir /user/hive/warehouse/test.db/word_count --input-fields-terminated-by '\001'
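A plain export only inserts rows, so re-running it fails on duplicate keys. If the MySQL table has a primary or unique key, --update-key together with --update-mode allowinsert turns the export into an upsert. A sketch (the word key column is an assumption about the word_count table):

```shell
# Sketch: idempotent export — update existing rows by key, insert new ones.
sqoop export \
  --connect jdbc:mysql://localhost:3306/test \
  --username root --password 123456 \
  --table word_count \
  --export-dir /user/hive/warehouse/test.db/word_count \
  --input-fields-terminated-by '\001' \
  --update-key word \
  --update-mode allowinsert
```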