HBase 3: Importing and Exporting Tables

Start Hadoop: start-all.sh

Start HBase: start-hbase.sh
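To confirm everything came up, you can list the running Java daemons with jps. A rough sketch of the expected output on a single-node setup (the PIDs will differ, and the exact process list depends on your deployment):

[root@hadoop01 ~]# jps
2011 NameNode
2135 DataNode
2310 SecondaryNameNode
2480 ResourceManager
2590 NodeManager
2788 HMaster
2899 HRegionServer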

 

Exporting a table:

Export to HDFS (exporting to the local filesystem is also possible):

[root@hadoop01 ~]# hbase org.apache.hadoop.hbase.mapreduce.Export table1 hdfs://hadoop01:9000/test/hbase
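Export also accepts optional trailing arguments: the number of cell versions to keep, then a start and end timestamp, which is useful for incremental backups. A hedged sketch of the argument order (the version count and millisecond timestamps below are arbitrary example values, not from the original run):

[root@hadoop01 ~]# hbase org.apache.hadoop.hbase.mapreduce.Export table1 hdfs://hadoop01:9000/test/hbase 1 1555800000000 1555900000000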

Check the result:

[root@hadoop01 ~]# hadoop fs -ls /test/hbase

Found 2 items

-rw-r--r-- 1 root supergroup 0 2019-04-21 11:52 /test/hbase/_SUCCESS

-rw-r--r-- 1 root supergroup 280 2019-04-21 11:52 /test/hbase/part-m-00000

[root@hadoop01 ~]#

 

 

Importing a table:

[root@hadoop01 ~]# hbase org.apache.hadoop.hbase.mapreduce.Driver import table1 hdfs://hadoop01:9000/test/hbase/*
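Note that Import does not create the target table: table1 must already exist with matching column families before the command is run. A minimal sketch in the HBase shell, assuming a hypothetical column family named cf1 (adjust to your actual schema):

hbase(main):001:0> create 'table1', 'cf1'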

 

 

With both of these commands you can see that Hadoop launches a MapReduce job on its own.
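To watch these jobs while they run, you can query YARN directly, or open the ResourceManager web UI (http://hadoop01:8088 with the default port):

[root@hadoop01 ~]# yarn application -list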

 

 

 

Errors encountered:

1. Insufficient YARN memory

Modify yarn-site.xml to disable the physical- and virtual-memory checks and raise the virtual-to-physical memory ratio:

<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
  <description>Whether virtual memory limits will be enforced for containers</description>
</property>
<property>
  <name>yarn.nodemanager.pmem-check-enabled</name>
  <value>false</value>
</property>
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value>
  <description>Ratio between virtual memory to physical memory when setting memory limits for containers</description>
</property>
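These settings only take effect after the file has been copied to every NodeManager host and YARN has been restarted, e.g. with the standard scripts:

[root@hadoop01 ~]# stop-yarn.sh
[root@hadoop01 ~]# start-yarn.sh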

2. Hadoop: could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster

Diagnosis: the classpath cannot be found.

Fix: print the classpath and put the result into yarn-site.xml:

[root@hadoop01 ~]# hadoop classpath

Then add it to yarn-site.xml:

<property>
  <name>yarn.application.classpath</name>
  <value>/export/servers/hadoop/etc/hadoop:/export/servers/hadoop/share/hadoop/common/lib/*:/export/servers/hadoop/share/hadoop/common/*:/export/servers/hadoop/share/hadoop/hdfs:/export/servers/hadoop/share/hadoop/hdfs/lib/*:/export/servers/hadoop/share/hadoop/hdfs/*:/export/servers/hadoop/share/hadoop/mapreduce/*:/export/servers/hadoop/share/hadoop/yarn:/export/servers/hadoop/share/hadoop/yarn/lib/*:/export/servers/hadoop/share/hadoop/yarn/*</value>
</property>
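Instead of hand-copying the output, you can splice it in with a one-liner. A hedged sketch (verify the echoed result before pasting it into yarn-site.xml):

[root@hadoop01 ~]# echo "<value>$(hadoop classpath)</value>"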

 

3. The export itself succeeded, but an enum error was reported:

2019-04-21 12:16:23,023 INFO [main] mapreduce.Job: Job job_1555820103234_0001 completed successfully

Exception in thread "main" java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.mapreduce.TaskCounter.MAP_PHYSICAL_MEMORY_BYTES_MAX

at java.lang.Enum.valueOf(Enum.java:238)

This is caused by a version mismatch between the hadoop-client bundled with HBase and the Hadoop version running on the cluster.

Since the job completed successfully, it should not be a serious problem.
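To confirm the mismatch, compare what the two tools report:

[root@hadoop01 ~]# hadoop version
[root@hadoop01 ~]# hbase version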

Reference blog: https://blog.youkuaiyun.com/abccheng/article/details/53066420?utm_source=blogxgwz5
