1. Problem symptom
With hadoop2.7.3 + hbase1.2.5 set up and data imported into HBase, everything ran normally. After restarting hbase and hadoop, the following exception appeared.
org.apache.hadoop.util.NativeCrc32.nativeVerifyChunkedSums(IILjava/nio/ByteBuffer;ILjava/nio/ByteBuffer;IILjava/lang/String;J)V
at org.apache.hadoop.util.NativeCrc32.nativeVerifyChunkedSums(Native Method)
at org.apache.hadoop.util.NativeCrc32.verifyChunkedSums(NativeCrc32.java:57)
at org.apache.hadoop.util.DataChecksum.verifyChunkedSums(DataChecksum.java:291)
at org.apache.hadoop.hdfs.BlockReaderLocal.doByteBufferRead(BlockReaderLocal.java:338)
at org.apache.hadoop.hdfs.BlockReaderLocal.fillSlowReadBuffer(BlockReaderLocal.java:388)
at org.apache.hadoop.hdfs.BlockReaderLocal.read(BlockReaderLocal.java:408)
at org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:642)
at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:698)
at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:752)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:793)
at java.io.DataInputStream.read(DataInputStream.java:149)
at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192)
at org.apache.hadoop.hbase.util.FSUtils.getVersion(FSUtils.java:495)
at org.apache.hadoop.hbase.util.FSUtils.checkVersion(FSUtils.java:582)
at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:460)
at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:151)
at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:128)
at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:790)
at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:603)
at java.lang.Thread.run(Thread.java:744)
2. Resolution process
(1) I had seen similar problems reported online, but never a fix; it was roughly clear that the cause is a version mismatch between the components. Both hbase and hadoop were built locally by me, and when building hbase I had also specified the hadoop version in pom.xml (a sketch of such a build is shown below).
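For reference, a minimal sketch of building HBase against a specific Hadoop version. The source path is a placeholder, and hadoop-two.version is, to my knowledge, the property HBase's pom.xml uses to pin the Hadoop 2.x dependency; verify it against your own pom before relying on it.

cd /path/to/hbase-1.2.5-src                               # hypothetical location of the hbase source tree
mvn clean install -DskipTests -Dhadoop-two.version=2.7.3  # build hbase against hadoop 2.7.3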
(2) Under hbase's lib directory I ran ls *hadoop* to see which hadoop jars it bundles, then took the same-named jars from hadoop's share directory and replaced the bundled ones with them (a rough sketch of this step follows below). That solved the problem.
The root cause is still unclear to me.
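A rough sketch of the jar swap, assuming HBASE_HOME and HADOOP_HOME point at the two installs; the concrete jar names and versions below are illustrative, not taken from the original setup.

cd "$HBASE_HOME/lib"
ls *hadoop*                        # see which hadoop jars hbase bundles
# For each bundled hadoop jar, copy the artifact with the same name from the hadoop
# distribution and remove the old copy, e.g. for hadoop-common:
cp "$HADOOP_HOME"/share/hadoop/common/hadoop-common-2.7.3.jar .
rm hadoop-common-2.5.1.jar         # hypothetical older jar that shipped with hbase
# Repeat for hadoop-hdfs, hadoop-auth, hadoop-client, etc., then restart hbase.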