Caused by: java.io.IOException: Added a key not lexically larger than previous.

This post walks through a bulk-load program for HBase, including the concrete Mapper and Reducer implementations, and stresses that when bulk loading with LoadIncrementalHFiles the data must be sorted by row key.


While trying to reproduce this experiment, I ran into quite a few pitfalls.

https://www.iteblog.com/archives/1889.html

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

/**
 * Created by Administrator on 2017/8/18.
 */
public class IteblogBulkLoadDriver {
    public static class IteblogBulkLoadMapper
            extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws InterruptedException, IOException {
            if (value == null) {
                return;
            }

            String line = value.toString();
            String[] items = line.split("\\^");
            if (items.length < 3) {
                System.out.println("================less 3");
                return;
            }

            String rowKey = items[0] + items[1];
            // The Put's row key must be byte-identical to the map output key.
            // The original code wrote new Put(Bytes.toBytes(items[0])) here while
            // shuffling on items[0]+items[1]; the resulting cells reached the HFile
            // writer out of row-key order, which is exactly what produces
            // "Added a key not lexically larger than previous".
            Put put = new Put(Bytes.toBytes(rowKey));
            put.addColumn("cf".getBytes(), "url".getBytes(), items[1].getBytes());
            put.addColumn("cf".getBytes(), "name".getBytes(), items[2].getBytes());
            context.write(new ImmutableBytesWritable(Bytes.toBytes(rowKey)), put);
        }
    }

    // Note: configureIncrementalLoad() below overrides this reducer with
    // PutSortReducer (chosen because the map output value class is Put), so this
    // class never actually runs; it is kept only to mirror the original post.
    public static class HBaseHFileReducer
            extends Reducer<ImmutableBytesWritable, Put, ImmutableBytesWritable, Put> {
        @Override
        protected void reduce(ImmutableBytesWritable key, Iterable<Put> values,
                              Context context) throws IOException, InterruptedException {
            for (Put put : values) {
                context.write(key, put);
            }
        }
    }

    public static void main(String[] args)
            throws IOException, ClassNotFoundException, InterruptedException {
        // e.g. hdfs://slave1:8020/maats5/pay/logdate=20170906
        String SRC_PATH = args[0];
        // e.g. hdfs://slave1:8020/maats5_test/pay/logdate=20170906
        String DESC_PATH = args[1];
        // HBaseConnectionFactory is the author's own helper holding the HBase configuration.
        Configuration conf = HBaseConnectionFactory.config;
        conf.set("fs.hdfs.impl", "org.apache.hadoop.hdfs.DistributedFileSystem");

        Job job = Job.getInstance(conf);
        job.setJarByClass(IteblogBulkLoadDriver.class);
        job.setMapperClass(IteblogBulkLoadMapper.class);
        job.setMapOutputKeyClass(ImmutableBytesWritable.class);
        job.setMapOutputValueClass(Put.class);
        job.setReducerClass(HBaseHFileReducer.class);
        job.setOutputFormatClass(HFileOutputFormat2.class);

        // Wires in TotalOrderPartitioner and PutSortReducer so the HFiles come out
        // sorted by row key and partitioned to match the table's regions.
        HTable table = new HTable(conf, "maatstest");
        HFileOutputFormat2.configureIncrementalLoad(job, table, table.getRegionLocator());
        FileInputFormat.addInputPath(job, new Path(SRC_PATH));
        FileOutputFormat.setOutputPath(job, new Path(DESC_PATH));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
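
The MapReduce job only writes HFiles into DESC_PATH; they still have to be handed over to the region servers. A minimal sketch of that second step, assuming HBase 1.x and the same maatstest table as above (the CompleteBulkLoad class name is mine; pass DESC_PATH as the argument):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;

public class CompleteBulkLoad {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "maatstest");
        // Atomically moves the generated HFiles into the table's regions.
        LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
        loader.doBulkLoad(new Path(args[0]), table);   // args[0] = DESC_PATH
        table.close();
    }
}

The same step can also be run from the shell: hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles <hfile-dir> <table-name>.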

When using the bulk loader (LoadIncrementalHFiles, doBulkLoad) you can only add items that are "lexically ordered", i.e. you need to make sure that the items you add are sorted by the row key.
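
In the driver above, this is exactly what went wrong: the Put was originally built from items[0] while the shuffle key was items[0]+items[1]. A minimal sketch of the broken and the corrected pattern (the items array and column layout are the same illustrative assumptions as above):

// WRONG: the shuffle sorts by items[0]+items[1], but the HFile writer sees the
// Put's own row key items[0], so cells arrive out of row-key order and the writer
// throws "Added a key not lexically larger than previous".
String rowKey = items[0] + items[1];
Put bad = new Put(Bytes.toBytes(items[0]));
context.write(new ImmutableBytesWritable(Bytes.toBytes(rowKey)), bad);

// RIGHT: emit exactly the bytes the Put itself carries as its row key.
Put good = new Put(Bytes.toBytes(rowKey));
context.write(new ImmutableBytesWritable(good.getRow()), good);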

https://stackoverflow.com/questions/25860114/hfile-creation-added-a-key-not-lexically-larger-than-previous-key

http://ganliang13.iteye.com/blog/1884921
