Joining Data from Different Sources with MapReduce

Joins are among the most common operations in relational databases, and their optimization has been pushed to the extreme. At massive data scale the same need inevitably arises, for example when data analysis requires joining data obtained from different sources. Unlike the traditional single-machine setting, the MapReduce programming model over distributed storage has its own techniques and optimizations for handling joins.

Reduce-side join

  The main stages of a Hadoop MapReduce job are, in order: reading the input splits, the map phase, the shuffle phase, the reduce phase, and writing the output. In essence, a large problem is split into small pieces that are processed independently. The obvious idea, then, is to bring tuples with the same join key from both tables to the same reduce node; the question is how. The answer is to set the key of the map output to the join key of the two tables: during the shuffle phase, Hadoop's default partitioner sends all map outputs with the same key to the same reduce node. The reduce-side join is the simplest join strategy, and its main idea is as follows:
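As a small illustration of why this works, the following sketch (plain Java with hypothetical names, not Hadoop code) uses the same formula as Hadoop's default HashPartitioner, `(key.hashCode() & Integer.MAX_VALUE) % numReduceTasks`, to show that equal keys emitted by different mappers always land in the same reduce partition:

```java
// A minimal sketch of how Hadoop's default HashPartitioner decides
// which reduce task receives a map output record.
public class PartitionSketch {

    // Same formula as HashPartitioner.getPartition(): mask off the sign
    // bit, then take the remainder modulo the number of reduce tasks.
    static int partition(String key, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }

    public static void main(String[] args) {
        // A record keyed "3" from File1 and a record keyed "3" from File2
        // get the same partition number, so they meet in one reduce call.
        System.out.println(partition("3", 4) == partition("3", 4)); // prints "true"
    }
}
```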

  In the map phase, the map function reads both files, File1 and File2. To distinguish the key/value pairs from the two sources, each record is given a tag: for example, tag=0 means the record comes from File1, and tag=1 means it comes from File2. In other words, the main task of the map phase is to tag the records from the different files.

  In the reduce phase, the reduce function receives the list of values that share a key from both File1 and File2, and then, for each key, joins the File1 and File2 records (a Cartesian product). That is, the reduce phase performs the actual join. Take the merge of the following two txt files as an example:

file1.txt:

1,Stephanie Leung,555-555-5555

2,Edward Kim,123-456-7890

3,Jose Madriz,281-330-800

4,David Stork,408-555-0000          

file2.txt: 

3,A,12.95,02-Jun-2008

2,C,32.00,30-Nov-2007

3,D,25.02,22-Jan-2009
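For these inputs, keys 2 and 3 appear in both files, and key 3 has two records in file2.txt, so the join produces three output rows. The following in-memory sketch (plain Java with hypothetical names, not the MapReduce code below) shows what the reduce-side join computes:

```java
import java.util.*;

// A minimal in-memory sketch of the join on the two sample files above.
public class JoinSketch {
    public static List<String> join(List<String> file1, List<String> file2) {
        // Group file2 records by join key (the first comma-separated field).
        Map<String, List<String>> byKey = new HashMap<>();
        for (String line : file2) {
            String[] parts = line.split(",", 2);
            byKey.computeIfAbsent(parts[0], k -> new ArrayList<>()).add(parts[1]);
        }
        List<String> out = new ArrayList<>();
        for (String line : file1) {
            String[] parts = line.split(",", 2);
            // Cartesian product of the records sharing the same key.
            for (String rest : byKey.getOrDefault(parts[0], Collections.emptyList())) {
                out.add(parts[0] + "," + parts[1] + "," + rest);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> file1 = Arrays.asList(
                "1,Stephanie Leung,555-555-5555",
                "2,Edward Kim,123-456-7890",
                "3,Jose Madriz,281-330-800",
                "4,David Stork,408-555-0000");
        List<String> file2 = Arrays.asList(
                "3,A,12.95,02-Jun-2008",
                "2,C,32.00,30-Nov-2007",
                "3,D,25.02,22-Jan-2009");
        for (String row : join(file1, file2)) {
            System.out.println(row);
        }
        // 2,Edward Kim,123-456-7890,C,32.00,30-Nov-2007
        // 3,Jose Madriz,281-330-800,A,12.95,02-Jun-2008
        // 3,Jose Madriz,281-330-800,D,25.02,22-Jan-2009
    }
}
```

Keys 1 and 4 appear only in file1.txt and are dropped, matching the `tags.length < 2` check in the reducer below.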

(1)  Map phase

   // MapClass, Reduce, and TaggedWritable extend base classes from the Hadoop
   // datajoin contrib package (org.apache.hadoop.contrib.utils.join).
   public static class MapClass extends DataJoinMapperBase {

       // Derive a tag from the HDFS path of each source file.
       protected Text generateInputTag(String inputFile) {
           String datasource = inputFile.split("-")[0];
           return new Text(datasource);
       }

       // Extract the join key (the first comma-separated field) of each record.
       protected Text generateGroupKey(TaggedMapOutput aRecord) {
           String line = ((Text) aRecord.getData()).toString();
           String[] tokens = line.split(",");
           String groupKey = tokens[0];
           return new Text(groupKey);
       }

       // Tag each record with its source file path, which was computed
       // earlier by generateInputTag() and stored in this.inputTag.
       protected TaggedMapOutput generateTaggedMapOutput(Object value) {
           TaggedWritable retv = new TaggedWritable((Text) value);
           retv.setTag(this.inputTag);
           return retv;
       }
   }

(2)  Reduce phase

   public static class Reduce extends DataJoinReducerBase {

       // If a key has records from fewer than two sources, drop it; otherwise
       // take the Cartesian product of the values that share the key but
       // carry different tags.
       protected TaggedMapOutput combine(Object[] tags, Object[] values) {
           if (tags.length < 2) return null;
           String joinedStr = "";
           for (int i = 0; i < values.length; i++) {
               if (i > 0) joinedStr += ",";
               TaggedWritable tw = (TaggedWritable) values[i];
               String line = ((Text) tw.getData()).toString();
               // Drop the join key, keep the rest of the record.
               String[] tokens = line.split(",", 2);
               joinedStr += tokens[1];
           }
           TaggedWritable retv = new TaggedWritable(new Text(joinedStr));
           retv.setTag((Text) tags[0]);
           return retv;
       }
   }

(3)  The tagged-record wrapper class

   public static class TaggedWritable extends TaggedMapOutput {

       private Writable data;

       public TaggedWritable() {
           this.tag = new Text();
       }

       public TaggedWritable(Writable data) {
           this.tag = new Text("");
           this.data = data;
       }

       public Writable getData() {
           return data;
       }

       public void setData(Writable data) {
           this.data = data;
       }

       public void write(DataOutput out) throws IOException {
           this.tag.write(out);
           // Record the concrete class name so readFields() can re-create it.
           out.writeUTF(this.data.getClass().getName());
           this.data.write(out);
       }

       public void readFields(DataInput in) throws IOException {
           this.tag.readFields(in);
           String dataClz = in.readUTF();
           if (this.data == null
                   || !this.data.getClass().getName().equals(dataClz)) {
               try {
                   this.data = (Writable) ReflectionUtils.newInstance(
                           Class.forName(dataClz), null);
               } catch (ClassNotFoundException e) {
                   throw new IOException("Unknown Writable class: " + dataClz, e);
               }
           }
           this.data.readFields(in);
       }
   }

(4)  Job configuration in the driver class

   public int run(String[] args) throws Exception {
       Configuration conf = getConf();
       JobConf job = new JobConf(conf, DataJoin.class);
       Path in = new Path(args[0]);
       Path out = new Path(args[1]);
       FileInputFormat.setInputPaths(job, in);
       FileOutputFormat.setOutputPath(job, out);
       job.setJobName("DataJoin");
       job.setMapperClass(MapClass.class);
       job.setReducerClass(Reduce.class);
       job.setInputFormat(TextInputFormat.class);
       job.setOutputFormat(TextOutputFormat.class);
       job.setOutputKeyClass(Text.class);
       job.setOutputValueClass(TaggedWritable.class);
       // Separate output key and value with "," instead of the default tab.
       job.set("mapred.textoutputformat.separator", ",");
       JobClient.runJob(job);
       return 0;
   }

 
