Hadoop Study Notes (10) --- Custom Partitioners

A custom partitioner decides which reduce task each map output key goes to: Hadoop calls getPartition() for every record emitted by the mappers, and the returned index (from 0 up to the number of reduce tasks minus 1) determines which reducer, and hence which output file, the record ends up in. Take the following data as an example:

1 2
1 1
3 2
2 2
5 1

Suppose each row above holds the length and width of a rectangle. Some of them are squares and the rest are ordinary rectangles. We want the records sorted by area (the ordering is defined by the DataSortable key class from an earlier note in this series), with the squares written to one output file and the rectangles to another. For that we need a custom partitioner:

package cn.edu.bjut.model;

import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Partitioner;

public class MyPartitioner extends Partitioner<DataSortable, NullWritable> {

    @Override
    public int getPartition(DataSortable key, NullWritable value, int numPartitions) {

        if(key.getFirst() == key.getSecond()) {
            return 0;   // squares go to partition 0 (reducer 0)
        } else {
            return 1;   // rectangles go to partition 1 (reducer 1)
        }

    }

}
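
Both the partitioner above and the mapper below rely on DataSortable, the custom key class defined in an earlier note of this series and not repeated in the original post. For reference, here is a minimal sketch consistent with how it is used in this post (a string-pair constructor, getFirst()/getSecond() accessors returning long, and an ordering by area); the original class may differ in detail:

package cn.edu.bjut.model;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.WritableComparable;

/**
 * Hypothetical reconstruction of the key class from the earlier note:
 * two long fields serialized in order, sorted by area.
 */
public class DataSortable implements WritableComparable<DataSortable> {

    private long first;
    private long second;

    public DataSortable() {
        // no-arg constructor required by Hadoop for deserialization
    }

    public DataSortable(String first, String second) {
        this.first = Long.parseLong(first);
        this.second = Long.parseLong(second);
    }

    public long getFirst() { return first; }
    public long getSecond() { return second; }

    public void write(DataOutput out) throws IOException {
        out.writeLong(first);
        out.writeLong(second);
    }

    public void readFields(DataInput in) throws IOException {
        first = in.readLong();
        second = in.readLong();
    }

    public int compareTo(DataSortable other) {
        long thisArea = first * second;
        long otherArea = other.first * other.second;
        if (thisArea != otherArea) {
            return thisArea < otherArea ? -1 : 1;
        }
        // break ties on the sides so that different rectangles with
        // equal area are not collapsed into one reduce group
        if (first != other.first) {
            return first < other.first ? -1 : 1;
        }
        return Long.signum(second - other.second);
    }
}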

The driver then looks like the code below; the only new pieces are the two lines that register our custom partitioner and set the number of reduce tasks:

package cn.edu.bjut.model;
import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;


public class NumSort {

    static final String INPUT_DIR = "hdfs://172.21.15.189:9000/input";
    static final String OUTPUT_DIR = "hdfs://172.21.15.189:9000/output";

    public static void main(String[] args) throws Exception {

        Configuration conf = new Configuration();

        Path path = new Path(OUTPUT_DIR);

        FileSystem fileSystem = FileSystem.get(new URI(OUTPUT_DIR), conf);

        // delete the output directory if it already exists,
        // otherwise the job will refuse to start
        if(fileSystem.exists(path)) {
            fileSystem.delete(path, true);
        }

        Job job = new Job(conf, "NumSort");

        FileInputFormat.setInputPaths(job, INPUT_DIR);  // set the input path
        FileOutputFormat.setOutputPath(job, path);      // set the output path

        job.setJarByClass(DataSortable.class);  // locate the job jar via a class it contains

        job.setMapperClass(MyMapper.class);  // set the custom mapper class
        job.setMapOutputKeyClass(DataSortable.class);
        job.setMapOutputValueClass(NullWritable.class);

        job.setReducerClass(MyReducer.class);  // set the custom reducer class
        job.setOutputKeyClass(LongWritable.class);   // final output key type
        job.setOutputValueClass(LongWritable.class); // final output value type

        job.setPartitionerClass(MyPartitioner.class); // register the custom partitioner
        job.setNumReduceTasks(2); // two reduce tasks, one per partition

        job.waitForCompletion(true);  // submit the job and wait for it to finish

    }


    /**
     * Custom mapper: turns each input line "length width" into a
     * DataSortable key with a NullWritable value.
     * @author Gary
     */
    static class MyMapper extends Mapper<LongWritable, Text, DataSortable, NullWritable> {

        @Override
        protected void map(
                LongWritable key,
                Text value,
                Mapper<LongWritable, Text, DataSortable, NullWritable>.Context context)
                throws IOException, InterruptedException {

            // each line holds two space-separated numbers: length and width
            String[] nums = value.toString().split(" ");

            DataSortable dataSortable = new DataSortable(nums[0], nums[1]);

            context.write(dataSortable, NullWritable.get());

        }

    }

    /**
     * Custom reducer: keys arrive already sorted and partitioned,
     * so it simply writes the length and width back out.
     * @author Gary
     */
    static class MyReducer extends Reducer<DataSortable, NullWritable, LongWritable, LongWritable> {

        @Override
        protected void reduce(
                DataSortable key,
                Iterable<NullWritable> value,
                Reducer<DataSortable, NullWritable, LongWritable, LongWritable>.Context context)
                throws IOException, InterruptedException {

            context.write(new LongWritable(key.getFirst()), new LongWritable(key.getSecond()));

        }

    }

}
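
Before going to the cluster it can be handy to sanity-check the partitioning logic locally. The following helper is hypothetical, not part of the original post; it assumes the DataSortable sketch above and simply prints the partition each sample row would be routed to:

package cn.edu.bjut.model;

import org.apache.hadoop.io.NullWritable;

// Hypothetical helper: feeds the sample data through MyPartitioner
// locally, without a cluster.
public class PartitionerCheck {

    public static void main(String[] args) {
        MyPartitioner partitioner = new MyPartitioner();
        String[][] rows = { {"1", "2"}, {"1", "1"}, {"3", "2"}, {"2", "2"}, {"5", "1"} };
        for (String[] row : rows) {
            DataSortable key = new DataSortable(row[0], row[1]);
            int partition = partitioner.getPartition(key, NullWritable.get(), 2);
            System.out.println(row[0] + " " + row[1] + " -> partition " + partition);
        }
        // expected: 1 1 and 2 2 go to partition 0, the rest to partition 1,
        // matching the two output files shown below
    }
}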

Note, however, that running this program directly will fail; it has to be packaged as a jar and run on the cluster. In Eclipse the steps are as follows:

[screenshot: opening the Export wizard]

Then choose JAR file as the export type:

[screenshot: selecting the JAR file export type]

Click Next and choose the output path:

[screenshot: choosing the jar's output location]

Click Next twice more until you reach the screen below, and select the class containing your main method:

[screenshot: selecting the main class for the jar manifest]
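
If you prefer the command line to the Eclipse wizard, the jar can also be built by hand. A sketch, assuming a Hadoop 1.x installation; the hadoop-core version, $HADOOP_HOME, and the source layout are illustrative and may need adjusting:

mkdir -p classes
javac -classpath $HADOOP_HOME/hadoop-core-1.2.1.jar -d classes src/cn/edu/bjut/model/*.java
# "cfe" records the entry point in the manifest, so that
# "hadoop jar data.jar" can run without naming the main class
jar cfe data.jar cn.edu.bjut.model.NumSort -C classes .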

Upload the jar to the Linux machine over FTP, switch to the directory containing it, and run the command hadoop jar data.jar:

[root@localhost Public]# hadoop jar data.jar
Warning: $HADOOP_HOME is deprecated.

15/06/02 08:44:07 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
15/06/02 08:44:07 INFO input.FileInputFormat: Total input paths to process : 1
15/06/02 08:44:07 INFO util.NativeCodeLoader: Loaded the native-hadoop library
15/06/02 08:44:07 WARN snappy.LoadSnappy: Snappy native library not loaded
15/06/02 08:44:07 INFO mapred.JobClient: Running job: job_201506011333_0001
15/06/02 08:44:08 INFO mapred.JobClient:  map 0% reduce 0%
15/06/02 08:44:15 INFO mapred.JobClient:  map 100% reduce 0%
15/06/02 08:44:23 INFO mapred.JobClient:  map 100% reduce 16%
15/06/02 08:44:24 INFO mapred.JobClient:  map 100% reduce 33%
15/06/02 08:44:25 INFO mapred.JobClient:  map 100% reduce 100%
15/06/02 08:44:26 INFO mapred.JobClient: Job complete: job_201506011333_0001
15/06/02 08:44:26 INFO mapred.JobClient: Counters: 29
15/06/02 08:44:26 INFO mapred.JobClient:   Job Counters 
15/06/02 08:44:26 INFO mapred.JobClient:     Launched reduce tasks=2
15/06/02 08:44:26 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=6376
15/06/02 08:44:26 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
15/06/02 08:44:26 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
15/06/02 08:44:26 INFO mapred.JobClient:     Launched map tasks=1
15/06/02 08:44:26 INFO mapred.JobClient:     Data-local map tasks=1
15/06/02 08:44:26 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=19748
15/06/02 08:44:26 INFO mapred.JobClient:   File Output Format Counters 
15/06/02 08:44:26 INFO mapred.JobClient:     Bytes Written=20
15/06/02 08:44:26 INFO mapred.JobClient:   FileSystemCounters
15/06/02 08:44:26 INFO mapred.JobClient:     FILE_BYTES_READ=102
15/06/02 08:44:26 INFO mapred.JobClient:     HDFS_BYTES_READ=121
15/06/02 08:44:26 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=168973
15/06/02 08:44:26 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=20
15/06/02 08:44:26 INFO mapred.JobClient:   File Input Format Counters 
15/06/02 08:44:26 INFO mapred.JobClient:     Bytes Read=20
15/06/02 08:44:26 INFO mapred.JobClient:   Map-Reduce Framework
15/06/02 08:44:26 INFO mapred.JobClient:     Map output materialized bytes=102
15/06/02 08:44:26 INFO mapred.JobClient:     Map input records=5
15/06/02 08:44:26 INFO mapred.JobClient:     Reduce shuffle bytes=102
15/06/02 08:44:26 INFO mapred.JobClient:     Spilled Records=10
15/06/02 08:44:26 INFO mapred.JobClient:     Map output bytes=80
15/06/02 08:44:26 INFO mapred.JobClient:     Total committed heap usage (bytes)=191762432
15/06/02 08:44:26 INFO mapred.JobClient:     CPU time spent (ms)=3190
15/06/02 08:44:26 INFO mapred.JobClient:     Combine input records=0
15/06/02 08:44:26 INFO mapred.JobClient:     SPLIT_RAW_BYTES=101
15/06/02 08:44:26 INFO mapred.JobClient:     Reduce input records=5
15/06/02 08:44:26 INFO mapred.JobClient:     Reduce input groups=5
15/06/02 08:44:26 INFO mapred.JobClient:     Combine output records=0
15/06/02 08:44:26 INFO mapred.JobClient:     Physical memory (bytes) snapshot=336629760
15/06/02 08:44:26 INFO mapred.JobClient:     Reduce output records=5
15/06/02 08:44:26 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=2209480704
15/06/02 08:44:26 INFO mapred.JobClient:     Map output records=5
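
Incidentally, the first warning in the log ("Applications should implement Tool") is about the driver building its own Configuration instead of going through ToolRunner. It is harmless here, but for reference this is a minimal sketch of the pattern the warning suggests (NumSortTool is a hypothetical name; the job setup itself is omitted):

package cn.edu.bjut.model;

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

// Sketch of the Tool pattern: ToolRunner parses generic options
// such as -D key=value before run() is invoked.
public class NumSortTool extends Configured implements Tool {

    public int run(String[] args) throws Exception {
        // the job setup from NumSort.main() would go here,
        // using getConf() instead of a fresh Configuration
        return 0;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new NumSortTool(), args));
    }
}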

Once the job succeeds, take a look at the output folder: there are now two output files, the last two entries below, one per reduce task:

[root@localhost Public]# hadoop fs -ls /output
Warning: $HADOOP_HOME is deprecated.

Found 4 items
-rw-r--r--   1 root supergroup          0 2015-06-02 08:44 /output/_SUCCESS
drwxr-xr-x   - root supergroup          0 2015-06-02 08:44 /output/_logs
-rw-r--r--   1 root supergroup          8 2015-06-02 08:44 /output/part-r-00000
-rw-r--r--   1 root supergroup         12 2015-06-02 08:44 /output/part-r-00001

Now check the file contents: part-r-00000 (partition 0) holds the squares and part-r-00001 (partition 1) holds the rectangles, each sorted by area:

[root@localhost Public]# hadoop fs -cat /output/p*0
Warning: $HADOOP_HOME is deprecated.

1   1
2   2
[root@localhost Public]# hadoop fs -cat /output/p*1
Warning: $HADOOP_HOME is deprecated.

1   2
5   1
3   2