The MapReduce framework sorts records by key before they reach the reducer, but the values associated with each key are not sorted. Their order can even vary between runs, because the values come from different map tasks, and those map tasks finish at different times from run to run. In general, most MapReduce programs avoid making the reduce function depend on the order of the values; but when a sorted value order is needed, it can be achieved by sorting and grouping the keys in a particular way.
The requirement is as follows. Given input records of the form:
user_id,server_name,price
10000,ruozedata001,100
10000,ruozedata002,200
10000,ruozedata003,300
10001,hadoop000,1000
10001,hadoop001,2000
10001,hadoop003,3000
10002,jepson001,1000
10002,jepson002,2000
10002,jepson003,3000
find the server names with the highest spend for each user_id ==> Top N per group.
Problem analysis
1. Wrap each record in a bean and use the bean as the map output key, so that Hadoop's automatic key sorting does the work for us.
2. Have the bean implement the WritableComparable interface, comparing by userId ascending and then by price descending; the map output then sorts itself by this custom rule as it enters the shuffle phase.
3. In the map phase, emit the bean as the key and the fields we need, such as serverName, as the value.
4. In the reduce phase, with no further handling the data arrives sorted by userId ascending and price descending, but records with the same userId and different prices would not go through the same reduce() call, so we could not compute the Top N.
5. Use the reduce-side grouping facility, a WritableComparator subclass (also called a secondary-sort or auxiliary comparator): before the reduce-side merge it groups the data, deciding which records enter the same group, i.e. the same reduce() call.
6. In that WritableComparator subclass, compare only the userId field of the bean and ignore the price, so that records with the same userId fall into the same group. The price ordering (high to low) is already implemented in the bean's compareTo and honored by the shuffle, so the reduce phase does not need to handle price sorting at all.
7. On the reduce side, the groupingComparator gathers the key-value pairs with the same userId into one group. While iterating over the values, a for loop extracts the required fields from each value as the reduce output value, with the incoming key as the output key; iterating the required N times per group produces the answer (see the illustration after the MyGroupComparator listing below).
OrderBean.java
package com.wxx.bigdata.homework.homework2019082503;
import org.apache.hadoop.io.WritableComparable;
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
public class OrderBean implements WritableComparable<OrderBean> {

    private String userId;
    private Double price;

    /**
     * Sort by userId ascending, then by price descending
     * @param o the bean to compare against
     * @return the comparison result
     */
    @Override
    public int compareTo(OrderBean o) {
        // Compare userId first; beans with different userIds are ordered
        // by userId only and their prices are never compared
        int result = this.userId.compareTo(o.userId);
        if (result == 0) {
            // Same userId: compare prices and negate the result so that
            // higher prices sort first (descending)
            result = this.price.compareTo(o.price);
            return -result;
        }
        return result;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeUTF(userId);
        out.writeDouble(price);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        this.userId = in.readUTF();
        this.price = in.readDouble();
    }

    public String getUserId() {
        return userId;
    }

    public void setUserId(String userId) {
        this.userId = userId;
    }

    public Double getPrice() {
        return price;
    }

    public void setPrice(Double price) {
        this.price = price;
    }

    @Override
    public String toString() {
        return "OrderBean{" +
                "userId='" + userId + '\'' +
                ", price=" + price +
                '}';
    }
}
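One caveat the original code does not cover: with more than one reduce task, the default HashPartitioner hashes the whole OrderBean (which does not override hashCode), so records with the same userId could be sent to different reducers and the grouping would silently break. The job below runs with the default single reducer, so it works as-is; for multiple reducers a partitioner on userId alone would be needed. A minimal sketch (the class name UserIdPartitioner is mine, not from the original post):

package com.wxx.bigdata.homework.homework2019082503;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

public class UserIdPartitioner extends Partitioner<OrderBean, Text> {
    @Override
    public int getPartition(OrderBean key, Text value, int numPartitions) {
        // Partition on userId only, mirroring the grouping comparator,
        // so all records of one user reach the same reducer
        return (key.getUserId().hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
}

It would be registered in the driver with job.setPartitionerClass(UserIdPartitioner.class), alongside the grouping comparator.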
MyGroupComparator.java
package com.wxx.bigdata.homework.homework2019082503;

import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.io.WritableComparator;

public class MyGroupComparator extends WritableComparator {

    public MyGroupComparator() {
        // true: let the parent class create OrderBean instances that
        // compare() can deserialize the raw key bytes into
        super(OrderBean.class, true);
    }

    /**
     * Treat records with the same userId as one group. Within each group
     * the values already arrive sorted price-descending, because
     * OrderBean's compareTo handled that during the shuffle sort.
     * @param a the first key
     * @param b the second key
     * @return 0 when both keys belong to the same group
     */
    @Override
    public int compare(WritableComparable a, WritableComparable b) {
        OrderBean first = (OrderBean) a;
        OrderBean second = (OrderBean) b;
        return first.getUserId().compareTo(second.getUserId());
    }
}
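To make the interaction between the sort and the grouping comparator concrete, this is the key stream a single reducer sees for the sample data above (illustrative): the shuffle sorts by (userId ascending, price descending), and the grouping comparator then draws group boundaries on userId alone, so each reduce() call iterates one user's records from the most expensive down. Note that Hadoop reuses the key object while iterating the values, re-reading its fields for each value, which is why the final output below shows a different price on each line of the same group.

(10000, 300.0)  -> ruozedata003,300   \
(10000, 200.0)  -> ruozedata002,200    > one reduce() call for userId 10000
(10000, 100.0)  -> ruozedata001,100   /
(10001, 3000.0) -> hadoop003,3000     \
(10001, 2000.0) -> hadoop001,2000      > one reduce() call for userId 10001
(10001, 1000.0) -> hadoop000,1000     /
(and likewise for userId 10002)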
TopNAPP.java; the code below takes the top 2:
package com.wxx.bigdata.homework.homework2019082503;

import com.wxx.bigdata.hadoop.utils.FileUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;
public class TopNAPP {

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        // 1. Get the job object
        Configuration configuration = new Configuration();
        Job job = Job.getInstance(configuration);

        String input = "D:\\ruoze_g7\\ruoze\\wxx-hadoop\\data\\0825.txt";
        String output = "D:\\ruoze_g7\\ruoze\\wxx-hadoop\\data\\out\\home2019082502B";

        // Delete the output path first if it already exists
        FileUtils.deleteTarget(output, configuration);

        // 2. Set the jar information
        job.setJarByClass(TopNAPP.class);

        // 3. Set the custom mapper, reducer and grouping comparator
        job.setMapperClass(GroupMapper.class);
        job.setReducerClass(GroupReducer.class);
        job.setGroupingComparatorClass(MyGroupComparator.class);

        // 4. Set the mapper output key and value types
        job.setMapOutputKeyClass(OrderBean.class);
        job.setMapOutputValueClass(Text.class);

        // 5. Set the reducer output key and value types
        job.setOutputKeyClass(OrderBean.class);
        job.setOutputValueClass(Text.class);

        // 6. Set the input and output paths
        FileInputFormat.setInputPaths(job, new Path(input));
        FileOutputFormat.setOutputPath(job, new Path(output));

        // 7. Submit the job
        boolean result = job.waitForCompletion(true);
        System.exit(result ? 0 : 1);
    }

    public static class GroupMapper extends Mapper<LongWritable, Text, OrderBean, Text> {
        @Override
        protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
            // Input line: user_id,server_name,price
            String[] split = value.toString().split(",");
            OrderBean orderBean = new OrderBean();
            orderBean.setUserId(split[0]);
            orderBean.setPrice(Double.valueOf(split[2]));
            // Emit the bean as the key and "serverName,price" as the value
            context.write(orderBean, new Text(split[1] + "," + split[2]));
        }
    }

    public static class GroupReducer extends Reducer<OrderBean, Text, OrderBean, Text> {
        @Override
        protected void reduce(OrderBean key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
            // Values arrive price-descending within the group, so the
            // first N iterations are exactly the Top N
            int i = 0;
            for (Text value : values) {
                i++;
                if (i <= 2) {
                    String[] sp = value.toString().split(",");
                    // sp[0] is the server name
                    context.write(key, new Text(sp[0]));
                } else {
                    break;
                }
            }
        }
    }
}
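FileUtils.deleteTarget is the author's own helper and its source is not shown in the post; a minimal sketch of what it presumably does, assuming it simply removes the output directory when it exists:

package com.wxx.bigdata.hadoop.utils;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import java.io.IOException;

public class FileUtils {
    // Sketch only: recursively delete the target path if it exists, so
    // the job does not fail with FileAlreadyExistsException on rerun
    public static void deleteTarget(String output, Configuration configuration) throws IOException {
        FileSystem fileSystem = FileSystem.get(configuration);
        Path path = new Path(output);
        if (fileSystem.exists(path)) {
            fileSystem.delete(path, true);
        }
    }
}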
Final output:
OrderBean{userId='10000', price=300.0} ruozedata003
OrderBean{userId='10000', price=200.0} ruozedata002
OrderBean{userId='10001', price=3000.0} hadoop003
OrderBean{userId='10001', price=2000.0} hadoop001
OrderBean{userId='10002', price=3000.0} jepson003
OrderBean{userId='10002', price=2000.0} jepson002