Hive UDAF and UDTF: getting the top N values after a GROUP BY

This article presents a custom Hive UDAF (Top4GroupBy) that collects the top N entries within each group, and a UDTF (ExplodeMap) that unpacks the aggregated string back into rows, then shows how to combine the two to fetch top URLs.


Author: liuzhoulong, posted 2012-07-26 14:52:57. Original link.

First we define a custom UDAF. Since a UDAF folds many input rows into a single output value, the top-N result is concatenated into one delimited string. The code is as follows:

import java.util.Collections;
import java.util.Comparator;
import java.util.HashMap;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;

import org.apache.hadoop.hive.ql.exec.UDAF;
import org.apache.hadoop.hive.ql.exec.UDAFEvaluator;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;

public class Top4GroupBy extends UDAF {

    // Holds the aggregation state: per-key counts plus the requested top-N limit
    public static class State {
        private Map<Text, IntWritable> counts;
        private int limit;
    }

    /**
     * Accumulate a value: if the key already exists in the map, add i to its
     * count; otherwise insert it with an initial count of i.
     */
    private static void increment(State s, Text o, int i) {
        if (s.counts == null) {
            s.counts = new HashMap<Text, IntWritable>();
        }
        IntWritable count = s.counts.get(o);
        if (count == null) {
            // Copy the key: Hive reuses the same Text instance across rows
            Text key = new Text();
            key.set(o);
            s.counts.put(key, new IntWritable(i));
        } else {
            count.set(count.get() + i);
        }
    }

    public static class Top4GroupByEvaluator implements UDAFEvaluator {

        private final State state;

        public Top4GroupByEvaluator() {
            state = new State();
        }

        @Override
        public void init() {
            if (state.counts != null) {
                state.counts.clear();
            }
            if (state.limit == 0) {
                state.limit = 100; // default limit until one is passed in
            }
        }

        public boolean iterate(Text value, IntWritable limits) {
            if (value == null || limits == null) {
                return false;
            }
            state.limit = limits.get();
            increment(state, value, 1);
            return true;
        }

        public State terminatePartial() {
            return state;
        }

        public boolean merge(State other) {
            // Guard against empty partials to avoid a NullPointerException
            if (state == null || other == null || other.counts == null) {
                return false;
            }
            state.limit = other.limit;
            for (Map.Entry<Text, IntWritable> e : other.counts.entrySet()) {
                increment(state, e.getKey(), e.getValue().get());
            }
            return true;
        }

        public Text terminate() {
            if (state == null || state.counts == null || state.counts.isEmpty()) {
                return null;
            }
            Map<Text, IntWritable> sorted = sortByValue(state.counts, true);
            StringBuilder str = new StringBuilder();
            int i = 0;
            for (Map.Entry<Text, IntWritable> e : sorted.entrySet()) {
                if (++i > state.limit) { // emit only the top `limit` entries
                    break;
                }
                // Format: key$@count$* -- "$@" separates key from count, "$*" terminates each entry
                str.append(e.getKey().toString()).append("$@").append(e.getValue().get()).append("$*");
            }
            return new Text(str.toString());
        }

        /*
         * Sort a map by value (descending when reverse is true) and return the
         * entries in a LinkedHashMap so the iteration order is preserved.
         */
        @SuppressWarnings("unchecked")
        public static Map sortByValue(Map map, final boolean reverse) {
            List list = new LinkedList(map.entrySet());
            Collections.sort(list, new Comparator() {
                public int compare(Object o1, Object o2) {
                    int c = ((Comparable) ((Map.Entry) o1).getValue())
                            .compareTo(((Map.Entry) o2).getValue());
                    return reverse ? -c : c;
                }
            });

            Map result = new LinkedHashMap();
            for (Iterator it = list.iterator(); it.hasNext();) {
                Map.Entry entry = (Map.Entry) it.next();
                result.put(entry.getKey(), entry.getValue());
            }
            return result;
        }
    }
}
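
To see the output format concretely, here is a minimal local sanity check. It is not from the original post; it assumes the test class sits in the same package as Top4GroupBy:

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;

public class Top4GroupByTest {
    public static void main(String[] args) {
        Top4GroupBy.Top4GroupByEvaluator eval = new Top4GroupBy.Top4GroupByEvaluator();
        eval.init();
        IntWritable limit = new IntWritable(2); // keep only the top 2 entries
        for (String url : new String[] { "a.html", "a.html", "a.html", "b.html", "b.html", "c.html" }) {
            eval.iterate(new Text(url), limit);
        }
        System.out.println(eval.terminate()); // prints: a.html$@3$*b.html$@2$*
    }
}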

We also need a custom UDTF that splits that string on its delimiters and turns it back into multiple output rows:

import java.util.ArrayList;

import org.apache.hadoop.hive.ql.exec.UDFArgumentException;
import org.apache.hadoop.hive.ql.exec.UDFArgumentLengthException;
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.hive.ql.udf.generic.GenericUDTF;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorFactory;
import org.apache.hadoop.hive.serde2.objectinspector.StructObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;

public class ExplodeMap extends GenericUDTF {

    @Override
    public void close() throws HiveException {
        // nothing to clean up
    }

    @Override
    public StructObjectInspector initialize(ObjectInspector[] args) throws UDFArgumentException {
        if (args.length != 1) {
            throw new UDFArgumentLengthException("ExplodeMap takes only one argument");
        }
        if (args[0].getCategory() != ObjectInspector.Category.PRIMITIVE) {
            throw new UDFArgumentException("ExplodeMap takes string as a parameter");
        }
        // Declare three output string columns: key, count, and rank
        ArrayList<String> fieldNames = new ArrayList<String>();
        ArrayList<ObjectInspector> fieldOIs = new ArrayList<ObjectInspector>();
        fieldNames.add("col1");
        fieldOIs.add(PrimitiveObjectInspectorFactory.javaStringObjectInspector);
        fieldNames.add("col2");
        fieldOIs.add(PrimitiveObjectInspectorFactory.javaStringObjectInspector);
        fieldNames.add("col3");
        fieldOIs.add(PrimitiveObjectInspectorFactory.javaStringObjectInspector);

        return ObjectInspectorFactory.getStandardStructObjectInspector(fieldNames, fieldOIs);
    }

    @Override
    public void process(Object[] args) throws HiveException {
        String input = args[0].toString();
        // "$*" terminates each entry, "$@" separates key from count
        String[] entries = input.split("\\$\\*");
        for (int i = 0; i < entries.length; i++) {
            try {
                String[] result = new String[3];
                String[] sp = entries[i].split("\\$@");
                result[0] = sp[0];                 // key (the url)
                result[1] = sp[1];                 // count
                result[2] = String.valueOf(i + 1); // rank within the group
                forward(result);
            } catch (Exception e) {
                continue; // skip malformed entries
            }
        }
    }
}
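
Before the query below can run, both classes have to be packaged into a jar and registered in the Hive session. A minimal sketch, where the jar path and package name are placeholders rather than values from the original post:

ADD JAR /path/to/topgroup.jar;
CREATE TEMPORARY FUNCTION top_group AS 'com.example.hive.Top4GroupBy';
CREATE TEMPORARY FUNCTION explode_map AS 'com.example.hive.ExplodeMap';

Note that CREATE TEMPORARY FUNCTION only lasts for the current session.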

With the two functions registered in Hive as top_group and explode_map, the following example retrieves the top 100 URLs for each of the top 100 landingrefer values:

hive -e "select t.landingrefer, mytable.col1, mytable.col2,mytable.col3 from (select landingrefer, top_group(url,100) pro, count(sid) s from pvlog  where dt=20120719 and depth=1 group by landingrefer order by s desc limit 100) t lateral view explode_map(t.pro)
mytable as col1, col2, col3;"> test
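
Each output row pairs a landingrefer with one of its top URLs: col1 is the URL, col2 its hit count, and col3 the rank assigned by ExplodeMap. For illustration only, with made-up values, the output might look like:

http://a.com/landing    http://a.com/page1    532    1
http://a.com/landing    http://a.com/page2    417    2
http://b.com/start      http://b.com/home     960    1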


