The demo I built last time was too boring, so I decided to rework it~~
1. Input format.
In the previous program, StatMapper was mysteriously being fed a stream of key/value pairs, so there must be some default input format at work. After some digging, I found it: org.apache.hadoop.mapred.InputFormatBase, which implements the InputFormat interface. The interface declares:
FileSplit[] getSplits(FileSystem fs, JobConf job, int numSplits)
throws IOException;
So it seems all input and output must be stored as files, just like in Lucene. Since input data is usually line-delimited anyway, this InputFormatBase should be good enough for most cases.
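To make the split idea concrete, here is a tiny sketch (my own illustration, not the actual InputFormatBase code) of carving a file's byte range into numSplits chunks, which is roughly what getSplits does before handing one FileSplit to each map task. The file size 21357898 matches the `0+21357898` range that shows up in the run log below.

```java
// Illustrative sketch of byte-range splitting, in the spirit of
// InputFormatBase.getSplits() -- NOT the real Hadoop implementation.
public class SplitSketch {
    // Returns {start, length} pairs that together cover [0, fileLength).
    static long[][] computeSplits(long fileLength, int numSplits) {
        long[][] splits = new long[numSplits][2];
        long chunk = fileLength / numSplits;
        for (int i = 0; i < numSplits; i++) {
            long start = i * chunk;
            // The last split absorbs the remainder of the division.
            long length = (i == numSplits - 1) ? fileLength - start : chunk;
            splits[i][0] = start;
            splits[i][1] = length;
        }
        return splits;
    }

    public static void main(String[] args) {
        // Print ranges in the "start+length" style seen in the log below.
        for (long[] s : computeSplits(21357898L, 4)) {
            System.out.println(s[0] + "+" + s[1]);
        }
    }
}
```

The real implementation also has to worry about records straddling split boundaries; a line-based reader typically skips to the first full line in its range and reads one line past the end.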
2. Input data.
This thing is built for crunching huge data sets efficiently, so I thought of the iHome ActionLog... several hundred MB in total, which should qualify. Here I'll count how many times each command was invoked over the past few days.
3. Modifying the program.
StatMapper.java:
public void map(WritableComparable key, Writable value, OutputCollector output, Reporter reporter)
    throws IOException
{
    // Each value is one line of the action log, split on spaces.
    String[] token = value.toString().split(" ");
    String id = token[6];  // not used here
    String act = token[7]; // the command/action name
    output.collect(new UTF8(act), new LongWritable(1)); // emit (action, 1)
}
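The map logic can be exercised outside Hadoop with plain Java. The sample line below is fabricated just to match the field layout the mapper assumes (the action sits in the 8th space-delimited field, index 7):

```java
import java.util.*;

public class MapSketch {
    // Mimics StatMapper.map(): emit an (action, 1) pair for one log line.
    static Map.Entry<String, Long> mapLine(String line) {
        String[] token = line.split(" ");
        String act = token[7]; // 8th space-delimited field, as in StatMapper
        return new AbstractMap.SimpleEntry<>(act, 1L);
    }

    public static void main(String[] args) {
        // A fabricated log line with at least 8 fields.
        String line = "060321 120000 INFO server thread req 42 LOGIN ok";
        System.out.println(mapLine(line)); // prints LOGIN=1
    }
}
```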
StatReducer.java:
public void reduce(WritableComparable key, Iterator values, OutputCollector output, Reporter reporter)
    throws IOException
{
    long sum = 0;
    // Sum up the 1s the mapper emitted for this action.
    while (values.hasNext())
    {
        sum += ((LongWritable) values.next()).get();
    }
    System.out.println("Action: " + key + ", Count: " + sum);
    output.collect(key, new LongWritable(sum)); // emit (action, total)
}
4. Running it.
This time the log output is much clearer:
...
060328 162626 E:/workground/opensource/hadoop-nightly/tmp/input/action_log.txt.2006-03-21:0+21357898
060328 162626 map 8% reduce 0%
060328 162627 E:/workground/opensource/hadoop-nightly/tmp/input/action_log.txt.2006-03-21:0+21357898
060328 162627 map 22% reduce 0%
060328 162628 E:/workground/opensource/hadoop-nightly/tmp/input/action_log.txt.2006-03-21:0+21357898
060328 162628 map 37% reduce 0%
060328 162629 E:/workground/opensource/hadoop-nightly/tmp/input/action_log.txt.2006-03-21:0+21357898
060328 162629 map 52% reduce 0%
060328 162630 E:/workground/opensource/hadoop-nightly/tmp/input/action_log.txt.2006-03-21:0+21357898
060328 162631 E:/workground/opensource/hadoop-nightly/tmp/input/action_log.txt.2006-03-21:0+21357898
060328 162631 map 80% reduce 0%
060328 162632 E:/workground/opensource/hadoop-nightly/tmp/input/action_log.txt.2006-03-21:0+21357898
060328 162632 map 92% reduce 0%
060328 162632 E:/workground/opensource/hadoop-nightly/tmp/input/action_log.txt.2006-03-21:0+21357898
...
060328 162813 map 100% reduce 0%
...
060328 162816 reduce > append > build/test/mapred/local/map_hjcxj9.out/reduce_z97f6i
060328 162816 map 100% reduce 29%
060328 162817 reduce > append > build/test/mapred/local/map_8qdis7.out/reduce_z97f6i
060328 162817 map 100% reduce 35%
060328 162818 reduce > append > build/test/mapred/local/map_m19cmw.out/reduce_z97f6i
060328 162818 map 100% reduce 40%
060328 162819 reduce > append > build/test/mapred/local/map_kx1fnb.out/reduce_z97f6i
060328 162819 map 100% reduce 44%
060328 162820 reduce > append > build/test/mapred/local/map_87oxwt.out/reduce_z97f6i
060328 162820 map 100% reduce 49%
060328 162821 reduce > sort
060328 162822 reduce > sort
060328 162822 map 100% reduce 50%
060328 162823 reduce > sort
060328 162824 reduce > sort
060328 162826 reduce > sort
060328 162827 reduce > sort
060328 162828 reduce > sort
060328 162830 reduce > sort
060328 162832 reduce > sort
060328 162833 reduce > sort
060328 162835 reduce > sort
060328 162837 reduce > sort
060328 162838 reduce > reduce
060328 162839 map 100% reduce 75%
060328 162839 reduce > reduce
060328 162840 map 100% reduce 78%
060328 162840 reduce > reduce
060328 162841 map 100% reduce 82%
Action: ACTION, Count: 1354644
060328 162841 reduce > reduce
060328 162842 map 100% reduce 85%
060328 162842 reduce > reduce
060328 162843 map 100% reduce 89%
060328 162843 reduce > reduce
060328 162844 map 100% reduce 93%
060328 162844 reduce > reduce
060328 162845 map 100% reduce 96%
060328 162845 reduce > reduce
060328 162846 reduce > reduce
...
060328 162846 map 100% reduce 100%
060328 162846 Job complete: job_2pn9y8
A quick analysis: the program first uses several threads to process the input concurrently, saving the first-pass (map) output under a temporary directory. It then reads that output back and performs the aggregation: records sharing the same key are shipped to the Reducer, sorted, and processed there, and finally the Reducer writes its results to the output directory.
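The whole map → group/sort → reduce flow can be mimicked in a few lines of plain Java (an in-memory sketch with fabricated log lines; the real framework does the same thing via temporary files and multiple threads):

```java
import java.util.*;

public class PipelineSketch {
    // Map + shuffle + reduce in one pass: extract each action, group
    // identical keys, and sum their counts. TreeMap keeps keys sorted,
    // standing in for the framework's sort phase.
    static SortedMap<String, Long> countActions(List<String> lines) {
        SortedMap<String, Long> counts = new TreeMap<>();
        for (String line : lines) {
            String act = line.split(" ")[7]; // map: extract the action field
            counts.merge(act, 1L, Long::sum); // reduce: sum the 1s per key
        }
        return counts;
    }

    public static void main(String[] args) {
        // Fabricated sample lines, action in the 8th field as before.
        List<String> lines = Arrays.asList(
            "060321 120000 I s t r 1 LOGIN ok",
            "060321 120001 I s t r 2 ACTION ok",
            "060321 120002 I s t r 1 ACTION ok");
        // Same output shape as StatReducer's "Action: ..., Count: ..." line.
        for (Map.Entry<String, Long> e : countActions(lines).entrySet())
            System.out.println("Action: " + e.getKey() + ", Count: " + e.getValue());
    }
}
```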