Setting Up a Development Environment from the Hadoop Source Code
To develop Hadoop MapReduce jobs or to study the Hadoop source code, you need to import the Hadoop Java sources into a development tool such as the commonly used Eclipse. The steps are as follows:
Step 1: Create a new Java project in Eclipse.
Step 2: Copy the core, hdfs, mapred, and tools directories under the Hadoop distribution's src directory into the src directory of the new project.
Step 3: Modify the Java Build Path: remove src, and add src/core, src/hdfs, and the other copied directories as source folders.
Step 4: Add the dependency jars to the Java Build Path: import all jars under the Hadoop distribution's lib directory (do not miss the jars in its subdirectories), as well as all jars under the Ant distribution's lib directory.
Step 5: In theory step 4 should be enough, but you may still see a large number of errors like the following:
Access restriction: The method arrayBaseOffset(Class) from the type Unsafe is not accessible due to restriction on required library C:\Program Files\JDK\jre\lib\rt.jar xxx.java xxxx line 141 Java Problem
The fix: right-click the project > "Properties" > "Java Build Path" > "Libraries", expand "JRE System Library", double-click "Access rules", click "Add", select "Accessible" in the "Resolution" drop-down, enter "**/*" as the "Rule Pattern", and save.
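If you prefer to edit the project configuration directly, steps 3 through 5 can also be expressed in the project's .classpath file. The sketch below is a minimal, illustrative example only: the jar entry is a placeholder, and the source folders assume the layout from step 2, so adjust both to your Hadoop version.

<?xml version="1.0" encoding="UTF-8"?>
<classpath>
    <!-- Step 3: one source entry per copied Hadoop source directory -->
    <classpathentry kind="src" path="src/core"/>
    <classpathentry kind="src" path="src/hdfs"/>
    <classpathentry kind="src" path="src/mapred"/>
    <classpathentry kind="src" path="src/tools"/>
    <!-- Step 5: JRE container with the "Accessible **/*" access rule -->
    <classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER">
        <accessrules>
            <accessrule kind="accessible" pattern="**/*"/>
        </accessrules>
    </classpathentry>
    <!-- Step 4: one lib entry per dependency jar (placeholder path) -->
    <classpathentry kind="lib" path="lib/commons-logging-1.0.4.jar"/>
    <classpathentry kind="output" path="bin"/>
</classpath>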
Test the setup with a WordCount example:
The reducer:

package test;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class MyReduce extends Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        // Sum the per-word counts emitted by the mappers (and combiner).
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        context.write(key, new IntWritable(sum));
    }
}
The mapper:

package test;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class MyMap extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    // Reuse one Text instance rather than allocating a new one per token.
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        // Split the input line on whitespace and emit a (token, 1) pair
        // for every token.
        String line = value.toString();
        StringTokenizer tokenizer = new StringTokenizer(line);
        while (tokenizer.hasMoreTokens()) {
            word.set(tokenizer.nextToken());
            context.write(word, one);
        }
    }
}
The driver:

package test;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class MyDriver {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Relax HDFS permission checking when submitting from the IDE.
        conf.set("dfs.permissions", "false");

        Job job = new Job(conf, "Hello Hadoop");
        job.setJarByClass(MyDriver.class);

        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        job.setMapperClass(MyMap.class);
        // The reducer doubles as the combiner: summing counts is associative
        // and commutative, so partial sums on the map side are safe.
        job.setCombinerClass(MyReduce.class);
        job.setReducerClass(MyReduce.class);

        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // Block until the job finishes; true prints progress to the console.
        job.waitForCompletion(true);
    }
}
Set the corresponding run arguments (the input and output paths expected as args[0] and args[1]) in the Run Configuration, then run the program.
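For example, given an input file containing the line "hello hadoop hello world", the program arguments and the resulting output would look like the following (the HDFS address and paths are illustrative; substitute your own):

Program arguments (Run Configurations > Arguments):
    hdfs://localhost:9000/user/test/input hdfs://localhost:9000/user/test/output

Contents of part-r-00000 in the output directory (keys sorted, tab-separated):
    hadoop  1
    hello   2
    world   1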