There are two approaches. Method 1 is to write the code yourself in Eclipse: it helps you understand MapReduce, but it is more involved. Method 2 is to run the jar that ships with Hadoop: convenient, but you won't get a deep feel for the MapReduce process. Trying both is recommended.
Method 1
Eclipse must be configured first; for setting up Hadoop in Eclipse, see:
Big Data Primer (7): configuring Eclipse for Hadoop on win10
Upload files to HDFS
You can right-click and choose upload directly in the DFS view, or see: Big Data Primer (6): basic HDFS operations on win10 (linked below)
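If you prefer the command line to the Eclipse DFS view, the upload can also be done from cmd. This is only a sketch: the local file name and the HDFS paths below are assumptions, so substitute your own.

```shell
# Create the input directory in HDFS (no-op if it already exists)
hadoop fs -mkdir -p /input
# Upload a local text file (C:\data\words.txt is a hypothetical path)
hadoop fs -put C:\data\words.txt /input
# Confirm the file landed in HDFS
hadoop fs -ls /input
```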
Java project
Create a new Java project:
WordCount.java (adapted from "Detailed steps to configure a Hadoop development environment with Eclipse on Windows 10 + WordCount example"):
package word_count_pag;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: wordcount <in> <out>");
            System.exit(2);
        }
        @SuppressWarnings("deprecation")
        Job job = new Job(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
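The mapper above relies on StringTokenizer's default whitespace splitting. That part can be sanity-checked without a cluster; this is a standalone sketch with a hypothetical input line, not part of the job itself:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.StringTokenizer;

public class TokenizeCheck {
    // Mirrors what TokenizerMapper.map() does to each line:
    // split on runs of whitespace (spaces, tabs), emitting one token per word.
    static List<String> tokenize(String line) {
        List<String> tokens = new ArrayList<>();
        StringTokenizer itr = new StringTokenizer(line);
        while (itr.hasMoreTokens()) {
            tokens.add(itr.nextToken());
        }
        return tokens;
    }

    public static void main(String[] args) {
        // Hypothetical input line; each token becomes a (word, 1) pair in the real mapper
        System.out.println(tokenize("hello world  hello\thadoop"));
    }
}
```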
log4j.properties (under src) (also from "Detailed steps to configure a Hadoop development environment with Eclipse on Windows 10 + WordCount example"):
# Configure logging for testing: optionally with log file
#log4j.rootLogger=debug,appender
log4j.rootLogger=info,appender
#log4j.rootLogger=error,appender
# Output to the console
log4j.appender.appender=org.apache.log4j.ConsoleAppender
# Use the TTCC layout
log4j.appender.appender.layout=org.apache.log4j.TTCCLayout
Select WordCount.java, then:
1. Run -> Run As -> Java Application
2. Run -> Run Configurations, and set the two program arguments: the HDFS input path and an output path that does not exist yet (e.g. hdfs://localhost:9000/input hdfs://localhost:9000/output, if your NameNode listens on the default local address).
Success:
Method 2
Start cmd as Administrator (otherwise the wordcount run below will fail) and enter: start-all.cmd
This starts Hadoop; then run jps
to check that the daemons are up.
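With the daemons running, Hadoop's bundled examples jar can run wordcount directly. A sketch of the full round trip from cmd; the jar version and paths below are assumptions, so check %HADOOP_HOME%\share\hadoop\mapreduce for the exact jar name on your installation:

```shell
# jps should list NameNode, DataNode, ResourceManager and NodeManager before continuing

# Stage the input (input.txt is a hypothetical local file)
hadoop fs -mkdir -p /input
hadoop fs -put input.txt /input

# Run the built-in wordcount example; /output must not exist yet
hadoop jar %HADOOP_HOME%\share\hadoop\mapreduce\hadoop-mapreduce-examples-2.8.5.jar wordcount /input /output

# Inspect the result written by the single reducer
hadoop fs -cat /output/part-r-00000
```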
Links:
[How to check the job's running status]
[How to view the results]
References
Detailed steps to configure a Hadoop development environment with Eclipse on Windows 10 + WordCount example
The series:
Big Data Primer (1): environment setup, VMware 15 + CentOS 8.1 configuration
https://blog.youkuaiyun.com/qq_34391511/article/details/104874044
Big Data Primer (2): CentOS 8 and JDK configuration
https://blog.youkuaiyun.com/qq_34391511/article/details/104893587
Big Data Primer (3): CentOS network configuration
https://blog.youkuaiyun.com/qq_34391511/article/details/104895498
Big Data Primer (4): Hadoop cluster setup
https://blog.youkuaiyun.com/qq_34391511/article/details/104885278
Big Data Primer (5): standalone Hadoop 2.8 on Windows (pitfall notes)
https://blog.youkuaiyun.com/qq_34391511/article/details/104948319
Big Data Primer (6): basic HDFS operations on win10
https://blog.youkuaiyun.com/qq_34391511/article/details/105070955
Big Data Primer (7): configuring Eclipse for Hadoop on win10
https://blog.youkuaiyun.com/qq_34391511/article/details/105066667
Big Data Primer (8): WordCount on win10
https://blog.youkuaiyun.com/qq_34391511/article/details/105073076
Big Data Primer (9): HDFS operations from Java code on win10 Hadoop
https://blog.youkuaiyun.com/qq_34391511/article/details/105145380