Motivation
The previous post walked through standing up a Hadoop cluster with Docker. With a working environment at last, it's time to try some actual development, so this post gets a first feel for Hadoop through a simple word count. I haven't formally studied the Hadoop codebase yet; the goal here is just an intuitive first impression.
Environment
[1] OS: Mac OS X
[2] Hadoop: local single-node setup, version 2.7.1
[3] Language: Java
[4] Tools: IDEA + Maven
WordCount: Step by Step
Set up the local Hadoop environment
Configure core-site.xml
<configuration>
    <!-- fs.defaultFS replaces the deprecated fs.default.name in Hadoop 2.x -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/Users/wangzhiping/hadoop-2.7.1/tmp</value>
    </property>
</configuration>
Configure hdfs-site.xml
<configuration>
    <!-- A replication factor of 1 is enough for a single-node setup -->
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/Users/wangzhiping/hadoop-2.7.1/namenodedir</value>
    </property>
    <!-- note: the DataNode property is dfs.datanode.data.dir, not dfs.datanode.name.dir -->
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/Users/wangzhiping/hadoop-2.7.1/datanodedir</value>
    </property>
</configuration>
Configure mapred-site.xml
<configuration>
    <!-- mapred.job.tracker is the legacy MRv1 setting and is ignored by Hadoop 2.x.
         To run jobs on YARN, set mapreduce.framework.name to yarn instead. -->
    <property>
        <name>mapred.job.tracker</name>
        <value>localhost:9001</value>
    </property>
</configuration>
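Before starting anything, it's worth a quick check of which values Hadoop actually resolves. The snippet below is my own addition, a minimal sketch that loads the two site files explicitly and prints the effective settings; the file paths are assumptions based on the install location above, so adjust them to your machine.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class ConfCheck {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Load the site files explicitly so the check does not depend on the classpath
        conf.addResource(new Path("/Users/wangzhiping/hadoop-2.7.1/etc/hadoop/core-site.xml"));
        conf.addResource(new Path("/Users/wangzhiping/hadoop-2.7.1/etc/hadoop/hdfs-site.xml"));
        System.out.println(conf.get("fs.defaultFS"));    // expected: hdfs://localhost:9000
        System.out.println(conf.get("dfs.replication")); // expected: 1
    }
}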
Start the local environment
Run start-all.sh (on a first run, format the NameNode beforehand with hdfs namenode -format), then use jps to check that everything came up:
% jps
96434 Launcher
95812 NameNode
96004 SecondaryNameNode
96551 Jps
96118 ResourceManager
95898 DataNode
93917 RemoteMavenServer
96206 NodeManager
If NameNode, DataNode, SecondaryNameNode, ResourceManager, and NodeManager are all listed, the local environment is up. (Launcher and RemoteMavenServer are IntelliJ IDEA processes, not Hadoop daemons.)
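Beyond jps, you can also talk to HDFS directly from Java as a sanity check. A minimal sketch (my addition, not from the original setup), assuming the NameNode is listening on localhost:9000 as configured above:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsCheck {
    public static void main(String[] args) throws Exception {
        // Connect to the NameNode configured in core-site.xml
        FileSystem fs = FileSystem.get(URI.create("hdfs://localhost:9000"), new Configuration());
        // List the HDFS root; an empty listing is normal on a fresh install
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath());
        }
        fs.close();
    }
}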
Set up the IDEA + Maven project
Create a Maven project in IDEA and name it hadoop-hello (the standard new-project wizard is all you need).
Configure pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>hadoop-learn</groupId>
    <artifactId>hello</artifactId>
    <version>1.0-SNAPSHOT</version>

    <dependencies>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-mapreduce-client-common</artifactId>
            <version>2.7.1</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>2.7.1</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-core -->
        <!-- note: hadoop-core is a 1.x-era artifact; mixing it with 2.7.1 jars risks
             classpath conflicts. For 2.x, a single hadoop-client dependency is the
             more common choice. -->
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-core</artifactId>
            <version>1.2.1</version>
        </dependency>
    </dependencies>
</project>
Along the way I hit a small snag:
Exception in thread "main" java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:120)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:82)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:75)
at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1238)
at org.apache.hadoop.mapreduce.Job$9.run(Job.java:1234)
The cause: the hadoop-mapreduce-client-common dependency was missing (it supplies, among other things, the local job runner's client protocol provider). Adding it to the pom and letting Maven download it resolves the error.
The implementation
package com.hadoop.hello.wordcount;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;
import java.util.StringTokenizer;
/**
 * Approach:
 * Step 1 (Mapper): tokenize each line of text and emit each word as a (word, 1) pair.
 * Step 2 (Reducer): group identical words and sum their values.
 * Created by wangzhiping on 17/1/5.
 */
/**
 * Tokenizing Mapper.
 * Reads one line at a time; input is (Object, Text), output is (Text, IntWritable),
 * i.e. (word, count) pairs.
 */
class WordTokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    // notice: of Mapper's four type parameters, the first two are the input (key, value)
    // and the last two are the output (key, value). Here the input key is the byte
    // offset of the line within the file, and the value is the line's text.
    @Override
    protected void map(Object key, Text value, Context context) throws IOException, InterruptedException {
        System.out.println("key: " + key + ",value: " + value);
        // split the line on spaces
        StringTokenizer st = new StringTokenizer(value.toString(), " ");
        // iterate over the tokens
        while (st.hasMoreTokens()) {
            // emit each word as a (word, 1) pair
            context.write(new Text(st.nextToken()), new IntWritable(1));
        }
    }
}
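One stylistic note on the mapper above: allocating a fresh Text and IntWritable for every token creates a lot of short-lived garbage on large inputs. Hadoop's own WordCount example reuses the Writable objects instead, since the framework serializes their contents on each write. A sketch of that variant (the class name is mine):

class WordTokenizerMapper2 extends Mapper<Object, Text, Text, IntWritable> {
    // Reusable output objects; safe to mutate between writes because
    // context.write serializes their current contents immediately.
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(Object key, Text value, Context context) throws IOException, InterruptedException {
        StringTokenizer st = new StringTokenizer(value.toString(), " ");
        while (st.hasMoreTokens()) {
            word.set(st.nextToken());
            context.write(word, ONE);
        }
    }
}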
/**
 * Reduces the tokenized output.
 * Input: (Text, list of IntWritable), all the counts for one word.
 * Output: (Text, IntWritable), one total per word.
 */
class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    /**
     * Sums the counts for one word.
     * @param key the key emitted by the Mapper (the word)
     * @param values all values grouped under that key
     * @param context
     * @throws IOException
     * @throws InterruptedException
     */
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        int count = 0;
        for (IntWritable value : values) {
            count += value.get();
        }
        context.write(key, new IntWritable(count));
    }
}
public class WordCount {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // This wall of setters baffled me at first; as a newcomer the API feels
        // clunky. I'll look for a nicer way to write this later.
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(WordTokenizerMapper.class);
        job.setReducerClass(WordCountReducer.class);
        // The reducer doubles as a combiner: summing is associative and
        // commutative, so partial sums on the map side are safe.
        job.setCombinerClass(WordCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        // input path
        FileInputFormat.addInputPath(job, new Path("/Users/wangzhiping/workspace/hadoop-hello/src/main/resources/word.txt"));
        // output path (must not exist yet; delete it between runs)
        FileOutputFormat.setOutputPath(job, new Path("/Users/wangzhiping/workspace/hadoop-hello/src/main/resources/output"));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
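On the "nicer way" question in the comment above: one improvement Hadoop itself ships is ToolRunner, which parses generic options (-D key=value, -files, and so on) and lets the input/output paths come from the command line instead of being hard-coded. A sketch of the same driver in that style; the class name is my own, the API calls are standard Hadoop:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCountTool extends Configured implements Tool {
    @Override
    public int run(String[] args) throws Exception {
        // getConf() already contains any -D overrides parsed by ToolRunner
        Job job = Job.getInstance(getConf(), "word count");
        job.setJarByClass(WordCountTool.class);
        job.setMapperClass(WordTokenizerMapper.class);
        job.setCombinerClass(WordCountReducer.class);
        job.setReducerClass(WordCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        // Paths come from the command line: <input> <output>
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new WordCountTool(), args));
    }
}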
Run it
A note on where this actually runs: launched straight from IDEA without the cluster's XML config on the classpath, the job falls back to the LocalJobRunner and the local filesystem, which is why the absolute local paths above work.
Input file word.txt:
hello world hello world hello world
hello world hello world
hello world
hello world hello world hello world hello world
Output directory: output
Contents of part-r-00000 (hello and world each appear 3 + 2 + 1 + 4 = 10 times across the four input lines):
hello 10
world 10