Local development is done with IntelliJ on Windows, without a local Hadoop installation; the setup tutorial I followed is this one.
Add the dependencies to the local pom.xml:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.vincent</groupId>
    <artifactId>hadoop-pro</artifactId>
    <packaging>jar</packaging>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <!-- Define the Hadoop version -->
        <hadoop-version>2.6.0-cdh5.15.1</hadoop-version>
        <java.version>1.8</java.version>
        <maven.compiler.source>${java.version}</maven.compiler.source>
        <maven.compiler.target>${java.version}</maven.compiler.target>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    </properties>

    <!-- Add the Cloudera (CDH) repository -->
    <repositories>
        <repository>
            <id>cloudera</id>
            <url>https://repository.cloudera.com/artifactory/cloudera-repos</url>
        </repository>
    </repositories>

    <dependencies>
        <!-- Hadoop client dependency -->
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>${hadoop-version}</version>
        </dependency>
        <!-- JUnit dependency -->
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.10</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>fastjson</artifactId>
            <version>1.2.41</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <!-- Java compiler -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.3</version>
                <configuration>
                    <source>${java.version}</source>
                    <target>${java.version}</target>
                    <encoding>UTF-8</encoding>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>
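With the pom in place, a quick way to confirm that the hadoop-client artifact actually resolves from the CDH repository is to run a small scratch class and print the bundled Hadoop version. This is only a sanity check, not part of the project (HadoopVersionCheck is an illustrative name):

import org.apache.hadoop.util.VersionInfo;

// Minimal sanity check: if the CDH hadoop-client dependency resolved correctly,
// this prints the bundled Hadoop version (expected to look like 2.6.0-cdh5.15.1).
public class HadoopVersionCheck {
    public static void main(String[] args) {
        System.out.println("Hadoop version: " + VersionInfo.getVersion());
    }
}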
Write the MapReduce program:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;
import java.util.Map;

public class ETLApp {

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        // Point Hadoop at the local winutils directory so the job can run on Windows
        System.setProperty("hadoop.home.dir", "C:/Users/user/Desktop/tools/winutils");

        Configuration configuration = new Configuration();

        // Delete the output directory if it already exists, otherwise the job fails
        FileSystem fileSystem = FileSystem.get(configuration);
        Path outputPath = new Path("projectData/input/etl");
        if (fileSystem.exists(outputPath)) {
            fileSystem.delete(outputPath, true);
        }

        Job job = Job.getInstance(configuration);
        job.setJarByClass(ETLApp.class);
        job.setMapperClass(ETLApp.MyMapper.class);
        job.setMapOutputKeyClass(NullWritable.class);
        job.setMapOutputValueClass(Text.class);

        FileInputFormat.setInputPaths(job, new Path("projectData/raw/trackinfo_20130721.data"));
        FileOutputFormat.setOutputPath(job, outputPath);

        job.waitForCompletion(true);
    }

    static class MyMapper extends Mapper<LongWritable, Text, NullWritable, Text> {

        // Project log-parsing helper (implementation not shown in this post)
        private final LogParser logParser = new LogParser();

        @Override
        protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
            String log = value.toString();

            // Parse the raw log line into named fields
            Map<String, String> info = logParser.parse(log);
            String ip = info.get("ip");
            String country = info.get("country");
            String province = info.get("province");
            String city = info.get("city");
            String url = info.get("url");
            String time = info.get("time");
            String pageId = ContentUtils.getPageId(url);

            // Re-emit the cleaned record as a tab-separated line
            StringBuilder builder = new StringBuilder();
            builder.append(ip).append("\t");
            builder.append(country).append("\t");
            builder.append(province).append("\t");
            builder.append(city).append("\t");
            builder.append(url).append("\t");
            builder.append(time).append("\t");
            builder.append(pageId).append("\t");

            context.write(NullWritable.get(), new Text(builder.toString()));
        }
    }
}
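The mapper relies on two project classes that are not shown in this post: LogParser (turns a raw log line into a field map) and ContentUtils.getPageId (extracts a page id from the URL). The sketch below only illustrates the contract the mapper expects; the tab-separated field positions and the topic/<id> URL pattern are assumptions made for illustration, not the real project code:

import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical stand-in for the project's LogParser: assumes the raw log line
// carries url, time and ip at fixed, tab-separated positions (positions are assumed).
class LogParser {
    public Map<String, String> parse(String log) {
        Map<String, String> info = new HashMap<>();
        String[] fields = log.split("\t");   // assumption: tab-separated raw log
        if (fields.length >= 14) {
            info.put("url", fields[1]);      // assumption: field positions
            info.put("time", fields[3]);
            info.put("ip", fields[13]);
            // country/province/city would normally come from an IP lookup
            info.put("country", "unknown");
            info.put("province", "unknown");
            info.put("city", "unknown");
        }
        return info;
    }
}

// Hypothetical stand-in for ContentUtils: pulls a numeric page id out of URLs
// of the form ".../topic/<id>..."; the pattern is an assumption.
class ContentUtils {
    private static final Pattern TOPIC = Pattern.compile("topic/(\\d+)");

    public static String getPageId(String url) {
        Matcher m = TOPIC.matcher(url == null ? "" : url);
        return m.find() ? m.group(1) : "";
    }
}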
This post walks through setting up local Hadoop development with IntelliJ on Windows, covering the POM dependency configuration, adding the CDH repository, and writing a MapReduce program to process data, with a concrete example of parsing logs for ETL.