Hadoop error: org.apache.hadoop.mapreduce.lib.input.FileSplit cannot be cast to org.apache.hadoop.mapred.FileSplit

When running a MapReduce program I wrote myself, the following error appeared:

java.lang.Exception: java.lang.ClassCastException: org.apache.hadoop.mapreduce.lib.input.FileSplit cannot be cast to org.apache.hadoop.mapred.FileSplit
	at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:354)
Caused by: java.lang.ClassCastException: org.apache.hadoop.mapreduce.lib.input.FileSplit cannot be cast to org.apache.hadoop.mapred.FileSplit
	at MyMatrix$MatrixMapper.map(MyMatrix.java:36)
	at MyMatrix$MatrixMapper.map(MyMatrix.java:1)
	at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:364)
	at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:223)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)

The error is: org.apache.hadoop.mapreduce.lib.input.FileSplit cannot be cast to org.apache.hadoop.mapred.FileSplit

Cause:

The wrong package was imported. The classes under the mapreduce package belong to the new API, while those under mapred belong to the old API. Switching to the classes under the mapreduce package resolves the error.

For example, my program contained

import org.apache.hadoop.mapred.FileSplit;

so I removed that line and replaced it with the following:

import org.apache.hadoop.mapreduce.lib.input.*;

and the error no longer occurred.
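To illustrate the fix, here is a minimal mapper sketch (the class and output names are illustrative, not from the original MyMatrix program). With the new API, FileSplit must be imported from org.apache.hadoop.mapreduce.lib.input, and the result of context.getInputSplit() can then be cast to it; casting to the old org.apache.hadoop.mapred.FileSplit is what triggers the ClassCastException above.

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
// New-API class; importing org.apache.hadoop.mapred.FileSplit here
// instead would reproduce the ClassCastException.
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

public class FileNameMapper extends Mapper<LongWritable, Text, Text, Text> {

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // getInputSplit() returns org.apache.hadoop.mapreduce.InputSplit;
        // at runtime it is the new-API FileSplit, so this cast succeeds.
        FileSplit split = (FileSplit) context.getInputSplit();
        String fileName = split.getPath().getName();
        context.write(new Text(fileName), value);
    }
}
```

The general rule: within one job, stick to a single API generation. If the driver uses org.apache.hadoop.mapreduce.Job (new API), then the Mapper, Reducer, input/output formats, and FileSplit must all come from the mapreduce packages, never from mapred.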
