A file read/write demo written against the Hadoop API, packaged with Maven, fails at runtime with:
Exception in thread "main" java.io.IOException: No FileSystem for scheme: file
After some searching I found the article below, which explains the cause convincingly, so I'm recording it here.
http://stackoverflow.com/questions/17265002/hadoop-no-filesystem-for-scheme-file
The following is an excerpt from the original:
Why this happened to us
Different JARs (hadoop-commons for LocalFileSystem, hadoop-hdfs for DistributedFileSystem) each contain a file called org.apache.hadoop.fs.FileSystem in their META-INF/services directory. This file lists the canonical class names of the filesystem implementations they want to declare (this is the Java Service Provider Interface mechanism; see org.apache.hadoop.fs.FileSystem, line 2116).
When we use maven-assembly, it merges all our JARs into one, and all the META-INF/services/org.apache.hadoop.fs.FileSystem files overwrite each other. Only one of these files remains (the last one that was added). In this case, the FileSystem list from hadoop-commons overwrites the list from hadoop-hdfs, so DistributedFileSystem was no longer declared.
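To make the overwrite concrete, here is a small stdlib-only Java sketch (the class and method names are mine, invented for illustration). It simulates two "jars" as directories, each shipping its own META-INF/services/org.apache.hadoop.fs.FileSystem file: on a normal classpath the class loader sees both declaration files, but a flat copy-merge like maven-assembly's keeps only the last one.

```java
import java.io.IOException;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.Collections;
import java.util.List;

public class ServiceMergeDemo {
    static final String SERVICE = "META-INF/services/org.apache.hadoop.fs.FileSystem";

    // Simulate a jar as a directory containing one service declaration file.
    static Path fakeJar(String impl) throws IOException {
        Path dir = Files.createTempDirectory("jar");
        Path f = dir.resolve(SERVICE);
        Files.createDirectories(f.getParent());
        Files.write(f, Collections.singletonList(impl));
        return dir;
    }

    // On a normal classpath, getResources() returns EVERY same-named service file.
    static int serviceFilesOnClasspath(Path... jars) throws IOException {
        URL[] urls = new URL[jars.length];
        for (int i = 0; i < jars.length; i++) urls[i] = jars[i].toUri().toURL();
        try (URLClassLoader cl = new URLClassLoader(urls, null)) {
            return Collections.list(cl.getResources(SERVICE)).size();
        }
    }

    // A flat merge into one tree: each copy overwrites the previous, last one wins.
    static List<String> assemblyMerge(Path... jars) throws IOException {
        Path merged = Files.createTempDirectory("assembly");
        Path dst = merged.resolve(SERVICE);
        Files.createDirectories(dst.getParent());
        for (Path jar : jars)
            Files.copy(jar.resolve(SERVICE), dst, StandardCopyOption.REPLACE_EXISTING);
        return Files.readAllLines(dst);
    }

    public static void main(String[] args) throws IOException {
        Path hdfs = fakeJar("org.apache.hadoop.hdfs.DistributedFileSystem");
        Path commons = fakeJar("org.apache.hadoop.fs.LocalFileSystem");
        System.out.println(serviceFilesOnClasspath(hdfs, commons)); // prints 2
        System.out.println(assemblyMerge(hdfs, commons));
        // prints [org.apache.hadoop.fs.LocalFileSystem] -- DistributedFileSystem is gone
    }
}
```

This is exactly why the fat jar can still read local files but no longer knows any declared HDFS implementation.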
How we fixed it
After loading the Hadoop configuration, but just before doing anything FileSystem-related, we call this:
hadoopConfig.set("fs.hdfs.impl",
        org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
hadoopConfig.set("fs.file.impl",
        org.apache.hadoop.fs.LocalFileSystem.class.getName());
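A commonly cited alternative (it appears in the same Stack Overflow thread) is to prevent the service files from overwriting each other in the first place: build the fat jar with maven-shade-plugin and its ServicesResourceTransformer, which concatenates same-named META-INF/services files instead of letting the last one win. A sketch of the relevant plugin configuration (version and surrounding build section omitted):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <transformers>
          <!-- Merge (concatenate) META-INF/services files from all JARs -->
          <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
        </transformers>
      </configuration>
    </execution>
  </executions>
</plugin>
```

With this, both the hadoop-commons and hadoop-hdfs declarations survive in the merged jar, so no fs.*.impl overrides are needed in code.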