"Could not find or load main class" in Hadoop or Java using Maven

I've recently been working through the examples in Hadoop: The Definitive Guide (《hadoop权威指南》). They run fine from Eclipse, but I couldn't get them to run from the command line. Pushing on through the book, I then ran into mvn commands I didn't understand either, so I asked a senior student, who lent me Maven in Action (《Maven实战》); that book turned out to be much easier going than the Hadoop one. Once I had the basic mvn commands down I went back to Hadoop, but even after a clean compile, running still produced a "Could not find main class" error. A search on Stack Overflow showed that this is actually a very basic Java problem.
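For context, the workflow for the book's examples boils down to roughly this (a sketch, assuming a standard Maven project layout that compiles classes into target/classes; HADOOP_CLASSPATH is the standard variable the hadoop launcher consults for user classes):

$ mvn compile                              # compile sources into target/classes
$ export HADOOP_CLASSPATH=target/classes   # let the hadoop command see the compiled classes
$ hadoop ConfigurationPrinter              # run the example

It was this last step that kept failing.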


The original analysis of the "Could not find or load main class" error is on Stack Overflow:

http://stackoverflow.com/questions/18093928/what-does-could-not-find-or-load-main-class-mean#

The reason I kept getting the "main class not found" error was that I hadn't prefixed the class name with its package name on the command line (see the ConfigurationPrinter sketch after the output below).
The original command:

$ hadoop ConfigurationPrinter

Result:

Error: Could not find or load main class ConfigurationPrinter


The corrected command:

$ hadoop printer.ConfigurationPrinter

Result:

mapreduce.jobtracker.address=local
yarn.resourcemanager.scheduler.monitor.policies=org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy
dfs.namenode.resource.check.interval=5000
mapreduce.jobhistory.client.thread-count=10
yarn.app.mapreduce.am.containerlauncher.threadpool-initial-size=10
mapred.child.java.opts=-Xmx200m
mapreduce.jobtracker.retiredjobs.cache.size=1000
dfs.client.https.need-auth=false
yarn.admin.acl=*
yarn.app.mapreduce.am.job.committer.cancel-timeout=60000
mapreduce.job.emit-timeline-data=false
fs.ftp.host.port=21
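

The fix works because of a plain Java rule, not anything Hadoop-specific: a class declared inside a package can only be launched by its fully qualified name, with the classpath pointing at the package root. For reference, here is a minimal sketch of what the book's ConfigurationPrinter looks like (the package name printer is just what I used; substitute whatever package your source file declares):

package printer;  // this declaration is why the bare name "ConfigurationPrinter" fails

import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

// Prints every property in the effective Hadoop configuration.
public class ConfigurationPrinter extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        Configuration conf = getConf();
        // Configuration is Iterable over its key/value entries
        for (Map.Entry<String, String> entry : conf) {
            System.out.printf("%s=%s%n", entry.getKey(), entry.getValue());
        }
        return 0;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new ConfigurationPrinter(), args));
    }
}

Because of the package declaration, the compiled class lands in target/classes/printer/ConfigurationPrinter.class. The launcher therefore needs the name printer.ConfigurationPrinter, with the classpath pointing at target/classes, not at the printer directory itself.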



A reader later posted a related error, hit while submitting a PySpark/HBase script:

zlx@zlx-virtual-machine:/usr/local/spark/mycode/rdd$ /usr/local/spark/bin/spark-submit SparkOperateHBase.py
/usr/local/hadoop/libexec/hadoop-functions.sh: line 2360: HADOOP_ORG.APACHE.HADOOP.HBASE.UTIL.GETJAVAPROPERTY_USER: bad substitution
/usr/local/hadoop/libexec/hadoop-functions.sh: line 2455: HADOOP_ORG.APACHE.HADOOP.HBASE.UTIL.GETJAVAPROPERTY_OPTS: bad substitution
Error: Could not find or load main class org.apache.hadoop.hbase.util.GetJavaProperty
25/05/20 20:24:51 WARN Utils: Your hostname, zlx-virtual-machine resolves to a loopback address: 127.0.1.1; using 192.168.134.144 instead (on interface ens33)
25/05/20 20:24:51 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
25/05/20 20:24:51 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
25/05/20 20:24:55 ERROR Converter: Failed to load converter: org.apache.spark.examples.pythonconverters.StringToImmutableBytesWritableConverter
Traceback (most recent call last):
  File "/usr/local/spark/mycode/rdd/SparkOperateHBase.py", line 13, in <module>
    sc.parallelize(rawData).map(lambda x: (x[0],x.split(','))).saveAsNewAPIHadoopDataset(conf=conf,keyConverter=keyConv,valueConverter=valueConv)
  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 1410, in saveAsNewAPIHadoopDataset
  File "/usr/local/spark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
  File "/usr/local/spark/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.saveAsHadoopDataset.
: java.lang.ClassNotFoundException: org.apache.spark.examples.pythonconverters.StringToImmutableBytesWritableConverter
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:348)
    at org.apache.spark.util.Utils$.classForName(Utils.scala:238)
    at org.apache.spark.api.python.Converter$$anonfun$getInstance$1$$anonfun$1.apply(PythonHadoopUtil.scala:46)
    at org.apache.spark.api.python.Converter$$anonfun$getInstance$1$$anonfun$1.apply(PythonHadoopUtil.scala:45)
    at scala.util.Try$.apply(Try.scala:192)
    at org.apache.spark.api.python.Converter$$anonfun$getInstance$1.apply(PythonHadoopUtil.scala:45)
    at org.apache.spark.api.python.Converter$$anonfun$getInstance$1.apply(PythonHadoopUtil.scala:44)
    at scala.Option.map(Option.scala:146)
    at org.apache.spark.api.python.Converter$.getInstance(PythonHadoopUtil.scala:44)
    at org.apache.spark.api.python.PythonRDD$.getKeyValueConverters(PythonRDD.scala:470)
    at org.apache.spark.api.python.PythonRDD$.convertRDD(PythonRDD.scala:483)
    at org.apache.spark.api.python.PythonRDD$.saveAsHadoopDataset(PythonRDD.scala:580)
    at org.apache.spark.api.python.PythonRDD.saveAsHadoopDataset(PythonRDD.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:748)
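
That failure is the same class-loading problem in another guise: the converter class org.apache.spark.examples.pythonconverters.StringToImmutableBytesWritableConverter ships in Spark's examples jar, which spark-submit does not put on the classpath by default. A sketch of the usual fix, with a hypothetical jar path (point --jars at wherever your Spark examples jar actually lives):

$ /usr/local/spark/bin/spark-submit \
    --jars /usr/local/spark/examples/jars/spark-examples_2.11-2.4.0.jar \
    SparkOperateHBase.py

The earlier "bad substitution" lines and the GetJavaProperty error appear to come from a separate, widely reported incompatibility between the hbase wrapper script and newer hadoop-functions.sh versions, independent of the Python traceback.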