Inspecting an RDD that reads from an HA HDFS cluster fails as soon as Spark contacts the NameNode:

scala> rdd1.toDebugString
14/07/20 09:42:05 INFO Client: Retrying connect to server: mycluster/202.106.199.34:8020. Already tried 0 time(s); maxRetries=45
14/07/20 09:42:25 WARN Client: Address change detected. Old: mycluster/202.106.199.34:8020 New: mycluster:8020
14/07/20 09:42:25 INFO Client: Retrying connect to server: mycluster:8020. Already tried 0 time(s); maxRetries=45
java.io.IOException: Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "master/192.168.1.202"; destination host is: "mycluster":8020;
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
at org.apache.hadoop.ipc.Client.call(Client.java:1351)
at org.apache.hadoop.ipc.Client.call(Client.java:1300)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:651)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java
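
The log is the key symptom: the Hadoop IPC client treats the logical nameservice name `mycluster` as a physical hostname and tries to open a socket to mycluster:8020 (DNS even resolves it to an unrelated address, 202.106.199.34). A logical nameservice exists only in the HDFS client configuration, so the client must be able to see that configuration. Below is a minimal sketch of the `hdfs-site.xml` entries that define an HA nameservice; the name `mycluster` matches the log, but the NameNode IDs nn1/nn2 and the hosts master/slave1 are assumptions for illustration:

```xml
<!-- hdfs-site.xml: logical HA nameservice definition (NameNode hosts are illustrative) -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>master:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>slave1:8020</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```

If these properties are not visible to Spark, the DFSClient falls back to treating `mycluster` as a host:port pair, which is exactly the failure above.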

This is a configuration problem, not a code problem. Setting `HADOOP_CONF_DIR` in Spark's `spark-env.sh` to the directory that holds the Hadoop configuration files lets Spark inherit the cluster configuration, in particular the `dfs.nameservices` setting above, which fixes Spark's inability to resolve the cluster name in a high-availability (HA) deployment. A non-HA setup does not need this step, because there is only one NameNode and its RPC address is an ordinary hostname.
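
A minimal sketch of the `spark-env.sh` change, assuming the Hadoop configuration lives under /etc/hadoop/conf (an illustrative path; point it at wherever your core-site.xml and hdfs-site.xml actually live, on every node that runs Spark):

```bash
# conf/spark-env.sh
# Make Spark load core-site.xml and hdfs-site.xml, including dfs.nameservices.
# /etc/hadoop/conf is an assumed location; adjust to your installation.
export HADOOP_CONF_DIR=/etc/hadoop/conf
```

After restarting spark-shell, the logical name resolves and the original command returns the lineage instead of throwing:

```scala
scala> rdd1.toDebugString   // now prints the RDD's debug lineage rather than an IOException
```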