HDFS format failure: how to handle java.net.UnknownHostException

This article walks through an HDFS format failure and the "hadoop000: ssh: Could not resolve hostname hadoop000: Temporary failure in n…" error seen when running start-dfs.sh, and gives concrete fixes: editing the hosts file and setting the native library environment variables.

I. HDFS format failure

Error output:

WARN net.DNS: Unable to determine local hostname -falling back to "localhost"
java.net.UnknownHostException: hadoop000: hadoop000
	at java.net.InetAddress.getLocalHost(InetAddress.java:1496)
	at org.apache.hadoop.net.DNS.resolveLocalHostname(DNS.java:264)
	at org.apache.hadoop.net.DNS.<clinit>(DNS.java:57)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.newBlockPoolID(NNStorage.java:966)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.newNamespaceInfo(NNStorage.java:575)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:144)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1041)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1606)
Caused by: java.net.UnknownHostException: hadoop000
	at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
	at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:922)
	at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1316)
	at java.net.InetAddress.getLocalHost(InetAddress.java:1492)
	... 8 more
20/04/15 19:11:25 WARN net.DNS: Unable to determine address of the host-falling back to "localhost" address
java.net.UnknownHostException: hadoop000: hadoop000
	at java.net.InetAddress.getLocalHost(InetAddress.java:1496)
	at org.apache.hadoop.net.DNS.resolveLocalHostIPAddress(DNS.java:287)
	at org.apache.hadoop.net.DNS.<clinit>(DNS.java:58)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.newBlockPoolID(NNStorage.java:966)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.newNamespaceInfo(NNStorage.java:575)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:144)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1041)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1606)
Caused by: java.net.UnknownHostException: hadoop000
	at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
	at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:922)
	at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1316)
	at java.net.InetAddress.getLocalHost(InetAddress.java:1492)
	... 8 more
20/04/15 19:11:25 INFO namenode.FSImage: Allocated new BlockPoolId: BP-737304661-127.0.0.1-1586949085020
20/04/15 19:11:25 INFO common.Storage: Storage directory /home/hadoop/tmp/dfs/name has been successfully formatted.
20/04/15 19:11:25 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
20/04/15 19:11:25 INFO util.ExitUtil: Exiting with status 0
20/04/15 19:11:25 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at java.net.UnknownHostException: hadoop000: hadoop000

The error output shows that the root cause is

java.net.UnknownHostException: hadoop000

i.e. the hostname hadoop000 cannot be resolved. Edit the hosts file:

vim /etc/hosts

The original contents were:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

Based on the error message, change it as follows:

127.0.0.1   hadoop000 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
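
A quick way to confirm the new mapping works before re-running the format (a minimal check; hadoop000 is the hostname from the error above):

hostname                   # should print hadoop000, the name Hadoop tries to resolve
getent hosts hadoop000     # should return the /etc/hosts entry just added
ping -c 1 hadoop000        # should get a reply from 127.0.0.1

If either lookup still fails, the hosts entry was not saved correctly.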

When saving the file you may get a "permission denied" error. In that case change the file's ownership:

sudo chown hadoop000:hadoop000 /etc/hosts
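
Alternatively, opening the file with elevated privileges avoids changing the ownership of a system file (assuming your user has sudo rights):

sudo vim /etc/hosts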

Then run the commands again:

hadoop namenode -format
start-dfs.sh
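
To double-check that the format wrote valid metadata, you can look at the NameNode storage directory reported in the log above (/home/hadoop/tmp/dfs/name; your path is whatever dfs.namenode.name.dir or hadoop.tmp.dir points to):

cat /home/hadoop/tmp/dfs/name/current/VERSION    # should list the newly generated namespaceID, clusterID and blockpoolID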

Running jps afterwards shows that the problem is resolved:

6101 SecondaryNameNode
5924 DataNode
6216 Jps
5800 NameNode
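
As a further check you can open the NameNode web UI in a browser (port 50070 is the Hadoop 2.x default, 9870 on 3.x; adjust if dfs.namenode.http-address was changed):

http://hadoop000:50070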

II. start-dfs.sh error: "hadoop000: ssh: Could not resolve hostname hadoop000: Temporary failure in n…"

There are two possible causes:

1. The native library settings are missing. (At first I thought it was an SSH problem and reinstalled SSH, but that did not fix it.)

2. The HDFS format failed.

Solutions

1. Fix for cause 1:

export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
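
These exports usually go into $HADOOP_HOME/etc/hadoop/hadoop-env.sh or the Hadoop user's shell profile; the exact file is an assumption here, use whichever your installation sources on login. After adding them, reload the environment and verify that Hadoop can find its native libraries:

source ~/.bash_profile     # or log out and back in
hadoop checknative -a      # lists which native libraries Hadoop can load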

2. The fix for cause 2 is the hosts-file change described in Part I above.
