Connection Refused: a problem I ran into with Hadoop, recorded here for future reference

Connection Refused


You get a ConnectionRefused exception when there is a machine at the address specified, but no program is listening on the specific TCP port the client is using, and no firewall in the way is silently dropping TCP connection requests. If you do not know what a TCP connection request is, please consult the TCP specification.
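As a concrete illustration, the following Python sketch reproduces exactly that condition: the machine (localhost) exists, but nothing is listening on the chosen port, so the connection attempt is refused. The port is picked at runtime purely for demonstration.

```python
import socket

# Grab a port the OS considers free, then close the listener,
# so we know nothing is listening there when we try to connect.
probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
probe.bind(("127.0.0.1", 0))
free_port = probe.getsockname()[1]
probe.close()

try:
    # The machine (localhost) exists, but no program listens on free_port,
    # so the kernel answers the TCP SYN with a RST: "connection refused".
    socket.create_connection(("127.0.0.1", free_port), timeout=2)
except ConnectionRefusedError as e:
    print("connection refused:", e)
```

This is the client-side view of the situation the exception describes; a firewall that silently drops packets would instead produce a timeout, not a refusal.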


Unless there is a configuration error at either end, a common cause is simply that the Hadoop service isn't running.


This stack trace is very common when the cluster is being shut down, because at that point Hadoop services are being torn down across the cluster, which is visible to any services and applications that haven't yet been shut down themselves. Seeing this error message during cluster shutdown is nothing to worry about.


If the application or cluster is not working, and this message appears in the log, then it is more serious.


Check that the hostname the client is using is correct. If it's in a Hadoop configuration option, examine it carefully and try pinging it by hand.
Check that the IP address the client resolves for that hostname is correct.
Make sure the destination address in the exception isn't 0.0.0.0: that means you haven't actually configured the client with the real address for the service, and it is instead picking up the server-side property that tells the server to listen on every network interface.
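The hostname checks above can be sketched in a few lines of Python. This is a minimal, hedged example; the hostnames passed in are placeholders for whatever appears in your configuration:

```python
import socket

def check_target(host: str) -> str:
    """Resolve a configured hostname and flag the common misconfigurations."""
    if host in ("", "0.0.0.0"):
        # Wildcard bind address: a server-side setting, never a client target.
        return "bad: wildcard address, not a real server address"
    try:
        ip = socket.gethostbyname(host)
    except socket.gaierror:
        return f"bad: {host!r} does not resolve at all"
    if ip in ("127.0.0.1", "127.0.1.1"):
        # Loopback is only correct if the service really runs on this machine.
        return f"warning: {host!r} resolves to loopback ({ip})"
    return f"ok: {host!r} resolves to {ip}"

print(check_target("0.0.0.0"))
print(check_target("localhost"))
```

Run it with the hostname from your own error message; a "bad" or "warning" result points at the configuration problems described above.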


If the error message says the remote service is on "127.0.0.1" or "localhost", the configuration file is telling the client that the service is on the local machine. If your client is trying to talk to a remote system, your configuration is broken.
Check that there isn't an entry for your hostname mapped to 127.0.0.1 or 127.0.1.1 in /etc/hosts (Ubuntu is notorious for this).
Check that the port the client is trying to talk to matches the port the server is offering the service on.
On the server, try a telnet localhost <port> to see if the port is open there.
On the client, try a telnet <server> <port> to see if the port is accessible remotely.
Try connecting to the server/port from a different machine, to see if it is just the single client misbehaving.
If your client and the server are in different subdomains, it may be that the configuration of the service is only publishing the basic hostname rather than the Fully Qualified Domain Name. A client in a different subdomain may then unintentionally attempt to connect to a host in the local subdomain, and fail.
If you are using a Hadoop-based product from a third party, please use the support channels provided by the vendor.
Please do not file bug reports related to your problem, as they will be closed as Invalid.
None of these are Hadoop problems; they are host, network and firewall configuration issues. As it is your cluster, only you can find and track down the problem.
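The telnet checks above can also be scripted. Here is a small sketch of a TCP port probe; the host and port in the example are placeholders for the values from your own error message:

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    False covers connection refused, timeouts, and unreachable hosts alike;
    for diagnosis, note which exception you see (refusal vs. timeout).
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # ConnectionRefusedError, socket.timeout, etc.
        return False

# Example usage: substitute your NameNode's host and RPC port.
print(port_reachable("127.0.0.1", 8020))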
According to the commands in references [1] and [2], you hit a connection-refused error when starting Flink. This error usually means the client could not connect to the specified host and port; in this case the host named in the error message is hadoop029.

There are several possible causes. First, make sure your network connection is working and that the hadoop029 host is reachable; you can test connectivity with the ping command.

Also make sure the Hadoop and Flink configuration files used when starting Flink are correct. Per the commands in references [1] and [2], you need to set the HADOOP_CONF_DIR and FLINK_CONF_DIR environment variables to point at the correct configuration paths.

Finally, make sure your Flink and Hadoop versions are compatible and that the relevant parameters in your configuration files are set correctly; a version mismatch between Flink and Hadoop can itself cause connection problems.

In summary, check network connectivity, configuration file paths, and version compatibility to resolve the connection-refused error. If the problem persists, provide more details so it can be diagnosed further.

References:
[1], [2]: 【Flink】Flink 本地 per-job 模式提交flink报错 Connection refused: localhost/127.0.0.1:8081 — https://blog.youkuaiyun.com/qq_21383435/article/details/124824857
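Before launching Flink, it is worth verifying that the two environment variables named above actually point at existing configuration directories. A minimal sketch (the variable names come from the text; the check itself is generic):

```python
import os

def conf_dir_status(var: str) -> str:
    """Report whether an expected *_CONF_DIR variable points at a real directory."""
    value = os.environ.get(var)
    if value is None:
        return "unset"
    if not os.path.isdir(value):
        return "missing-dir"
    return "ok"

for var in ("HADOOP_CONF_DIR", "FLINK_CONF_DIR"):
    print(var, conf_dir_status(var))
```

An "unset" or "missing-dir" result here is a likely explanation for the submission failure, independent of any network issue.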