[Solved] Spark Web UI Shows No Workers After Startup (Alive workers: 0)

This article addresses the problem of Worker nodes failing to register with the Master in a Spark cluster and summarizes six workable fixes, including adjusting the spark-env.sh configuration, using IP addresses instead of hostnames, disabling the firewall, and changing how the daemons are started, to help you locate and resolve the issue quickly.


The Worker processes start, but no Worker nodes appear in the Web UI. This means communication between the slave nodes and the Master is broken, or the slave nodes cannot register with the Master, so even though the Workers are running, the Master cannot see them.

After a pile of searching and a full day of being tormented by this problem, here is a summary of the main fixes:

1. Explicitly set the relevant environment variables in spark-env.sh instead of relying on system defaults (a sketch follows the reference link below).

Reference: https://blog.youkuaiyun.com/qq1187239259/article/details/79489800?utm_source=blogxgwz2
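A minimal sketch of what "explicit" means here (all paths and addresses below are placeholders, adjust to your cluster; on Spark 1.x the first variable is spelled SPARK_MASTER_IP):

```bash
# $SPARK_HOME/conf/spark-env.sh -- set values explicitly instead of trusting defaults
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk   # JDK Spark should run on (placeholder path)
export SPARK_MASTER_HOST=192.168.1.100         # Master address (placeholder IP; SPARK_MASTER_IP on Spark 1.x)
export SPARK_MASTER_PORT=7077                  # Master RPC port (7077 is the default)
export SPARK_LOCAL_IP=192.168.1.101            # this node's own IP (placeholder)
export SPARK_WORKER_MEMORY=1g                  # memory each Worker offers
export SPARK_WORKER_CORES=1                    # cores each Worker offers
```

The file must exist, with consistent values, on every node, not only on the Master.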

2. In spark-env.sh, write the Master node's address as a plain 32-bit (dotted-quad IPv4) IP address, not a machine alias.

Reference: https://blog.youkuaiyun.com/u014204541/article/details/80775656
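Concretely (192.168.1.100 below is a placeholder), use the numeric address so that Worker registration cannot depend on name resolution:

```bash
# spark-env.sh -- numeric address, not a hostname or alias
export SPARK_MASTER_HOST=192.168.1.100

# The Master UI header should then show: spark://192.168.1.100:7077
# Workers must register against exactly that URL.
```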

3. Disable the firewall.
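On CentOS/RHEL 7+ (assumed here) that means stopping firewalld on every node so the Master and Workers can reach each other's ports (7077 for RPC, 8080/8081 for the web UIs, plus randomly chosen executor ports):

```bash
# CentOS / RHEL 7+ -- run on every node
systemctl stop firewalld
systemctl disable firewalld      # keep it off across reboots

# CentOS 6:        service iptables stop
# Ubuntu/Debian:   sudo ufw disable
```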

4. Change the startup method.

Reference: https://blog.youkuaiyun.com/ybdesire/article/details/70666544
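One common form of this fix (a sketch; not necessarily the exact steps in the referenced post) is to skip start-all.sh and start the daemons by hand, passing the Master URL explicitly so the Worker cannot mis-resolve it:

```bash
# On the Master node:
$SPARK_HOME/sbin/start-master.sh

# On each Worker node, register against the Master's numeric URL (placeholder IP):
$SPARK_HOME/sbin/start-slave.sh spark://192.168.1.100:7077     # script name in Spark 1.x/2.x
# $SPARK_HOME/sbin/start-worker.sh spark://192.168.1.100:7077  # renamed in Spark 3.x
```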

5. Inconsistent hostnames (the name a node reports for itself does not match its /etc/hosts entries).

Reference: https://blog.youkuaiyun.com/ningyanggege/article/details/86655419
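A quick way to check (standard Linux tooling assumed): every node's own hostname must match the name-to-IP mapping in /etc/hosts, and that mapping must be identical on all nodes:

```bash
# On each node: what does this machine call itself?
hostname
# hostnamectl set-hostname node1   # rename it if it disagrees (placeholder name)

# /etc/hosts must carry the same mapping on EVERY node, e.g.:
#   192.168.1.100  master
#   192.168.1.101  node1
#   192.168.1.102  node2
cat /etc/hosts
```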

6. Reload spark-env.sh!!!!!???

Damn it. After this thing tortured me for a whole day, I finally gave up, redid everything from scratch plus ran `source spark-env.sh`. I have no idea which step actually did the trick, but it is finally fixed.
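For the record, the "redo + reload" boiled down to roughly the following (a sketch, assuming a standard $SPARK_HOME layout; which of these actually fixed it remains a mystery):

```bash
# Reload the environment file in the current shell
source $SPARK_HOME/conf/spark-env.sh

# Bounce the cluster so every daemon picks up the new settings
$SPARK_HOME/sbin/stop-all.sh
$SPARK_HOME/sbin/start-all.sh

# Then check http://<master-ip>:8080 and look for "Alive Workers: N"
```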

For reference, this is what the driver keeps printing when no Workers are registered (or none have enough resources): the job hangs while the same WARN repeats every 15 seconds.

```
2025-03-26 09:25:04,770 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
2025-03-26 09:25:19,770 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
2025-03-26 09:25:34,771 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
...
```
Related question: spark-shell starts in YARN mode, but the Spark UI cannot be reached.

### Solution

When launching spark-shell in YARN mode, failing to reach the Spark UI usually comes down to network configuration or missing environment variables. To make sure the UI is reachable, pay attention to the following when starting the shell:

#### 1. Set the correct Master URL

When connecting spark-shell to a YARN cluster, specify YARN as the master in either client or cluster deploy mode. If you want the shell to share a process with the Driver and open the UI directly, use client mode.

```bash
# Spark 1.x syntax, as in the original answer:
./bin/spark-shell --master yarn-client

# Spark 2.x and later dropped the yarn-client/yarn-cluster master URLs:
./bin/spark-shell --master yarn --deploy-mode client
```

This keeps the Driver on the client machine, so a local browser can open the Spark application's web interface directly.

#### 2. Configure the necessary environment variables

Make sure the environment variables needed for cross-node communication and for talking to the resource manager are set. In particular, HADOOP_CONF_DIR must point at the directory holding the ResourceManager's configuration files.

```bash
export HADOOP_CONF_DIR=/path/to/hadoop/conf
```

In addition, pin PYTHONHASHSEED to a fixed value to avoid nondeterminism across Python versions.

#### 3. Access the application's web UI

Once the application is submitted, its running task state and other diagnostics can be browsed through the Driver's HTTP port (4040 by default). Since the application runs in a distributed environment, the actual address depends on the host the Driver ends up on and the port it managed to bind.

You can obtain the exact link by:

- logging in to the node the job was submitted from and looking for the URL printed in the log output; or
- opening the YARN ResourceManager web page and following the tracking link on the application's detail page in the Applications list.

Note that the port may change between restarts (if 4040 is taken, Spark tries 4041, 4042, and so on), so check the most recent logs for the accurate address.
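A quick way to pull the tracking URL from the command line instead of clicking through the ResourceManager page (assumes the Hadoop yarn client is on your PATH; the application id below is a placeholder):

```bash
# List running applications; the Tracking-URL column points at the Spark UI
yarn application -list

# Or query one application directly
yarn application -status application_1711000000000_0001
```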