Spark: "Permission denied" error when starting workers via the start-slaves.sh script

This post explains why passwordless SSH between the master and slave nodes matters in a Spark standalone cluster, and walks through a common problem: fixing the "Permission denied" error at startup by configuring the SSH trust relationships correctly.


Background:

The cluster has two Spark nodes, with hostnames master and slave. $SPARK_HOME/conf/slaves contains two entries, one per line: master and slave.
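For reference, the conf/slaves file described above would look like this (one hostname per line; the exact names depend on your cluster):

```
master
slave
```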

Error description:

Starting the workers fails as follows:

[root@master sbin]# ./start-slaves.sh 
root@master's password: 192.168.209.140: starting org.apache.spark.deploy.worker.Worker, logging to /home/spark/spark-2.2.0-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-slave.zetyun.com.out

root@master's password: master: Permission denied, please try again.

root@master's password: master: Permission denied, please try again.

master: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).

The error message shows that it is the SSH connection from master to master itself that is failing.

Solution:

The start-slaves.sh script uses passwordless SSH to log in to every host listed in the slaves file. Only the master-to-slave trust relationship had been configured; the master-to-master relationship had not, so logging in to master itself failed as shown above.

The fix is simply to append the contents of the master node's id_rsa.pub file to its own authorized_keys file.
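A minimal sketch of that fix, assuming the default key location ~/.ssh/id_rsa (running `ssh-copy-id root@master` would achieve the same result):

```shell
# On the master node, as the user that runs start-slaves.sh (root here):

# 1. Make sure ~/.ssh exists with the permissions sshd requires
mkdir -p ~/.ssh && chmod 700 ~/.ssh

# 2. Generate an RSA key pair if one does not exist yet
#    (-N "" means an empty passphrase)
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# 3. Append the master's own public key to its authorized_keys,
#    establishing the missing master -> master trust relationship
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

# 4. Verify the self-login works without a password prompt, then
#    rerun start-slaves.sh:
# ssh master 'hostname'
```

Note that sshd refuses keys when ~/.ssh or authorized_keys is group- or world-writable, which is why the chmod steps matter.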

