1. The key error
print(','.leftOuterJoin(rdd.collect()))
2. The fix
print(','.join(rdd.collect()))
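Note that join here is Python's built-in str.join, which concatenates an iterable of strings using the separator string it is called on. It is unrelated to Spark's join family of RDD transformations. A minimal illustration, runnable without Spark at all:

# str.join is plain Python string behavior and needs no SparkContext
print(','.join(["Hadoop", "Spark", "Hive"]))  # prints: Hadoop,Spark,Hive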
3. Complete code
# Test how join() is used and what it produces
from pyspark import SparkConf, SparkContext

conf = SparkConf().setMaster("local").setAppName("Test8 App")
sc = SparkContext(conf=conf)

words = ["Hadoop", "Spark", "Hive"]  # renamed from `list` to avoid shadowing the built-in
rdd = sc.parallelize(words)

print(rdd.count())              # number of elements in the RDD
print(rdd.collect())            # pull all elements back to the driver as a Python list
print(','.join(rdd.collect()))  # str.join concatenates them with commas
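For reference, sc.parallelize on a small local list preserves element order here, so the three print calls should output:

3
['Hadoop', 'Spark', 'Hive']
Hadoop,Spark,Hive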
4. Cause of the error: ',' is a plain Python str, and str objects have no leftOuterJoin attribute, so the line in section 1 fails with an AttributeError (roughly: AttributeError: 'str' object has no attribute 'leftOuterJoin'). leftOuterJoin is a Spark transformation defined on RDDs of (key, value) pairs, whereas joining a list of strings with a separator is the job of str.join, as in the corrected line.
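For contrast, a minimal sketch of where leftOuterJoin does apply, reusing the sc from the complete code above; the pair data here is hypothetical and not from the original post:

# leftOuterJoin works on pair RDDs: it keeps every key from the left side,
# filling in None when the right side has no matching key
left = sc.parallelize([("Spark", 1), ("Hive", 2)])
right = sc.parallelize([("Spark", "fast")])
print(left.leftOuterJoin(right).collect())
# e.g. [('Spark', (1, 'fast')), ('Hive', (2, None))]  (order may vary)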