IsotonicRegression (isotonic regression)
class pyspark.ml.regression.IsotonicRegression(featuresCol='features', labelCol='label', predictionCol='prediction', weightCol=None, isotonic=True, featureIndex=0)
Currently implemented using the parallelized pool adjacent violators algorithm (PAVA). Only univariate (single-feature) regression is supported.
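The pool adjacent violators idea can be illustrated with a minimal pure-Python sketch: scan the sequence and, whenever two adjacent blocks violate the monotonicity constraint, merge them and replace both by their mean. This is only an illustration of the algorithm, not Spark's parallelized implementation; the function name `pava` is mine.

```python
def pava(y, increasing=True):
    """Pool Adjacent Violators on a sequence of (equally weighted) values."""
    if not increasing:
        # An antitonic (decreasing) fit is an isotonic fit on the negated values.
        return [-v for v in pava([-v for v in y])]
    # Each block is [sum, count]; its fitted value is sum / count.
    blocks = []
    for v in y:
        blocks.append([v, 1])
        # Merge while the last two blocks violate the increasing constraint.
        while len(blocks) > 1 and blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    out = []
    for s, c in blocks:
        out.extend([s / c] * c)
    return out

print(pava([1.0, 3.0, 2.0, 4.0]))  # [1.0, 2.5, 2.5, 4.0]: the 3,2 violation is pooled to 2.5
```

Spark parallelizes this by running PAVA on partitions and then pooling across partition results, but the merge rule is the same.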
featureIndex = Param(parent='undefined', name='featureIndex', doc='The index of the feature if featuresCol is a vector column; otherwise it has no effect.')
isotonic = Param(parent='undefined', name='isotonic', doc='Whether the output sequence should be isotonic/increasing (true) or antitonic/decreasing (false).')
model.boundaries: the boundaries, in increasing order, for which predictions are known.
01. Construct the dataset
from pyspark.sql import SparkSession
spark = SparkSession.builder.config("spark.driver.host","192.168.1.10")\
.config("spark.ui.showConsoleProgress","false")\
.appName("IsotonicRegression").master("local[*]").getOrCreate()
from pyspark.ml.linalg import Vectors
df = spark.createDataFrame([
(1.0, Vectors.dense(1.0)),
(0.0, Vectors.sparse(1, [], []))], ["label", "features"])
df.show()
Output:
+-----+---------+
|label| features|
+-----+---------+
| 1.0| [1.0]|
| 0.0|(1,[],[])|
+-----+---------+
02. Fit a model and transform the original data for inspection
from pyspark.ml.regression import IsotonicRegression
ir = IsotonicRegression()
model = ir.fit(df)
model.transform(df).show()
Output:
+-----+---------+----------+
|label| features|prediction|
+-----+---------+----------+
| 1.0| [1.0]| 1.0|
| 0.0|(1,[],[])| 0.0|
+-----+---------+----------+
03. Generate test data and inspect the transformed result:
test0 = spark.createDataFrame([(Vectors.dense(-1.0),)], ["features"])
print(model.transform(test0).head())
Output:
Row(features=DenseVector([-1.0]), prediction=0.0)
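The prediction 0.0 for the feature value -1.0 follows from the model's prediction rule: inputs outside the boundary range are clamped to the nearest boundary's prediction, and inputs between two boundaries are linearly interpolated. A minimal sketch of that rule (the helper name `isotonic_predict` is mine, not a Spark API):

```python
import bisect

def isotonic_predict(x, boundaries, predictions):
    """Predict from (boundaries, predictions) pairs as an isotonic model does."""
    # Clamp: below the first boundary use the first prediction,
    # above the last boundary use the last prediction.
    if x <= boundaries[0]:
        return predictions[0]
    if x >= boundaries[-1]:
        return predictions[-1]
    # Inside the range: linear interpolation between the two
    # surrounding boundaries.
    i = bisect.bisect_right(boundaries, x)
    x0, x1 = boundaries[i - 1], boundaries[i]
    y0, y1 = predictions[i - 1], predictions[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# With boundaries [0.0, 1.0] and predictions [0.0, 1.0] as fitted above:
print(isotonic_predict(-1.0, [0.0, 1.0], [0.0, 1.0]))  # 0.0, matching the Row output
print(isotonic_predict(0.5, [0.0, 1.0], [0.0, 1.0]))   # 0.5, interpolated
```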
04. Inspect the prediction boundaries
print(model.boundaries)
Output:
[0.0,1.0]