10th Teddy Cup (泰迪杯) Data Mining Challenge, Problem B: Power System Load Forecasting and Analysis

This article discusses the importance of power system load forecasting, covering both short-term and mid-term horizons. LSTM models are used to forecast a regional grid's load at 15-minute intervals for the next 10 days and its daily maximum and minimum load for the next 3 months. The timing, magnitude, and likely causes of abrupt changes in industry electricity load are also analyzed, providing a basis for grid operation decisions.


一、Problem Background

Power system load (electricity demand, i.e., active power) forecasting predicts the system load over a future period by fully accounting for historical system load, economic conditions, weather, social events, and other factors. Load forecasting is an important part of power system planning and dispatch. Short-term forecasts (within two weeks) underpin unit commitment, dispatch, and operating plans within the grid; mid-term forecasts (the next few months) support decisions on grid operation and maintenance scheduling that secure electricity for industrial production and daily life; long-term forecasts (the next few years) inform grid upgrade and expansion plans, improving the economic and social benefits of the power system.

Uncertain factors such as complex, changeable weather and social events all affect power system load, which limits the applicability of traditional load forecasting models. At the same time, the increasingly diverse structure of power system load has reduced the effectiveness of these models, so load forecasting remains an open research problem.

二、Problems to Solve

1. Short- and mid-term forecasting of regional load

Using the 15-minute-interval load data for a regional grid provided in the attachments, build short- and mid-term load forecasting models:

(1) Produce load forecasts for the regional grid at 15-minute intervals for the next 10 days, and analyze their accuracy;

(2) Produce forecasts of the daily maximum and minimum load for the next 3 months, along with the times at which the maximum and minimum occur, and analyze their accuracy.

2. Mid-term forecasting of industry load

Mid-term forecasts of electricity load by industry provide an important basis for grid operation and dispatch decisions. Especially against the backdrop of COVID-19 and the national "dual carbon" goals, forecasting the load of large industry, non-general industry, general industry, commerce, and other sectors helps track each industry's production and business conditions, its resumption of work, and its future trajectory, thereby guiding and supporting industry development decisions. Using the daily load data by industry provided in the attachments, build mathematical models to study the following:

(1) Mine and analyze the timing, magnitude, and likely causes of abrupt changes in each industry's electricity load.

(2) Forecast the daily maximum and minimum load of each industry in the region for the next 3 months, and analyze the forecast accuracy.

(3) Based on each industry's actual situation, study how the national "dual carbon" goals may affect its future electricity load, and make targeted recommendations for the relevant industries.

Problem Analysis

1. Short- and mid-term forecasting of regional load
(1) Produce load forecasts at 15-minute intervals for the next 10 days and analyze their accuracy

This sub-question asks us to build a short-term load forecasting model from the 15-minute-interval regional load data in Attachment 1, produce forecasts for the next ten days at 15-minute intervals, and analyze their accuracy. Because short-term load data is one-dimensional and highly stochastic, this article proposes a univariate LSTM-based short-term forecasting model that predicts the load series at 15-minute resolution.

1. Short- and mid-term forecasting of regional load
(2) Forecast the daily maximum and minimum load for the next 3 months, along with the times at which they occur, and analyze the accuracy.

This sub-question asks for forecasts of the daily maximum and minimum load for the next 3 months, together with the times at which they occur, and an analysis of forecast accuracy. Forecasting load several months ahead is a mid-term problem, for which a single-variable series is no longer sufficient; weather, calendar features, and the overall load trend must all be considered. This article therefore proposes a multivariate LSTM forecasting model [2] that takes the daily maximum active power and its time, and the daily minimum active power and its time, as target variables.

2. Mid-term forecasting of industry load
(1) Mine and analyze the timing, magnitude, and likely causes of abrupt changes in each industry's electricity load.
(2) Forecast the daily maximum and minimum load of each industry for the next 3 months and analyze the forecast accuracy.

For this question, a standard change-point detection algorithm can locate abrupt changes in the load series. The change-point position gives the time of the change, and the difference between the load at the change point and the load immediately before it gives its magnitude. The changed records can then be extracted and correlated with weather, calendar, and trend features to identify likely causes, combined with plots and each industry's characteristics.

Code

Regional short-term forecasting

(1) Import packages

import pandas as pd
import numpy as np
import math
import matplotlib.pyplot as plt
plt.rcParams['font.sans-serif'] = ['SimHei'] 
plt.rcParams['axes.unicode_minus'] = False 
from collections import deque
import tensorflow as tf
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.layers import Dense,LSTM,Bidirectional,Dropout
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.models import Sequential
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error,r2_score
import warnings
warnings.filterwarnings('ignore')

(2) Load the data

data_m = pd.read_csv('./附件1-区域15分钟负荷数据.csv')
data_d = pd.read_csv('./附件2-行业日负荷数据.csv')
data_s = pd.read_csv('./附件3-气象数据.csv')
print("区域15分钟负荷数据",data_m.shape)
print("行业日负荷数据",data_d.shape)
print("气象数据",data_s.shape)

(3) Check for missing values

data_m.isnull().sum()

(4) Detect and handle outliers

def outlier_pro(data):
    # use the boxplot whiskers as outlier bounds
    bp = plt.boxplot(data)
    # replace outliers with the mean
    mean = data.mean()
    new_data = data.copy()
    lower_whisker = [item.get_ydata()[1] for item in bp['whiskers']][0]
    upper_whisker = [item.get_ydata()[1] for item in bp['whiskers']][1]
    for i in range(len(data)):
        if (data[i] < lower_whisker) or (data[i] > upper_whisker):
            new_data[i] = mean
    return new_data
data_m.iloc[:,1] = outlier_pro(data_m.iloc[:,1])
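The whisker positions drawn by `plt.boxplot` derive from the quartiles, so the same replacement can be done without creating a figure. A minimal standalone sketch using the default 1.5×IQR fences (`replace_outliers` is a hypothetical helper name; note matplotlib draws whiskers at the last datum inside the fences, so thresholds can differ slightly from the raw fences):

```python
import pandas as pd

def replace_outliers(series: pd.Series) -> pd.Series:
    """Replace values outside the 1.5*IQR fences with the series mean."""
    q1, q3 = series.quantile(0.25), series.quantile(0.75)
    iqr = q3 - q1
    lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    cleaned = series.copy()
    cleaned[(series < lower) | (series > upper)] = series.mean()
    return cleaned

s = pd.Series([10.0, 11.0, 9.0, 10.5, 100.0])  # 100.0 is an obvious outlier
cleaned = replace_outliers(s)
```

This avoids the side effect of drawing a throwaway plot during preprocessing.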

(5) Standardization and dataset construction

short_term = pd.DataFrame(data_m.iloc[:,1])
# standardize the data (zero mean, unit variance)
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
short_term_sca = scaler.fit_transform(short_term)


def datas(dataset,men_steps,pred_steps):
    x_data = []
    y_data = []
    for i in range(len(dataset)- men_steps-pred_steps):
        x = dataset[i:i+men_steps,0]
        y = dataset[i+men_steps:i+men_steps+pred_steps,0]
        x_data.append(x)
        y_data.append(y)
    return x_data,y_data

men_steps,pred_steps = 24,1
X,Y = datas(short_term_sca,men_steps,pred_steps)
l = len(X)-int(len(X)*0.8)  # length of the test split
print(l)
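The sliding-window construction can be sanity-checked on a toy series. The helper below mirrors the `datas` function (restated here so the snippet runs standalone, under the hypothetical name `make_windows`):

```python
import numpy as np

def make_windows(dataset, men_steps, pred_steps):
    # mirrors datas(): X = men_steps past values, Y = the next pred_steps values
    x_data, y_data = [], []
    for i in range(len(dataset) - men_steps - pred_steps):
        x_data.append(dataset[i:i + men_steps, 0])
        y_data.append(dataset[i + men_steps:i + men_steps + pred_steps, 0])
    return x_data, y_data

series = np.arange(10, dtype=float).reshape(-1, 1)  # values 0..9
X, Y = make_windows(series, men_steps=3, pred_steps=1)
```

Note that the loop's `range` drops the final admissible window; `range(len(dataset) - men_steps - pred_steps + 1)` would keep it.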

(6) Train/test split and model construction

X = np.array(X)
Y = np.array(Y)
X = np.reshape(X,(X.shape[0],men_steps,1))

# split into train and test sets
x_train,x_test,y_train,y_test = X[:-l],X[-l:],Y[:-l],Y[-l:]
print("x_train",x_train.shape)
print("x_test",x_test.shape)

neurons = 64
loss = 'mse'  # loss function
optimizer = "adam"  # optimizer
dropout = 0.01

# build the model
model = Sequential()
file_path = "best_checkpoint_0.hdf5"
checkpoint_callback = ModelCheckpoint(filepath=file_path,monitor='loss',
                                      mode='min',save_best_only=True,save_weights_only=True)
model.add(LSTM(neurons,input_shape=(men_steps,1)))
model.add(Dropout(dropout))
model.add(Dense(pred_steps))
model.compile(loss=loss, optimizer=optimizer)
history = model.fit(x_train,y_train,epochs=100,batch_size=32,validation_data=(x_test,y_test),callbacks=[checkpoint_callback])
    
plt.figure(dpi=100)
plt.plot(history.history['loss'], label='train loss')
plt.plot(history.history['val_loss'], label='val loss')
plt.title("LOSS")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend(loc='best')
plt.show()

(8) Evaluate the fit on the test set

pred_test = model.predict(x_test)
pred_test = scaler.inverse_transform(pred_test)
pred_test = pred_test.reshape(-1)
y_ = scaler.inverse_transform(y_test).reshape(-1)  # flatten so y_ and pred_test broadcast element-wise
R2_score = r2_score(y_,pred_test)
print("R2 score: ",R2_score)
from sklearn.metrics import mean_squared_log_error
rmsle = math.sqrt(mean_squared_log_error(y_,pred_test))
print('Test RMSLE: %.3f' % rmsle)
mape = np.mean(np.abs((y_-pred_test)/y_))*100
print("MAPE",mape)
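The scores reported above can be checked against hand-computed values on a tiny example (`mape` and `rmsle` here are hypothetical helper names, written with numpy only):

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100

def rmsle(y_true, y_pred):
    """Root mean squared logarithmic error."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.sqrt(np.mean((np.log1p(y_true) - np.log1p(y_pred)) ** 2))

m = mape([100.0, 200.0], [110.0, 190.0])  # (10/100 + 10/200)/2 * 100 = 7.5
r = rmsle([100.0, 200.0], [100.0, 200.0])  # identical arrays -> 0
```

Keeping both arrays 1-D matters: an `(n, 1)` column minus an `(n,)` row broadcasts to an `(n, n)` matrix and silently corrupts the MAPE.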

(9) Inspect the model structure

model.summary()


def LSTM_model(X,Y):
    l = int(len(X)*0.2)
    X = np.array(X)
    Y = np.array(Y)
    X = np.reshape(X,(X.shape[0],men_steps,1))

    # split into train and test sets
    x_train,x_test,y_train,y_test = X[:-l],X[-l:],Y[:-l],Y[-l:]
    print("x_train",x_train.shape)
    print("x_test",x_test.shape)

    neurons = 64
    loss = 'mse'  # loss function
    optimizer = "adam"  # optimizer
    dropout = 0.01

    # build the model
    model = Sequential()
    model.add(LSTM(neurons,input_shape=(men_steps,1)))
    model.add(Dropout(dropout))
    model.add(Dense(pred_steps))
    model.compile(loss=loss, optimizer=optimizer)
    model.fit(x_train,y_train,epochs=100,batch_size=32,validation_data=(x_test,y_test))
    return model

(10) Recursive one-step forecasting of future values

length = 3
k = 1
i = 0
predict = []
new_data = short_term_sca
# model = LSTM_model(X,Y)
while k <= length:
    predict_xlist = []  # history window used as model input
    predict_y = []      # predictions produced in this round
    predict_xlist.extend(new_data[new_data.shape[0]-8*men_steps:new_data.shape[0],0].tolist())
    while len(predict_y) < 96*4:  # 96 fifteen-minute points per day, 4 days per round
        predictx = np.array(predict_xlist[-men_steps:])
        predictx = np.reshape(predictx,(1,men_steps,1))

        lstm_predict = model.predict(predictx)  # one-step prediction

        new_data = np.append(new_data,lstm_predict)
        # append the newly predicted pred_steps values so they feed the next prediction
        predict_xlist.extend(lstm_predict[0])
        lstm_predict = scaler.inverse_transform(lstm_predict)
        predict.append(float(lstm_predict[0][0]))  # record the prediction
        predict_y.extend(lstm_predict[0])
    i += 1
    # i and k both count rounds; the break at i == 3 ends after the third round
    if i == 3:
        break
    else:
        new_data = new_data[96*4:]
        new_data = pd.DataFrame(new_data)
        print("new_data.shape",new_data.shape)
        new_data = np.array(new_data)
        x,y = datas(new_data,men_steps,pred_steps)
        k += 1
        model = LSTM_model(x,y)  # retrain with the predicted data appended
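The core mechanism of the loop above is recursive one-step forecasting: each prediction is appended to the history so it feeds the next step. The mechanics can be illustrated without TensorFlow using a stand-in persistence "model" (a hypothetical `persistence_model` that just repeats the last value):

```python
import numpy as np

def persistence_model(window):
    """Stand-in for model.predict: forecast = last value of the input window."""
    return window[-1]

history = list(np.array([1.0, 2.0, 3.0, 4.0]))
window_size = 3
preds = []
for _ in range(5):  # forecast 5 steps ahead, feeding each prediction back in
    window = history[-window_size:]
    yhat = persistence_model(window)
    preds.append(yhat)
    history.append(yhat)  # the prediction becomes input for the next step
```

With a persistence model every recursive step repeats the last observed value; with the LSTM, forecast errors compound the same way, which is why the notebook retrains between rounds.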

(11) Model evaluation

y_true = np.array(short_term.values[-8*men_steps:]).reshape(-1)  # flatten to avoid (n,1) vs (n,) broadcasting
pred = np.array(predict[:8*men_steps])
test_score = math.sqrt(mean_squared_error(y_true,pred))
print("test score RMSE: %.2f"% test_score)
mape = np.mean(np.abs((y_true-pred)/y_true))*100
print("MAPE",mape)
R2_score = r2_score(y_true,pred)
print("R2 score: ",R2_score)
plt.figure(dpi=100)
plt.plot(y_true,c="g",label='True')
plt.plot(predict[:8*men_steps], c="r",label='Prediction')
plt.legend()
plt.show()


Regional mid-term forecasting and analysis

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams['font.sans-serif'] = ['SimHei']  # render Chinese labels correctly
plt.rcParams['axes.unicode_minus'] = False  # render minus signs correctly
from tensorflow.keras.layers import Dense,LSTM,Bidirectional,Dropout
from tensorflow.keras.models import Sequential
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler,MinMaxScaler
from sklearn.metrics import r2_score
from sklearn.preprocessing import LabelEncoder

data_m = pd.read_csv('./附件1-区域15分钟负荷数据.csv')
data_d = pd.read_csv('./附件2-行业日负荷数据.csv')
data_s = pd.read_csv('./附件3-气象数据.csv')

# inspect duplicate rows
data_s[data_s.duplicated()]
data_m[data_m.duplicated()]
data_d[data_d.duplicated()]

# drop duplicate rows
data_s.drop_duplicates(inplace=True)

# line plot
fig = plt.figure(figsize = (10,6))
plt.title("区域15分钟负荷数据折线图、直方图、箱线图")
ax1 = fig.add_subplot(3,1,1)  
ax1.plot(data_m.iloc[:,1].index,data_m.iloc[:,1].values)
plt.grid()

# histogram
ax2 = fig.add_subplot(3,1,2)  
data_m.iloc[:,1].hist(bins=30,alpha = 0.5,ax = ax2)    
data_m.iloc[:,1].plot(kind = 'kde', secondary_y=True,ax = ax2)
plt.grid()

# horizontal boxplot
import seaborn as sns
ax3 = fig.add_subplot(3,1,3) 

sns.boxplot(data=data_m.iloc[:,1].values, orient="h")  


def outlier_pro(data):
    bp = plt.boxplot(data)
    k = 0
    # replace outliers with the mean
    mean = data.mean()
    new_data = data.copy()
    lower_whisker = [item.get_ydata()[1] for item in bp['whiskers']][0]
    upper_whisker = [item.get_ydata()[1] for item in bp['whiskers']][1]
    for i in range(len(data)):
        if (data[i] < lower_whisker) or (data[i] > upper_whisker):
            k+=1
            new_data[i] = mean
    print("Replaced {} outliers".format(k))
    return new_data
data_m.iloc[:,1] = outlier_pro(data_m.iloc[:,1])


# line plot
fig = plt.figure(figsize = (10,6))
plt.title("预处理后的区域15分钟负荷数据折线图、直方图、箱线图")
ax1 = fig.add_subplot(3,1,1)  
ax1.plot(data_m.iloc[:,1].index,data_m.iloc[:,1].values)
plt.grid()

# histogram
ax2 = fig.add_subplot(3,1,2)  
data_m.iloc[:,1].hist(bins=30,alpha = 0.5,ax = ax2)    
data_m.iloc[:,1].plot(kind = 'kde', secondary_y=True,ax = ax2)
plt.grid()

# horizontal boxplot
import seaborn as sns
ax3 = fig.add_subplot(3,1,3) 

sns.boxplot(data=data_m.iloc[:,1].values, orient="h")  

Processing the weather features

split = data_s['天气状况'].str.split('/', expand=True)
data_s['天气状况1'],data_s['天气状况2'] = split[0],split[1]
# strip the trailing ℃ and convert to numeric so the temperatures can be used as features
data_s['最高温度'] = data_s['最高温度'].apply(lambda x:x[:-1]).astype(float)
data_s['最低温度'] = data_s['最低温度'].apply(lambda x:x[:-1]).astype(float)
data_s.drop('天气状况',axis=1,inplace=True)

Processing the dates

def date_processing(x):
    x = x.str.split('日').str[0]
    x = x.str.replace('年','-').str.replace('月','-')
    return x
data_s['日期'] = date_processing(data_s['日期'])
data_s['日期'] = pd.to_datetime(data_s['日期'])
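The date cleanup can be verified on literal examples in the attachment's `年/月/日` format (the function is restated here so the snippet runs standalone):

```python
import pandas as pd

def date_processing(x):
    # same transformation as above: '2018年1月1日' -> '2018-1-1'
    x = x.str.split('日').str[0]
    x = x.str.replace('年', '-').str.replace('月', '-')
    return x

s = pd.Series(['2018年1月1日', '2021年8月31日'])
parsed = pd.to_datetime(date_processing(s))
```

`pd.to_datetime` accepts the unpadded `'2018-1-1'` form, so no zero-padding is needed.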

enc=LabelEncoder()   
data_s.iloc[:,3] = enc.fit_transform(data_s.iloc[:,3])  
data_s.iloc[:,4] = enc.fit_transform(data_s.iloc[:,4])  
data_s.iloc[:,5] = enc.fit_transform(data_s.iloc[:,5])  
data_s.iloc[:,6] = enc.fit_transform(data_s.iloc[:,6]) 

data_m['日期'] = data_m['数据时间'].str.split(' ',expand=True).iloc[:,0]
data_m['日期'] = pd.to_datetime(data_m['日期'])
data_m['小时'] = pd.to_datetime(data_m['数据时间']).dt.hour
hour_list = pd.date_range('2018-1-1','2021-8-31')
data_m_d = data_m.groupby(['日期','小时'],as_index=False).sum()
max_kw = []
min_kw = []
max_loc = []
min_loc = []
for i in hour_list:
    data = data_m_d[data_m_d['日期']==i]
    max_ = max(data['总有功功率(kw)'])
    min_ = min(data['总有功功率(kw)'])
    max_kw.append(max_)
    max_lo = data[data['总有功功率(kw)']==max_]['小时']
    max_loc.append(max_lo.values)
    min_kw.append(min_)
    min_lo = data[data['总有功功率(kw)']==min_]['小时']
    min_loc.append(min_lo.values)
    
data_m_max = pd.DataFrame(hour_list,columns=['日期'])
data_m_min = pd.DataFrame(hour_list,columns=['日期'])

# locate the daily maximum/minimum values and the hours at which they occur
max_location = []
min_location = []
for i in max_loc:
    max_location.append(i[0])
for i in min_loc:
    min_location.append(i[0])
    
data_m_max['有功功率最大值(kw)'] = pd.DataFrame(max_kw)
data_m_max['位置'] = pd.DataFrame(max_location)

data_m_min['有功功率最小值(kw)'] = pd.DataFrame(min_kw)
data_m_min['位置'] = pd.DataFrame(min_location)

Merging the data

new_m_max = pd.merge(data_m_max,data_s,on='日期')
new_m_min = pd.merge(data_m_min,data_s,on='日期')

Date features

new_m_max['月份'] = new_m_max['日期'].dt.month
new_m_max['季节'] = new_m_max['月份'].copy()
for i in range(len(new_m_max)):
    if new_m_max.loc[i,'月份'] >=3 and new_m_max.loc[i,'月份'] <= 5:
        new_m_max.loc[i,'季节'] = 1
    elif new_m_max.loc[i,'月份'] >= 6 and new_m_max.loc[i,'月份'] <= 8:
        new_m_max.loc[i,'季节'] = 2
    elif new_m_max.loc[i,'月份'] >= 9 and new_m_max.loc[i,'月份'] <=11:
        new_m_max.loc[i,'季节'] = 3
    elif new_m_max.loc[i,'月份'] >= 12 or new_m_max.loc[i,'月份'] <= 2:
        new_m_max.loc[i,'季节'] = 4
        
        
new_m_min['月份'] = new_m_min['日期'].dt.month
new_m_min['季节'] = new_m_min['月份'].copy()
for i in range(len(new_m_min)):
    if new_m_min.loc[i,'月份'] >=3 and new_m_min.loc[i,'月份'] <= 5:
        new_m_min.loc[i,'季节'] = 1
    elif new_m_min.loc[i,'月份'] >= 6 and new_m_min.loc[i,'月份'] <= 8:
        new_m_min.loc[i,'季节'] = 2
    elif new_m_min.loc[i,'月份'] >= 9 and new_m_min.loc[i,'月份'] <=11:
        new_m_min.loc[i,'季节'] = 3
    elif new_m_min.loc[i,'月份'] >= 12 or new_m_min.loc[i,'月份'] <= 2:
        new_m_min.loc[i,'季节'] = 4
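The row-by-row season loops above can be replaced by a single vectorized mapping; a sketch with the same coding (spring = 1, summer = 2, autumn = 3, winter = 4; `SEASON_BY_MONTH` is a hypothetical lookup name):

```python
import pandas as pd

SEASON_BY_MONTH = {3: 1, 4: 1, 5: 1,    # spring
                   6: 2, 7: 2, 8: 2,    # summer
                   9: 3, 10: 3, 11: 3,  # autumn
                   12: 4, 1: 4, 2: 4}   # winter

months = pd.Series(range(1, 13))
seasons = months.map(SEASON_BY_MONTH)
```

On the real data this would be `new_m_max['季节'] = new_m_max['月份'].map(SEASON_BY_MONTH)`, which avoids the easy-to-mistype chained comparisons.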
data = new_m_max.iloc[:,:2]
data = data.set_index('日期')

import matplotlib.pyplot as plt
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
data = data.resample('M').mean().ffill()
from statsmodels.tsa.seasonal import STL
res = STL(data).fit()
plt.figure(figsize=(16,10),dpi=1000)
res.plot()
plt.show() 


data = new_m_min.iloc[:,:2]
data = data.set_index('日期')

import matplotlib.pyplot as plt
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
data = data.resample('M').mean().ffill()
from statsmodels.tsa.seasonal import STL
res = STL(data).fit()
plt.figure(figsize=(16,10),dpi=1000)
res.plot()
plt.show() 

Trend features

from scipy.signal import find_peaks,argrelextrema
max_peaks = argrelextrema(np.array(new_m_max['有功功率最大值(kw)']),np.greater)  # local maxima
min_peaks = argrelextrema(np.array(new_m_max['有功功率最大值(kw)']),np.less)  # local minima
plt_maxy = []
plt_miny = []
for i in max_peaks[0]:
    plt_maxy.append(new_m_max.iloc[i,1])
for i in min_peaks[0]:
    plt_miny.append(new_m_max.iloc[i,1])
plt.figure(dpi=100)
plt.title('有功功率最大值极值情况')
plt.plot(new_m_max['有功功率最大值(kw)'])
plt.plot(max_peaks[0],plt_maxy,'x')
plt.plot(min_peaks[0],plt_miny,'x')
plt.xlabel('时间')
plt.ylabel('有功功率最大值(kw)')
plt.show()


max_peaks = argrelextrema(np.array(new_m_min['有功功率最小值(kw)']),np.greater)  # local maxima
min_peaks = argrelextrema(np.array(new_m_min['有功功率最小值(kw)']),np.less)  # local minima
plt_maxy = []
plt_miny = []
for i in max_peaks[0]:
    plt_maxy.append(new_m_min.iloc[i,1])
for i in min_peaks[0]:
    plt_miny.append(new_m_min.iloc[i,1])
plt.figure(dpi=100)
plt.title('有功功率最小值极值情况')
plt.plot(new_m_min['有功功率最小值(kw)'])
plt.plot(max_peaks[0],plt_maxy,'x')
plt.plot(min_peaks[0],plt_miny,'x')
plt.xlabel('时间')
plt.ylabel('有功功率最小值(kw)')
plt.show()


# flag extremes: local maximum = 1, local minimum = -1, otherwise 0
# (recompute the peaks of the maximum series here: at this point max_peaks/min_peaks
# hold the peaks of the minimum series from the plotting step above)
max_peaks = argrelextrema(np.array(new_m_max['有功功率最大值(kw)']),np.greater)
min_peaks = argrelextrema(np.array(new_m_max['有功功率最大值(kw)']),np.less)
new_m_max['极值'] = new_m_max['有功功率最大值(kw)'].copy()
for i in range(len(new_m_max)):
    if i in max_peaks[0]:
        new_m_max.loc[i,'极值'] = 1
    elif i in min_peaks[0]:
        new_m_max.loc[i,'极值'] = -1
    else:
        new_m_max.loc[i,'极值'] = 0
# flag extremes of the minimum series the same way
max_peaks = argrelextrema(np.array(new_m_min['有功功率最小值(kw)']),np.greater)
min_peaks = argrelextrema(np.array(new_m_min['有功功率最小值(kw)']),np.less)
new_m_min['极值'] = new_m_min['有功功率最小值(kw)'].copy()
for i in range(len(new_m_min)):
    if i in max_peaks[0]:
        new_m_min.loc[i,'极值'] = 1
    elif i in min_peaks[0]:
        new_m_min.loc[i,'极值'] = -1
    else:
        new_m_min.loc[i,'极值'] = 0
# counts of flagged extremes
new_m_max['极值'].value_counts()
new_m_min['极值'].value_counts()
# rate of change (note: diff() divided by the current value, not the previous one)
new_m_max['变化率'] = new_m_max['有功功率最大值(kw)'].diff()/new_m_max['有功功率最大值(kw)']
new_m_max.dropna(axis=0,inplace=True)

new_m_min['变化率'] = new_m_min['有功功率最小值(kw)'].diff()/new_m_min['有功功率最小值(kw)']
new_m_min.dropna(axis=0,inplace=True)
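One subtlety worth knowing about the rate-of-change feature: `diff()` divided by the series gives (x_t − x_{t−1}) / x_t, the change relative to the *current* value, whereas pandas' `pct_change()` divides by the *previous* value. A quick check of the difference:

```python
import pandas as pd

s = pd.Series([100.0, 110.0])
rate_current = (s.diff() / s).iloc[1]    # 10 / 110, relative to the current value
rate_previous = s.pct_change().iloc[1]   # 10 / 100, relative to the previous value
```

Either convention works as a feature, but the two are not interchangeable, so whichever is used must be applied consistently.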
corr_max = pd.DataFrame(new_m_max.corr()).iloc[0,:][1:]
corr_min = pd.DataFrame(new_m_min.corr()).iloc[0,:][1:]
plt.figure(dpi=100)
plt.title('各特征与有功功率最大值(kw)的相关性')
plt.plot(corr_max.index,corr_max.values,'-o')
plt.xticks(rotation=45)
for a,b in zip(corr_max.index,corr_max.values):
    plt.text(a,b+0.009,round(b,2),ha='center')
plt.show()


plt.figure(dpi=100)
plt.title('各特征与有功功率最小值(kw)的相关性')
plt.plot(corr_min.index,corr_min.values,'-o')
plt.xticks(rotation=45)
for a,b in zip(corr_min.index,corr_min.values):
    plt.text(a,b+0.009,round(b,2),ha='center')
plt.show()


# frame a series as a supervised-learning problem
def series_to_supervised(data, n_in, n_out=1, dropnan=True):
    n_vars = 1 if type(data) is list else data.shape[1]
    df = pd.DataFrame(data)
    cols, names = list(), list()
    # input sequence (t-n, ... t-1)
    for i in range(n_in, 0, -1):
        cols.append(df.shift(i))
        names += [('var%d(t-%d)' % (j+1, i)) for j in range(n_vars)]
    # forecast sequence (t, t+1, ... t+n)
    for i in range(0, n_out):
        cols.append(df.shift(-i))
        if i == 0:
            names += [('var%d(t)' % (j+1)) for j in range(n_vars)]
        else:
            names += [('var%d(t+%d)' % (j+1, i)) for j in range(n_vars)]
    # put it all together
    agg = pd.concat(cols, axis=1)
    agg.columns = names
    # drop rows containing NaN
    if dropnan:
        agg.dropna(inplace=True)
    return agg
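The output of this framing can be checked on a two-variable toy array (the function is restated compactly here so the snippet runs standalone):

```python
import numpy as np
import pandas as pd

def series_to_supervised(data, n_in, n_out=1, dropnan=True):
    # same framing as above, restated for a standalone check
    df = pd.DataFrame(data)
    n_vars = df.shape[1]
    cols, names = [], []
    for i in range(n_in, 0, -1):  # lagged inputs t-n .. t-1
        cols.append(df.shift(i))
        names += ['var%d(t-%d)' % (j + 1, i) for j in range(n_vars)]
    for i in range(n_out):        # outputs t .. t+n
        cols.append(df.shift(-i))
        names += ['var%d(t)' % (j + 1) if i == 0 else 'var%d(t+%d)' % (j + 1, i)
                  for j in range(n_vars)]
    agg = pd.concat(cols, axis=1)
    agg.columns = names
    if dropnan:
        agg.dropna(inplace=True)
    return agg

raw = np.column_stack([np.arange(5), np.arange(10, 15)])  # var1: 0..4, var2: 10..14
framed = series_to_supervised(raw, n_in=1, n_out=1)
```

The time-t target columns end up *last*, which matters below when slicing labels out of the framed array.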
pre_data = new_m_max.set_index('日期')
pre_data = pre_data['2020-9':'2020-11'].reset_index()
new_m_max = pd.concat([new_m_max,pre_data],ignore_index=True)  # append last autumn's rows as placeholder features for the forecast horizon
scaler = MinMaxScaler()
scaled = scaler.fit_transform(new_m_max.drop(['日期'],axis=1))
n_hours = 3
n_features = 12
reframed = series_to_supervised(scaled, n_hours,1)
reframed.columns[-10:]
reframed.drop(reframed.columns[-10:], axis=1, inplace=True)

# split into training and test sets
values = reframed.values
n_train_hours = 1064
train = values[:n_train_hours, :]
test = values[n_train_hours:, :]

# separate features and labels
# (the labels are the two time-t columns kept at the END of `reframed`, not the first two)
n_obs = n_hours * n_features
train_X, train_y = train[:, :n_obs], train[:, -2:]
test_X, test_y = test[:, :n_obs], test[:, -2:]
print(train_X.shape, train_y.shape,test_y.shape)

train_X = train_X.reshape((train_X.shape[0], n_hours, n_features))
test_X = test_X.reshape((test_X.shape[0], n_hours, n_features))
print("train_X.shape",train_X.shape)
print("test_X.shape",test_X.shape)

model = Sequential()
model.add(LSTM(64, input_shape=(train_X.shape[1], train_X.shape[2])))
model.add(Dense(2))
model.compile(loss='mae', optimizer='adam')
# train the model
history = model.fit(train_X, train_y, epochs=30, batch_size=32, validation_data=(test_X[:-91], test_y[:-91]))
# plot the loss curves
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], label='test')
plt.title("LOSS")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()


# predict on the test set
yhat = model.predict(test_X[:-91])
test_x = test_X[:-91].reshape((test_X[:-91].shape[0], n_hours*n_features))
# invert the scaling of the predictions
inv_yhat = np.concatenate((yhat, test_x[:, -10:]), axis=1)
inv_yhat = scaler.inverse_transform(inv_yhat)
inv_yhat_max = inv_yhat[:,0]
inv_yhat_maxloc = inv_yhat[:,1]
# invert the scaling of the test labels
test_XX = test_X.reshape((test_X.shape[0], n_hours*n_features))

test_y = test_y.reshape((len(test_y), 2))
y_ = np.concatenate((test_y, test_XX[:, -10:]), axis=1)
inv_y = scaler.inverse_transform(y_)
inv_y_max = inv_y[:-91,0]
inv_y_maxloc = inv_y[:-91,1]
# error metrics (the *maxloc metrics now use the maxloc series, not a copy of the max series)
import math
from sklearn.metrics import mean_squared_log_error
rmlsemax = math.sqrt(mean_squared_log_error(inv_y_max, inv_yhat_max))
print('Test RMSLE_max: %.3f' % rmlsemax)

rmlsemaxloc = math.sqrt(mean_squared_log_error(inv_y_maxloc, inv_yhat_maxloc))
print('Test RMSLE_maxloc: %.3f' % rmlsemaxloc)

mapemax = np.mean(np.abs((inv_y_max-inv_yhat_max)/inv_y_max))*100
print("MAPE_max",mapemax)

mapemaxloc = np.mean(np.abs((inv_y_maxloc-inv_yhat_maxloc)/inv_y_maxloc))*100
print("MAPE_maxloc",mapemaxloc)

R2max = r2_score(inv_y_max, inv_yhat_max)
print("R2 score (max): ",R2max)
R2maxloc = r2_score(inv_y_maxloc, inv_yhat_maxloc)
print("R2 score (maxloc): ",R2maxloc)

plt.figure(dpi=100)
plt.plot(inv_y_max,label='真实值')
plt.plot(inv_yhat_max,label='预测值')
plt.xlabel('时间')
plt.ylabel('有功功率最大值(kw)')
plt.legend()
plt.show()


y_3 = yhat[-3:]
# seed the first future window with the last 3 test-set predictions
# (both target columns: the daily maximum and its hour)
for i in range(3):
    test_X[-91][i][0] = y_3[i][0]
    test_X[-91][i][1] = y_3[i][1]

next_predicted_list=[]
for k in range(91):
    lastOne=(test_X[-91+k]).reshape(-1,n_features)
    backData=lastOne.tolist()
    one_next=backData[len(backData)-n_hours:]
    one_next=np.array(one_next).reshape(1,n_hours,n_features)
    next_predicted=model.predict([one_next])
    # slide the 3-step window forward, keeping (value, hour) pairs aligned,
    # and write the predictions into the NEXT day's window so they feed the next step
    y_3 = np.vstack([y_3[1:], next_predicted[0]])
    if k < 90:
        for i in range(3):
            test_X[-90+k][i][0] = y_3[i][0]
            test_X[-90+k][i][1] = y_3[i][1]
    next_predicted_list.append(next_predicted[0].tolist())  # record the prediction
test_xx = test_X[-91:].reshape((test_X[-91:].shape[0], n_hours*n_features))
# invert the scaling of the future predictions
inv_pred = np.concatenate((next_predicted_list, test_xx[-91:, -10:]), axis=1)
inv_pred = scaler.inverse_transform(inv_pred)
inv_pred_max = inv_pred[:,0]
inv_pred_maxloc = inv_pred[:,1]
plt.figure(dpi=100)
plt.plot(inv_y_max,label='真实值')
plt.plot(inv_yhat_max,label='测试集预测值')
plt.plot([i for i in range(267,267+91)],inv_pred_max,label='未来值')
plt.xlabel('时间')
plt.ylabel('有功功率最大值(kw)')
plt.legend()
plt.show()

Forecasting the future minimum values and their times

# build the features for the forecast horizon
pre_data = new_m_min.set_index('日期')
pre_data = pre_data['2020-9':'2020-11'].reset_index()
new_m_min = pd.concat([new_m_min,pre_data],ignore_index=True)  # append last autumn's rows as placeholder features
scaler = MinMaxScaler()
scaled = scaler.fit_transform(new_m_min.drop(['日期'],axis=1))
n_hours = 3
n_features = 12
reframed = series_to_supervised(scaled, n_hours,1)
reframed.columns[-10:]
reframed.drop(reframed.columns[-10:], axis=1, inplace=True)

# split into training and test sets
values = reframed.values
n_train_hours = 1064
train = values[:n_train_hours, :]
test = values[n_train_hours:, :]

# separate features and labels (the two time-t columns kept at the END of `reframed`)
n_obs = n_hours * n_features
train_X, train_y = train[:, :n_obs], train[:, -2:]
test_X, test_y = test[:, :n_obs], test[:, -2:]
print(train_X.shape, train_y.shape,test_y.shape)

train_X = train_X.reshape((train_X.shape[0], n_hours, n_features))
test_X = test_X.reshape((test_X.shape[0], n_hours, n_features))
print("train_X.shape",train_X.shape)
print("test_X.shape",test_X.shape)

model = Sequential()
model.add(LSTM(64, input_shape=(train_X.shape[1], train_X.shape[2])))
model.add(Dense(2))
model.compile(loss='mae', optimizer='adam')
# train the model
history = model.fit(train_X, train_y, epochs=30, batch_size=32, validation_data=(test_X[:-91], test_y[:-91]))
# plot the loss curves
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], label='test')
plt.title("LOSS")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend()
plt.show()


# predict on the test set
yhat = model.predict(test_X[:-91])
test_x = test_X[:-91].reshape((test_X[:-91].shape[0], n_hours*n_features))
# invert the scaling of the predictions
inv_yhat = np.concatenate((yhat, test_x[:, -10:]), axis=1)
inv_yhat = scaler.inverse_transform(inv_yhat)
inv_yhat_min = inv_yhat[:,0]
inv_yhat_minloc = inv_yhat[:,1]
# invert the scaling of the test labels
test_XX = test_X.reshape((test_X.shape[0], n_hours*n_features))

test_y = test_y.reshape((len(test_y), 2))
y_ = np.concatenate((test_y, test_XX[:, -10:]), axis=1)
inv_y = scaler.inverse_transform(y_)
inv_y_min = inv_y[:-91,0]
inv_y_minloc = inv_y[:-91,1]
# error metrics (the *minloc metrics now use the minloc series, not a copy of the min series)
import math
from sklearn.metrics import mean_squared_log_error
rmlsemin = math.sqrt(mean_squared_log_error(inv_y_min, inv_yhat_min))
print('Test RMSLE_min: %.3f' % rmlsemin)

rmlseminloc = math.sqrt(mean_squared_log_error(inv_y_minloc, inv_yhat_minloc))
print('Test RMSLE_minloc: %.3f' % rmlseminloc)

mapemin = np.mean(np.abs((inv_y_min-inv_yhat_min)/inv_y_min))*100
print("MAPE_min",mapemin)

mapeminloc = np.mean(np.abs((inv_y_minloc-inv_yhat_minloc)/inv_y_minloc))*100
print("MAPE_minloc",mapeminloc)

R2min = r2_score(inv_y_min, inv_yhat_min)
print("R2 score (min): ",R2min)
R2minloc = r2_score(inv_y_minloc, inv_yhat_minloc)
print("R2 score (minloc): ",R2minloc)

plt.figure(dpi=100)
plt.plot(inv_y_min,label='真实值')
plt.plot(inv_yhat_min,label='预测值')
plt.xlabel('时间')
plt.ylabel('有功功率最小值(kw)')
plt.legend()
plt.show()


y_3 = yhat[-3:]
# seed the first future window with the last 3 test-set predictions
# (both target columns: the daily minimum and its hour)
for i in range(3):
    test_X[-91][i][0] = y_3[i][0]
    test_X[-91][i][1] = y_3[i][1]

next_predicted_list=[]
for k in range(91):
    lastOne=(test_X[-91+k]).reshape(-1,n_features)
    backData=lastOne.tolist()
    one_next=backData[len(backData)-n_hours:]
    one_next=np.array(one_next).reshape(1,n_hours,n_features)
    next_predicted=model.predict([one_next])
    # slide the 3-step window forward, keeping (value, hour) pairs aligned,
    # and write the predictions into the NEXT day's window so they feed the next step
    y_3 = np.vstack([y_3[1:], next_predicted[0]])
    if k < 90:
        for i in range(3):
            test_X[-90+k][i][0] = y_3[i][0]
            test_X[-90+k][i][1] = y_3[i][1]
    next_predicted_list.append(next_predicted[0].tolist())  # record the prediction
test_xx = test_X[-91:].reshape((test_X[-91:].shape[0], n_hours*n_features))
# invert the scaling of the future predictions
inv_pred = np.concatenate((next_predicted_list, test_xx[-91:, -10:]), axis=1)
inv_pred = scaler.inverse_transform(inv_pred)
inv_pred_min = inv_pred[:,0]
inv_pred_minloc = inv_pred[:,1]
plt.figure(dpi=100)
plt.plot(inv_y_min,label='真实值')
plt.plot(inv_yhat_min,label='测试集预测值')
plt.plot([i for i in range(267,267+91)],inv_pred_min,label='未来值')
plt.xlabel('时间')
plt.ylabel('有功功率最小值(kw)')
plt.legend()
plt.show()

Saving the results

pd.DataFrame([inv_yhat_max,inv_yhat_maxloc]).to_csv('第二问结果最大值.csv')  # test-set predictions (daily maximum and its hour)

Industry load mid-term forecasting

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams['font.sans-serif'] = ['SimHei']  
plt.rcParams['axes.unicode_minus'] = False       
plt.style.use("ggplot")   
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.layers import Dense,LSTM,Bidirectional,Dropout,Attention
from tensorflow.keras.models import Sequential
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler,MinMaxScaler
import math
from sklearn.metrics import mean_squared_log_error
from sklearn.metrics import r2_score
import warnings
warnings.filterwarnings('ignore')

data = pd.read_csv('./附件2-行业日负荷数据.csv')
data_s = pd.read_csv('./处理好的气象数据.csv')
print("data.shape",data.shape)

Change-point (abrupt-change) detection

from scipy import stats
import numpy as np
from matplotlib import pyplot as plt 
 
def sk(data):
    # forward Mann-Kendall statistic UF for each prefix of the series
    n = len(data)
    Sk, UFk, E, Var = [0], [0], [0], [0]
    s = 0
    for i in range(1,n):
        for j in range(i):
            if data[i] > data[j]:
                s = s+1
        Sk.append(s)
        E.append((i+1)*(i+2)/4)             # mean of Sk[i]
        Var.append((i+1)*i*(2*(i+1)+5)/72)  # variance of Sk[i]
        UFk.append((Sk[i]-E[i])/np.sqrt(Var[i]))
    UFk=np.array(UFk)
    return UFk

# a is the confidence level
def MK(data,a,title,time,name):
    ufk=sk(data)          # forward statistic UF
    ubk1=sk(data[::-1])   # statistic of the reversed series
    ubk=-ubk1[::-1]       # backward statistic UB

    # report the change points (where the UF and UB curves cross)
    p=[]
    num = []
    u=ufk-ubk
    for i in range(1,len(ufk)):
        if u[i-1]*u[i]<0:
            p.append(time[i])
            num.append(data[i]-data[i-1])
    if p:
        p = pd.DataFrame(p)
        num = pd.DataFrame(num)
        mutant = pd.concat([p,num],axis=1)
#         print("change-point times and magnitudes:\n",mutant)
#         mutant.to_csv('突变位置和量级.csv', mode='a', header=False)
    else:
        print("no change point detected")

industry_tpyes = data['行业类型'].unique()
for i in industry_tpyes:
    new_data = data[data['行业类型']==i]
    MK(np.array(new_data['有功功率最大值(kw)']),0.95,i,np.array(new_data['数据时间']),'有功功率最大值')
    MK(np.array(new_data['有功功率最小值(kw)']),0.95,i,np.array(new_data['数据时间']),'有功功率最小值')
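The UF statistic used by the Mann-Kendall test above can be sanity-checked on synthetic series: a strictly increasing trend should drive the final UF value well above the 95% threshold of 1.96, and a decreasing trend well below −1.96 (`sk` is restated so the snippet runs standalone):

```python
import numpy as np

def sk(data):
    # forward Mann-Kendall statistic, as in the sk() helper above
    n = len(data)
    Sk, UFk, E, Var = [0], [0], [0], [0]
    s = 0
    for i in range(1, n):
        for j in range(i):
            if data[i] > data[j]:
                s += 1
        Sk.append(s)
        E.append((i + 1) * (i + 2) / 4)
        Var.append((i + 1) * i * (2 * (i + 1) + 5) / 72)
        UFk.append((Sk[i] - E[i]) / np.sqrt(Var[i]))
    return np.array(UFk)

uf_up = sk(np.arange(30))          # monotonically increasing series
uf_down = sk(np.arange(30)[::-1])  # monotonically decreasing series
```

Checks like this help confirm the hand-rolled statistic behaves as the test requires before trusting its crossings as change points.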

merge_data = pd.merge(data,data_s,left_on='数据时间',right_on='日期')
merge_data = merge_data.drop('日期',axis=1)
# two negative minimum-load records: clip them to 0
merge_data[merge_data.loc[:,'有功功率最小值(kw)']<0]
merge_data.loc[139,'有功功率最小值(kw)'] = 0
merge_data.loc[967,'有功功率最小值(kw)'] = 0
for i in industry_tpyes:
    new_data = merge_data[merge_data['行业类型']==i]
#     mutant_ = mutant[mutant['行业类型']==i]
    new_data.index = [i for i in range(len(new_data))]
    new_data['月份'] = pd.to_datetime(new_data['数据时间']).dt.month
    new_data['季节'] = new_data['月份'].copy()
    for i in range(len(new_data)):
        if new_data.loc[i,'月份'] >=3 and new_data.loc[i,'月份'] <= 5:
            new_data.loc[i,'季节'] = 1
        elif new_data.loc[i,'月份'] >= 6 and new_data.loc[i,'月份'] <= 8:
            new_data.loc[i,'季节'] = 2
        elif new_data.loc[i,'月份'] >= 9 and new_data.loc[i,'月份'] <=11:
            new_data.loc[i,'季节'] = 3
        elif new_data.loc[i,'月份'] >= 12 or new_data.loc[i,'月份'] <= 2:
            new_data.loc[i,'季节'] = 4
#     print(len(new_data[new_data['数据时间'].isin(mutant_['突变位置'])]))
#     corr = new_data[new_data['数据时间'].isin(mutant_['突变位置'])].corr()
#     print(corr)
#     corr.to_csv('突变原因.csv',mode='a', header=False)
def series_to_supervised(data, n_in, n_out=1, dropnan=True):
    n_vars = 1 if type(data) is list else data.shape[1]
    df = pd.DataFrame(data)
    cols, names = list(), list()
    # input sequence (t-n, ... t-1)
    for i in range(n_in, 0, -1):
        cols.append(df.shift(i))
        names += [('var%d(t-%d)' % (j+1, i)) for j in range(n_vars)]
    # forecast sequence (t, t+1, ... t+n)
    for i in range(0, n_out):
        cols.append(df.shift(-i))
        if i == 0:
            names += [('var%d(t)' % (j+1)) for j in range(n_vars)]
        else:
            names += [('var%d(t+%d)' % (j+1, i)) for j in range(n_vars)]
    # put it all together
    agg = pd.concat(cols, axis=1)
    agg.columns = names
    # drop rows containing NaN
    if dropnan:
        agg.dropna(inplace=True)
    return agg

def industry_predict(data,k):
    # frame the series as supervised learning
    data['数据时间'] = pd.to_datetime(data['数据时间'])
    data = data.set_index('数据时间')
    pre_data = data['2020-9':'2020-11'].reset_index()
    data = pd.concat([data,pre_data],ignore_index=True)  # append last autumn's rows as placeholder features
    scaler = MinMaxScaler()
    scaled = scaler.fit_transform(data.drop('数据时间',axis=1))

    n_hours = 7
    n_features = 12

    reframed = series_to_supervised(scaled, n_hours,1)
    reframed.drop(reframed.columns[-11:], axis=1, inplace=True)
    print(reframed.columns)

    # split into training and test sets
    values = reframed.values
    n_train_hours = int(len(values)*0.7)
    train = values[:n_train_hours, :]
    test = values[n_train_hours:, :]

    # separate features and labels (the time-t target column kept at the END of `reframed`)
    n_obs = n_hours * n_features
    train_X, train_y = train[:, :n_obs], train[:, -1]
    test_X, test_y = test[:, :n_obs], test[:, -1]
    print(train_X.shape, train_y.shape)

    train_X = train_X.reshape((train_X.shape[0], n_hours, n_features))
    test_X = test_X.reshape((test_X.shape[0], n_hours, n_features))

    model = Sequential()
    file_path = "best_checkpointx.hdf5"

    checkpoint_callback = ModelCheckpoint(filepath=file_path,
                                          monitor='loss',
                                          mode='min',
                                          save_best_only=True,
                                          save_weights_only=True)
    model.add(LSTM(128, input_shape=(train_X.shape[1], train_X.shape[2])))
    model.add(Dense(1))
    model.compile(loss='mae', optimizer='adam')
    # train the model
    history = model.fit(train_X, train_y, epochs=30, batch_size=32, validation_data=(test_X[:-91], test_y[:-91]),callbacks=[checkpoint_callback])
    # plot the loss curves
    plt.plot(history.history['loss'], label='train')
    plt.plot(history.history['val_loss'], label='test')
    plt.title("LOSS")
    plt.xlabel("Epochs")
    plt.ylabel("Loss")
    plt.legend()
    plt.show()
    
    # 对测试集进行预测
    yhat = model.predict(test_X[:-91])
    test_x = test_X[:-91].reshape((test_X[:-91].shape[0], n_hours*n_features))
    # 对预测数据进行逆标准化
    inv_yhat = np.concatenate((yhat, test_x[:, -11:]), axis=1)
    inv_yhat = scaler.inverse_transform(inv_yhat)
    inv_yhat = inv_yhat[:,0]
    # 对测试集标签进行逆标准化
    test_XX = test_X.reshape((test_X.shape[0], n_hours*n_features))

    test_y = test_y.reshape((len(test_y), 1))
    inv_y = np.concatenate((test_y, test_XX[:, -11:]), axis=1)
    inv_y = scaler.inverse_transform(inv_y)
    inv_y = inv_y[:-91,0]
    # Compute error metrics: RMSLE, MAPE and R2
    if k == 0:
        rmlse_max = math.sqrt(mean_squared_log_error(inv_y, inv_yhat))
        print('Max Test RMSLE: %.3f' % rmlse_max)
        mape_max = np.mean(np.abs((inv_y - inv_yhat) / inv_y)) * 100
        print("Max MAPE:", mape_max)
        R2_max = r2_score(inv_y, inv_yhat)
        print("Max R2 score:", R2_max)
    else:
        rmlse_min = math.sqrt(mean_squared_log_error(inv_y, inv_yhat))
        print('Min Test RMSLE: %.3f' % rmlse_min)
        mape_min = np.mean(np.abs((inv_y - inv_yhat) / inv_y)) * 100
        print("Min MAPE:", mape_min)
        R2_min = r2_score(inv_y, inv_yhat)
        print("Min R2 score:", R2_min)

    plt.figure(dpi=100)
    if k == 0:
        plt.plot(inv_y, label='max_actual')
        plt.plot(inv_yhat, label='max_predicted')
    else:
        plt.plot(inv_y, label='min_actual')
        plt.plot(inv_yhat, label='min_predicted')
    plt.xlabel('Time')
    plt.ylabel('Total active power (kW)')
    plt.legend()
    plt.show()
    
    y_7 = yhat[-7:]
    # Seed the rolling window: overwrite the last 7 target values with the latest predictions
    for i in range(7):
        test_X[-91][i][0] = y_7[i]
    next_predicted_list = []
    # Rolling 91-day forecast: predict one step, then feed the prediction back into the input window
    for k in range(91):
        lastOne = (test_X[-91+k]).reshape(-1, n_features)
        backData = lastOne.tolist()
        one_next = backData[len(backData)-n_hours:]
        one_next = np.array(one_next).reshape(1, n_hours, n_features)
        next_predicted = model.predict(one_next)  # predict one y value
        y_7 = y_7[1:]
        y_7 = np.append(y_7, next_predicted[0].tolist())
        for i in range(7):
            test_X[-91+k][i][0] = y_7[i]
        next_predicted_list.append(next_predicted[0].tolist())  # record the prediction
        
    test_xx = test_X[-91:].reshape((test_X[-91:].shape[0], n_hours*n_features))
    # Invert the scaling on the 91-day forecast
    inv_pred = np.concatenate((next_predicted_list, test_xx[-91:, -11:]), axis=1)
    inv_pred = scaler.inverse_transform(inv_pred)
    inv_pred = inv_pred[:, 0]
#     pd.DataFrame(inv_pred).to_csv('第二问预测结果新.csv', mode='a', header=False)
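The inverse-standardization pattern used throughout this section (concatenating the single predicted column with the remaining scaled feature columns, then calling `inverse_transform`) can be checked in isolation. Below is a minimal sketch on synthetic data; the row count, feature count and value ranges are illustrative assumptions, not values from the competition dataset:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Synthetic data: 100 rows, 12 features; column 0 plays the role of the target
rng = np.random.RandomState(0)
data = rng.rand(100, 12) * 50 + 10

scaler = MinMaxScaler()
scaled = scaler.fit_transform(data)

# Stand-in for model output: the scaled target of the first 5 rows, shape (5, 1)
yhat = scaled[:5, 0:1]
other_cols = scaled[:5, 1:]  # the remaining 11 scaled feature columns

# Rebuild full rows so the scaler's per-column inverse applies correctly
inv = scaler.inverse_transform(np.concatenate((yhat, other_cols), axis=1))
inv_yhat = inv[:, 0]

# Because yhat here equals the true scaled target, the round trip is exact
assert np.allclose(inv_yhat, data[:5, 0])
```

The key point is that `MinMaxScaler` stores one min/max pair per column, so a single predicted column must be padded back into a full-width row before it can be inverse-transformed.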
    
max_columns = ['数据时间', '有功功率最大值(kw)',  '最高温度', '最低温度', '白天风力风向',
       '夜晚风力风向', '天气状况1', '天气状况2', '月份', '季节', '极值_大', '极值_小', '变化率_大']
min_columns = ['数据时间', '有功功率最小值(kw)',  '最高温度', '最低温度', '白天风力风向',
       '夜晚风力风向', '天气状况1', '天气状况2', '月份', '季节', '极值_大', '极值_小', '变化率_小']
from scipy.signal import find_peaks,argrelextrema
for i in industry_tpyes:
    print(i)
    new_data = merge_data[merge_data['行业类型']==i]
    new_data.index = [j for j in range(len(new_data))]
    new_data['月份'] = pd.to_datetime(new_data['数据时间']).dt.month
    # Map month to season: 1 = spring (Mar-May), 2 = summer (Jun-Aug),
    # 3 = autumn (Sep-Nov), 4 = winter (Dec-Feb)
    new_data['季节'] = 0
    for j in range(len(new_data)):
        if new_data.loc[j,'月份'] >= 3 and new_data.loc[j,'月份'] <= 5:
            new_data.loc[j,'季节'] = 1
        elif new_data.loc[j,'月份'] >= 6 and new_data.loc[j,'月份'] <= 8:
            new_data.loc[j,'季节'] = 2
        elif new_data.loc[j,'月份'] >= 9 and new_data.loc[j,'月份'] <= 11:
            new_data.loc[j,'季节'] = 3
        elif new_data.loc[j,'月份'] == 12 or new_data.loc[j,'月份'] <= 2:
            new_data.loc[j,'季节'] = 4
#     mutant = pd.read_csv('./突变位置和量级.csv')
    # Analysis of the causes of load mutations (abrupt changes)

    # Trend features
    max_peaks = argrelextrema(np.array(new_data['有功功率最大值(kw)']), np.greater)  # local maxima of the daily max
    min_peaks = argrelextrema(np.array(new_data['有功功率最大值(kw)']), np.less)     # local minima of the daily max

    max_peaks_ = argrelextrema(np.array(new_data['有功功率最小值(kw)']), np.greater) # local maxima of the daily min
    min_peaks_ = argrelextrema(np.array(new_data['有功功率最小值(kw)']), np.less)    # local minima of the daily min

    # Extremum flags: 1 for a local maximum, -1 for a local minimum, 0 otherwise
    new_data['极值_大'] = 0
    new_data['极值_小'] = 0
    for j in range(len(new_data)):
        if j in max_peaks[0]:
            new_data.loc[j,'极值_大'] = 1
        elif j in min_peaks[0]:
            new_data.loc[j,'极值_大'] = -1

    for j in range(len(new_data)):
        if j in max_peaks_[0]:
            new_data.loc[j,'极值_小'] = 1
        elif j in min_peaks_[0]:
            new_data.loc[j,'极值_小'] = -1

    # Rate of change relative to the previous day
    new_data['变化率_大'] = new_data['有功功率最大值(kw)'].diff() / new_data['有功功率最大值(kw)']
    new_data['变化率_小'] = new_data['有功功率最小值(kw)'].diff() / new_data['有功功率最小值(kw)']
    new_data.dropna(axis=0, inplace=True)
    # Replace infinities (from division by zero) with 0 in the rate-of-change columns only
    new_data.loc[np.isinf(new_data['变化率_大']), '变化率_大'] = 0
    new_data.loc[np.isinf(new_data['变化率_小']), '变化率_小'] = 0
    industry_predict(new_data[max_columns],0)
    industry_predict(new_data[min_columns],1)
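The month-to-season mapping in the loop above can also be expressed without a row-by-row Python loop, using the same convention (1 = spring, Mar-May; 2 = summer, Jun-Aug; 3 = autumn, Sep-Nov; 4 = winter, Dec-Feb). A minimal sketch with made-up months:

```python
import pandas as pd

# Same season convention as above
month_to_season = {3: 1, 4: 1, 5: 1, 6: 2, 7: 2, 8: 2,
                   9: 3, 10: 3, 11: 3, 12: 4, 1: 4, 2: 4}

df = pd.DataFrame({'月份': [1, 3, 5, 7, 9, 11, 12]})  # illustrative months
df['季节'] = df['月份'].map(month_to_season)

print(df['季节'].tolist())  # → [4, 1, 1, 2, 3, 3, 4]
```

`Series.map` with a dict covers all twelve months explicitly, which also avoids the boundary mistake that a chain of inequality checks can introduce for December.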
def predict(data, k):
    # Convert the series into a supervised-learning frame
    data['数据时间'] = pd.to_datetime(data['数据时间'])
    data = data.set_index('数据时间')
    pre_data = data['2020-9':'2020-11'].reset_index()
    data = pd.concat([data, pre_data], ignore_index=True)  # append the constructed X rows (placeholders for the forecast window)
    scaler = MinMaxScaler()
    scaled = scaler.fit_transform(data.drop('数据时间', axis=1))

    n_hours = 7
    n_features = 12

    reframed = series_to_supervised(scaled, n_hours, 1)
    # Drop the 11 non-target columns of the current timestep, keeping only the target
    reframed.drop(reframed.columns[-11:], axis=1, inplace=True)
    print(reframed.columns)
    
    # Split into training and test sets
    values = reframed.values
    n_train_hours = int(len(values) * 0.7)
    train = values[:n_train_hours, :]
    test = values[n_train_hours:, :]

    # Separate features and labels
    n_obs = n_hours * n_features
    train_X, train_y = train[:, :n_obs], train[:, 0]
    test_X, test_y = test[:, :n_obs], test[:, 0]
    print(train_X.shape, train_y.shape)

    # Reshape inputs to [samples, timesteps, features] for the LSTM
    train_X = train_X.reshape((train_X.shape[0], n_hours, n_features))
    test_X = test_X.reshape((test_X.shape[0], n_hours, n_features))
    
    model = Sequential()
    file_path = "best_checkpointx.hdf5"

    # Save the weights with the lowest training loss
    checkpoint_callback = ModelCheckpoint(filepath=file_path,
                                          monitor='loss',
                                          mode='min',
                                          save_best_only=True,
                                          save_weights_only=True)
    model.add(LSTM(128, input_shape=(train_X.shape[1], train_X.shape[2])))
    model.add(Dropout(0.01))
    # Note: Attention() here assumes a custom self-attention layer; the built-in
    # tf.keras.layers.Attention expects a [query, value] list and cannot be added
    # directly to a Sequential model after an LSTM that returns a single vector.
    model.add(Attention())
    model.add(Dense(1))
    model.compile(loss='mae', optimizer='adam')
    # Train the model; the last 91 rows are placeholders for the forecast, so exclude them from validation
    history = model.fit(train_X, train_y, epochs=200, batch_size=32,
                        validation_data=(test_X[:-91], test_y[:-91]),
                        callbacks=[checkpoint_callback])
    # Plot training and validation loss
    plt.plot(history.history['loss'], label='train')
    plt.plot(history.history['val_loss'], label='test')
    plt.title("LOSS")
    plt.xlabel("Epochs")
    plt.ylabel("Loss")
    plt.legend()
    plt.show()

    # Predict on the test set (excluding the 91 placeholder rows)
    yhat = model.predict(test_X[:-91])
    test_x = test_X[:-91].reshape((test_X[:-91].shape[0], n_hours*n_features))

    # Invert the scaling on the predictions
    inv_yhat = np.concatenate((yhat, test_x[:, -11:]), axis=1)
    inv_yhat = scaler.inverse_transform(inv_yhat)
    inv_yhat = inv_yhat[:, 0]
    pd.DataFrame(inv_yhat).to_csv('./极小值.csv', mode='a', header=False)

    # Invert the scaling on the test labels in the same way
    test_XX = test_X.reshape((test_X.shape[0], n_hours*n_features))

    test_y = test_y.reshape((len(test_y), 1))
    inv_y = np.concatenate((test_y, test_XX[:, -11:]), axis=1)
    inv_y = scaler.inverse_transform(inv_y)
    inv_y = inv_y[:-91, 0]
   
    # Compute error metrics: RMSLE, MAPE and R2
    if k == 0:
        rmlse_max = math.sqrt(mean_squared_log_error(inv_y, inv_yhat))
        print('Max Test RMSLE: %.3f' % rmlse_max)
        mape_max = np.mean(np.abs((inv_y - inv_yhat) / inv_y)) * 100
        print("Max MAPE:", mape_max)
        R2_max = r2_score(inv_y, inv_yhat)
        print("Max R2 score:", R2_max)
    else:
        rmlse_min = math.sqrt(mean_squared_log_error(inv_y, inv_yhat))
        print('Min Test RMSLE: %.3f' % rmlse_min)
        mape_min = np.mean(np.abs((inv_y - inv_yhat) / inv_y)) * 100
        print("Min MAPE:", mape_min)
        R2_min = r2_score(inv_y, inv_yhat)
        print("Min R2 score:", R2_min)
for i in name:
    print(i)
    new_data = merge_data[merge_data['行业类型']==i]
    new_data.index = [j for j in range(len(new_data))]
    new_data['月份'] = pd.to_datetime(new_data['数据时间']).dt.month
    # Map month to season: 1 = spring (Mar-May), 2 = summer (Jun-Aug),
    # 3 = autumn (Sep-Nov), 4 = winter (Dec-Feb)
    new_data['季节'] = 0
    for j in range(len(new_data)):
        if new_data.loc[j,'月份'] >= 3 and new_data.loc[j,'月份'] <= 5:
            new_data.loc[j,'季节'] = 1
        elif new_data.loc[j,'月份'] >= 6 and new_data.loc[j,'月份'] <= 8:
            new_data.loc[j,'季节'] = 2
        elif new_data.loc[j,'月份'] >= 9 and new_data.loc[j,'月份'] <= 11:
            new_data.loc[j,'季节'] = 3
        elif new_data.loc[j,'月份'] == 12 or new_data.loc[j,'月份'] <= 2:
            new_data.loc[j,'季节'] = 4

    # Trend features
    max_peaks = argrelextrema(np.array(new_data['有功功率最大值(kw)']), np.greater)  # local maxima of the daily max
    min_peaks = argrelextrema(np.array(new_data['有功功率最大值(kw)']), np.less)     # local minima of the daily max

    max_peaks_ = argrelextrema(np.array(new_data['有功功率最小值(kw)']), np.greater) # local maxima of the daily min
    min_peaks_ = argrelextrema(np.array(new_data['有功功率最小值(kw)']), np.less)    # local minima of the daily min

    # Extremum flags: 1 for a local maximum, -1 for a local minimum, 0 otherwise
    new_data['极值_大'] = 0
    new_data['极值_小'] = 0
    for j in range(len(new_data)):
        if j in max_peaks[0]:
            new_data.loc[j,'极值_大'] = 1
        elif j in min_peaks[0]:
            new_data.loc[j,'极值_大'] = -1

    for j in range(len(new_data)):
        if j in max_peaks_[0]:
            new_data.loc[j,'极值_小'] = 1
        elif j in min_peaks_[0]:
            new_data.loc[j,'极值_小'] = -1

    # Rate of change relative to the previous day
    new_data['变化率_大'] = new_data['有功功率最大值(kw)'].diff() / new_data['有功功率最大值(kw)']
    new_data['变化率_小'] = new_data['有功功率最小值(kw)'].diff() / new_data['有功功率最小值(kw)']
    new_data.dropna(axis=0, inplace=True)
    # Replace infinities (from division by zero) with 0 in the rate-of-change columns only
    new_data.loc[np.isinf(new_data['变化率_大']), '变化率_大'] = 0
    new_data.loc[np.isinf(new_data['变化率_小']), '变化率_小'] = 0
    industry_predict(new_data[max_columns],0)
#     predict(new_data[min_columns],1)
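The extremum-flag construction used in both loops can be checked on a toy series: `argrelextrema` returns the indices of strict local maxima or minima (endpoints excluded), which are then mapped to ±1 flags. A minimal sketch with made-up values:

```python
import numpy as np
from scipy.signal import argrelextrema

y = np.array([1.0, 3.0, 2.0, 5.0, 4.0, 4.5, 0.5])

max_idx = argrelextrema(y, np.greater)[0]  # indices of strict local maxima
min_idx = argrelextrema(y, np.less)[0]     # indices of strict local minima

# Flag each point: 1 = local max, -1 = local min, 0 = neither
flags = np.zeros(len(y), dtype=int)
flags[max_idx] = 1
flags[min_idx] = -1

print(flags.tolist())  # → [0, 1, -1, 1, -1, 1, 0]
```

Assigning through the index arrays is equivalent to the per-row loops above but avoids the `i in max_peaks[0]` membership test on every row.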