[Innovation] Python AI in Practice for Financial Risk Prediction: Optimizing for Higher Predictive Accuracy

1. Data Preparation

Note: the code is for reference only, intended as an exploration of AI's potential in this area. This installment is all practical content.
We will use the pandas library to process the data, assuming a CSV file loan_data.csv that contains customer features and a default label.

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, classification_report

# Read the data
data = pd.read_csv('loan_data.csv')

# Assume the 'default' column is the default label and the other columns are features
X = data.drop('default', axis=1)
y = data['default']

# Split into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
2. Model Training
We train a logistic regression model.

# Create the logistic regression model
model = LogisticRegression()

# Train the model
model.fit(X_train, y_train)

3. Model Prediction and Evaluation
Use the trained model to predict on the test set and evaluate its performance.

# Predict on the test set
y_pred = model.predict(X_test)

# Compute accuracy
accuracy = accuracy_score(y_test, y_pred)
print(f"Model accuracy: {accuracy}")

# Print the classification report
print(classification_report(y_test, y_pred))

Complete Code Example

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, classification_report

# Read the data
data = pd.read_csv('loan_data.csv')

# Assume the 'default' column is the default label and the other columns are features
X = data.drop('default', axis=1)
y = data['default']

# Split into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Create the logistic regression model
model = LogisticRegression()

# Train the model
model.fit(X_train, y_train)

# Predict on the test set
y_pred = model.predict(X_test)

# Compute accuracy
accuracy = accuracy_score(y_test, y_pred)
print(f"Model accuracy: {accuracy}")

# Print the classification report
print(classification_report(y_test, y_pred))
Notes
Data processing: in real applications the data may contain missing values, outliers, and other issues that require cleaning and preprocessing. For example, you can use pandas' fillna() to fill missing values and scikit-learn's StandardScaler to standardize features (a minimal sketch follows this list).
Feature engineering: choosing the right features is critical to model performance. Feature selection methods such as correlation analysis and the chi-square test can help identify the most valuable features.
Model selection: logistic regression is only a simple linear model; in practice you may need to try more complex models such as decision trees, random forests, or neural networks.
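As a minimal sketch of the cleaning, scaling, and chi-square selection steps just mentioned, assuming all-numeric features; the column name 'default' follows the example above, and k=5 is purely illustrative:

import pandas as pd
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn.feature_selection import SelectKBest, chi2

data = pd.read_csv('loan_data.csv')

# Fill missing values with each column's median (one simple strategy among several)
data = data.fillna(data.median(numeric_only=True))

X = data.drop('default', axis=1)
y = data['default']

# The chi-square test requires non-negative inputs, so scale to [0, 1] first
X_nonneg = MinMaxScaler().fit_transform(X)
selector = SelectKBest(chi2, k=5)  # k=5 is illustrative; tune for your data
X_selected = selector.fit_transform(X_nonneg, y)

# Standardize the selected features for models that are sensitive to scale
X_scaled = StandardScaler().fit_transform(X_selected)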
A More Complex Model Example (Random Forest)
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, classification_report

# Read the data
data = pd.read_csv('loan_data.csv')

# Assume the 'default' column is the default label and the other columns are features
X = data.drop('default', axis=1)
y = data['default']

# Split into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Create the random forest model
model = RandomForestClassifier(n_estimators=100, random_state=42)

# Train the model
model.fit(X_train, y_train)

# Predict on the test set
y_pred = model.predict(X_test)

# Compute accuracy
accuracy = accuracy_score(y_test, y_pred)
print(f"Model accuracy: {accuracy}")

# Print the classification report
print(classification_report(y_test, y_pred))

This example shows how to use a random forest model for financial risk prediction; random forests often achieve better performance than logistic regression.
Don't think that's all, though. Let's keep going and add more depth.
Financial risk prediction is one of the core tasks in finance: accurate risk predictions help institutions prepare response strategies in advance, reduce losses, and keep the financial system running smoothly. As financial markets evolve and data volumes explode, traditional risk prediction methods face major challenges. Artificial intelligence, with its powerful data analysis and pattern recognition capabilities, offers new approaches. Python, a powerful and approachable language, has a rich ecosystem of machine learning and deep learning libraries such as scikit-learn, TensorFlow, and PyTorch, which makes building efficient, accurate financial risk prediction models feasible. This article takes a deeper look at how to use AI techniques in Python to optimize financial risk prediction models at multiple levels and improve predictive accuracy.
2. Data Collection and Preprocessing

(1) Data Collection
Financial data is diverse and complex, coming from many sources with differing characteristics. Beyond common stock trading data, bond market data, and macroeconomic indicators, it also includes unstructured data such as news sentiment and social media posts, which carry rich information about market mood and latent risk.
Structured data acquisition: structured financial data can be obtained from professional data providers such as Bloomberg and Refinitiv. Taking stock option data as an example, pandas-datareader combined with the appropriate API makes retrieval straightforward:

import pandas_datareader.data as web
import datetime

start = datetime.datetime(2020, 1, 1)
end = datetime.datetime(2025, 1, 1)
# Note: pandas-datareader's Yahoo endpoints have become unreliable; yfinance is a common alternative
option_data = web.DataReader('SPY', 'yahoo-options', start, end)
Unstructured data collection: for unstructured data such as news sentiment, web scraping can pull articles from financial news sites and social media platforms. A simple news scraper using the Requests and BeautifulSoup libraries:
import requests
from bs4 import BeautifulSoup

url = 'https://examplefinancialnews.com'
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')
news_articles = soup.find_all('div', class_='news-article')
for article in news_articles:
    title = article.find('h2').text
    content = article.find('p').text
    print(title, content)

(2) Data Preprocessing

Data preprocessing is a key step in improving model accuracy; it directly affects how well the model trains.
Missing value handling: missing values in financial data can arise from data entry errors, data source problems, and other causes. Beyond mean filling, median filling, and interpolation, more sophisticated machine-learning-based imputation can be used. Taking random forest imputation as an example:

from sklearn.ensemble import RandomForestRegressor
import pandas as pd
import numpy as np

# Assumes all feature columns are numeric; categorical columns would need encoding first
data = pd.read_csv('financial_data.csv')
missing_cols = data.columns[data.isnull().any()].tolist()
for col in missing_cols:
    # Train on rows where this column is present, using only fully observed columns
    non_missing_data = data.dropna(subset=[col])
    X_train = non_missing_data.drop(missing_cols, axis=1)
    y_train = non_missing_data[col]
    X_test = data[data[col].isnull()].drop(missing_cols, axis=1)
    model = RandomForestRegressor()
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    data.loc[data[col].isnull(), col] = y_pred

Outlier handling: besides capping (winsorization), statistical methods such as the 3σ rule can flag outliers in approximately normal data; for non-normal data, the interquartile range (IQR) method is preferable (a sketch follows the code below). Outliers should be handled with care, since some may be genuine market signals.

import numpy as np
import pandas as pd

data = pd.read_csv('financial_data.csv')
# Apply the 3-sigma rule to numeric columns only
for col in data.select_dtypes(include='number').columns:
    mean = np.mean(data[col])
    std = np.std(data[col])
    upper_bound = mean + 3 * std
    lower_bound = mean - 3 * std
    data[col] = np.where((data[col] > upper_bound) | (data[col] < lower_bound), np.nan, data[col])
    data[col].fillna(data[col].median(), inplace=True)
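As mentioned above, the IQR method suits non-normal data; a minimal sketch on the same placeholder file, clipping rather than dropping values:

import pandas as pd

data = pd.read_csv('financial_data.csv')
for col in data.select_dtypes(include='number').columns:
    Q1 = data[col].quantile(0.25)
    Q3 = data[col].quantile(0.75)
    IQR = Q3 - Q1
    lower, upper = Q1 - 1.5 * IQR, Q3 + 1.5 * IQR
    # Clip rather than drop, so no rows (and potential signals) are lost
    data[col] = data[col].clip(lower, upper)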

Standardization: different scaling methods suit different models and data distributions. Z-score standardization suits approximately normal data, while Min-Max scaling is better when the data's range is known and it must be mapped to a specific interval. In some cases RobustScaler, which is more robust to outliers, is the better choice.

from sklearn.preprocessing import RobustScaler

scaler = RobustScaler()
data[['column1', 'column2']] = scaler.fit_transform(data[['column1', 'column2']])

3. Feature Engineering

(1) Feature Selection
Feature selection aims to pick out the most predictive features from a large pool, reducing model complexity and the risk of overfitting. Beyond correlation analysis, the chi-square test, and mutual information, recursive feature elimination (RFE) and model-based selection are also available (a mutual-information sketch follows the two examples below).
Recursive feature elimination (RFE): recursively removes the least important features to build an optimal feature subset.

from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

X = data.drop('target_column', axis=1)
y = data['target_column']
model = LogisticRegression()
rfe = RFE(model, n_features_to_select=10)
X_new = rfe.fit_transform(X, y)
selected_features = X.columns[rfe.support_].tolist()
Model-based feature selection: use the feature importances of a model such as a random forest to select features.
from sklearn.ensemble import RandomForestClassifier

model = RandomForestClassifier()
model.fit(X, y)
feature_importances = pd.DataFrame(model.feature_importances_, index=X.columns, columns=['importance'])
feature_importances = feature_importances.sort_values('importance', ascending=False)
selected_features = feature_importances.head(10).index.tolist()
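The mutual-information scores mentioned earlier can be computed directly in scikit-learn; a minimal sketch using the same X and y as above:

from sklearn.feature_selection import SelectKBest, mutual_info_classif

# Score each feature by its mutual information with the target and keep the top 10
selector = SelectKBest(mutual_info_classif, k=10)
X_new = selector.fit_transform(X, y)
selected_features = X.columns[selector.get_support()].tolist()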

(2) Feature Extraction

Feature extraction mines more valuable information out of the raw features. In finance, it can combine domain knowledge with time-series analysis methods.
Time-series feature extraction: for financial time series, technical indicators such as moving averages, the relative strength index (RSI), and Bollinger Bands can serve as features (a Bollinger Band sketch follows the RSI code below).

import numpy as np
import pandas as pd

def calculate_rsi(prices, period=14):
    # Classic Wilder-smoothed RSI; assumes at least `period` price points and no flat stretch long enough to make `down` zero
    deltas = np.diff(prices)
    seed = deltas[:period + 1]
    up = seed[seed >= 0].sum() / period
    down = -seed[seed < 0].sum() / period
    rs = up / down
    rsi = np.zeros_like(prices)
    rsi[:period] = 100. - 100. / (1. + rs)

    for i in range(period, len(prices)):
        delta = deltas[i - 1]
        if delta > 0:
            upval = delta
            downval = 0.
        else:
            upval = 0.
            downval = -delta
        up = (up * (period - 1) + upval) / period
        down = (down * (period - 1) + downval) / period
        rs = up / down
        rsi[i] = 100. - 100. / (1. + rs)
    return rsi

# The function must be defined before it is used
data['SMA_5'] = data['Close'].rolling(window=5).mean()
data['SMA_20'] = data['Close'].rolling(window=20).mean()
data['RSI'] = calculate_rsi(data['Close'].values, period=14)
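Bollinger Bands, also listed above, are just a rolling mean plus and minus a multiple of the rolling standard deviation; a minimal sketch on the same data, where the 20-day window and 2-standard-deviation width are conventional defaults, not requirements:

# 20-day Bollinger Bands: middle band ± 2 rolling standard deviations
data['BB_mid'] = data['Close'].rolling(window=20).mean()
rolling_std = data['Close'].rolling(window=20).std()
data['BB_upper'] = data['BB_mid'] + 2 * rolling_std
data['BB_lower'] = data['BB_mid'] - 2 * rolling_std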

Text feature extraction: for unstructured news sentiment data, bag-of-words, TF-IDF, or Word2Vec can turn text into numeric features (a bag-of-words sketch follows the TF-IDF example below).

from sklearn.feature_extraction.text import TfidfVectorizer

corpus = data['news_content'].tolist()
vectorizer = TfidfVectorizer()
X_text = vectorizer.fit_transform(corpus)
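For comparison, a minimal bag-of-words sketch of the same step; CountVectorizer counts raw term occurrences instead of weighting them, and the max_features cap is illustrative:

from sklearn.feature_extraction.text import CountVectorizer

# Raw term counts over the same corpus as above
bow_vectorizer = CountVectorizer(max_features=1000)
X_bow = bow_vectorizer.fit_transform(corpus)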

4. Model Selection and Training

(1) Common Models
Different models have different strengths and application scenarios; financial risk prediction requires choosing the right model for the specific problem.
Support vector machines (SVM): SVMs perform well on high-dimensional data and nonlinear classification. Performance can be optimized by tuning the kernel function and the regularization parameter (a grid-search sketch follows the example below).

from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X = data.drop('target_column', axis=1)
y = data['target_column']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = SVC(kernel='rbf', C=10)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
print("SVM Accuracy:", accuracy_score(y_test, y_pred))
Long short-term memory networks (LSTM): an LSTM is a special kind of recurrent neural network suited to sequence data such as financial time series. LSTM models can be built with Keras or PyTorch.
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X = data.drop('target_column', axis=1).values
y = data['target_column'].values
# Treat the feature vector as a pseudo-sequence of length n_features with one channel
X = X.reshape((X.shape[0], X.shape[1], 1))
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = Sequential()
model.add(LSTM(50, input_shape=(X_train.shape[1], X_train.shape[2])))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=10, batch_size=32)
y_pred = (model.predict(X_test) > 0.5).astype("int32")
print("LSTM Accuracy:", accuracy_score(y_test, y_pred))

(2) Model Training Techniques

Early stopping: to avoid overfitting during training, use early stopping and halt once performance on the validation set stops improving.

from tensorflow.keras.callbacks import EarlyStopping

early_stopping = EarlyStopping(monitor='val_loss', patience=3)
# Ideally monitor a separate validation split rather than the test set
model.fit(X_train, y_train, epochs=100, batch_size=32, validation_data=(X_test, y_test), callbacks=[early_stopping])

Learning rate scheduling: an appropriate learning rate is critical for convergence. A learning rate scheduler can adjust it dynamically during training.

import tensorflow as tf
from tensorflow.keras.optimizers.schedules import ExponentialDecay

initial_learning_rate = 0.01
lr_schedule = ExponentialDecay(
    initial_learning_rate,
    decay_steps=100000,
    decay_rate=0.96,
    staircase=True)
optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule)
model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])

5. Model Evaluation and Optimization

(1) Evaluation Metrics
Besides the usual accuracy, precision, recall, F1 score, and the AUC-ROC curve, other metrics such as log loss and mean average precision (MAP) are useful (a sketch follows the log-loss example below).

from sklearn.metrics import log_loss

log_loss_score = log_loss(y_test, model.predict_proba(X_test))
print("Log Loss:", log_loss_score)

(2) Model Optimization

Model fusion and stacking: beyond simple voting and weighted averaging, stacking can fuse models by training a meta-model to combine the base models' predictions.

from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

estimators = [
    ('dt', DecisionTreeClassifier()),
    ('rf', RandomForestClassifier())
]
stacking_model = StackingClassifier(estimators=estimators, final_estimator=LogisticRegression())
stacking_model.fit(X_train, y_train)
y_pred = stacking_model.predict(X_test)
print("Stacking Model Accuracy:", accuracy_score(y_test, y_pred))

Adversarial training: in deep learning models, adversarial training can improve robustness. By introducing adversarial examples, the model learns to cope better with noise and anomalies.

import tensorflow as tf

# Define a function that generates adversarial examples (FGSM-style)
def generate_adversarial_example(model, x, y, epsilon=0.01):
    x = tf.cast(x, tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        predictions = model(x)
        loss = tf.keras.losses.binary_crossentropy(y, predictions)
    # Perturb the input in the direction that increases the loss
    gradient = tape.gradient(loss, x)
    signed_grad = tf.sign(gradient)
    adversarial_example = x + epsilon * signed_grad
    return adversarial_example

# Adversarial training: augment each epoch's data with adversarial examples
for epoch in range(10):
    adversarial_examples = generate_adversarial_example(model, X_train, y_train)
    X_train_adv = tf.concat([tf.cast(X_train, tf.float32), adversarial_examples], axis=0)
    y_train_adv = tf.concat([y_train, y_train], axis=0)
    model.fit(X_train_adv, y_train_adv, epochs=1, batch_size=32)

6. Conclusion

By using Python for thorough data collection, preprocessing, feature engineering, model selection and training, and evaluation and optimization, the accuracy of financial risk prediction can be improved significantly. In practice, the data's characteristics, business requirements, and model performance must all be weighed together, and new methods and techniques continually explored, to cope with an increasingly complex market environment and provide more reliable support for financial risk management. Model interpretability is also an important research direction: future work should improve interpretability alongside predictive accuracy so that models can better support real decisions (a small interpretability sketch follows).
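On that last point, one model-agnostic starting point for interpretability is permutation importance: shuffle one feature at a time and measure how much the fitted model's score drops. A minimal sketch with scikit-learn, assuming X_test is a DataFrame so column names are available:

from sklearn.inspection import permutation_importance

# Shuffle each feature 10 times and record the mean drop in test-set score
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=42)
for name, importance in sorted(zip(X_test.columns, result.importances_mean),
                               key=lambda p: -p[1]):
    print(f"{name}: {importance:.4f}")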

Next comes the complete code. It's all practical content.

import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV, cross_val_score
from sklearn.preprocessing import StandardScaler, MinMaxScaler, RobustScaler
from sklearn.feature_selection import SelectKBest, mutual_info_classif, RFE
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, roc_curve, auc, log_loss
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau
from tensorflow.keras.optimizers import Adam
import matplotlib.pyplot as plt
import requests
from bs4 import BeautifulSoup
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from sklearn.feature_extraction.text import TfidfVectorizer
from imblearn.over_sampling import SMOTE

# Data collection
def collect_structured_data():
    # Fetch stock data from Yahoo Finance
    # (note: pandas-datareader's Yahoo source has become unreliable; yfinance is a common alternative)
    import pandas_datareader.data as web
    import datetime
    start = datetime.datetime(2015, 1, 1)
    end = datetime.datetime(2025, 1, 1)
    stock_data = web.DataReader('AAPL', 'yahoo', start, end)
    return stock_data

def collect_financial_statements():
    # Assume financial statement data is read from a local file
    financial_statements = pd.read_csv('financial_statements.csv')
    return financial_statements

def collect_news_sentiment():
    url = 'https://financialnewswebsite.com'  # replace with an actual news site
    response = requests.get(url)
    soup = BeautifulSoup(response.text, 'html.parser')
    news_articles = []
    for article in soup.find_all('div', class_='news-article'):
        title = article.find('h2').text
        content = article.find('p').text
        news_articles.append(title + ' ' + content)
    return news_articles

# Data preprocessing
def preprocess_structured_data(data):
    # Missing values: interpolate
    data = data.interpolate()
    # Outliers: IQR method
    Q1 = data.quantile(0.25)
    Q3 = data.quantile(0.75)
    IQR = Q3 - Q1
    data = data[~((data < (Q1 - 1.5 * IQR)) | (data > (Q3 + 1.5 * IQR))).any(axis=1)]
    # Standardize with StandardScaler
    scaler = StandardScaler()
    data_scaled = scaler.fit_transform(data)
    data_scaled = pd.DataFrame(data_scaled, columns=data.columns)
    return data_scaled

def preprocess_text_data(texts):
    nltk.download('stopwords')
    nltk.download('wordnet')
    lemmatizer = WordNetLemmatizer()
    stop_words = set(stopwords.words('english'))
    processed_texts = []
    for text in texts:
        text = text.lower()
        words = text.split()
        words = [lemmatizer.lemmatize(word) for word in words if word.isalpha() and word not in stop_words]
        processed_texts.append(' '.join(words))
    return processed_texts

# Feature engineering
def feature_engineering_structured(data):
    # Compute technical indicators
    data['SMA_5'] = data['Close'].rolling(window=5).mean()
    data['SMA_20'] = data['Close'].rolling(window=20).mean()
    data['Volatility'] = data['Close'].pct_change().rolling(window=20).std()
    data = data.dropna()
    return data

def feature_engineering_text(texts):
    vectorizer = TfidfVectorizer(max_features=1000)
    text_features = vectorizer.fit_transform(texts)
    return text_features

# Model building and training
def train_ml_models(X, y):
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
    # Random forest
    rf = RandomForestClassifier()
    param_grid_rf = {
        'n_estimators': [100, 200, 300],
        'max_depth': [None, 10, 20]
    }
    grid_search_rf = GridSearchCV(rf, param_grid_rf, cv=5)
    grid_search_rf.fit(X_train, y_train)
    rf_best = grid_search_rf.best_estimator_
    # Logistic regression (the liblinear solver supports both l1 and l2 penalties)
    lr = LogisticRegression(solver='liblinear')
    param_grid_lr = {
        'C': [0.1, 1, 10],
        'penalty': ['l1', 'l2']
    }
    grid_search_lr = GridSearchCV(lr, param_grid_lr, cv=5)
    grid_search_lr.fit(X_train, y_train)
    lr_best = grid_search_lr.best_estimator_
    # Support vector machine (probability=True enables predict_proba for later ensembling)
    svm = SVC(probability=True)
    param_grid_svm = {
        'C': [0.1, 1, 10],
        'kernel': ['linear', 'rbf']
    }
    grid_search_svm = GridSearchCV(svm, param_grid_svm, cv=5)
    grid_search_svm.fit(X_train, y_train)
    svm_best = grid_search_svm.best_estimator_
    return rf_best, lr_best, svm_best

def train_lstm_model(X, y):
    X = np.array(X)
    y = np.array(y)
    X = X.reshape((X.shape[0], X.shape[1], 1))
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
    model = Sequential()
    model.add(LSTM(128, input_shape=(X_train.shape[1], X_train.shape[2]), return_sequences=True))
    model.add(Dropout(0.2))
    model.add(LSTM(64))
    model.add(Dropout(0.2))
    model.add(Dense(1, activation='sigmoid'))
    optimizer = Adam(learning_rate=0.001)
    model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])
    early_stopping = EarlyStopping(monitor='val_loss', patience=10)
    reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=5, min_lr=0.00001)
    model.fit(X_train, y_train, epochs=50, batch_size=32, validation_data=(X_test, y_test), callbacks=[early_stopping, reduce_lr])
    return model

# Model evaluation
def evaluate_model(model, X_test, y_test):
    y_pred = model.predict(X_test)
    if hasattr(model, 'predict_proba'):
        y_scores = model.predict_proba(X_test)[:, 1]
    else:
        # Keras models return probabilities from predict(); threshold to get class labels
        y_scores = np.ravel(y_pred)
        y_pred = (y_scores > 0.5).astype(int)
    accuracy = accuracy_score(y_test, y_pred)
    precision = precision_score(y_test, y_pred)
    recall = recall_score(y_test, y_pred)
    f1 = f1_score(y_test, y_pred)
    fpr, tpr, thresholds = roc_curve(y_test, y_scores)
    roc_auc = auc(fpr, tpr)
    log_loss_score = log_loss(y_test, y_scores)
    print(f"Accuracy: {accuracy}")
    print(f"Precision: {precision}")
    print(f"Recall: {recall}")
    print(f"F1 - score: {f1}")
    print(f"AUC - ROC: {roc_auc}")
    print(f"Log Loss: {log_loss_score}")
    plt.plot(fpr, tpr, label='ROC curve (area = %0.2f)' % roc_auc)
    plt.plot([0, 1], [0, 1], 'k--')
    plt.xlim([0.0, 1.0])
    plt.ylim([0.0, 1.05])
    plt.xlabel('False Positive Rate')
    plt.ylabel('True Positive Rate')
    plt.title('Receiver Operating Characteristic')
    plt.legend(loc="lower right")
    plt.show()

# Model optimization: simple probability-averaging ensemble
def ensemble_models(models, X, y):
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
    predictions = []
    for model in models:
        pred = model.predict_proba(X_test)[:, 1]
        predictions.append(pred)
    ensemble_pred = np.mean(predictions, axis=0)
    ensemble_pred_binary = (ensemble_pred > 0.5).astype(int)
    accuracy = accuracy_score(y_test, ensemble_pred_binary)
    print(f"Ensemble Accuracy: {accuracy}")

# Main function
def main():
    # Data collection
    stock_data = collect_structured_data()
    financial_statements = collect_financial_statements()
    news_sentiment = collect_news_sentiment()

    # Data preprocessing
    stock_data_processed = preprocess_structured_data(stock_data)
    news_sentiment_processed = preprocess_text_data(news_sentiment)

    # Feature engineering
    stock_data_features = feature_engineering_structured(stock_data_processed)
    text_features = feature_engineering_text(news_sentiment_processed)

    # Merge features (assuming the target variable is named 'target')
    all_features = pd.concat([stock_data_features.reset_index(drop=True), pd.DataFrame(text_features.toarray())], axis=1)
    target = all_features['target']  # replace with the actual target column name
    all_features = all_features.drop('target', axis=1)

    # Handle class imbalance with SMOTE oversampling
    smote = SMOTE()
    X_resampled, y_resampled = smote.fit_resample(all_features, target)

    # Model training
    rf, lr, svm = train_ml_models(X_resampled, y_resampled)
    lstm = train_lstm_model(X_resampled, y_resampled)

    # Model evaluation
    X_train, X_test, y_train, y_test = train_test_split(X_resampled, y_resampled, test_size=0.2, random_state=42)
    print("Evaluating Random Forest:")
    evaluate_model(rf, X_test, y_test)
    print("Evaluating Logistic Regression:")
    evaluate_model(lr, X_test, y_test)
    print("Evaluating SVM:")
    evaluate_model(svm, X_test, y_test)
    X_test_lstm = np.array(X_test).reshape((X_test.shape[0], X_test.shape[1], 1))
    print("Evaluating LSTM:")
    evaluate_model(lstm, X_test_lstm, y_test)

    # Model optimization: ensemble averaging
    ensemble_models([rf, lr, svm], X_resampled, y_resampled)

if __name__ == "__main__":
    main()

For business cooperation, please message me directly and I will do my best to reply. I look forward to more researchers joining this field; I will do my best to fix any errors, and I thank everyone for the support.
