21. Machine Learning and Deep Learning in Practice: Model Building, Evaluation, and Optimization


1. Machine Learning Basics: Adding Regularization to a Model

In machine learning, regularization is an important technique for preventing overfitting. The following are the detailed steps for adding regularization to a logistic regression model:
1. Load the data

import pandas as pd
feats = pd.read_csv('data/bank_data_feats_e3.csv', index_col=0)
target = pd.read_csv('data/bank_data_target_e2.csv', index_col=0)
2. Split the dataset: divide the data into training and test sets, reserving part of the training set for validation.
from sklearn.model_selection import train_test_split
test_size = 0.2
random_state = 13
X_train, X_test, y_train, y_test = train_test_split(feats, target, test_size=test_size, random_state=random_state)
3. Inspect the dataset dimensions
print(f'Shape of X_train: {X_train.shape}')
print(f'Shape of y_train: {y_train.shape}')
print(f'Shape of X_test: {X_test.shape}')
print(f'Shape of y_test: {y_test.shape}')
4. Instantiate the models: try both regularization penalties (l1 and l2) with 10-fold cross-validation.
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
Cs = np.logspace(-2, 6, 9)
model_l1 = LogisticRegressionCV(Cs=Cs, penalty='l1', cv=10, solver='liblinear', random_state=42)
model_l2 = LogisticRegressionCV(Cs=Cs, penalty='l2', cv=10, random_state=42)
5. Fit the models
model_l1.fit(X_train, y_train['y'])
model_l2.fit(X_train, y_train['y'])
6. Inspect the best regularization parameter for each model
print(f'Best hyperparameter for l1 regularization model: {model_l1.C_[0]}')
print(f'Best hyperparameter for l2 regularization model: {model_l2.C_[0]}')
7. Evaluate the models
y_pred_l1 = model_l1.predict(X_test)
y_pred_l2 = model_l2.predict(X_test)
from sklearn import metrics
accuracy_l1 = metrics.accuracy_score(y_pred=y_pred_l1, y_true=y_test)
accuracy_l2 = metrics.accuracy_score(y_pred=y_pred_l2, y_true=y_test)
precision_l1, recall_l1, fscore_l1, _ = metrics.precision_recall_fscore_support(y_pred=y_pred_l1, y_true=y_test, average='binary')
precision_l2, recall_l2, fscore_l2, _ = metrics.precision_recall_fscore_support(y_pred=y_pred_l2, y_true=y_test, average='binary')
8. Inspect the coefficient values
coef_list = [f'{feature}: {coef}' for coef, feature in sorted(zip(model_l1.coef_[0], X_train.columns.values.tolist()))]
for item in coef_list:
    print(item)
coef_list = [f'{feature}: {coef}' for coef, feature in sorted(zip(model_l2.coef_[0], X_train.columns.values.tolist()))]
for item in coef_list:
    print(item)

Through the steps above, we have learned how to use regularization and cross-validation to evaluate a model. Regularization helps ensure the model does not overfit the training data, so that it performs better on new data.
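The `Cs` grid passed to `LogisticRegressionCV` above is worth unpacking: in scikit-learn, `C` is the inverse of the regularization strength λ, so larger values mean weaker regularization. A minimal sketch of the grid used above:

```python
import numpy as np

# LogisticRegressionCV searches over C, the INVERSE of the regularization
# strength: larger C means weaker regularization. The grid above spans
# nine values from 1e-2 to 1e6 on a log scale.
Cs = np.logspace(-2, 6, 9)
lambdas = 1.0 / Cs  # the corresponding penalty strengths
```

Cross-validation then picks the `C` whose held-out performance is best, which is exposed as `model.C_`.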

2. Machine Learning vs. Deep Learning: Building a Logistic Regression Model

Next, we create a logistic regression model with Keras:
1. Load the data

import pandas as pd
feats = pd.read_csv('bank_data_feats.csv')
target = pd.read_csv('bank_data_target.csv')
2. Split the dataset
from sklearn.model_selection import train_test_split
test_size = 0.2
random_state = 42
X_train, X_test, y_train, y_test = train_test_split(feats, target, test_size=test_size, random_state=random_state)
3. Initialize the model
from keras.models import Sequential
model = Sequential()
4. Add a fully connected layer
from keras.layers import Dense
model.add(Dense(1, input_dim=X_train.shape[1]))
5. Add an activation function
from keras.layers import Activation
model.add(Activation('sigmoid'))
6. Compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
print(model.summary())
7. Fit the model
history = model.fit(X_train, y_train['y'], epochs=10, validation_split=0.2)
8. Plot the loss and accuracy curves
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Validation'], loc='upper left')
plt.show()
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Validation'], loc='upper left')
plt.show()
9. Evaluate the model
test_loss, test_acc = model.evaluate(X_test, y_test['y'])
print(f'The loss on the test set is {test_loss:.4f} and the accuracy is {test_acc*100:.3f}%')
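This Keras model is exactly logistic regression: one `Dense(1)` unit followed by a sigmoid computes p = σ(w·x + b). A minimal numpy sketch of what that single unit does (the weights and input here are made-up illustrative values, not the fitted ones):

```python
import numpy as np

def sigmoid(z):
    # logistic function, the output activation used in the model above
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical weights w, bias b, and a single 2-D input x.
w = np.array([0.2, -0.5])
b = 0.1
x = np.array([1.0, 2.0])
p = sigmoid(w @ x + b)  # predicted probability of the positive class
```

Training with `binary_crossentropy` adjusts `w` and `b` so these probabilities match the labels, which is the same objective scikit-learn's logistic regression optimizes.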

3. Deep Learning Basics: Building a Single-Layer Neural Network for Binary Classification

The steps for building a single-layer neural network for binary classification are as follows:
1. Load the required packages

from keras.models import Sequential 
from keras.layers import Dense, Activation 
import numpy
import matplotlib.pyplot as plt 
import matplotlib
%matplotlib inline 
import matplotlib.patches as mpatches
from utils import load_dataset, plot_decision_boundary
2. Set the random seed
seed = 1
3. Load the dataset
X, Y = load_dataset()
print("X size = ", X.shape)
print("Y size = ", Y.shape)
print("Number of examples = ", X.shape[0])
4. Plot the dataset
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0)
reds = Y == 0
blues = Y == 1
class_1=plt.scatter(X[reds, 0], X[reds, 1], c="red", s=40, edgecolor='k')
class_2=plt.scatter(X[blues, 0], X[blues, 1], c="blue", s=40, edgecolor='k')
plt.legend((class_1, class_2),('class 1','class 2'))
5. Build a logistic regression model
numpy.random.seed(seed)
model = Sequential()
model.add(Dense(1, activation='sigmoid', input_dim=2)) 
model.compile(optimizer='sgd', loss='binary_crossentropy') 
model.fit(X, Y, batch_size=5, epochs=100,verbose=1)
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0)
plot_decision_boundary(lambda x: model.predict(x), X, Y) 
plt.title("Logistic Regression")
6. Build a neural network with one hidden layer (3 nodes)
numpy.random.seed(seed)
model = Sequential() 
model.add(Dense(3, activation='relu', input_dim=2))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='sgd', loss='binary_crossentropy') 
model.fit(X, Y, batch_size=5, epochs=200, verbose=1)
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0)
plot_decision_boundary(lambda x: model.predict(x), X, Y) 
plt.title("Decision Boundary for Neural Network with hidden layer size 3")
7. Build a neural network with one hidden layer (6 nodes)
numpy.random.seed(seed)
model = Sequential() 
model.add(Dense(6, activation='relu', input_dim=2))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='sgd', loss='binary_crossentropy') 
model.fit(X, Y, batch_size=5, epochs=400, verbose=1) 
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0)
plot_decision_boundary(lambda x: model.predict(x), X, Y) 
plt.title("Decision Boundary for Neural Network with hidden layer size 6")
8. Build a neural network with one hidden layer (3 nodes) and tanh activation
numpy.random.seed(seed)
model = Sequential() 
model.add(Dense(3, activation='tanh', input_dim=2))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='sgd', loss='binary_crossentropy') 
model.fit(X, Y, batch_size=5, epochs=200, verbose=1) 
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0)
plot_decision_boundary(lambda x: model.predict(x), X, Y) 
plt.title("Decision Boundary for Neural Network with hidden layer size 3")
9. Build a neural network with one hidden layer (6 nodes) and tanh activation
numpy.random.seed(seed)
model = Sequential() 
model.add(Dense(6, activation='tanh', input_dim=2))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='sgd', loss='binary_crossentropy') 
model.fit(X, Y, batch_size=5, epochs=400, verbose=1) 
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0)
plot_decision_boundary(lambda x: model.predict(x), X, Y) 
plt.title("Decision Boundary for Neural Network with hidden layer size 6")

By comparing neural networks with different structures, we can see that increasing the number of hidden-layer nodes improves the model's ability to capture nonlinear boundaries, while the choice of activation function also affects performance.
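One way to quantify the extra capacity of the larger hidden layer is to count trainable parameters; a quick sketch for the two architectures above:

```python
def dense_params(n_in, n_out):
    # a fully connected layer has n_in * n_out weights plus n_out biases
    return n_in * n_out + n_out

# 2-D input -> hidden layer -> single sigmoid output
small = dense_params(2, 3) + dense_params(3, 1)  # hidden layer of 3 nodes
large = dense_params(2, 6) + dense_params(6, 1)  # hidden layer of 6 nodes
```

The 6-node network has nearly twice as many parameters as the 3-node one, which is where its ability to fit a more complex decision boundary comes from.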

4. Applying Neural Networks to Diabetes Diagnosis

4.1 Loading and Splitting the Dataset

import numpy
data = numpy.loadtxt("./data/pima-indians-diabetes.csv", delimiter=",")
X = data[:,0:8]
y = data[:,8]
print("Number of Examples in the Dataset = ", X.shape[0])
print("Number of Features for each example = ", X.shape[1]) 
print("Possible Output Classes = ", numpy.unique(y))
seed = 1
numpy.random.seed(seed)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
print(y_test.shape)
print ("Number of examples in training set = ", X_train.shape[0])
print ("Number of examples in test set = ", X_test.shape[0])

4.2 Building Neural Networks with Different Hidden-Layer Structures

4.2.1 One hidden layer (8 nodes)
numpy.random.seed(seed)
from keras.models import Sequential
from keras.layers import Dense
classifier = Sequential()
classifier.add(Dense(units = 8, activation = 'relu', input_dim = 8))
classifier.add(Dense(units = 1, activation = 'sigmoid'))
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
history=classifier.fit(X_train, y_train, batch_size = 5, epochs = 300, validation_data=(X_test, y_test))
import matplotlib.pyplot as plt 
import matplotlib
%matplotlib inline 
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0) 
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train loss', 'test loss'], loc='upper right')
print("Best Accuracy on training set = ", max(history.history['acc'])*100)
print("Best Accuracy on test set = ", max(history.history['val_acc'])*100)
4.2.2 Two hidden layers (16 and 8 nodes)
numpy.random.seed(seed)
classifier = Sequential()
classifier.add(Dense(units = 16, activation = 'relu', input_dim = 8))
classifier.add(Dense(units = 8, activation = 'relu'))
classifier.add(Dense(units = 1, activation = 'sigmoid'))
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
history=classifier.fit(X_train, y_train, batch_size = 5, epochs = 350, validation_data=(X_test, y_test))
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0)
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train loss', 'test loss'], loc='upper right')
print("Best Accuracy on training set = ", max(history.history['acc'])*100)
print("Best Accuracy on test set = ", max(history.history['val_acc'])*100)
4.2.3 Three hidden layers (16, 8, and 4 nodes)
numpy.random.seed(seed)
classifier = Sequential()
classifier.add(Dense(units = 16, activation = 'relu', input_dim = 8))
classifier.add(Dense(units = 8, activation = 'relu'))
classifier.add(Dense(units = 4, activation = 'relu'))
classifier.add(Dense(units = 1, activation = 'sigmoid'))
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
history=classifier.fit(X_train, y_train, batch_size = 5, epochs = 400, validation_data=(X_test, y_test))
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0)
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train loss', 'test loss'], loc='upper right')
print("Best Accuracy on training set = ", max(history.history['acc'])*100)
print("Best Accuracy on test set = ", max(history.history['val_acc'])*100)

By building neural networks with different hidden-layer structures, we find that as the number of hidden layers increases, accuracy on the training set improves, but overfitting may appear on the test set.
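The overfitting signal can be read straight off the history object: a widening gap between the best training and best test accuracy. A sketch using a made-up dict shaped like Keras' `History.history` (the values are illustrative, not real results):

```python
# Hypothetical history dict in the shape Keras produces during fit().
history = {'acc':     [0.70, 0.78, 0.85, 0.90],
           'val_acc': [0.69, 0.74, 0.73, 0.71]}

# Training accuracy keeps climbing while test accuracy peaks and falls back;
# the gap between the two best values is the overfitting signal.
gap = max(history['acc']) - max(history['val_acc'])
```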

4.3 Model Evaluation and Selection

4.3.1 Model evaluation
import numpy
data = numpy.loadtxt("./data/pima-indians-diabetes.csv", delimiter=",")
X = data[:,0:8]
y = data[:,8]
print("Number of Examples in the Dataset = ", X.shape[0])
print("Number of Features for each example = ", X.shape[1]) 
print("Possible Output Classes = ", numpy.unique(y))
from keras.models import Sequential
from keras.layers import Dense
def build_model():
    model = Sequential()
    model.add(Dense(16, input_dim=8, activation='relu'))
    model.add(Dense(8, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
import numpy
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import cross_val_score
seed =1
numpy.random.seed(seed)
n_folds = 5
epochs=300
batch_size=5
classifier = KerasClassifier(build_fn=build_model, epochs=epochs, batch_size=batch_size, verbose=1)
kfold = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=seed)
results = cross_val_score(classifier, X, y, cv=kfold)
for f in range(n_folds):
    print("Test accuracy at fold ", f+1, " = ", results[f])
print("\n")
print("Final Cross-validation Test Accuracy:", results.mean())
print("Standard Deviation of Final Test Accuracy:", results.std())
4.3.2 Model selection
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import cross_val_score
import numpy
data = numpy.loadtxt("./data/pima-indians-diabetes.csv", delimiter=",")
X = data[:,0:8]
y = data[:,8]
def build_model_1(activation='relu', optimizer='adam'):
    model = Sequential()
    model.add(Dense(4, input_dim=8, activation=activation))
    model.add(Dense(4, activation=activation))
    model.add(Dense(4, activation=activation))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])
    return model
def build_model_2(activation='relu', optimizer='adam'):
    model = Sequential()
    model.add(Dense(16, input_dim=8, activation=activation))
    model.add(Dense(8, activation=activation))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])
    return model
def build_model_3(activation='relu', optimizer='adam'):
    model = Sequential()
    model.add(Dense(8, input_dim=8, activation=activation))
    model.add(Dense(8, activation=activation))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])
    return model
seed = 2
numpy.random.seed(seed)
n_folds = 5
batch_size=5
epochs=300
results =[]
models = [build_model_1, build_model_2, build_model_3]
for m in range(len(models)):
    classifier = KerasClassifier(build_fn=models[m], epochs=epochs, batch_size=batch_size, verbose=0)
    kfold = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=seed)
    result = cross_val_score(classifier, X, y, cv=kfold)
    results.append(result)
for m in range(len(models)):
    print("Model", m+1,"Test Accuracy =", results[m].mean())
seed = 2
numpy.random.seed(seed)
n_folds = 5
epochs = [250, 300]
batches = [5, 10]
results =[]
for e in range(len(epochs)):
    for b in range(len(batches)):
        classifier = KerasClassifier(build_fn=build_model_3, epochs=epochs[e], batch_size=batches[b], verbose=0)
        kfold = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=seed)
        result = cross_val_score(classifier, X, y, cv=kfold)
        results.append(result)
c = 0
for e in range(len(epochs)):
    for b in range(len(batches)):
        print("batch_size =", batches[b]," epochs =", epochs[e], " Test Accuracy =", results[c].mean())
        c += 1
seed = 2
numpy.random.seed(seed)
n_folds = 5
batch_size=10
epochs=300
results =[]
optimizers = ['rmsprop', 'adam','sgd']
activations = ['relu', 'tanh']
for o in range(len(optimizers)):
    for a in range(len(activations)):
        optimizer = optimizers[o]
        activation = activations[a]
        classifier = KerasClassifier(build_fn=build_model_3, activation=activation, optimizer=optimizer, epochs=epochs, batch_size=batch_size, verbose=0)
        kfold = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=seed)
        result = cross_val_score(classifier, X, y, cv=kfold)
        results.append(result)
c = 0
for o in range(len(optimizers)):
    for a in range(len(activations)):
        print("activation = ", activations[a]," optimizer = ", optimizers[o], " Test accuracy = ", results[c].mean())
        c += 1

Through cross-validation, we can select the best-performing model structure and hyperparameter combination.
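Each `cross_val_score` call above returns one accuracy per fold; comparing models means summarizing those folds as a mean (and, ideally, a standard deviation). A sketch with hypothetical fold accuracies from a 5-fold run:

```python
import numpy as np

# Hypothetical per-fold accuracies from one 5-fold cross-validation run.
results = np.array([0.74, 0.71, 0.76, 0.69, 0.73])

# The mean is the headline score; the std shows how sensitive the model
# is to which examples land in each fold.
mean_acc = results.mean()
std_acc = results.std()
```

Two models whose mean accuracies differ by less than their fold-to-fold spread should not be treated as meaningfully different.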

4.4 Model Selection on the Boston Housing Dataset

from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasRegressor
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
import numpy
from sklearn.datasets import load_boston
boston = load_boston()
print("Input data size = ", boston.data.shape)
print("Output size = ", boston.target.shape)
X = boston.data
y = boston.target
print("Output Range = (", min(y), ", ", max(y), ")")
def build_model_1(optimizer='adam'):
    model = Sequential()
    model.add(Dense(10, input_dim=13, activation='relu'))
    model.add(Dense(1))
    model.compile(loss='mean_squared_error', optimizer=optimizer)
    return model
def build_model_2(optimizer='adam'):
    model = Sequential()
    model.add(Dense(10, input_dim=13, activation='relu'))
    model.add(Dense(10, activation='relu'))
    model.add(Dense(1))
    model.compile(loss='mean_squared_error', optimizer=optimizer)
    return model
def build_model_3(optimizer='adam'):
    model = Sequential()
    model.add(Dense(10, input_dim=13, activation='relu'))
    model.add(Dense(10, activation='relu'))
    model.add(Dense(10, activation='relu'))
    model.add(Dense(1))
    model.compile(loss='mean_squared_error', optimizer=optimizer)
    return model
seed = 1
numpy.random.seed(seed)
n_folds = 5
results =[]
models = [build_model_1, build_model_2, build_model_3]
for i in range(len(models)):
    regressor = KerasRegressor(build_fn=models[i], epochs=100, batch_size=5, verbose=0)
    model = make_pipeline(StandardScaler(), regressor)
    kfold = KFold(n_splits=n_folds, shuffle=True, random_state=seed)
    result = cross_val_score(model, X, y, cv=kfold)
    results.append(result)
for i in range(len(models)):
    print("Model ", i+1," test error rate = ", abs(results[i].mean()))
n_folds = 5
results =[]
epochs = [80, 100]
batches = [5, 10]
for i in range(len(epochs)):
    for j in range(len(batches)):
        regressor = KerasRegressor(build_fn=build_model_2, epochs=epochs[i], batch_size=batches[j], verbose=0)
        model = make_pipeline(StandardScaler(), regressor)
        kfold = KFold(n_splits=n_folds, shuffle=True, random_state=seed)
        result = cross_val_score(model, X, y, cv=kfold)
        results.append(result)
c = 0
for i in range(len(epochs)):
    for j in range(len(batches)):
        print("batch_size = ", batches[j]," epochs = ", epochs[i], " Test error rate = ", abs(results[c].mean()))
        c += 1
n_folds = 5
results =[]
optimizers = ['adam', 'sgd', 'rmsprop']
for i in range(len(optimizers)):
    optimizer=optimizers[i]
    regressor = KerasRegressor(build_fn=build_model_2, optimizer=optimizer, epochs=80, batch_size=5, verbose=0)
    model = make_pipeline(StandardScaler(), regressor)
    kfold = KFold(n_splits=n_folds, shuffle=True, random_state=seed)
    result = cross_val_score(model, X, y, cv=kfold)
    results.append(result)
for i in range(len(optimizers)):
    print("optimizer=", optimizers[i]," test error rate = ", abs(results[i].mean()))

Through these experiments on the Boston housing dataset, we can determine the best model structure, hyperparameters, and optimizer to reduce the test error rate.
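The `make_pipeline(StandardScaler(), regressor)` step matters here because the Boston features live on very different scales. What the scaler computes can be written out by hand (the array is a made-up two-feature example):

```python
import numpy as np

# Standardization as StandardScaler performs it inside the pipeline:
# subtract the per-feature mean and divide by the per-feature std.
X = np.array([[1.0, 200.0],
              [2.0, 400.0],
              [3.0, 600.0]])
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
```

After scaling, every feature has mean 0 and standard deviation 1, so no single large-scale feature dominates the gradient updates.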

5. Improving Model Accuracy

5.1 Weight Regularization for the Diabetes Classifier

import numpy
data = numpy.loadtxt("./data/pima-indians-diabetes.csv", delimiter=",")
X = data[:,0:8]
y = data[:,8]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
import numpy
seed = 1
numpy.random.seed(seed)
from keras.models import Sequential
from keras.layers import Dense
model = Sequential()
model.add(Dense(8, input_dim=8, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='sgd', metrics=['accuracy'])
history=model.fit(X_train, y_train, batch_size = 10, epochs = 300, validation_data=(X_test, y_test), verbose=0)
import matplotlib.pyplot as plt 
import matplotlib
%matplotlib inline
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0) 
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.ylim(0,1)
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train loss', 'test loss'], loc='upper right')
print("Best Accuracy on Test Set =", max(history.history['val_acc']))
numpy.random.seed(seed)
from keras.regularizers import l2
model = Sequential()
model.add(Dense(8, input_dim=8, activation='relu', kernel_regularizer=l2(0.01)))
model.add(Dense(8, activation='relu', kernel_regularizer=l2(0.01)))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='sgd', metrics=['accuracy'])
history=model.fit(X_train, y_train, batch_size = 10, epochs = 300, validation_data=(X_test, y_test), verbose=0)
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0) 
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.ylim(0,1)
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train loss', 'test loss'], loc='upper right')
print("Best Accuracy on Test Set =", max(history.history['val_acc']))
numpy.random.seed(seed)
from keras.regularizers import l2
model = Sequential()
model.add(Dense(8, input_dim=8, activation='relu', kernel_regularizer=l2(0.1)))
model.add(Dense(8, activation='relu', kernel_regularizer=l2(0.1)))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='sgd', metrics=['accuracy'])
history=model.fit(X_train, y_train, batch_size = 10, epochs = 300, validation_data=(X_test, y_test), verbose=0)
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0) 
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.ylim(0,1)
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train loss', 'test loss'], loc='upper right')
print("Best Accuracy on Test Set =", max(history.history['val_acc']))
numpy.random.seed(seed)
from keras.regularizers import l2
model = Sequential()
model.add(Dense(8, input_dim=8, activation='relu', kernel_regularizer=l2(0.5)))
model.add(Dense(8, activation='relu', kernel_regularizer=l2(0.5)))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='sgd', metrics=['accuracy'])
history=model.fit(X_train, y_train, batch_size = 10, epochs = 300, validation_data=(X_test, y_test), verbose=0)
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0) 
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.ylim(0,1)
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train loss', 'test loss'], loc='upper right')
print("Best Accuracy on Test Set =", max(history.history['val_acc']))
numpy.random.seed(seed)
from keras.regularizers import l1
model = Sequential()
model.add(Dense(8, input_dim=8, activation='relu', kernel_regularizer=l1(0.01)))
model.add(Dense(8, activation='relu', kernel_regularizer=l1(0.01)))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='sgd', metrics=['accuracy'])
history=model.fit(X_train, y_train, batch_size = 10, epochs = 300, validation_data=(X_test, y_test), verbose=0)
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0) 
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.ylim(0,1)
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train loss', 'test loss'], loc='upper right')
print("Best Accuracy on Test Set =", max(history.history['val_acc']))
numpy.random.seed(seed)
from keras.regularizers import l1
model = Sequential()
model.add(Dense(8, input_dim=8, activation='relu', kernel_regularizer=l1(0.1)))
model.add(Dense(8, activation='relu', kernel_regularizer=l1(0.1)))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='sgd', metrics=['accuracy'])
history=model.fit(X_train, y_train, batch_size = 10, epochs = 300, validation_data=(X_test, y_test), verbose=0)
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0) 
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.ylim(0,1)
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train loss', 'test loss'], loc='upper right')
print("Best Accuracy on Test Set =", max(history.history['val_acc']))
numpy.random.seed(seed)
from keras.regularizers import l1_l2
model = Sequential()
model.add(Dense(8, input_dim=8, activation='relu', kernel_regularizer=l1_l2(l1=0.01, l2=0.1)))
model.add(Dense(8, activation='relu', kernel_regularizer=l1_l2(l1=0.01, l2=0.1)))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='sgd', metrics=['accuracy'])
history=model.fit(X_train, y_train, batch_size = 10, epochs = 300, validation_data=(X_test, y_test), verbose=0)
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0) 
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.ylim(0,1)
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train loss', 'test loss'], loc='upper right')
print("Best Accuracy on Test Set =", max(history.history['val_acc']))

By applying different weight regularizers (L1, L2, and combined L1+L2) to the diabetes classifier, we can observe the effect of regularization on reducing overfitting and improving test accuracy.
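What `kernel_regularizer=l2(0.01)` adds to the loss can be written out directly; a minimal sketch with a made-up weight vector and the λ = 0.01 used in the first regularized model above:

```python
import numpy as np

# Hypothetical weight vector of one layer and the lambda used above.
w = np.array([0.5, -1.0, 2.0])
lam = 0.01

l1_penalty = lam * np.abs(w).sum()     # L1: lambda * sum |w|, pushes weights to 0
l2_penalty = lam * np.square(w).sum()  # L2: lambda * sum w^2, shrinks weights
```

The penalty is added to `binary_crossentropy` during training, so larger λ values trade training fit for smaller weights, which is exactly the underfitting seen above when λ is pushed to 0.5.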

5.2 Dropout Regularization on the Boston Housing Dataset

from sklearn.datasets import load_boston
boston = load_boston()
X = boston.data
y = boston.target
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
import numpy
seed = 1
numpy.random.seed(seed)
from keras.models import Sequential
from keras.layers import Dense
model = Sequential()
model.add(Dense(10, input_dim=13, activation='relu'))
model.add(Dense(10, activation='relu'))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='rmsprop')
history=model.fit(X_train, y_train, batch_size = 5, epochs = 150, validation_data=(X_test, y_test), verbose=0)
import matplotlib.pyplot as plt 
import matplotlib
%matplotlib inline 
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0) 
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.ylim((0, 100))
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train loss', 'test loss'], loc='upper right')
print("Lowest error on training set = ", min(history.history['loss']))
print("Lowest error on test set = ", min(history.history['val_loss']))
numpy.random.seed(seed)
from keras.layers import Dropout
model = Sequential()
model.add(Dense(10, input_dim=13, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(10, activation='relu'))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='rmsprop')
history=model.fit(X_train, y_train, batch_size = 5, epochs = 200, validation_data=(X_test, y_test), verbose=0)
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0)
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.ylim((0, 100))
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train loss', 'test loss'], loc='upper right')
print("Lowest error on training set = ", min(history.history['loss']))
print("Lowest error on test set = ", min(history.history['val_loss']))
numpy.random.seed(seed)
model = Sequential()
model.add(Dense(10, input_dim=13, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(10, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='rmsprop')
history=model.fit(X_train, y_train, batch_size = 5, epochs = 200, validation_data=(X_test, y_test), verbose=0)
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0)
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.ylim((0, 100))
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train loss', 'test loss'], loc='upper right')
print("Lowest error on training set = ", min(history.history['loss']))
print("Lowest error on test set = ", min(history.history['val_loss']))
numpy.random.seed(seed)
model = Sequential()
model.add(Dense(10, input_dim=13, activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(10, activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='rmsprop')
history=model.fit(X_train, y_train, batch_size = 5, epochs = 200, validation_data=(X_test, y_test), verbose=0)
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0)
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.ylim((0, 100))
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train loss', 'test loss'], loc='upper right')
print("Lowest error on training set = ", min(history.history['loss']))
print("Lowest error on test set = ", min(history.history['val_loss']))

By applying different dropout rates on the Boston housing dataset, we can observe how dropout reduces overfitting and balances the training and test errors.
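Dropout's mechanics can be simulated directly. A sketch of inverted dropout, the scheme Keras applies at training time, with an illustrative rate of 0.2 matching `Dropout(0.2)` above (array values are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
rate = 0.2  # fraction of units to drop, as in Dropout(0.2)

# Zero out roughly 20% of activations, then rescale the survivors by
# 1/(1 - rate) so the expected activation is unchanged at test time.
a = np.ones((4, 10))
mask = rng.random(a.shape) >= rate
a_drop = a * mask / (1.0 - rate)
```

Each training batch sees a different random mask, so no single unit can be relied on, which is what discourages co-adaptation and overfitting.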

5.3 Hyperparameter Tuning for the Diabetes Classifier

import numpy
data = numpy.loadtxt("./data/pima-indians-diabetes.csv", delimiter=",")
X = data[:,0:8]
y = data[:,8]
from keras.models import Sequential
from keras.layers import Dense
from keras.regularizers import l2
def build_model(lambda_parameter):
    model = Sequential()
    model.add(Dense(8, input_dim=8, activation='relu', kernel_regularizer=l2(lambda_parameter)))
    model.add(Dense(8, activation='relu', kernel_regularizer=l2(lambda_parameter)))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='sgd', metrics=['accuracy'])
    return model
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV
import numpy
seed = 1
numpy.random.seed(seed)
model = KerasClassifier(build_fn=build_model, verbose=0)
lambda_parameter = [0.01, 0.5, 1]
epochs = [350, 400]
batch_size = [10]
param_grid = dict(lambda_parameter=lambda_parameter, epochs=epochs, batch_size=batch_size)
grid_search = GridSearchCV(estimator=model, param_grid=param_grid, cv=5)
results = grid_search.fit(X, y)
print("Best cross validation score =", results.best_score_)
print("Parameters for Best cross validation score =", results.best_params_)
accuracy_means = results.cv_results_['mean_test_score']
accuracy_stds = results.cv_results_['std_test_score']
parameters = results.cv_results_['params']
for p in range(len(parameters)):
    print("Accuracy %f (std %f) for params %r" % (accuracy_means[p], accuracy_stds[p], parameters[p]))
import numpy
seed = 1
numpy.random.seed(seed)
model = KerasClassifier(build_fn=build_model, verbose=0)
lambda_parameter = [0.001, 0.01, 0.05, 0.1]
epochs = [400]
batch_size = [10]
param_grid = dict(lambda_parameter=lambda_parameter, epochs=epochs, batch_size=batch_size)
grid_search = GridSearchCV(estimator=model, param_grid=param_grid, cv=5)
results = grid_search.fit(X, y)
print("Best cross validation score =", results.best_score_)
print("Parameters for Best cross validation score =", results.best_params_)
accuracy_means = results.cv_results_['mean_test_score']
accuracy_stds = results.cv_results_['std_test_score']
parameters = results.cv_results_['params']
for p in range(len(parameters)):
    print("Accuracy %f (std %f) for params %r" % (accuracy_means[p], accuracy_stds[p], parameters[p]))
from keras.layers import Dropout
def build_model(rate):
    model = Sequential()
    model.add(Dense(8, input_dim=8, activation='relu'))
    model.add(Dropout(rate))
    model.add(Dense(8, activation='relu'))
    model.add(Dropout(rate))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='sgd', metrics=['accuracy'])
    return model
import numpy
seed = 1
numpy.random.seed(seed)
model = KerasClassifier(build_fn=build_model, verbose=0)
rate = [0, 0.2, 0.4]
epochs = [350, 400]
batch_size = [10]
param_grid = dict(rate=rate, epochs=epochs, batch_size=batch_size)
grid_search = GridSearchCV(estimator=model, param_grid=param_grid, cv=5)
results = grid_search.fit(X, y)
print("Best cross validation score =", results.best_score_)
print("Parameters for Best cross validation score =", results.best_params_)
accuracy_means = results.cv_results_['mean_test_score']
accuracy_stds = results.cv_results_['std_test_score']
parameters = results.cv_results_['params']
for p in range(len(parameters)):
    print("Accuracy %f (std %f) for params %r" % (accuracy_means[p], accuracy_stds[p], parameters[p]))
import numpy
seed = 1
numpy.random.seed(seed)
model = KerasClassifier(build_fn=build_model, verbose=0)
rate = [0.0, 0.05, 0.1]
epochs = [400]
batch_size = [10]
param_grid = dict(rate=rate, epochs=epochs, batch_size=batch_size)
grid_search = GridSearchCV(estimator=model, param_grid=param_grid, cv=5)
results = grid_search.fit(X, y)
print("Best cross validation score =", results.best_score_)
print("Parameters for Best cross validation score =", results.best_params_)
accuracy_means = results.cv_results_['mean_test_score']
accuracy_stds = results.cv_results_['std_test_score']
parameters = results.cv_results_['params']
for p in range(len(parameters)):
    print("Accuracy %f (std %f) for params %r" % (accuracy_means[p], accuracy_stds[p], parameters[p]))

Through grid search and cross-validation, we can find the best hyperparameter combination for the diabetes classifier and thereby improve its performance.
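It helps to know how much work a grid search requests before launching it: `GridSearchCV` enumerates the Cartesian product of the parameter lists, and with `cv=5` it fits each combination five times. A sketch using the second lambda grid from above:

```python
from itertools import product

# The parameter lists from the second grid search above.
lambda_parameter = [0.001, 0.01, 0.05, 0.1]
epochs = [400]
batch_size = [10]

# GridSearchCV tries every combination; cv=5 fits each one five times.
grid = list(product(lambda_parameter, epochs, batch_size))
n_fits = len(grid) * 5
```

Keeping one list long and the others short, as the searches above do, is a practical way to keep `n_fits` manageable when each fit trains for hundreds of epochs.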

6. Model Evaluation

6.1 Neural Network Accuracy and Null Accuracy under Different Train/Test Splits

import numpy as np
import pandas as pd
patient_data=pd.read_csv("Health_Data.csv")
patient_data.head()
X=patient_data.iloc[:,1:9]
y=patient_data.iloc[:,9]
X.head()
A_type=pd.get_dummies(X.iloc[:,1],drop_first=True,prefix='Atype')
New_gender=pd.get_dummies(X.iloc[:,4],drop_first=True,prefix='Gender')
Pre_exdis=pd.get_dummies(X.iloc[:,2],drop_first=True,prefix='PreExistDis')
X.drop(['Admission_type','PreExistingDisease','Gender'],axis=1,inplace=True)
X=pd.concat([X,A_type,New_gender,Pre_exdis],axis=1)
from sklearn.model_selection import train_test_split
xtrain,xtest,ytrain,ytest= train_test_split(X, y, test_size=0.25, random_state=500)
from sklearn.preprocessing import StandardScaler
sc=StandardScaler()
xtrain=sc.fit_transform(xtrain)
xtrain=pd.DataFrame(xtrain,columns=xtest.columns)
xtest=sc.transform(xtest)
xtest=pd.DataFrame(xtest,columns=xtrain.columns)
x_train=xtrain.values
x_test=xtest.values
y_train=ytrain.values
y_test=ytest.values
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
model=Sequential()
model.add(Dense(units=6,activation='relu',kernel_initializer='uniform',input_dim=11))
model.add(Dropout(rate=0.3))
model.add(Dense(units=6,activation='relu',kernel_initializer='uniform'))
model.add(Dropout(rate=0.3))
model.add(Dense(units=1,activation='sigmoid',kernel_initializer='uniform'))
model.compile(optimizer='adam',loss='binary_crossentropy',metrics=['accuracy'])
model.fit(x_train,y_train,epochs=100,batch_size=20)
y_pred_class=model.predict(x_test)
y_pred_prob=model.predict_proba(x_test)
y_pred_class=y_pred_class>0.5
from sklearn.metrics import accuracy_score
accuracy = accuracy_score(y_test,y_pred_class)
print("The final calculated accuracy:", accuracy)
ytest.value_counts()
null_accuracy = ytest.value_counts().head(1)/len(ytest)
print("The null accuracy:", null_accuracy)

Here we first loaded the patient data and preprocessed it: creating dummy variables, splitting into training and test sets, and standardizing the features. We then built and trained a neural network and generated predictions. The model's accuracy is computed with accuracy_score, while the null accuracy is computed with value_counts. Note that changing the train/test split changes both the accuracy and the null accuracy.
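The null accuracy is simply the accuracy a model would get by always predicting the majority class. A minimal sketch on a toy label column (the values below are made up for illustration):

```python
# Null accuracy = frequency of the majority class. value_counts() sorts
# counts in descending order, so the first entry is the majority class.
import pandas as pd

labels = pd.Series([0, 0, 0, 1, 1, 0, 0, 1, 0, 0])  # toy target column
counts = labels.value_counts()
null_accuracy = counts.iloc[0] / len(labels)         # 7 zeros out of 10
print(null_accuracy)                                  # 0.7
```

Any classifier whose accuracy does not beat this baseline has learned nothing useful about the minority class.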

6.2 Computing and Analyzing Metrics Based on the Confusion Matrix

from sklearn.metrics import confusion_matrix
cm=confusion_matrix(y_test,y_pred_class)
print("Computed confusion matrix:\n", cm)
TN=cm[0,0]
FN=cm[1,0]
FP=cm[0,1]
TP=cm[1,1]
Sensitivity=TP/(TP+FN)
Specificity=TN/(TN+FP)
Precision= TP/(TP+FP)
False_Positive_rate= FP/(FP+TN)
print("Sensitivity:", Sensitivity)
print("Specificity:", Specificity)
print("Precision:", Precision)
print("False Positive rate:", False_Positive_rate)
y_pred_class=y_pred_prob>0.3  # re-threshold the original probabilities, not the boolean predictions
cm=confusion_matrix(y_test,y_pred_class)
print("Confusion matrix with a threshold of 0.3:\n", cm)
TN=cm[0,0]
FN=cm[1,0]
FP=cm[0,1]
TP=cm[1,1]
Sensitivity=TP/(TP+FN)
Specificity=TN/(TN+FP)
print("Recomputed Sensitivity:", Sensitivity)
print("Recomputed Specificity:", Specificity)
import matplotlib.pyplot as plt
%matplotlib inline
plt.hist(y_pred_prob)
plt.title("Histogram of Predicted Probabilities")
plt.xlabel("Predicted Probabilities of patient readmission")
plt.ylabel("Frequency")
plt.show()

We compute the confusion matrix with confusion_matrix and derive sensitivity, specificity, precision, and the false positive rate from it. Changing the classification threshold changes the confusion matrix, and the metrics change with it. Plotting a histogram of the predicted probabilities shows how they are distributed, which helps explain why moving the threshold affects sensitivity and specificity.
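The metric formulas above can be verified on a tiny hand-checkable example. The labels below are toy values chosen so each cell of the matrix is easy to count by eye:

```python
# Deriving sensitivity, specificity, and precision from a confusion
# matrix; ravel() unpacks sklearn's 2x2 matrix as (tn, fp, fn, tp).
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 0, 0, 1, 1, 1, 1]
y_pred = [0, 0, 0, 1, 1, 1, 0, 1]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # recall on the positive class
specificity = tn / (tn + fp)   # recall on the negative class
precision   = tp / (tp + fp)
print(tn, fp, fn, tp, sensitivity, specificity, precision)
```

Note that sklearn orders the matrix with true negatives at `cm[0,0]` and true positives at `cm[1,1]`, matching the indexing used in the code above.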

6.3 Metric Summary

| Metric | Threshold 0.5 | Threshold 0.3 |
| --- | --- | --- |
| Sensitivity | Sensitivity_0_5 | Sensitivity_0_3 |
| Specificity | Specificity_0_5 | Specificity_0_3 |
| Precision | Precision_0_5 | - |
| False positive rate | False_Positive_rate_0_5 | - |

As the table shows, lowering the threshold typically raises sensitivity but may lower specificity.
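The threshold trade-off can be demonstrated numerically. This sketch uses made-up probabilities: lowering the cut-off turns more borderline cases into positive predictions, which raises sensitivity and lowers specificity:

```python
# Sensitivity/specificity as a function of the classification threshold,
# computed on toy probabilities (illustrative values only).
import numpy as np

y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y_prob = np.array([0.1, 0.2, 0.35, 0.4, 0.45, 0.6, 0.8, 0.9])

def sens_spec(threshold):
    y_pred = y_prob > threshold
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    return tp / (tp + fn), tn / (tn + fp)

print(sens_spec(0.5))  # stricter threshold
print(sens_spec(0.3))  # looser threshold: sensitivity up, specificity down
```

At 0.5 the model misses the 0.45 positive; at 0.3 it catches every positive but also flags two negatives.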

7. Convolutional Neural Networks for Computer Vision

7.1 A Multi-Layer Model and the Softmax Activation Function

from keras.models import Sequential
from keras.layers import Conv2D
from keras.layers import MaxPool2D
from keras.layers import Flatten
from keras.layers import Dense
classifier=Sequential()
classifier.add(Conv2D(32, (3, 3), input_shape=(64,64,3), activation='relu'))
classifier.add(Conv2D(32, (3, 3), activation = 'relu'))
classifier.add(Conv2D(32, (3, 3), activation = 'relu'))
classifier.add(MaxPool2D(2,2))
classifier.add(Flatten())
classifier.add(Dense(128,activation='relu')) 
classifier.add(Dense(128,activation='relu'))
classifier.add(Dense(128,activation='relu'))
classifier.add(Dense(128,activation='relu'))
classifier.add(Dense(1,activation='softmax'))  # softmax over a single unit always outputs 1
classifier.compile(optimizer='adam',loss='binary_crossentropy',metrics=['accuracy'])
from keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(rescale = 1./255,
                                   shear_range = 0.2,
                                   zoom_range = 0.2,
                                   horizontal_flip = True)
test_datagen = ImageDataGenerator(rescale = 1./255)
training_set = train_datagen.flow_from_directory('../dataset/training_set',
target_size = (64, 64),
batch_size = 32,
class_mode = 'binary')
test_set = test_datagen.flow_from_directory('../dataset/test_set',
target_size = (64, 64),
batch_size = 32,
class_mode = 'binary')
classifier.fit_generator(training_set,
steps_per_epoch = 10000,
epochs = 2,
validation_data = test_set,
validation_steps = 2500)

Here we built a convolutional neural network with several convolutional, pooling, and fully connected layers, used ImageDataGenerator for image preprocessing and augmentation, trained the model on the training set, and validated it on the test set. However, despite the extra layers, switching the output activation from sigmoid to softmax drops the accuracy to 50.01%: softmax normalizes over the units of its layer, so with a single output unit it always produces 1.
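Why a single-unit softmax output breaks binary classification can be shown with a few lines of NumPy: whatever the logit, softmax over one unit returns 1.0, so every image is assigned the same class and accuracy collapses to chance level:

```python
# Softmax normalizes over the layer's units; with exactly one unit the
# numerator equals the denominator, so the output is always 1.0.
import numpy as np

def softmax(logits):
    e = np.exp(logits - np.max(logits))  # shift for numerical stability
    return e / e.sum()

for logit in [-5.0, 0.0, 5.0]:
    print(softmax(np.array([logit])))    # always [1.]
```

This is why a sigmoid unit (or a two-unit softmax) is the correct choice for binary output layers.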

7.2 Classifying a New Image

from keras.preprocessing import image
import numpy as np
new_image = image.load_img('../test/test_image_2.jpg', target_size = (64, 64))
new_image = image.img_to_array(new_image)
new_image = np.expand_dims(new_image, axis = 0)
result = classifier.predict(new_image)
training_set.class_indices
if result[0][0] == 1:
    prediction = 'It is a Dog'
else:
    prediction = 'It is a Cat'
print(prediction)

We load a new image, preprocess it, and run it through the previously trained model. The class_indices attribute of the training generator maps the prediction back to a concrete class label.
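The mapping step can be made explicit. The dictionary below mirrors the shape of what `flow_from_directory` typically produces for a cats/dogs layout (the class names here are assumptions for illustration):

```python
# class_indices maps class name -> integer index; inverting it lets us
# turn a 0/1 prediction back into a readable label.
class_indices = {'cats': 0, 'dogs': 1}            # hypothetical mapping
index_to_class = {v: k for k, v in class_indices.items()}

prediction = 1                                     # e.g. result[0][0]
print(index_to_class[prediction])                  # 'dogs'
```

In the real workflow, `training_set.class_indices` supplies this dictionary, so the label names always match the directory structure of the training data.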

8. Transfer Learning and Pretrained Models

8.1 Image Recognition With the VGG16 Network

import numpy as np
from keras.applications.vgg16 import VGG16
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input
classifier = VGG16()
print(classifier.summary())
new_image= image.load_img('../Data/Prediction/test_image_1.jpg', target_size=(224, 224))
transformed_image= image.img_to_array(new_image)
transformed_image=np.expand_dims(transformed_image,axis=0)
transformed_image=preprocess_input(transformed_image)
y_pred= classifier.predict(transformed_image)
from keras.applications.vgg16 import decode_predictions
top_five = decode_predictions(y_pred,top=5)
print("Top-five probabilities of our image:\n", top_five)
label = decode_predictions(y_pred)
decoded_label = label[0][0]
print('%s (%.2f%%)' % (decoded_label[1], decoded_label[2]*100 ))

We use a pretrained VGG16 model to recognize the image: first load the model and inspect its architecture, then load and preprocess an image, and finally predict. The decode_predictions function returns the probabilities of the top five classes, from which we extract the most likely label.
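decode_predictions returns, for each image, a list of (class_id, class_name, probability) tuples. The mock structure below imitates that shape so the label-extraction step can be shown without downloading weights; the class names and probabilities are invented for illustration:

```python
# Mock of decode_predictions output: one inner list per input image,
# each entry a (class_id, class_name, probability) tuple, best first.
top_five = [[('n02504458', 'African_elephant', 0.92),
             ('n01871265', 'tusker', 0.05),
             ('n02504013', 'Indian_elephant', 0.02)]]  # illustrative values

decoded_label = top_five[0][0]                 # most probable entry
print('%s (%.2f%%)' % (decoded_label[1], decoded_label[2] * 100))
```

This is exactly the indexing used in the VGG16 and ResNet50 snippets: `label[0]` selects the first image, and `[0]` again selects its top-ranked class.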

8.2 Image Classification With ResNet

import numpy as np
from keras.applications.resnet50 import ResNet50
from keras.preprocessing import image
from keras.applications.resnet50 import preprocess_input 
classifier=ResNet50()
print(classifier.summary())
new_image= image.load_img('../Data/Prediction/test_image_2.jpg', target_size=(224, 224))
transformed_image= image.img_to_array(new_image)
transformed_image=np.expand_dims(transformed_image,axis=0)
transformed_image=preprocess_input(transformed_image)
y_pred= classifier.predict(transformed_image)
from keras.applications.resnet50 import decode_predictions
top_five = decode_predictions(y_pred,top=5)
print("Top-five probabilities of our image:\n", top_five)
label = decode_predictions(y_pred)
decoded_label = label[0][0]
print('%s (%.2f%%)' % (decoded_label[1], decoded_label[2]*100 ))

As with VGG16, we classify the image with a pretrained ResNet50 model; loading the model, preprocessing the image, predicting, and extracting the label follow essentially the same steps.

9. Sequence Modeling With Recurrent Neural Networks

9.1 Predicting Microsoft's Stock Price Trend With a 50-Unit LSTM

import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
dataset_training = pd.read_csv('MSFT_train.csv')
dataset_training.head()
training_data = dataset_training.iloc[:, 1:2].values 
from sklearn.preprocessing import MinMaxScaler
sc = MinMaxScaler(feature_range = (0, 1))
training_data_scaled = sc.fit_transform(training_data)
X_train = []
y_train = []
for i in range(60, 1258):
    X_train.append(training_data_scaled[i-60:i, 0])
    y_train.append(training_data_scaled[i, 0])
X_train, y_train = np.array(X_train), np.array(y_train)
X_train = np.reshape(X_train, (X_train.shape[0], X_train.shape[1], 1))
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import Dropout
model = Sequential()
model.add(LSTM(units = 50, return_sequences = True, input_shape = (X_train.shape[1], 1)))
model.add(LSTM(units = 50, return_sequences = True))
model.add(LSTM(units = 50, return_sequences = True))
model.add(LSTM(units = 50))
model.add(Dense(units = 1))
model.compile(optimizer = 'adam', loss = 'mean_squared_error')
model.fit(X_train, y_train, epochs = 100, batch_size = 32)
dataset_testing = pd.read_csv('MSFT_test.csv')
actual_stock_price = dataset_testing.iloc[:, 1:2].values
total_data = pd.concat((dataset_training['Open'], dataset_testing['Open']), axis = 0)
inputs = total_data[len(total_data) - len(dataset_testing) - 60:].values
inputs = inputs.reshape(-1,1)
inputs = sc.transform(inputs)
X_test = []
for i in range(60, 81):
    X_test.append(inputs[i-60:i, 0])
X_test = np.array(X_test)
X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1], 1))
predicted_stock_price = model.predict(X_test)
predicted_stock_price = sc.inverse_transform(predicted_stock_price)
plt.plot(actual_stock_price, color = 'green', label = 'Real Microsoft Stock Price',ls='--')
plt.plot(predicted_stock_price, color = 'red', label = 'Predicted Microsoft Stock Price',ls='-')
plt.title('Predicted Stock Price')
plt.xlabel('Time in days')
plt.ylabel('Real Stock Price')
plt.legend()
plt.show()

We predict Microsoft's stock price with a 50-unit LSTM network: load the training data, apply feature scaling, and build the windowed training set; construct and train the LSTM model; then load and preprocess the test data and predict. Finally we plot the predictions against the actual stock prices.
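The windowing loop above is the core of the data preparation: each training sample is the previous 60 scaled prices, and the target is the next price. A short synthetic series makes the resulting shapes easy to verify (window shortened to 3 here for readability):

```python
# Sliding-window dataset construction, as used above with window=60.
import numpy as np

series = np.arange(10, dtype=float)       # toy scaled price series
window = 3                                # the text uses 60

X, y = [], []
for i in range(window, len(series)):
    X.append(series[i - window:i])        # previous `window` values
    y.append(series[i])                   # next value to predict
X, y = np.array(X), np.array(y)
X = X.reshape(X.shape[0], X.shape[1], 1)  # (samples, timesteps, features)
print(X.shape, y.shape)                   # (7, 3, 1) (7,)
```

The trailing feature dimension of 1 is what Keras LSTMs expect for a univariate series, matching `input_shape=(X_train.shape[1], 1)` in the model definition.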

9.2 Microsoft Stock Price Prediction With Added Regularization

import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
dataset_training = pd.read_csv('MSFT_train.csv')
dataset_training.head()
training_data = dataset_training.iloc[:, 1:2].values
from sklearn.preprocessing import MinMaxScaler
sc = MinMaxScaler(feature_range = (0, 1))
training_data_scaled = sc.fit_transform(training_data)
X_train = []
y_train = []
for i in range(60, 1258):
    X_train.append(training_data_scaled[i-60:i, 0])
    y_train.append(training_data_scaled[i, 0])
X_train, y_train = np.array(X_train), np.array(y_train)
X_train = np.reshape(X_train, (X_train.shape[0], X_train.shape[1], 1))
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import Dropout
model = Sequential()
model.add(LSTM(units = 50, return_sequences = True, input_shape = (X_train.shape[1], 1)))
model.add(Dropout(0.2))
model.add(LSTM(units = 50, return_sequences = True))
model.add(Dropout(0.2))
model.add(LSTM(units = 50, return_sequences = True))
model.add(Dropout(0.2))
model.add(LSTM(units = 50))
model.add(Dropout(0.2))
model.add(Dense(units = 1))
model.compile(optimizer = 'adam', loss = 'mean_squared_error')
model.fit(X_train, y_train, epochs = 100, batch_size = 32)
dataset_testing = pd.read_csv('MSFT_test.csv')
actual_stock_price = dataset_testing.iloc[:, 1:2].values
total_data = pd.concat((dataset_training['Open'], dataset_testing['Open']), axis = 0)
inputs = total_data[len(total_data) - len(dataset_testing) - 60:].values
inputs = inputs.reshape(-1,1)
inputs = sc.transform(inputs)
X_test = []
for i in range(60, 81):
    X_test.append(inputs[i-60:i, 0])
X_test = np.array(X_test)
X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1], 1))
predicted_stock_price = model.predict(X_test)
predicted_stock_price = sc.inverse_transform(predicted_stock_price)
plt.plot(actual_stock_price, color = 'green', label = 'Real Microsoft Stock Price',ls='--')
plt.plot(predicted_stock_price, color = 'red', label = 'Predicted Microsoft Stock Price',ls='-')
plt.title('Predicted Stock Price')
plt.xlabel('Time in days')
plt.ylabel('Real Stock Price')
plt.legend()
plt.show()

In this example we add Dropout regularization to the LSTM model. Compared with the previous model, the predicted trend may actually get worse after adding regularization.

9.3 Predicting Microsoft's Stock Price Trend With a 100-Unit LSTM

import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
dataset_training = pd.read_csv('MSFT_train.csv')
dataset_training.head()
training_data = dataset_training.iloc[:, 1:2].values
from sklearn.preprocessing import MinMaxScaler
sc = MinMaxScaler(feature_range = (0, 1))
training_data_scaled = sc.fit_transform(training_data)
X_train = []
y_train = []
for i in range(60, 1258):
    X_train.append(training_data_scaled[i-60:i, 0])
    y_train.append(training_data_scaled[i, 0])
X_train, y_train = np.array(X_train), np.array(y_train)
X_train = np.reshape(X_train, (X_train.shape[0], X_train.shape[1], 1))
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import Dropout
model = Sequential()
model.add(LSTM(units = 100, return_sequences = True, input_shape = (X_train.shape[1], 1)))
model.add(LSTM(units = 100, return_sequences = True))
model.add(LSTM(units = 100, return_sequences = True))
model.add(LSTM(units = 100))
model.add(Dense(units = 1))
model.compile(optimizer = 'adam', loss = 'mean_squared_error')
model.fit(X_train, y_train, epochs = 100, batch_size = 32)
dataset_testing = pd.read_csv('MSFT_test.csv')
actual_stock_price = dataset_testing.iloc[:, 1:2].values
total_data = pd.concat((dataset_training['Open'], dataset_testing['Open']), axis = 0)
inputs = total_data[len(total_data) - len(dataset_testing) - 60:].values
inputs = inputs.reshape(-1,1)
inputs = sc.transform(inputs)
X_test = []
for i in range(60, 81):
    X_test.append(inputs[i-60:i, 0])
X_test = np.array(X_test)
X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1], 1))
predicted_stock_price = model.predict(X_test)
predicted_stock_price = sc.inverse_transform(predicted_stock_price)
plt.plot(actual_stock_price, color = 'green', label = 'Actual Microsoft Stock Price',ls='--')
plt.plot(predicted_stock_price, color = 'red', label = 'Predicted Microsoft Stock Price',ls='-')
plt.title('Predicted Stock Price')
plt.xlabel('Time in days')
plt.ylabel('Real Stock Price')
plt.legend()
plt.show()

Here a 100-unit LSTM predicts the stock price. Increasing the number of units raises the model's capacity to fit the data, but it also increases training time.
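The compute cost grows quickly with width because an LSTM layer's weight count is roughly quadratic in its unit count: each of the 4 gates has an input kernel, a recurrent kernel, and a bias, giving 4 * (units * (units + input_dim) + units) parameters per layer. A quick sketch:

```python
# Parameter count of a single LSTM layer: 4 gates, each with an input
# kernel (units x input_dim), a recurrent kernel (units x units), and a
# bias vector (units).
def lstm_params(units, input_dim):
    return 4 * (units * (units + input_dim) + units)

print(lstm_params(50, 1))     # first layer of the 50-unit model
print(lstm_params(100, 1))    # first layer of the 100-unit model
print(lstm_params(100, 100))  # a stacked 100-unit layer fed by another
```

These counts match what `model.summary()` reports for the corresponding Keras layers, and explain why doubling the units far more than doubles the training cost of the stacked model.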

9.4 Comparison of the LSTM Models

| Model | Units | Regularization | Training time | Prediction quality |
| --- | --- | --- | --- | --- |
| Model 1 | 50 | None | Shorter | Trend largely matches the real prices |
| Model 2 | 50 | Dropout 0.2 | Moderate | Trend degrades |
| Model 3 | 100 | None | Longer | Better trend, but higher compute cost |

Comparing the different LSTM models lets us choose the one that best fits our actual requirements.

In summary, we covered several important topics in machine learning and deep learning, including model construction, evaluation, and optimization, along with applications in different domains. The code examples and step-by-step walkthroughs should help readers understand and master these techniques. In practice, choose models and methods appropriate to the specific problem, and keep experimenting and tuning to reach the best results.
