Stanford CS231n Computer Vision Bootcamp: Image Features Exercise

This post covers how to use the validation set to tune a model's learning rate and regularization strength, comparing different parameter combinations experimentally to raise validation accuracy. It walks through hyperparameter tuning for a linear SVM trained on image features, tries different numbers of color-histogram bins with the goal of reaching roughly 0.44 validation accuracy, and then trains a two-layer neural network, selecting the best model by cross-validation.


################################################################################
# TODO:                                                                        #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save    #
# the best trained classifier in best_svm. You might also want to play         #
# with different numbers of bins in the color histogram. If you are careful    #
# you should be able to get accuracy of near 0.44 on the validation set.       #
################################################################################
from copy import deepcopy

# Grid search over learning rate and regularization strength;
# keep the classifier that does best on the validation set.
for lr in learning_rates:
    for reg in regularization_strengths:
        svm = LinearSVM()
        loss_hist = svm.train(X_train_feats, y_train, learning_rate=lr,
                              reg=reg, num_iters=3000, verbose=False)
        y_train_pred = svm.predict(X_train_feats)
        train_acc = np.mean(y_train_pred == y_train)
        y_val_pred = svm.predict(X_val_feats)
        val_acc = np.mean(y_val_pred == y_val)
        results[(lr, reg)] = (train_acc, val_acc)
        if val_acc > best_val:
            best_val = val_acc
            best_svm = deepcopy(svm)
################################################################################
#                              END OF YOUR CODE                                #
################################################################################
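The TODO above also suggests varying the number of bins in the color histogram. As a minimal, self-contained sketch of how the bin count changes the feature length (the assignment's own feature extractor histograms the hue channel; the `color_histogram` helper and the random data below are stand-ins, not the notebook's code):

```python
import numpy as np

def color_histogram(channel, nbin=10, xmax=1.0):
    """L1-normalized histogram of one color channel (hypothetical helper).

    channel: array of per-pixel values in [0, xmax].
    Returns a feature vector of length nbin.
    """
    bins = np.linspace(0.0, xmax, nbin + 1)
    hist, _ = np.histogram(channel, bins=bins)
    return hist.astype(float) / channel.size

# Fake 32x32 "hue channel" standing in for a real CIFAR-10 image.
rng = np.random.default_rng(0)
hue = rng.random((32, 32))

feat10 = color_histogram(hue, nbin=10)  # 10-dim feature
feat20 = color_histogram(hue, nbin=20)  # 20-dim feature, finer color resolution
```

More bins give a finer description of the color distribution at the cost of a longer, noisier feature vector, which is why the bin count is worth sweeping alongside the SVM hyperparameters.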

Inline question 1:

Describe the misclassification results that you see. Do they make sense?

For example, the model sometimes predicts a bird as a ship: with its wings spread, a bird's outline really does resemble a ship, so the mistakes make sense.
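The notebook visualizes, for each class, test images the SVM assigned to that class even though their true label differs. The selection step behind that visualization can be sketched with toy arrays (the labels and predictions here are made up purely for illustration):

```python
import numpy as np

# Toy labels/predictions standing in for y_test and best_svm.predict(X_test_feats).
y_test = np.array([0, 1, 2, 2, 1])
y_pred = np.array([0, 2, 2, 1, 1])

num_classes = 3
# For each class c: indices of examples predicted as c whose true label is not c.
misclassified = {c: np.flatnonzero((y_pred == c) & (y_test != c))
                 for c in range(num_classes)}
```

The notebook then samples a few indices from each bucket and plots the corresponding images, which is how patterns like "birds predicted as ships" become visible.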

################################################################################
# TODO: Train a two-layer neural network on image features. You may want to    #
# cross-validate various parameters as in previous sections. Store your best   #
# model in the best_net variable.                                              #
################################################################################
from copy import deepcopy

# Cross-validate learning rate and regularization strength,
# keeping the network with the best validation accuracy.
learning_rates = [1, 1e-1]
regularization_strengths = [1e-5, 5e-5, 1e-4]
best_acc = -1
for lr in learning_rates:
    for reg in regularization_strengths:
        net = TwoLayerNet(input_dim, hidden_dim, num_classes)
        state = net.train(X_train_feats, y_train, X_val_feats, y_val,
                          num_iters=2000, batch_size=200,
                          learning_rate=lr, learning_rate_decay=0.95,
                          reg=reg, verbose=False)
        val_acc = np.mean(net.predict(X_val_feats) == y_val)
        if val_acc > best_acc:
            best_acc = val_acc
            best_net = deepcopy(net)
print('best val acc: {:.3f}'.format(best_acc))
################################################################################
#                              END OF YOUR CODE                                #
################################################################################
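With `learning_rate_decay=0.95`, `TwoLayerNet.train` multiplies the learning rate by 0.95 once per epoch. Assuming the assignment's usual 49,000 training examples (an assumption, not stated above), the effective schedule over the 2,000 iterations works out as a back-of-the-envelope sketch:

```python
lr0, decay = 1.0, 0.95
num_iters, batch_size, num_train = 2000, 200, 49000  # 49,000 is an assumption

iters_per_epoch = max(num_train // batch_size, 1)  # 245 iterations per epoch
num_epochs = num_iters // iters_per_epoch          # 8 full epochs in 2000 iters
final_lr = lr0 * decay ** num_epochs               # ~0.663 of the initial rate
```

So at 2,000 iterations the learning rate has only decayed by about a third, which is worth keeping in mind when comparing initial rates as far apart as 1 and 0.1.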

 
