1. Bayesian regularization
2. online learning
stochastic gradient descent: update the parameters one training example at a time as data arrives (see the SGD sketch after these notes)
3. ML advice
a. getting more training examples => fixes high variance
b. trying a smaller set of features => fixes high variance
c. trying a larger set of features => fixes high bias
d. adding email features => fixes high bias
e. running gradient descent for more iterations => fixes the optimization algorithm
f. trying Newton's method => fixes the optimization algorithm
g. using a different value for lambda => fixes the optimization objective (see the regularization sketch below)
h. changing to an SVM => another way of fixing the optimization objective
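Below is a minimal sketch of item 2 (online learning via stochastic gradient descent) for linear regression. The stream, learning rate, and function names are illustrative assumptions, not from the original notes; the point is only that the parameters are updated after each example as it arrives, instead of after a full pass over the training set.

```python
import numpy as np

def sgd_linear_regression(stream, n_features, alpha=0.01):
    """Online learning: update theta after every single (x, y) example
    from the stream, rather than using the whole training set per step."""
    theta = np.zeros(n_features)
    for x, y in stream:                  # examples arrive one at a time
        error = theta @ x - y            # prediction error on this example
        theta -= alpha * error * x       # stochastic gradient (LMS) update
    return theta

# Hypothetical usage on a synthetic stream of examples
rng = np.random.default_rng(0)
true_theta = np.array([2.0, -3.0])
stream = [(x, x @ true_theta + 0.1 * rng.standard_normal())
          for x in rng.standard_normal((1000, 2))]
print(sgd_linear_regression(stream, n_features=2))
```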
see more at http://download.youkuaiyun.com/detail/nomad2/3759561
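As a sketch of items 1 and (g): regularized linear regression adds a lambda penalty on theta to the cost, which can also be read as the MAP estimate under a Gaussian prior on theta (the Bayesian view of regularization), and lambda is typically chosen by validation error. The closed-form ridge solution, the candidate lambdas, and the synthetic data below are my own assumptions for illustration, not the notes' method.

```python
import numpy as np

def fit_regularized(X, y, lam):
    """Regularized (ridge) linear regression in closed form:
    theta = (X^T X + lam*I)^(-1) X^T y.
    Larger lam shrinks theta (lower variance, risk of high bias);
    smaller lam fits more closely (risk of high variance)."""
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)

def choose_lambda(X_tr, y_tr, X_val, y_val, lambdas):
    """Pick lambda by held-out (validation) squared error, per item (g)."""
    errs = {lam: np.mean((X_val @ fit_regularized(X_tr, y_tr, lam) - y_val) ** 2)
            for lam in lambdas}
    return min(errs, key=errs.get)

# Hypothetical usage on synthetic data
rng = np.random.default_rng(1)
X = rng.standard_normal((120, 10))
y = X[:, 0] - 2 * X[:, 1] + 0.5 * rng.standard_normal(120)
print("chosen lambda:", choose_lambda(X[:80], y[:80], X[80:], y[80:],
                                      [0.01, 0.1, 1.0, 10.0, 100.0]))
```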
This post discusses ways to improve model performance in machine learning, such as adding training examples, reducing the feature set, and tuning hyperparameters, including strategies for fixing overfitting and underfitting.