1. Bayesian regularization
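Bayesian regularization places a prior on the parameters and takes the MAP estimate; with a Gaussian prior on the weights this reduces to L2 (ridge) regularized least squares. A minimal numpy sketch — the lambda value and toy data here are illustrative assumptions, not from the source:

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    # MAP estimate under a zero-mean Gaussian prior on the weights:
    #   w = argmin ||Xw - y||^2 + lam * ||w||^2
    # Closed form: w = (X^T X + lam * I)^{-1} X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# toy data: y = 2x plus a little noise (arbitrary illustration)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1))
y = 2 * X[:, 0] + 0.1 * rng.normal(size=100)

w = ridge_fit(X, y, lam=1.0)  # slightly shrunk toward 0 relative to plain least squares
```

Larger lambda corresponds to a tighter prior and stronger shrinkage of the weights toward zero.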
2. online learning
stochastic gradient descent: update the parameters on one training example at a time
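Online learning processes examples one at a time as they arrive, applying a stochastic gradient descent update per example. A minimal sketch for squared loss — the learning rate and simulated stream are arbitrary choices for illustration:

```python
import numpy as np

def sgd_step(w, x, y, lr=0.05):
    # One stochastic gradient descent update on a single (x, y) example,
    # for squared loss: w := w - lr * (w . x - y) * x
    return w - lr * (w @ x - y) * x

# simulate an online stream: examples arrive one at a time,
# each used for a single update and then discarded
rng = np.random.default_rng(1)
true_w = np.array([1.0, -3.0])  # hypothetical target weights
w = np.zeros(2)
for _ in range(5000):
    x = rng.normal(size=2)
    y = true_w @ x
    w = sgd_step(w, x, y)
```

Because each example is seen once and never stored, the memory cost is constant regardless of how many examples the stream produces.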
3. ML advice
a. more training examples => fixes high variance
b. trying a smaller set of features => fixes high variance
c. trying a larger set of features => fixes high bias
d. adding email features => fixes high bias
e. running gradient descent for more iterations => fixes the optimization algorithm
f. trying Newton's method => fixes the optimization algorithm
g. using a different value for lambda => fixes the optimization objective
h. changing to an SVM => also fixes the optimization objective
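Which of the fixes above to try depends on first diagnosing high bias vs. high variance, typically by comparing training and validation error. A hypothetical sketch using polynomial degree as the knob for model complexity (the degrees, data, and noise level are illustrative assumptions):

```python
import numpy as np

def fit_poly(x, y, degree):
    # unregularized least-squares polynomial fit
    return np.polyfit(x, y, degree)

def mse(coeffs, x, y):
    return np.mean((np.polyval(coeffs, x) - y) ** 2)

# toy data: noisy sine wave, separate training and validation samples
rng = np.random.default_rng(2)
x_train = np.linspace(0, 1, 20)
y_train = np.sin(2 * np.pi * x_train) + 0.2 * rng.normal(size=20)
x_val = np.linspace(0, 1, 50)
y_val = np.sin(2 * np.pi * x_val) + 0.2 * rng.normal(size=50)

errs = {}
for degree in (1, 10):
    c = fit_poly(x_train, y_train, degree)
    errs[degree] = (mse(c, x_train, y_train), mse(c, x_val, y_val))
# degree 1:  training and validation error both high     -> high bias (underfit)
# degree 10: training error typically well below
#            validation error                            -> high variance (overfit)
```

High training error that is close to validation error points to high bias (try fixes c, d); low training error with much higher validation error points to high variance (try fixes a, b, or a larger lambda).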
see more in http://download.youkuaiyun.com/detail/nomad2/3759561
This article explores how methods such as Bayesian regularization, online learning, and stochastic gradient descent can be used to improve the performance of machine learning algorithms. It describes how to address overfitting and underfitting by adding training examples, trying feature sets of different sizes, adding email features, and adjusting the number of iterations and the optimization strategy. It also mentions switching to an SVM as a way of changing the optimization objective.