Choosing SVM parameters:
C (plays the role of 1/lambda):
Large C: lower bias, higher variance
Small C: higher bias, lower variance
sigma^2 (Gaussian kernel width):
Large sigma^2: features f_i vary more smoothly; higher bias, lower variance
Small sigma^2: features f_i vary less smoothly; lower bias, higher variance
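As a rough sketch of how these knobs show up in practice (assuming scikit-learn, which is not mentioned in the notes): `SVC` exposes C directly, and for the RBF (Gaussian) kernel the width enters through `gamma = 1/(2*sigma^2)`, so a large sigma^2 corresponds to a small gamma.

```python
# Sketch only: scikit-learn and the synthetic dataset are assumptions, not from the notes.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

sigma_sq = 1.0                          # large sigma^2 -> smoother f_i -> higher bias
clf = SVC(
    C=1.0,                              # large C (small lambda) -> lower bias, higher variance
    kernel="rbf",                       # Gaussian kernel
    gamma=1.0 / (2.0 * sigma_sq),       # RBF width: gamma = 1 / (2 * sigma^2)
)
clf.fit(X_train, y_train)
print("validation accuracy:", clf.score(X_val, y_val))
```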
Comparing SVMs with logistic regression
n=number of features, m=number of training examples
If n is large (relative to m), e.g. n >= m (n = 10,000, m = 10...1,000):
Use logistic regression, or an SVM without a kernel ("linear kernel")
If n is small and m is intermediate (n = 1,000, m = 10-10,000):
Use an SVM with a Gaussian kernel
If n is small and m is large:
Create/add more features, then use logistic regression or an SVM without a kernel
A neural network is likely to work well in most of these settings, but may be slower to train.
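These rules of thumb can be written down as a tiny helper. The function name and the exact thresholds below are hypothetical, chosen only to mirror the ranges quoted above, not a prescription from the original course.

```python
# Hypothetical helper encoding the heuristics above; thresholds are illustrative.
def choose_model(n_features: int, m_examples: int) -> str:
    if n_features >= m_examples:
        # n large relative to m: a linear decision boundary usually suffices.
        return "logistic regression or linear-kernel SVM"
    if m_examples <= 10_000:
        # n small, m intermediate: a Gaussian-kernel SVM is still affordable.
        return "SVM with Gaussian kernel"
    # n small, m large: engineer more features, then fall back to a linear model.
    return "create/add features, then logistic regression or linear-kernel SVM"


print(choose_model(10_000, 1_000))    # -> logistic regression or linear-kernel SVM
print(choose_model(1_000, 5_000))     # -> SVM with Gaussian kernel
print(choose_model(1_000, 500_000))   # -> create/add features, then ...
```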
This post discusses parameter selection for support vector machines (SVMs), including the effects of C and sigma^2, and compares SVMs with logistic regression across different numbers of features and training-set sizes.