1. Introduction (about machine learning)
2. Concept Learning and the General-to-Specific Ordering
3. Decision Tree Learning
4. Artificial Neural Networks
5. Evaluating Hypotheses
6. Bayesian Learning
7. Computational Learning Theory
8. Instance-Based Learning
9. Genetic Algorithms
10. Learning Sets of Rules
11. Analytical Learning
12. Combining Inductive and Analytical Learning
13. Reinforcement Learning
5. Evaluating Hypotheses
Probability theory is assumed background for anyone with a science training, so I won't go over the basics in detail here!
5.1 MOTIVATION
In many cases it is important to evaluate the performance of learned hypotheses as precisely as possible.
5.2 ESTIMATING HYPOTHESIS ACCURACY
5.2.1 Sample Error and True Error
5.2.2 Confidence Intervals for Discrete-Valued Hypotheses
5.3 BASICS OF SAMPLING THEORY
5.3.1 Error Estimation and Estimating Binomial Proportions
5.3.2 The Binomial Distribution
5.3.3 Mean and Variance
5.3.4 Estimators, Bias, and Variance
5.3.5 Confidence Intervals
5.3.6 Two-Sided and One-Sided Bounds
5.4 A GENERAL APPROACH FOR DERIVING CONFIDENCE INTERVALS
5.4.1 Central Limit Theorem
5.5 DIFFERENCE IN ERROR OF TWO HYPOTHESES
5.5.1 Hypothesis Testing
5.6 COMPARING LEARNING ALGORITHMS
5.6.1 Paired t Tests
5.6.2 Practical Considerations
This post discusses the importance of evaluating hypotheses in machine learning, introduces the concepts of sample error and true error, and covers confidence interval estimation and the basics of sampling theory in detail. It also addresses the difference in error between two hypotheses and how to compare different learning algorithms.
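As a concrete illustration of the interval derived in Sections 5.2-5.3 (this sketch is mine, not from the original post), the Python snippet below computes the sample error error_S(h) on a held-out test set and the approximate two-sided confidence interval error_S(h) ± z_N · sqrt(error_S(h)(1 - error_S(h)) / n) for the true error error_D(h). The function names and the r = 12 errors out of n = 40 examples case are illustrative; the z values are the standard two-sided Normal quantiles.

```python
import math

# z_N values for common two-sided confidence levels (Normal approximation).
Z_VALUES = {0.90: 1.64, 0.95: 1.96, 0.98: 2.33, 0.99: 2.58}

def sample_error(hypothesis, examples, labels):
    """Fraction of test examples the hypothesis misclassifies: error_S(h)."""
    mistakes = sum(1 for x, y in zip(examples, labels) if hypothesis(x) != y)
    return mistakes / len(examples)

def confidence_interval(error_s, n, level=0.95):
    """Approximate two-sided interval for the true error error_D(h):
    error_S(h) +/- z_N * sqrt(error_S(h) * (1 - error_S(h)) / n).
    The Normal approximation is reasonable when n * error_S(h) * (1 - error_S(h)) >= 5.
    """
    z = Z_VALUES[level]
    margin = z * math.sqrt(error_s * (1 - error_s) / n)
    return max(0.0, error_s - margin), min(1.0, error_s + margin)

if __name__ == "__main__":
    # Illustrative case: a hypothesis misclassifies 12 of 40 held-out examples.
    error_s, n = 12 / 40, 40
    low, high = confidence_interval(error_s, n, level=0.95)
    print(f"error_S(h) = {error_s:.2f}, 95% CI for error_D(h): [{low:.2f}, {high:.2f}]")
    # Prints roughly [0.16, 0.44], i.e. 0.30 +/- 0.14.
```

The later sections of the chapter apply the same sampling-theory machinery to the difference in error between two hypotheses (5.5) and, via the paired t test over multiple train/test splits, to comparing two learning algorithms (5.6).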