As its name suggests, the Bayes optimal decision is the best decision over all possible decisions. In practice, however, it is impossible to work out, because it requires summing over all hypotheses, and you can't obtain ALL hypotheses.
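To make the "summing over all hypotheses" point concrete, here is a minimal sketch with a made-up, tiny hypothesis space (the numbers and hypothesis names are hypothetical, chosen only for illustration). The Bayes optimal decision averages every hypothesis's prediction, weighted by its posterior, instead of trusting any single best hypothesis:

```python
# Posterior P(h | D) over a tiny, enumerable hypothesis space (made-up numbers).
posterior = {"h1": 0.4, "h2": 0.3, "h3": 0.3}

# Each hypothesis's predicted P(label = "+" | x) for some new input x.
predicts_plus = {"h1": 0.9, "h2": 0.2, "h3": 0.1}

# Bayes optimal: P(+ | x, D) = sum over h of P(+ | x, h) * P(h | D)
p_plus = sum(predicts_plus[h] * posterior[h] for h in posterior)
# 0.9*0.4 + 0.2*0.3 + 0.1*0.3 = 0.45

decision = "+" if p_plus > 0.5 else "-"
print(decision)  # "-" : the weighted vote decides, not the single best h1
```

Note that the most probable single hypothesis (h1) alone would answer "+", but the weighted sum over all three hypotheses answers "-". That gap is exactly why the Bayes optimal decision needs the full sum, and why it is intractable when the hypothesis space cannot be enumerated.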
In 《Machine Learning》, its definition is:
In 《Pattern Recognition》:
In 《Foundations of Statistical Natural Language Processing》:
"Suppose that we did not actually see the sequence of coin tosses but just heard the results shouted out over the fence. Now it may be the case, as we have assumed so far, that the results reported truly reflect the results of tossing a single, possibly weighted coin. This is the theory μ, which is a family of models, with a parameter representing the weighting of the coin. But an alternative theory is that at each step someone is tossing two fair coins, and calling out "tails" if both of them come down tails, and heads otherwise."
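The two theories in the quote can be compared numerically. The sketch below uses made-up shouted results (8 heads, 2 tails); the only fixed fact from the quote is that under the two-fair-coins theory, "tails" requires both coins to land tails, so P(heads) = 1 - 1/4 = 3/4:

```python
# Theory 1: one weighted coin with free parameter mu = P(heads).
# Theory 2: two fair coins; "tails" only if both land tails, so P(heads) = 3/4.

heads, tails = 8, 2  # hypothetical shouted results

def likelihood(p_heads):
    # P(this heads/tails count | p_heads), up to the constant binomial factor
    return p_heads ** heads * (1 - p_heads) ** tails

# Theory 1's best member: maximum likelihood sets mu = heads / (heads + tails).
mu_ml = heads / (heads + tails)   # 0.8
l_theory1 = likelihood(mu_ml)
l_theory2 = likelihood(0.75)      # theory 2 has no free parameter to tune

print(l_theory1 > l_theory2)  # True
```

The free parameter lets theory 1 fit the data at least as closely as theory 2 can; that is what it means for a theory to be a *family* of models rather than a single model.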
This explanation is very vivid: a hypothesis (or a set of parameters) is a theory (a model) in the problem space. The following are some metaphors:
ideal categorizations of texts ------- methods (hypotheses; theories; models): SVM, KNN, Naive Bayes... ------- a specific category
mathematical reasoning ------- programming languages ------- code
thoughts ------- natural languages ------- text or speech
the probability distribution behind a pattern recognition decision ------- families of distributions & their parameters ------- a decision
MAP (maximize P(x|a)P(a)) ------- ML (maximize P(x|a)) ------- result (x)
speech ------- HMM ------- the meaning underlying natural language
a concept in the world ------- operations on features ------- a pattern
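The MAP-vs-ML row above can be sketched in a few lines. Everything here is a made-up example (three heads in a row, a small candidate set of coin weightings, and a prior that favors a fair coin), using the post's notation: ML picks the parameter a maximizing P(x|a), MAP also weighs in the prior P(a):

```python
data_heads, data_tails = 3, 0   # hypothetical data: three heads in a row

# Candidate weightings a = P(heads), with a prior P(a) favoring fairness.
candidates = [0.5, 0.75, 1.0]
prior = {0.5: 0.90, 0.75: 0.08, 1.0: 0.02}

def lik(a):
    # P(x | a) for the observed heads/tails counts
    return a ** data_heads * (1 - a) ** data_tails

a_ml = max(candidates, key=lik)                           # maximizes P(x|a)
a_map = max(candidates, key=lambda a: lik(a) * prior[a])  # maximizes P(x|a)P(a)

print(a_ml)   # 1.0 : likelihood alone prefers "always heads"
print(a_map)  # 0.5 : the prior keeps MAP at the fair coin
```

With so little data, the likelihood term is overwhelmed by the prior, which is the whole difference between the two columns of that row: MAP = ML plus a prior over the theory.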
This post discusses the concept of the Bayes optimal decision and its applications in machine learning and pattern recognition. A vivid example explains the role of theories (models) in a problem space, and decision methods such as maximum a posteriori (MAP) and maximum likelihood estimation (MLE) are contrasted.
