Probabilistic Graphical Modeling: Study Notes

This article introduces the basics of probabilistic graphical models, covering Bayesian networks, Markov random fields, conditional random fields, and factor graphs. A Bayesian network uses a directed acyclic graph to represent causal relationships between variables, while a Markov random field is suited to describing undirected associations and is commonly used in computer vision. A conditional random field is a special case of a Markov random field, used to model conditional probability distributions. A factor graph provides an intuitive way to represent the relationships between variables and factors.


0. Learning Materials

1. Introduction

A great number of the problems we face amount to modeling the real world with some kind of function (the most direct example is estimation and fitting problems; more broadly, our machine learning and deep learning algorithms mostly fit the real problem with a function).

  • But most of the time, a measurement involves a significant amount of uncertainty (in other words, "error"). As a result, our measurements actually follow a probability distribution. This introduces probability theory.
  • Moreover, the measurements depend on each other. Sometimes we cannot find an exact expression for these relationships, but we know they exist, and from prior knowledge we can identify some of their properties. This introduces graph modeling.

As a result, Probabilistic Graphical Modeling (PGM) was conceived to solve such problems.
There are three main elements in PGM:

  • Representation: how to specify a model? Typically, we use a Bayesian network for a directed acyclic graph (DAG), and a Markov random field (MRF) for an undirected graph representation. (Of course, other models exist as well.)
  • Inference: how to ask the model questions? For example, marginal inference gives the probability of a given variable by summing over all other variables, and maximum a posteriori (MAP) inference gives the most likely assignment of the variables (see the sketch after this list).
  • Learning: how to fit a model to real-world data? Inference and learning are closely linked: inference is a key subroutine of learning.
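
To make the two inference queries concrete, here is a minimal sketch in plain NumPy. The joint table over two binary variables is a made-up example, not from the original notes; it shows marginal inference as summing out a variable and MAP inference as taking the argmax of the joint:

```python
import numpy as np

# Hypothetical joint distribution p(a, b) over two binary variables.
# Rows index a, columns index b; the entries sum to 1.
joint = np.array([[0.30, 0.10],
                  [0.15, 0.45]])

# Marginal inference: p(a) is obtained by summing out b.
p_a = joint.sum(axis=1)
print("p(a):", p_a)  # [0.40, 0.60]

# MAP inference: the single most likely joint assignment (a*, b*).
a_star, b_star = np.unravel_index(joint.argmax(), joint.shape)
print("MAP assignment:", (a_star, b_star))  # (1, 1)
```

Enumerating the full joint table like this only works for toy problems; the point of PGMs is to exploit graph structure so that inference scales to many variables.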

2. Representation

2.1 Bayesian network

A Bayesian network is a directed acyclic graph; it can model variables with causality (i.e., variables that have a directed relationship).

$$p(x_1, x_2, x_3, \ldots, x_n) = p(x_1)\, p(x_2 \mid x_1) \cdots p(x_n \mid x_{n-1}, \ldots, x_2, x_1)$$

Based on these relationships, a directed graph can be built, and the probability expression can then be formed: each variable $x_i$ depends only on a subset $A_i$ of its ancestors,

$$p(x_i \mid x_{A_i})$$
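
As a concrete illustration of this factorization, here is a minimal sketch (plain NumPy, with hypothetical probability tables) for a three-variable chain a → b → c, where the joint factorizes as p(a, b, c) = p(a) p(b | a) p(c | b):

```python
import numpy as np

# Hypothetical CPTs for the chain a -> b -> c.
p_a   = np.array([0.6, 0.4])        # p(a)
p_b_a = np.array([[0.7, 0.3],       # p(b | a): row = a, col = b
                  [0.2, 0.8]])
p_c_b = np.array([[0.9, 0.1],       # p(c | b): row = b, col = c
                  [0.4, 0.6]])

def joint(a, b, c):
    """Evaluate p(a, b, c) via the chain-rule factorization."""
    return p_a[a] * p_b_a[a, b] * p_c_b[b, c]

# Sanity check: the factorized joint sums to 1 over all assignments.
total = sum(joint(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1))
print(total)  # 1.0 (up to floating-point error)
```

Note how the factorization stores 2 + 4 + 4 = 10 numbers instead of the 8 entries of the full joint here; for longer chains the saving grows exponentially, which is exactly what the graph structure buys us.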
