Introduction to Restricted Boltzmann Machines

This article introduces the basic principles of Restricted Boltzmann Machines (RBMs) and their application to building a movie recommender. By modeling users' binary like/dislike choices over movies, an RBM can uncover the latent factors behind user behavior and produce personalized recommendations. The article explains how an RBM is structured, how it works, and how its weights are learned, and walks through concrete examples of how it interprets user preferences and generates recommendations.

Introduction

Suppose you ask a bunch of users to rate a set of movies on a 0-100 scale. In classical factor analysis, you could then try to explain each movie and user in terms of a set of latent factors. For example, movies like Star Wars and Lord of the Rings might have strong associations with a latent science fiction and fantasy factor, and users who like Wall-E and Toy Story might have strong associations with a latent Pixar factor.

Restricted Boltzmann Machines essentially perform a binary version of factor analysis. (This is one way of thinking about RBMs; there are, of course, others, and lots of different ways to use RBMs, but I’ll adopt this approach for this post.) Instead of users rating a set of movies on a continuous scale, they simply tell you whether they like a movie or not, and the RBM will try to discover latent factors that can explain the activation of these movie choices.

More technically, a Restricted Boltzmann Machine is a stochastic neural network (neural network meaning we have neuron-like units whose binary activations depend on the neighbors they're connected to; stochastic meaning these activations have a probabilistic element) consisting of:

  • One layer of visible units (users’ movie preferences whose states we know and set);
  • One layer of hidden units (the latent factors we try to learn); and
  • A bias unit (whose state is always on, and is a way of adjusting for the different inherent popularities of each movie).

Furthermore, each visible unit is connected to all the hidden units (this connection is undirected, so each hidden unit is also connected to all the visible units), and the bias unit is connected to all the visible units and all the hidden units. To make learning easier, we restrict the network so that no visible unit is connected to any other visible unit and no hidden unit is connected to any other hidden unit.

For example, suppose we have a set of six movies (Harry Potter, Avatar, LOTR 3, Gladiator, Titanic, and Glitter) and we ask users to tell us which ones they want to watch. If we want to learn two latent units underlying movie preferences – for example, two natural groups in our set of six movies appear to be SF/fantasy (containing Harry Potter, Avatar, and LOTR 3) and Oscar winners (containing LOTR 3, Gladiator, and Titanic), so we might hope that our latent units will correspond to these categories – then our RBM would look like the following:

[Figure: the example RBM, with the six movie units in the visible layer, each connected to the two hidden units and to the bias unit.]
(Note the resemblance to a factor analysis graphical model.)

State Activation

Restricted Boltzmann Machines, and neural networks in general, work by updating the states of some neurons given the states of others, so let’s talk about how the states of individual units change. Assuming we know the connection weights in our RBM (we’ll explain how to learn these below), to update the state of unit  i :

  • Compute the activation energy $a_i = \sum_j w_{ij} x_j$ of unit $i$, where the sum runs over all units $j$ that unit $i$ is connected to, $w_{ij}$ is the weight of the connection between $i$ and $j$, and $x_j$ is the 0 or 1 state of unit $j$. In other words, all of unit $i$'s neighbors send it a message, and we compute the sum of all these messages.
  • Let $p_i = \sigma(a_i)$, where $\sigma(x) = 1/(1 + \exp(-x))$ is the logistic function. Note that $p_i$ is close to 1 for large positive activation energies, and $p_i$ is close to 0 for negative activation energies.
  • We then turn unit $i$ on with probability $p_i$, and turn it off with probability $1 - p_i$ (see the code sketch after this list).
  • (In layman’s terms, units that are positively connected to each other try to get each other to share the same state (i.e., be both on or off), while units that are negatively connected to each other are enemies that prefer to be in different states.)
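
Here is a minimal sketch of that update rule in Python (the function and variable names are mine, not taken from any particular implementation); it assumes we already have unit $i$'s connection weights and its neighbors' current 0/1 states:

```python
import numpy as np

rng = np.random.default_rng(0)

def update_unit(weights, neighbor_states):
    """Resample the 0/1 state of one unit from its neighbors' current states."""
    activation_energy = np.dot(weights, neighbor_states)  # a_i = sum_j w_ij * x_j
    p_i = 1.0 / (1.0 + np.exp(-activation_energy))        # p_i = sigma(a_i)
    return 1 if rng.random() < p_i else 0                 # on with probability p_i, off otherwise
```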

For example, let’s suppose our two hidden units really do correspond to SF/fantasy and Oscar winners.

  • If Alice has told us her six binary preferences on our set of movies, we could then ask our RBM which of the hidden units her preferences activate (i.e., ask the RBM to explain her preferences in terms of latent factors). So the six movies send messages to the hidden units, telling them to update themselves. (Note that even if Alice has declared she wants to watch Harry Potter, Avatar, and LOTR 3, this doesn’t guarantee that the SF/fantasy hidden unit will turn on, but only that it will turn on with high probability. This makes a bit of sense: in the real world, Alice wanting to watch all three of those movies makes us highly suspect she likes SF/fantasy in general, but there’s a small chance she wants to watch them for other reasons. Thus, the RBM allows us to generate models of people in the messy, real world.)
  • Conversely, if we know that one person likes SF/fantasy (so that the SF/fantasy unit is on), we can then ask the RBM which of the movie units that hidden unit turns on (i.e., ask the RBM to generate a set of movie recommendations). So the hidden units send messages to the movie units, telling them to update their states. (Again, note that the SF/fantasy unit being on doesn’t guarantee that we’ll always recommend all three of Harry Potter, Avatar, and LOTR 3 because, hey, not everyone who likes science fiction liked Avatar.)
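
The two directions above are the same computation with the roles of the layers swapped. Below is a minimal sketch of both passes, assuming the weights are stored as a (num_visible + 1) x (num_hidden + 1) matrix whose first row and column belong to the always-on bias unit (this layout mirrors the weight table in the Examples section; the function names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def visible_to_hidden(weights, movie_choices):
    """Which latent factors do a user's stated movie preferences turn on?"""
    v = np.insert(movie_choices, 0, 1)        # prepend the always-on bias unit
    hidden_probs = sigmoid(v @ weights)[1:]   # drop the bias column
    return (rng.random(hidden_probs.shape) < hidden_probs).astype(int)

def hidden_to_visible(weights, latent_factors):
    """Which movies does a set of active latent factors recommend?"""
    h = np.insert(latent_factors, 0, 1)       # prepend the always-on bias unit
    visible_probs = sigmoid(weights @ h)[1:]  # drop the bias row
    return (rng.random(visible_probs.shape) < visible_probs).astype(int)
```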

Learning Weights

So how do we learn the connection weights in our network? Suppose we have a bunch of training examples, where each training example is a binary vector with six elements corresponding to a user’s movie preferences. Then for each epoch, do the following:

  • Take a training example (a set of six movie preferences). Set the states of the visible units to these preferences.
  • Next, update the states of the hidden units using the logistic activation rule described above: for the $j$th hidden unit, compute its activation energy $a_j = \sum_i w_{ij} x_i$, and set $x_j$ to 1 with probability $\sigma(a_j)$ and to 0 with probability $1 - \sigma(a_j)$. Then for each edge $e_{ij}$, compute $\mathrm{Positive}(e_{ij}) = x_i x_j$ (i.e., for each pair of units, measure whether they're both on).
  • Now reconstruct the visible units in a similar manner: for each visible unit, compute its activation energy $a_i$, and update its state. (Note that this reconstruction may not match the original preferences.) Then update the hidden units again, and compute $\mathrm{Negative}(e_{ij}) = x_i x_j$ for each edge.
  • Update the weight of each edge $e_{ij}$ by setting $w_{ij} = w_{ij} + L (\mathrm{Positive}(e_{ij}) - \mathrm{Negative}(e_{ij}))$, where $L$ is a learning rate.
  • Repeat over all training examples.

Continue until the network converges (i.e., the error between the training examples and their reconstructions falls below some threshold) or we reach some maximum number of epochs.
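
Here is a minimal sketch of this training loop in Python (CD-1, one example at a time), again assuming the bias unit is folded into the weight matrix as an extra always-on row and column; the names and hyperparameter defaults are illustrative, not the post's own implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, num_hidden, epochs=1000, learning_rate=0.1, seed=0):
    """`data` is a (num_examples, num_visible) array of 0/1 movie preferences."""
    rng = np.random.default_rng(seed)
    num_visible = data.shape[1]
    # Weights, with an extra row and column for the always-on bias unit.
    w = rng.normal(0.0, 0.1, size=(num_visible + 1, num_hidden + 1))

    for _ in range(epochs):
        for example in data:
            # Clamp the visible units to the training example (plus the bias unit).
            v0 = np.insert(example, 0, 1).astype(float)
            # Update the hidden units and measure Positive(e_ij) = x_i * x_j.
            h0 = (rng.random(num_hidden + 1) < sigmoid(v0 @ w)).astype(float)
            h0[0] = 1.0
            positive = np.outer(v0, h0)
            # Reconstruct the visible units, update the hidden units again,
            # and measure Negative(e_ij).
            v1 = (rng.random(num_visible + 1) < sigmoid(w @ h0)).astype(float)
            v1[0] = 1.0
            h1 = (rng.random(num_hidden + 1) < sigmoid(v1 @ w)).astype(float)
            h1[0] = 1.0
            negative = np.outer(v1, h1)
            # w_ij <- w_ij + L * (Positive(e_ij) - Negative(e_ij)).
            w += learning_rate * (positive - negative)
    return w
```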

Why does this update rule make sense? Note that

  • In the first phase, $\mathrm{Positive}(e_{ij})$ measures the association between the $i$th and $j$th unit that we want the network to learn from our training examples;
  • In the "reconstruction" phase, where the RBM generates the states of visible units based on its hypotheses about the hidden units alone, $\mathrm{Negative}(e_{ij})$ measures the association that the network itself generates (or "daydreams" about) when no units are fixed to training data.

So by adding $\mathrm{Positive}(e_{ij}) - \mathrm{Negative}(e_{ij})$ to each edge weight, we're helping the network's daydreams better match the reality of our training examples.

(You may hear this update rule called contrastive divergence, which is basically a funky term for “approximate gradient descent”.)
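
For context (this derivation is not spelled out in the post), the exact gradient of the data log-likelihood of an RBM with respect to a weight is

$$\frac{\partial \log p(v)}{\partial w_{ij}} = \langle v_i h_j \rangle_{\text{data}} - \langle v_i h_j \rangle_{\text{model}},$$

and $\mathrm{Positive}(e_{ij})$ is a one-sample estimate of the first (clamped) expectation, while $\mathrm{Negative}(e_{ij})$, taken after a single reconstruction, is a cheap stand-in for the second (free-running) expectation; that is what makes the rule an approximate gradient step.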

Examples

I wrote a simple RBM implementation in Python (the code is heavily commented, so take a look if you’re still a little fuzzy on how everything works), so let’s use it to walk through some examples.

First, I trained the RBM using some fake data.

  • Alice: (Harry Potter = 1, Avatar = 1, LOTR 3 = 1, Gladiator = 0, Titanic = 0, Glitter = 0). Big SF/fantasy fan.
  • Bob: (Harry Potter = 1, Avatar = 0, LOTR 3 = 1, Gladiator = 0, Titanic = 0, Glitter = 0). SF/fantasy fan, but doesn’t like Avatar.
  • Carol: (Harry Potter = 1, Avatar = 1, LOTR 3 = 1, Gladiator = 0, Titanic = 0, Glitter = 0). Big SF/fantasy fan.
  • David: (Harry Potter = 0, Avatar = 0, LOTR 3 = 1, Gladiator = 1, Titanic = 1, Glitter = 0). Big Oscar winners fan.
  • Eric: (Harry Potter = 0, Avatar = 0, LOTR 3 = 1, Gladiator = 1, Titanic = 1, Glitter = 0). Oscar winners fan, except for Titanic.
  • Fred: (Harry Potter = 0, Avatar = 0, LOTR 3 = 1, Gladiator = 1, Titanic = 1, Glitter = 0). Big Oscar winners fan.

The network learned the following weights:

                 Bias Unit       Hidden 1        Hidden 2
Bias Unit       -0.08257658     -0.19041546      1.57007782 
Harry Potter    -0.82602559     -7.08986885      4.96606654 
Avatar          -1.84023877     -5.18354129      2.27197472 
LOTR 3           3.92321075      2.51720193      4.11061383 
Gladiator        0.10316995      6.74833901     -4.00505343 
Titanic         -0.97646029      3.25474524     -5.59606865 
Glitter         -4.44685751     -2.81563804     -2.91540988

Note that the first hidden unit seems to correspond to the Oscar winners, and the second hidden unit seems to correspond to the SF/fantasy movies, just as we were hoping.

What happens if we give the RBM a new user, George, who has (Harry Potter = 0, Avatar = 0, LOTR 3 = 0, Gladiator = 1, Titanic = 1, Glitter = 0) as his preferences? It turns the Oscar winners unit on (but not the SF/fantasy unit), correctly guessing that George probably likes movies that are Oscar winners.
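
As a quick check, here is a small sketch that plugs George's preferences into the learned weights from the table above (the numbers are copied verbatim from the table; the layout and variable names are mine):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Visible-to-hidden weights from the table above. Rows: bias unit, Harry Potter,
# Avatar, LOTR 3, Gladiator, Titanic, Glitter. Columns: Hidden 1, Hidden 2.
w = np.array([[-0.19041546,  1.57007782],
              [-7.08986885,  4.96606654],
              [-5.18354129,  2.27197472],
              [ 2.51720193,  4.11061383],
              [ 6.74833901, -4.00505343],
              [ 3.25474524, -5.59606865],
              [-2.81563804, -2.91540988]])

george = np.array([1, 0, 0, 0, 1, 1, 0])  # leading 1 is the always-on bias unit
print(sigmoid(george @ w))  # ~[1.0, 0.0]: Hidden 1 (Oscar winners) on, Hidden 2 (SF/fantasy) off
```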

What happens if we activate only the SF/fantasy unit, and run the RBM a bunch of different times? In my trials, it turned on Harry Potter, Avatar, and LOTR 3 three times; it turned on Avatar and LOTR 3, but not Harry Potter, once; and it turned on Harry Potter and LOTR 3, but not Avatar, twice. Note that, based on our training examples, these generated preferences do indeed match what we might expect real SF/fantasy fans want to watch.

Modifications

I tried to keep the connection-learning algorithm I described above pretty simple, so here are some modifications that often appear in practice:

  • Above, $\mathrm{Negative}(e_{ij})$ was determined by taking the product of the $i$th and $j$th units after reconstructing the visible units once and then updating the hidden units again. We could also take the product after some larger number of reconstructions (i.e., repeat updating the visible units, then the hidden units, then the visible units again, and so on); this is slower, but describes the network's daydreams more accurately.
  • Instead of using $\mathrm{Positive}(e_{ij}) = x_i x_j$, where $x_i$ and $x_j$ are binary 0 or 1 states, we could also let $x_i$ and/or $x_j$ be activation probabilities. Similarly for $\mathrm{Negative}(e_{ij})$.
  • We could penalize larger edge weights, in order to get a sparser or more regularized model.
  • When updating edge weights, we could use a momentum factor: we would add to each edge a weighted sum of the current step as described above (i.e., $L (\mathrm{Positive}(e_{ij}) - \mathrm{Negative}(e_{ij}))$) and the step previously taken.
  • Instead of using only one training example in each epoch, we could use batches of examples in each epoch, and only update the network’s weights after passing through all the examples in the batch. This can speed up the learning by taking advantage of fast matrix-multiplication algorithms.
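
As a rough illustration of how the last three modifications fit together, here is a hedged sketch of a single batched weight update with momentum and a weight penalty (the names and default values are illustrative, not from the post):

```python
import numpy as np

def batched_update(w, velocity, positive_sum, negative_sum, batch_size,
                   learning_rate=0.1, momentum=0.5, weight_cost=1e-4):
    """`positive_sum` and `negative_sum` are the Positive/Negative statistics
    summed over one batch of examples; `velocity` is the previous step taken."""
    gradient = (positive_sum - negative_sum) / batch_size - weight_cost * w  # weight penalty
    velocity = momentum * velocity + learning_rate * gradient               # momentum
    return w + velocity, velocity
```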

Further

If you’re interested in learning more about Restricted Boltzmann Machines, here are some good links.

 Jul 18th, 2011  expository
