Method 1: Multiplicative updates
- Formula: H <- H * (W^T V) / (W^T W H), W <- W * (V H^T) / (W H H^T), where * and / are element-wise
- Implementation

import numpy as np
import theano
import theano.tensor as T
from theano import function

v_data = np.array(R).astype(np.float32)
m, n = v_data.shape
w = theano.shared(np.random.random((m, k)).astype(np.float32), name='w')
h = theano.shared(np.random.random((k, n)).astype(np.float32), name='h')
v = T.matrix('v')  # T.matrix, not T.matrices (the latter creates several variables at once)
update_h = function([v],
                    updates=[(h, h * (w.T.dot(v)) / w.T.dot(w).dot(h))])
update_w = function([v],
                    updates=[(w, w * (v.dot(h.T)) / w.dot(h).dot(h.T))])
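The updates above are the standard Lee–Seung multiplicative rules; they can also be written in pure NumPy without Theano. A minimal sketch (the function name nmf_mu and the small eps guard against division by zero are illustrative additions):

```python
import numpy as np

def nmf_mu(V, k, steps=200, eps=1e-9, seed=0):
    """Factor non-negative V (m x n) into W (m x k) @ H (k x n)
    with Lee-Seung multiplicative updates."""
    rng = np.random.RandomState(seed)
    m, n = V.shape
    W = rng.random_sample((m, k))
    H = rng.random_sample((k, n))
    for _ in range(steps):
        # element-wise multiply/divide keeps W, H non-negative
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Because the updates only multiply by non-negative ratios, non-negativity of W and H is preserved automatically, and the Frobenius reconstruction error is non-increasing over iterations.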
Method 2: Gradient descent
- Formula: e_ij = r_ij - p_i · q_j; p_ik <- p_ik + alpha * (2 * e_ij * q_kj - beta * p_ik), q_kj <- q_kj + alpha * (2 * e_ij * p_ik - beta * q_kj)
- Implementation

for step in range(steps):
    for i in range(len(R)):
        for j in range(len(R[i])):
            if R[i][j] > 0:  # update only on observed (non-zero) ratings
                eij = R[i][j] - np.dot(P[i, :], Q[:, j])
                for k in range(K):
                    # Regularized updates (coefficient beta):
                    # P[i][k] = P[i][k] + alpha * (2 * eij * Q[k][j] - beta * P[i][k])
                    # Q[k][j] = Q[k][j] + alpha * (2 * eij * P[i][k] - beta * Q[k][j])
                    # Unregularized variant; the factor 2 is absorbed into alpha:
                    P[i][k] = P[i][k] + alpha * (Q[k][j] * eij)
                    Q[k][j] = Q[k][j] + alpha * (P[i][k] * eij)
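The loop above can be wrapped into a self-contained function. A sketch (the name matrix_factorization and the defaults for alpha, beta, and steps are illustrative; this variant keeps the regularization term shown in the commented lines):

```python
import numpy as np

def matrix_factorization(R, K=2, steps=5000, alpha=0.002, beta=0.02, seed=0):
    """SGD on the observed (non-zero) entries of R, with L2 regularization."""
    R = np.asarray(R, dtype=float)
    rng = np.random.RandomState(seed)
    P = rng.random_sample((R.shape[0], K))
    Q = rng.random_sample((K, R.shape[1]))
    for _ in range(steps):
        for i in range(R.shape[0]):
            for j in range(R.shape[1]):
                if R[i][j] > 0:  # skip missing ratings
                    eij = R[i][j] - P[i, :] @ Q[:, j]
                    for k in range(K):
                        P[i][k] += alpha * (2 * eij * Q[k][j] - beta * P[i][k])
                        Q[k][j] += alpha * (2 * eij * P[i][k] - beta * Q[k][j])
    return P, Q
```

Because only entries with R[i][j] > 0 contribute to the gradient, zeros are treated as missing values rather than as ratings to be fit, which is the key difference from the multiplicative-update method above.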
Experimental comparison (in R, 0 marks a missing rating):
R = [[5, 3, 0, 1],
[4, 0, 0, 1],
[1, 1, 0, 5],
[1, 0, 0, 4],
[0, 1, 5, 4]]
Method 1 error: 13.8626429282
Method 2 error: 1.26773119711
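Assuming the reported errors are the summed squared error over the observed (non-zero) entries — the quantity Method 2's update rule minimizes — the metric can be sketched as:

```python
import numpy as np

def observed_error(R, P, Q):
    """Sum of squared errors over observed (non-zero) entries of R."""
    R = np.asarray(R, dtype=float)
    mask = R > 0
    return float(((R - P @ Q) ** 2)[mask].sum())
```

Under this metric Method 1 scores worse because the multiplicative updates fit the full matrix, including the zeros, while Method 2's SGD optimizes the observed entries directly.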