1. Setting up the environment:
My machine is not powerful, so CUDA GPU acceleration is out of the question; I went with the CPU-only build. The system is Ubuntu 14.04 with Python 2.7, and Theano and Keras are already installed. Next, install TensorFlow:
$ pip install https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-0.11.0-cp27-none-linux_x86_64.whl
This installs TensorFlow version 0.11.0.
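A quick way to confirm the install is to print the version string (a minimal check; the exact output depends on the build):

# coding: UTF-8
import tensorflow as tf
print tf.__version__  # expect something like 0.11.0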
Then run a first program to check that TensorFlow works end to end:
# coding: UTF-8
import tensorflow as tf
import numpy as np

# Use NumPy to generate phony data: 100 points in total.
x_data = np.float32(np.random.rand(2, 100))  # random input
y_data = np.dot([0.100, 0.200], x_data) + 0.300

# Build a linear model.
b = tf.Variable(tf.zeros([1]))
W = tf.Variable(tf.random_uniform([1, 2], -1.0, 1.0))
y = tf.matmul(W, x_data) + b

# Minimize the mean squared error.
loss = tf.reduce_mean(tf.square(y - y_data))
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)

# Initialize the variables.
init = tf.initialize_all_variables()

# Launch the graph.
sess = tf.Session()
sess.run(init)

# Fit the plane.
for step in xrange(0, 201):
    sess.run(train)
    if step % 20 == 0:
        print step, sess.run(W), sess.run(b)
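If everything is working, W converges toward [0.100, 0.200] and b toward 0.300, the coefficients used to generate the data.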
TensorFlow's tensor constructors:
tf.zeros([3, 4], tf.int32)  ==> [[0,0,0,0],[0,0,0,0],[0,0,0,0]]
tf.ones([2, 3], tf.int32)   ==> [[1,1,1],[1,1,1]]
tf.linspace(10.0, 12.0, 3)  ==> [10.0 11.0 12.0]
As you can see, TensorFlow's API feels quite similar to NumPy's; see the official tutorial for details.
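These constructors only build graph nodes; to see the actual values you evaluate them in a session. A small sketch:

# coding: UTF-8
import tensorflow as tf
with tf.Session() as sess:
    print(sess.run(tf.zeros([3, 4], tf.int32)))   # 3x4 matrix of zeros
    print(sess.run(tf.ones([2, 3], tf.int32)))    # 2x3 matrix of ones
    print(sess.run(tf.linspace(10.0, 12.0, 3)))   # [10. 11. 12.]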
2. Addition with TensorFlow:
# coding: UTF-8
import tensorflow as tf

state = tf.Variable(0)                      # a counter variable, starting at 0
new_value = tf.add(state, tf.constant(1))   # state + 1
update = tf.assign(state, new_value)        # op that writes new_value into state

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    print(sess.run(state))                  # prints 0
    for _ in range(3):
        sess.run(update)
        print(sess.run(state))              # prints 1, 2, 3
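The tf.add plus tf.assign pair can also be fused into tf.assign_add, which increments the variable in place; a minimal sketch of the same counter:

# coding: UTF-8
import tensorflow as tf
state = tf.Variable(0)
update = tf.assign_add(state, tf.constant(1))  # add and assign in one op
with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    for _ in range(3):
        print(sess.run(update))  # the assign op returns the new value: 1, 2, 3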
TensorFlow's usage is similar to NumPy's, and data can be converted between the two; the example below converts a NumPy array into a TensorFlow tensor:
# coding: UTF-8
import tensorflow as tf
import numpy as np

a = np.zeros((3, 3))
ta = tf.convert_to_tensor(a)  # wrap the NumPy array as a TensorFlow tensor

with tf.Session() as sess:
    print(sess.run(ta))
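The conversion also goes the other way: sess.run returns plain NumPy arrays, so TensorFlow results can feed straight back into NumPy code. For example:

# coding: UTF-8
import tensorflow as tf
import numpy as np
ta = tf.ones([2, 2])
with tf.Session() as sess:
    result = sess.run(ta)      # result is a numpy.ndarray
    print(type(result))
    print(result + np.eye(2))  # use it directly in NumPy expressions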
Using tf.placeholder to reserve inputs:
# coding: UTF-8
import tensorflow as tf

input1 = tf.placeholder(tf.float32)
input2 = tf.placeholder(tf.float32)
output = tf.mul(input1, input2)  # element-wise multiply (renamed tf.multiply in TensorFlow 1.0)

with tf.Session() as sess:
    print(sess.run([output], feed_dict={input1: [7.], input2: [3.]}))
The placeholders reserve the inputs first; the actual values are supplied through feed_dict at run time, and the computed result is printed.
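A placeholder can also be declared with a fixed shape, in which case the fed value must match it; a small sketch (the shape (2, 3) here is just for illustration):

# coding: UTF-8
import tensorflow as tf
import numpy as np
x = tf.placeholder(tf.float32, shape=(2, 3))
y = tf.reduce_sum(x)
with tf.Session() as sess:
    print(sess.run(y, feed_dict={x: np.ones((2, 3))}))  # prints 6.0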
Next, a classic machine learning algorithm: the linear regression model.
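In symbols, the model and the loss being minimized are

$$\hat{y}_i = w x_i + b, \qquad \text{loss}(w, b) = \frac{1}{N}\sum_{i=1}^{N} (\hat{y}_i - y_i)^2,$$

and each gradient descent step updates the parameters as $w \leftarrow w - \eta\,\partial \text{loss}/\partial w$ and $b \leftarrow b - \eta\,\partial \text{loss}/\partial b$, with learning rate $\eta = 0.5$ in the code below.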
First, construct the data:
# coding: UTF-8
import numpy as np
import matplotlib.pyplot as plt

# Generate 1000 random points scattered around the line y = 0.1x + 0.3.
num_points = 1000
vectors_set = []
for i in range(num_points):
    x1 = np.random.normal(0.0, 0.55)
    y1 = 0.1*x1 + 0.3 + np.random.normal(0.0, 0.3)
    vectors_set.append([x1, y1])

# Split into sample coordinates.
x_data = [v[0] for v in vectors_set]
y_data = [v[1] for v in vectors_set]

plt.scatter(x_data, y_data, c='r')
plt.show()
The resulting scatter plot:
The complete code is as follows:
# coding: UTF-8
from __future__ import print_function
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

# Generate 1000 random points scattered around the line y = 0.1x + 0.3.
num_points = 1000
vectors_set = []
for i in range(num_points):
    x1 = np.random.normal(0.0, 0.55)
    y1 = 0.1*x1 + 0.3 + np.random.normal(0.0, 0.3)
    vectors_set.append([x1, y1])

# Split into sample coordinates.
x_data = [v[0] for v in vectors_set]
y_data = [v[1] for v in vectors_set]

# 1-D weight w, initialized uniformly at random in [-1, 1].
w = tf.Variable(tf.random_uniform([1], -1.0, 1.0), name='w')
# 1-D bias b, initialized to 0.
b = tf.Variable(tf.zeros([1]), name='b')
# The predicted value y.
y = w*x_data + b

# Loss: mean squared error between the prediction y and the actual y_data.
loss = tf.reduce_mean(tf.square(y - y_data), name='loss')
# Optimize the parameters with gradient descent.
optimizer = tf.train.GradientDescentOptimizer(0.5)
# Training means minimizing this loss.
train = optimizer.minimize(loss, name='train')

sess = tf.Session()
init = tf.initialize_all_variables()
sess.run(init)

# Initial values of w and b.
print('w =', sess.run(w), 'b =', sess.run(b), 'loss =', sess.run(loss))
# Run 30 training steps.
for step in range(30):
    sess.run(train)
    print('w =', sess.run(w), 'b =', sess.run(b), 'loss =', sess.run(loss))

plt.scatter(x_data, y_data, c='r')
plt.plot(x_data, sess.run(w)*x_data + sess.run(b))
plt.show()
After the program runs, the visualization looks like this:
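Since the data were generated around y = 0.1x + 0.3 with Gaussian noise of standard deviation 0.3, w should converge toward roughly 0.1 and b toward roughly 0.3, with the loss settling near the noise variance, 0.3² = 0.09.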