Installing TensorFlow

This article walks through installing TensorFlow on Mac OS X with Virtualenv and verifying that the installation succeeded. It then runs one of TensorFlow's bundled examples end to end, training a simple convolutional neural network to recognize handwritten digits.



  Before using TensorFlow you need to set up its environment. The official installation guide is at https://www.tensorflow.org/versions/r0.10/get_started/os_setup and offers several installation methods:

  1. Pip install
  2. Virtualenv install
  3. Anaconda install
  4. Docker install
  5. Installing from sources

      I chose the second option, Virtualenv install, on Mac OS X 10.12.2 with Python 2.7.10.

The installation steps are:

  1. Install pip and Virtualenv.
  2. Create a Virtualenv environment.
  3. Activate the Virtualenv environment and install TensorFlow in it.
  4. After the install, activate the Virtualenv environment each time you want to use TensorFlow.
  5. When finished using TensorFlow, deactivate the Virtualenv environment.

Step 1: install the Python package manager [pip](https://en.wikipedia.org/wiki/Pip_(package_manager)) and Virtualenv. Virtualenv keeps the packages a project depends on from overwriting ones you already have installed:

# Mac OS X
$ sudo easy_install pip
$ sudo easy_install --upgrade six

Step 2: create a virtual environment under ~/tensorflow:

$ virtualenv --system-site-packages ~/tensorflow

Step 3: activate the environment:

$ source ~/tensorflow/bin/activate  # If using bash
$ source ~/tensorflow/bin/activate.csh  # If using csh
(tensorflow)$  # Your prompt should change
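The create/activate/deactivate cycle in steps 2–5 can be exercised end to end. Here is a minimal sketch using the stdlib venv module (the modern successor to virtualenv) and a throwaway directory instead of ~/tensorflow — the paths are illustrative, not from the original setup:

```shell
# Create an isolated environment in a temporary directory.
# (python3 -m venv is the stdlib successor to virtualenv;
#  --without-pip skips pip bootstrapping to keep the sketch fast.)
ENV_DIR="$(mktemp -d)/tfenv"
python3 -m venv --without-pip "$ENV_DIR"

# Activation prepends the environment's bin/ to PATH, so
# "python" now resolves inside the environment.
. "$ENV_DIR/bin/activate"
command -v python   # prints a path inside $ENV_DIR

# deactivate restores the original PATH.
deactivate
```

The key point is that activation is just a PATH change in the current shell, which is why it must be `source`d rather than run as a subprocess.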

Step 4: install the TensorFlow binary package with pip. For simplicity, I chose the CPU-only build:

# Mac OS X, CPU only, Python 2.7:
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.10.0-py2-none-any.whl

Run the install:

# Python 2
$ sudo pip install --upgrade $TF_BINARY_URL

# Python 3
$ sudo pip3 install --upgrade $TF_BINARY_URL

  Finally, verify that the installation succeeded. Start the Python REPL from the command line and execute the statements below (alternatively, putting them in a new Python file is a bit more convenient):

$ python
...
>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> sess = tf.Session()
>>> print(sess.run(hello))
Hello, TensorFlow!
>>> a = tf.constant(10)
>>> b = tf.constant(32)
>>> print(sess.run(a + b))
42
>>>
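What the session above demonstrates is TensorFlow's define-then-run model: `tf.constant` only adds nodes to a graph, and nothing is actually computed until `Session.run` evaluates them. A rough plain-Python analogy of that split (purely illustrative — no TensorFlow involved, and the names are my own):

```python
# A toy illustration of deferred ("define, then run") execution,
# loosely analogous to TensorFlow's graph/session split.
class Node:
    def __init__(self, fn):
        self.fn = fn          # stores the computation, does not execute it
    def __add__(self, other):
        # Combining nodes builds a bigger deferred computation.
        return Node(lambda: self.fn() + other.fn())

def constant(value):
    return Node(lambda: value)

def run(node):
    return node.fn()          # evaluation happens only here

a = constant(10)
b = constant(32)
print(run(a + b))  # 42, matching the session output above
```

This is why `a + b` in the REPL prints nothing useful on its own: it builds a node, and only `sess.run(a + b)` produces 42.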

  One thing to note: when you are done, run deactivate to leave the Virtualenv environment. The next time you want to use TensorFlow, activate the environment again with source ~/tensorflow/bin/activate.

  Finally, let's run a simple TensorFlow example that ships with the install: recognizing handwritten digits [0-9], a supervised-learning task. Quite a few articles online analyze the algorithm behind this example; as the script name suggests, the model is a small convolutional neural network topped with a softmax classifier. The demo script lives at ~/tensorflow/lib/python2.7/site-packages/tensorflow/models/image/mnist/convolutional.py and can be run directly:

$ python ~/tensorflow/lib/python2.7/site-packages/tensorflow/models/image/mnist/convolutional.py
Successfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.
Successfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.
Successfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.
Successfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.
Extracting data/train-images-idx3-ubyte.gz
Extracting data/train-labels-idx1-ubyte.gz
Extracting data/t10k-images-idx3-ubyte.gz
Extracting data/t10k-labels-idx1-ubyte.gz
Initialized!
Step 0 (epoch 0.00), 2.7 ms
Minibatch loss: 12.054, learning rate: 0.010000
Minibatch error: 90.6%
Validation error: 84.6%
Step 100 (epoch 0.12), 160.3 ms
Minibatch loss: 3.280, learning rate: 0.010000
Minibatch error: 6.2%
Validation error: 7.0%
Step 200 (epoch 0.23), 159.7 ms
Minibatch loss: 3.491, learning rate: 0.010000
Minibatch error: 12.5%
Validation error: 3.8%
Step 300 (epoch 0.35), 162.0 ms
Minibatch loss: 3.231, learning rate: 0.010000
Minibatch error: 7.8%
Validation error: 3.3%
Step 400 (epoch 0.47), 163.0 ms
Minibatch loss: 3.229, learning rate: 0.010000
Minibatch error: 10.9%
Validation error: 2.7%
Step 500 (epoch 0.58), 160.9 ms
Minibatch loss: 3.290, learning rate: 0.010000
Minibatch error: 7.8%
Validation error: 2.6%
Step 600 (epoch 0.70), 160.4 ms
Minibatch loss: 3.172, learning rate: 0.010000
Minibatch error: 6.2%
Validation error: 2.5%
Step 700 (epoch 0.81), 159.1 ms
Minibatch loss: 3.023, learning rate: 0.010000
Minibatch error: 3.1%
Validation error: 2.5%
Step 800 (epoch 0.93), 167.9 ms
Minibatch loss: 3.093, learning rate: 0.010000
Minibatch error: 6.2%
Validation error: 2.1%
Step 900 (epoch 1.05), 166.8 ms
Minibatch loss: 2.934, learning rate: 0.009500
Minibatch error: 3.1%
Validation error: 1.6%
Step 1000 (epoch 1.16), 161.6 ms
Minibatch loss: 2.871, learning rate: 0.009500
Minibatch error: 1.6%
Validation error: 1.6%
Step 1100 (epoch 1.28), 162.5 ms
Minibatch loss: 2.831, learning rate: 0.009500
Minibatch error: 1.6%
Validation error: 1.5%
Step 1200 (epoch 1.40), 162.5 ms
Minibatch loss: 2.916, learning rate: 0.009500
Minibatch error: 4.7%
Validation error: 1.5%
Step 1300 (epoch 1.51), 162.6 ms
Minibatch loss: 2.768, learning rate: 0.009500
Minibatch error: 0.0%
Validation error: 1.7%
Step 1400 (epoch 1.63), 159.9 ms
Minibatch loss: 2.772, learning rate: 0.009500
Minibatch error: 1.6%
Validation error: 1.5%
Step 1500 (epoch 1.75), 161.2 ms
Minibatch loss: 2.842, learning rate: 0.009500
Minibatch error: 4.7%
Validation error: 1.3%
Step 1600 (epoch 1.86), 179.3 ms
Minibatch loss: 2.714, learning rate: 0.009500
Minibatch error: 1.6%
Validation error: 1.4%
Step 1700 (epoch 1.98), 167.7 ms
Minibatch loss: 2.650, learning rate: 0.009500
Minibatch error: 0.0%
Validation error: 1.3%
Step 1800 (epoch 2.09), 161.6 ms
Minibatch loss: 2.665, learning rate: 0.009025
Minibatch error: 1.6%
Validation error: 1.3%
Step 1900 (epoch 2.21), 170.9 ms
Minibatch loss: 2.633, learning rate: 0.009025
Minibatch error: 1.6%
Validation error: 1.2%
Step 2000 (epoch 2.33), 170.6 ms
Minibatch loss: 2.616, learning rate: 0.009025
Minibatch error: 3.1%
Validation error: 1.2%
Step 2100 (epoch 2.44), 166.5 ms
Minibatch loss: 2.590, learning rate: 0.009025
Minibatch error: 1.6%
Validation error: 1.1%
Step 2200 (epoch 2.56), 173.8 ms
Minibatch loss: 2.573, learning rate: 0.009025
Minibatch error: 1.6%
Validation error: 1.1%
Step 2300 (epoch 2.68), 165.6 ms
Minibatch loss: 2.553, learning rate: 0.009025
Minibatch error: 1.6%
Validation error: 1.1%
Step 2400 (epoch 2.79), 166.6 ms
Minibatch loss: 2.505, learning rate: 0.009025
Minibatch error: 0.0%
Validation error: 1.1%
Step 2500 (epoch 2.91), 163.8 ms
Minibatch loss: 2.482, learning rate: 0.009025
Minibatch error: 0.0%
Validation error: 1.2%
Step 2600 (epoch 3.03), 171.7 ms
Minibatch loss: 2.459, learning rate: 0.008574
Minibatch error: 0.0%
Validation error: 1.2%
Step 2700 (epoch 3.14), 168.4 ms
Minibatch loss: 2.485, learning rate: 0.008574
Minibatch error: 1.6%
Validation error: 1.1%
Step 2800 (epoch 3.26), 169.3 ms
Minibatch loss: 2.427, learning rate: 0.008574
Minibatch error: 1.6%
Validation error: 1.3%
Step 2900 (epoch 3.37), 170.1 ms
Minibatch loss: 2.446, learning rate: 0.008574
Minibatch error: 3.1%
Validation error: 1.1%
Step 3000 (epoch 3.49), 167.7 ms
Minibatch loss: 2.398, learning rate: 0.008574
Minibatch error: 0.0%
Validation error: 1.1%
Step 3100 (epoch 3.61), 165.4 ms
Minibatch loss: 2.374, learning rate: 0.008574
Minibatch error: 1.6%
Validation error: 1.0%
Step 3200 (epoch 3.72), 164.1 ms
Minibatch loss: 2.332, learning rate: 0.008574
Minibatch error: 0.0%
Validation error: 1.1%
Step 3300 (epoch 3.84), 176.3 ms
Minibatch loss: 2.310, learning rate: 0.008574
Minibatch error: 0.0%
Validation error: 1.3%
Step 3400 (epoch 3.96), 167.7 ms
Minibatch loss: 2.301, learning rate: 0.008574
Minibatch error: 1.6%
Validation error: 1.0%
Step 3500 (epoch 4.07), 172.6 ms
Minibatch loss: 2.276, learning rate: 0.008145
Minibatch error: 0.0%
Validation error: 1.0%
Step 3600 (epoch 4.19), 166.1 ms
Minibatch loss: 2.257, learning rate: 0.008145
Minibatch error: 0.0%
Validation error: 0.9%
Step 3700 (epoch 4.31), 173.9 ms
Minibatch loss: 2.233, learning rate: 0.008145
Minibatch error: 0.0%
Validation error: 0.9%
Step 3800 (epoch 4.42), 167.2 ms
Minibatch loss: 2.237, learning rate: 0.008145
Minibatch error: 1.6%
Validation error: 0.9%
Step 3900 (epoch 4.54), 173.0 ms
Minibatch loss: 2.285, learning rate: 0.008145
Minibatch error: 3.1%
Validation error: 0.9%
Step 4000 (epoch 4.65), 173.5 ms
Minibatch loss: 2.210, learning rate: 0.008145
Minibatch error: 1.6%
Validation error: 1.0%
Step 4100 (epoch 4.77), 177.2 ms
Minibatch loss: 2.167, learning rate: 0.008145
Minibatch error: 0.0%
Validation error: 0.9%
Step 4200 (epoch 4.89), 179.8 ms
Minibatch loss: 2.216, learning rate: 0.008145
Minibatch error: 1.6%
Validation error: 1.0%
Step 4300 (epoch 5.00), 171.8 ms
Minibatch loss: 2.190, learning rate: 0.007738
Minibatch error: 1.6%
Validation error: 0.9%
Step 4400 (epoch 5.12), 169.9 ms
Minibatch loss: 2.159, learning rate: 0.007738
Minibatch error: 3.1%
Validation error: 1.0%
Step 4500 (epoch 5.24), 176.5 ms
Minibatch loss: 2.160, learning rate: 0.007738
Minibatch error: 4.7%
Validation error: 0.9%
Step 4600 (epoch 5.35), 174.5 ms
Minibatch loss: 2.095, learning rate: 0.007738
Minibatch error: 0.0%
Validation error: 0.9%
Step 4700 (epoch 5.47), 167.3 ms
Minibatch loss: 2.086, learning rate: 0.007738
Minibatch error: 1.6%
Validation error: 0.9%
Step 4800 (epoch 5.59), 171.5 ms
Minibatch loss: 2.056, learning rate: 0.007738
Minibatch error: 0.0%
Validation error: 1.0%
Step 4900 (epoch 5.70), 171.8 ms
Minibatch loss: 2.050, learning rate: 0.007738
Minibatch error: 0.0%
Validation error: 1.0%
Step 5000 (epoch 5.82), 166.5 ms
Minibatch loss: 2.142, learning rate: 0.007738
Minibatch error: 3.1%
Validation error: 1.0%
Step 5100 (epoch 5.93), 180.8 ms
Minibatch loss: 2.007, learning rate: 0.007738
Minibatch error: 0.0%
Validation error: 1.0%
Step 5200 (epoch 6.05), 172.6 ms
Minibatch loss: 2.054, learning rate: 0.007351
Minibatch error: 1.6%
Validation error: 0.9%
Step 5300 (epoch 6.17), 168.5 ms
Minibatch loss: 1.991, learning rate: 0.007351
Minibatch error: 1.6%
Validation error: 1.0%
Step 5400 (epoch 6.28), 163.2 ms
Minibatch loss: 1.955, learning rate: 0.007351
Minibatch error: 0.0%
Validation error: 0.8%
Step 5500 (epoch 6.40), 172.3 ms
Minibatch loss: 1.954, learning rate: 0.007351
Minibatch error: 0.0%
Validation error: 0.9%
Step 5600 (epoch 6.52), 173.5 ms
Minibatch loss: 1.926, learning rate: 0.007351
Minibatch error: 0.0%
Validation error: 0.8%
Step 5700 (epoch 6.63), 175.1 ms
Minibatch loss: 1.913, learning rate: 0.007351
Minibatch error: 0.0%
Validation error: 0.9%
Step 5800 (epoch 6.75), 182.6 ms
Minibatch loss: 1.900, learning rate: 0.007351
Minibatch error: 0.0%
Validation error: 0.9%
Step 5900 (epoch 6.87), 171.0 ms
Minibatch loss: 1.886, learning rate: 0.007351
Minibatch error: 0.0%
Validation error: 0.8%
Step 6000 (epoch 6.98), 163.7 ms
Minibatch loss: 1.882, learning rate: 0.007351
Minibatch error: 0.0%
Validation error: 1.0%
Step 6100 (epoch 7.10), 172.6 ms
Minibatch loss: 1.862, learning rate: 0.006983
Minibatch error: 0.0%
Validation error: 0.8%
Step 6200 (epoch 7.21), 170.8 ms
Minibatch loss: 1.844, learning rate: 0.006983
Minibatch error: 0.0%
Validation error: 0.9%
Step 6300 (epoch 7.33), 164.1 ms
Minibatch loss: 1.837, learning rate: 0.006983
Minibatch error: 0.0%
Validation error: 0.9%
Step 6400 (epoch 7.45), 175.3 ms
Minibatch loss: 1.881, learning rate: 0.006983
Minibatch error: 1.6%
Validation error: 0.8%
Step 6500 (epoch 7.56), 166.3 ms
Minibatch loss: 1.809, learning rate: 0.006983
Minibatch error: 0.0%
Validation error: 0.8%
Step 6600 (epoch 7.68), 166.6 ms
Minibatch loss: 1.821, learning rate: 0.006983
Minibatch error: 1.6%
Validation error: 0.8%
Step 6700 (epoch 7.80), 161.2 ms
Minibatch loss: 1.783, learning rate: 0.006983
Minibatch error: 0.0%
Validation error: 0.8%
Step 6800 (epoch 7.91), 162.8 ms
Minibatch loss: 1.771, learning rate: 0.006983
Minibatch error: 0.0%
Validation error: 0.8%
Step 6900 (epoch 8.03), 172.2 ms
Minibatch loss: 1.760, learning rate: 0.006634
Minibatch error: 0.0%
Validation error: 0.9%
Step 7000 (epoch 8.15), 168.9 ms
Minibatch loss: 1.777, learning rate: 0.006634
Minibatch error: 1.6%
Validation error: 0.8%
Step 7100 (epoch 8.26), 171.7 ms
Minibatch loss: 1.744, learning rate: 0.006634
Minibatch error: 0.0%
Validation error: 0.9%
Step 7200 (epoch 8.38), 168.3 ms
Minibatch loss: 1.757, learning rate: 0.006634
Minibatch error: 1.6%
Validation error: 0.8%
Step 7300 (epoch 8.49), 175.4 ms
Minibatch loss: 1.734, learning rate: 0.006634
Minibatch error: 1.6%
Validation error: 0.8%
Step 7400 (epoch 8.61), 162.9 ms
Minibatch loss: 1.703, learning rate: 0.006634
Minibatch error: 0.0%
Validation error: 0.9%
Step 7500 (epoch 8.73), 171.5 ms
Minibatch loss: 1.705, learning rate: 0.006634
Minibatch error: 0.0%
Validation error: 0.8%
Step 7600 (epoch 8.84), 162.1 ms
Minibatch loss: 1.807, learning rate: 0.006634
Minibatch error: 1.6%
Validation error: 0.8%
Step 7700 (epoch 8.96), 181.6 ms
Minibatch loss: 1.667, learning rate: 0.006634
Minibatch error: 0.0%
Validation error: 1.0%
Step 7800 (epoch 9.08), 166.3 ms
Minibatch loss: 1.662, learning rate: 0.006302
Minibatch error: 0.0%
Validation error: 0.9%
Step 7900 (epoch 9.19), 169.1 ms
Minibatch loss: 1.646, learning rate: 0.006302
Minibatch error: 0.0%
Validation error: 0.9%
Step 8000 (epoch 9.31), 167.1 ms
Minibatch loss: 1.654, learning rate: 0.006302
Minibatch error: 0.0%
Validation error: 0.9%
Step 8100 (epoch 9.43), 163.9 ms
Minibatch loss: 1.626, learning rate: 0.006302
Minibatch error: 0.0%
Validation error: 0.9%
Step 8200 (epoch 9.54), 166.5 ms
Minibatch loss: 1.634, learning rate: 0.006302
Minibatch error: 0.0%
Validation error: 0.9%
Step 8300 (epoch 9.66), 169.6 ms
Minibatch loss: 1.608, learning rate: 0.006302
Minibatch error: 0.0%
Validation error: 0.9%
Step 8400 (epoch 9.77), 173.0 ms
Minibatch loss: 1.596, learning rate: 0.006302
Minibatch error: 0.0%
Validation error: 0.8%
Step 8500 (epoch 9.89), 163.6 ms
Minibatch loss: 1.615, learning rate: 0.006302
Minibatch error: 1.6%
Validation error: 0.9%
Test error: 0.8%
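The four `...-ubyte.gz` files the script downloads use the simple IDX binary format: a big-endian header of 32-bit integers (a magic number, then one integer per dimension), followed by raw unsigned bytes. A minimal sketch of a reader for the image files — the function name and structure are my own for illustration, not taken from the TensorFlow example:

```python
import struct

def read_idx3_images(raw):
    """Parse the contents of an IDX3 image file such as
    train-images-idx3-ubyte (after gzip decompression)."""
    # Header: magic number, image count, rows, cols -- all big-endian uint32.
    magic, count, rows, cols = struct.unpack(">IIII", raw[:16])
    if magic != 2051:  # 0x00000803 marks an IDX3 file of unsigned bytes
        raise ValueError("not an IDX3 image file")
    pixels = raw[16:]
    # One byte per pixel, images stored back to back in row-major order.
    images = [pixels[i * rows * cols:(i + 1) * rows * cols]
              for i in range(count)]
    return images, rows, cols

# Example with a synthetic two-image file of 28x28 zero pixels:
raw = struct.pack(">IIII", 2051, 2, 28, 28) + b"\x00" * (2 * 28 * 28)
images, rows, cols = read_idx3_images(raw)
print(len(images), rows, cols)  # 2 28 28
```

The label files follow the same scheme with magic number 2049 and a single dimension (the label count), one byte per label.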

