Torch7 doc

torch

[res] torch.clamp([res,] tensor1, min_value, max_value)

--[[
Clamp all elements in the tensor into the range [min_value, max_value].
ie:
    y_i = x_i,        if min_value <= x_i <= max_value
    y_i = min_value,  if x_i < min_value
    y_i = max_value,  if x_i > max_value

z = torch.clamp(x, 0, 1) will return a new tensor
with the result of x bounded between 0 and 1.

torch.clamp(z, x, 0, 1) will put the result in z.

x:clamp(0, 1) will perform the clamp operation in place
(putting the result in x).

z:clamp(x, 0, 1) will put the result in z.
 ]]
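
A quick sanity check of the call styles above (a minimal sketch; the
values in the comments are what clamp should produce):

x = torch.Tensor{-0.5, 0.2, 1.7}
z = torch.clamp(x, 0, 1)     -- new tensor: 0, 0.2, 1
y = torch.Tensor()
torch.clamp(y, x, 0, 1)      -- result written into y
x:clamp(0, 1)                -- in place: x is now 0, 0.2, 1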

nn

nn.SplitTable(dim)    -- (N)dim Tensor -> table of (N-1)dim Tensors, split along dim
nn.JoinTable(dim)     -- table of (N-1)dim Tensors -> (N)dim Tensor, joined along dim
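
A minimal round-trip sketch (assuming the standard constructors, which
take the dimension to split or join along):

require 'nn'

x = torch.randn(3, 5)
t = nn.SplitTable(1):forward(x)   -- table of 3 tensors, each of size 5
y = nn.JoinTable(1):forward(t)    -- size 15: 1-dim tensors are concatenated, not stacked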
nngraph.Node:split(noutput)
--[[
Returns noutput new nodes; each one takes a single component of
the output of this node, in the order they are returned.
]]
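
For example (a hedged sketch, assuming nngraph is installed; CAddTable
simply sums the components of its input table):

require 'nngraph'

input = nn.Identity()()
-- SplitTable(1) outputs a table of two tensors for a 2xN input;
-- split(2) exposes each component as its own node
a, b = nn.SplitTable(1)(input):split(2)
out = nn.CAddTable()({a, b})
g = nn.gModule({input}, {out})
y = g:forward(torch.randn(2, 4))  -- y = row1 + row2, a tensor of size 4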

torch.Tensor

[result] view([result,] tensor, sizes)

Creates a view with different dimensions of the storage associated with tensor.
If result is not passed, then a new tensor is returned; otherwise its storage is
made to point to the storage of tensor.

sizes can either be a torch.LongStorage or numbers. If one of the dimensions
is -1, the size of that dimension is inferred from the rest of the elements.

x = torch.zeros(4)
> x:view(2,2)
 0 0
 0 0
[torch.DoubleTensor of dimension 2x2]

> x:view(2,-1)
 0 0
 0 0
[torch.DoubleTensor of dimension 2x2]

> x:view(torch.LongStorage{2,2})
 0 0
 0 0
[torch.DoubleTensor of dimension 2x2]

> x
 0
 0
 0
 0
[torch.DoubleTensor of dimension 4]
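
Since a view only reinterprets the storage, writes through the view are
visible in the original tensor:

v = x:view(2, 2)
v[1][1] = 7
print(x[1])  -- 7: v and x share the same storage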

[result] split([result,] tensor, size, [dim])

Splits Tensor tensor along dimension dim
into a result table of Tensors of size size (a number)
or less (in the case of the last Tensor). The sizes of the non-dim
dimensions remain unchanged. Internally, a series of
narrows is performed along dimension dim.
Argument dim defaults to 1.

If result is not passed, then a new table is returned, otherwise it
is emptied and reused.

Example:

x = torch.randn(3,4,5)

> x:split(2,1)
{
  1 : DoubleTensor - size: 2x4x5
  2 : DoubleTensor - size: 1x4x5
}

> x:split(3,2)
{
  1 : DoubleTensor - size: 3x3x5
  2 : DoubleTensor - size: 3x1x5
}

> x:split(2,3)
{
  1 : DoubleTensor - size: 3x4x2
  2 : DoubleTensor - size: 3x4x2
  3 : DoubleTensor - size: 3x4x1
}
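
Because the chunks are narrows, they share storage with tensor, so
writing into a chunk writes into tensor as well:

parts = x:split(2, 1)
parts[1]:fill(0)    -- zeroes the first two slices of x along dimension 1
print(x[1][1][1])   -- 0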

[Tensor] index(dim, index)

Returns a new Tensor which indexes the original Tensor along dimension dim
using the entries in torch.LongTensor index.
The returned Tensor has the same number of dimensions as the original Tensor.
The returned Tensor does not use the same storage as the original Tensor – see below for storing the result
in an existing Tensor.

x = torch.rand(5,5)
> x
 0.8020  0.7246  0.1204  0.3419  0.4385
 0.0369  0.4158  0.0985  0.3024  0.8186
 0.2746  0.9362  0.2546  0.8586  0.6674
 0.7473  0.9028  0.1046  0.9085  0.6622
 0.1412  0.6784  0.1624  0.8113  0.3949
[torch.DoubleTensor of dimension 5x5]

y = x:index(1,torch.LongTensor{3,1})
> y
 0.2746  0.9362  0.2546  0.8586  0.6674
 0.8020  0.7246  0.1204  0.3419  0.4385
[torch.DoubleTensor of dimension 2x5]

y:fill(1)
> y
 1  1  1  1  1
 1  1  1  1  1
[torch.DoubleTensor of dimension 2x5]

> x
 0.8020  0.7246  0.1204  0.3419  0.4385
 0.0369  0.4158  0.0985  0.3024  0.8186
 0.2746  0.9362  0.2546  0.8586  0.6674
 0.7473  0.9028  0.1046  0.9085  0.6622
 0.1412  0.6784  0.1624  0.8113  0.3949
[torch.DoubleTensor of dimension 5x5]

Note that the explicit index function is different from the indexing operator [].
The indexing operator [] is a syntactic shortcut for a series of select and narrow operations,
and therefore always returns a new view on the original tensor that shares the same storage.
The explicit index function, by contrast, cannot use the same storage.
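
A short contrast of the two behaviours (sketch):

x = torch.rand(5, 5)
row = x[1]                             -- select: a view sharing x's storage
row:fill(0)                            -- the first row of x is now zero
sel = x:index(1, torch.LongTensor{2})
sel:fill(1)                            -- a copy: x is unchanged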

It is possible to store the result into an existing Tensor with result:index(source, ...):

x = torch.rand(5,5)
> x
 0.8020  0.7246  0.1204  0.3419  0.4385
 0.0369  0.4158  0.0985  0.3024  0.8186
 0.2746  0.9362  0.2546  0.8586  0.6674
 0.7473  0.9028  0.1046  0.9085  0.6622
 0.1412  0.6784  0.1624  0.8113  0.3949
[torch.DoubleTensor of dimension 5x5]

y = torch.Tensor()
y:index(x,1,torch.LongTensor{3,1})
> y
 0.2746  0.9362  0.2546  0.8586  0.6674
 0.8020  0.7246  0.1204  0.3419  0.4385
[torch.DoubleTensor of dimension 2x5]

nn.Module

:training()
--[[
This sets the mode of the Module (or sub-modules) to train=true. 
This is useful for modules like Dropout that 
have a different behaviour during training vs evaluation.
]]
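
For instance, with Dropout (a minimal sketch; recent versions of nn pass
the input through unchanged in evaluation mode, which evaluate() selects):

require 'nn'

m = nn.Dropout(0.5)
m:training()
y1 = m:forward(torch.ones(4))  -- random: some entries zeroed, the rest scaled by 2
m:evaluate()
y2 = m:forward(torch.ones(4))  -- deterministic: all ones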
### Checking the PyTorch version in a Docker image

To confirm which PyTorch version is installed in a Docker container, start the container, run the Python interpreter, and import the `torch` library to read the version information:

```bash
docker run --rm -it apachecn0/pytorch-doc-zh python3
```

Once inside the Python environment, enter the following commands to print the PyTorch and CUDA versions currently in use[^1]:

```python
import torch
print(torch.__version__)
print(torch.version.cuda)
```

To verify that the GPU is available, continue with:

```python
print(torch.cuda.is_available())
```

This procedure determines which versions of PyTorch and CUDA are preinstalled in the image.

### Installing a specific PyTorch version in a Docker image

To install a particular version, use Conda or pip when building the Docker image. The following example, based on Ubuntu 18.04, installs a specific PyTorch version through Conda[^4].

Create a new Dockerfile as follows:

```Dockerfile
FROM ubuntu:18.04

# Avoid interactive configuration tools interrupting the build
ENV DEBIAN_FRONTEND=noninteractive

# Update the package list and install the required dependencies
RUN apt-get update && \
    apt-get install -y wget bzip2 ca-certificates libglib2.0-0 libxext6 libsm6 libxrender1 git mercurial subversion

# Download the Miniconda script and install the distribution
RUN cd /tmp && \
    wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh && \
    bash Miniconda3-latest-Linux-x86_64.sh -bfp /opt/conda && \
    rm Miniconda3-latest-Linux-x86_64.sh

# Add conda to PATH so it can be invoked directly later
ENV PATH=/opt/conda/bin:$PATH

# Create a new conda environment and activate it on login
RUN conda create -n pytorch_env python=3.9 -y && \
    echo "source activate pytorch_env" >> ~/.bashrc

# Switch the build shell to bash for the steps below
SHELL ["/bin/bash", "-c"]

# Use conda to install a specific PyTorch version
# (here, PyTorch 1.13.1 with CUDA 11.7)
RUN conda install -c pytorch pytorch==1.13.1 torchvision torchaudio cudatoolkit=11.7 -y

# Set the default entry point to a Bash shell
CMD [ "/bin/bash" ]
```

This Dockerfile builds a new image carrying the chosen PyTorch version and its matching CUDA toolchain. Adjust the `pytorch`, `torchaudio`, and `cudatoolkit` arguments to match the exact versions you need.