torch
[res] torch.clamp([res,] tensor1, min_value, max_value)
--[[
Clamp all elements in the tensor into the range [min_value, max_value].
i.e.:
y_i = x_i,        if min_value <= x_i <= max_value
y_i = min_value,  if x_i < min_value
y_i = max_value,  if x_i > max_value
z = torch.clamp(x, 0, 1) will return a new tensor
with the values of x bounded between 0 and 1.
torch.clamp(z, x, 0, 1) will put the result in z.
x:clamp(0, 1) will perform the clamp operation in place
(putting the result in x).
z:clamp(x, 0, 1) will put the result in z.
]]
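-- A minimal sketch of both calling forms:
x = torch.randn(4)        -- some values will fall outside [0, 1]
y = torch.clamp(x, 0, 1)  -- new tensor, values bounded to [0, 1]
x:clamp(0, 1)             -- in-place; x itself is now bounded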
nn
nn.SplitTable(dim)  -- (N)dim Tensor -> table of (N-1)dim Tensors (selected along dim)
nn.JoinTable(dim)   -- table of Tensors -> one Tensor, concatenated along dim
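-- A minimal sketch; note that JoinTable concatenates along an existing
-- dimension of its inputs, so it is not an exact inverse of SplitTable:
s = nn.SplitTable(1)
t = s:forward(torch.randn(2, 3))  -- t = { Tensor of size 3, Tensor of size 3 }
j = nn.JoinTable(1)
y = j:forward(t)                  -- concatenated along dim 1: Tensor of size 6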
--[[
Returns noutput new nodes; each one takes a single component
of this node's output, in the order they are returned.
]]
nngraph.Node:split(noutput)
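-- A sketch of the common LSTM-style pattern; rnn_size and all_input_sums
-- are hypothetical placeholders for an existing graph node and its width.
-- SplitTable yields a node whose output is a table of 4 Tensors;
-- :split(4) then gives one graph node per table entry:
local reshaped = nn.Reshape(4, rnn_size)(all_input_sums)
local n1, n2, n3, n4 = nn.SplitTable(2)(reshaped):split(4)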
torch.Tensor
[result] view([result,] tensor, sizes)
Creates a view with different dimensions of the storage associated with tensor.
If result is not passed, then a new tensor is returned; otherwise its storage
is made to point to the storage of tensor.
sizes can either be a torch.LongStorage or numbers. If one of the dimensions
is -1, the size of that dimension is inferred from the rest of the elements.
x = torch.zeros(4)
> x:view(2,2)
0 0
0 0
[torch.DoubleTensor of dimension 2x2]
> x:view(2,-1)
0 0
0 0
[torch.DoubleTensor of dimension 2x2]
> x:view(torch.LongStorage{2,2})
0 0
0 0
[torch.DoubleTensor of dimension 2x2]
> x
0
0
0
0
[torch.DoubleTensor of dimension 4]
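-- Since a view shares storage with the original tensor,
-- writing through the view is visible in x:
v = x:view(2, 2)
v:fill(1)
> x
1
1
1
1
[torch.DoubleTensor of dimension 4]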
[result] split([result,] tensor, size, [dim])
Splits Tensor tensor along dimension dim into a result table of Tensors of
size size (a number) or less (in the case of the last Tensor). The sizes of
the non-dim dimensions remain unchanged. Internally, a series of narrows are
performed along dimension dim. Argument dim defaults to 1.
If result is not passed, then a new table is returned; otherwise it is
emptied and reused.
Example:
x = torch.randn(3,4,5)
> x:split(2,1)
{
1 : DoubleTensor - size: 2x4x5
2 : DoubleTensor - size: 1x4x5
}
> x:split(3,2)
{
1 : DoubleTensor - size: 3x3x5
2 : DoubleTensor - size: 3x1x5
}
> x:split(2,3)
{
1 : DoubleTensor - size: 3x4x2
2 : DoubleTensor - size: 3x4x2
3 : DoubleTensor - size: 3x4x1
}
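-- Because the pieces are narrows, they share storage with x; for example:
parts = x:split(2, 1)
parts[1]:fill(0)  -- also zeroes the first two dim-1 slices of x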
[Tensor] index(dim, index)
Returns a new Tensor which indexes the original Tensor along dimension dim
using the entries in the torch.LongTensor index.
The returned Tensor has the same number of dimensions as the original Tensor.
The returned Tensor does not use the same storage as the original Tensor;
see below for storing the result in an existing Tensor.
x = torch.rand(5,5)
> x
0.8020 0.7246 0.1204 0.3419 0.4385
0.0369 0.4158 0.0985 0.3024 0.8186
0.2746 0.9362 0.2546 0.8586 0.6674
0.7473 0.9028 0.1046 0.9085 0.6622
0.1412 0.6784 0.1624 0.8113 0.3949
[torch.DoubleTensor of dimension 5x5]
y = x:index(1,torch.LongTensor{3,1})
> y
0.2746 0.9362 0.2546 0.8586 0.6674
0.8020 0.7246 0.1204 0.3419 0.4385
[torch.DoubleTensor of dimension 2x5]
y:fill(1)
> y
1 1 1 1 1
1 1 1 1 1
[torch.DoubleTensor of dimension 2x5]
> x
0.8020 0.7246 0.1204 0.3419 0.4385
0.0369 0.4158 0.0985 0.3024 0.8186
0.2746 0.9362 0.2546 0.8586 0.6674
0.7473 0.9028 0.1046 0.9085 0.6622
0.1412 0.6784 0.1624 0.8113 0.3949
[torch.DoubleTensor of dimension 5x5]
Note that the explicit index function is different from the indexing
operator []. The indexing operator [] is a syntactic shortcut for a series
of select and narrow operations, so it always returns a new view on the
original tensor that shares the same storage. The explicit index function,
however, cannot use the same storage.
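-- A quick sketch of the difference, reusing the 5x5 x above:
row = x[3]                           -- select: a view sharing x's storage
row:fill(0)                          -- the third row of x is now zero
z = x:index(1, torch.LongTensor{3})  -- copy: has its own storage
z:fill(7)                            -- x is unchanged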
It is possible to store the result in an existing Tensor with result:index(source, ...):
x = torch.rand(5,5)
> x
0.8020 0.7246 0.1204 0.3419 0.4385
0.0369 0.4158 0.0985 0.3024 0.8186
0.2746 0.9362 0.2546 0.8586 0.6674
0.7473 0.9028 0.1046 0.9085 0.6622
0.1412 0.6784 0.1624 0.8113 0.3949
[torch.DoubleTensor of dimension 5x5]
y = torch.Tensor()
y:index(x,1,torch.LongTensor{3,1})
> y
0.2746 0.9362 0.2546 0.8586 0.6674
0.8020 0.7246 0.1204 0.3419 0.4385
[torch.DoubleTensor of dimension 2x5]
nn.Module
:training()
--[[
Sets the mode of the Module (and its sub-modules) to train = true.
This is useful for modules like Dropout that
behave differently during training and evaluation.
]]
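-- A minimal sketch with Dropout (the counterpart :evaluate() sets train = false):
m = nn.Dropout(0.5)
m:training()
y = m:forward(torch.ones(4))  -- entries are 0 or 2: inverted dropout scales by 1/(1-p)
m:evaluate()
y = m:forward(torch.ones(4))  -- identity: all ones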