ToTensor

class hybrid_learning.datasets.transforms.image_transforms.ToTensor(device=None, dtype=None, sparse=None, requires_grad=None)[source]

Bases: ImageTransform

Turn objects into tensors, or move tensors to a given device or dtype. The operation avoids copying data where possible. For details see torch.as_tensor().

Note

The default return type for PIL.Image.Image instances is a tensor of dtype torch.float with value range in [0, 1].
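As a sketch of the underlying behavior (plain torch/numpy, not the library class itself): torch.as_tensor() shares memory with its source where possible, and the documented float range of [0, 1] for uint8 image data corresponds to a division by 255:

```python
import numpy as np
import torch

# torch.as_tensor avoids a copy where possible: for a CPU numpy array
# of matching dtype, the resulting tensor shares the array's memory.
arr = np.zeros((2, 2), dtype=np.float32)
tens = torch.as_tensor(arr)
arr[0, 0] = 1.0
print(tens[0, 0].item())  # 1.0 -- memory is shared, no copy was made

# For uint8 image data, the documented default is a float tensor
# with value range [0, 1]; the scaling is a division by 255:
img = np.array([[0, 128, 255]], dtype=np.uint8)
scaled = torch.as_tensor(img).to(torch.float32) / 255.0
print(scaled.max().item())  # 1.0
```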

Public Data Attributes:

DTYPE_SIZES

settings

Settings.

Inherited from Transform

IDENTITY_CLASS

The identity class or classes for composition / addition.


Public Methods:

apply_to(tens)

Create tensor from tens with configured device and dtype.


Special Methods:

__init__([device, dtype, sparse, requires_grad])

Inherited from ImageTransform

__call__(img)

Application of transformation.

Inherited from Transform

__repr__()

Return repr(self).

__eq__(other)

Return self==value.

__copy__()

Return a shallow copy of self using settings.

__add__(other)

Return a flat composition of self with other.

__radd__(other)

Return a flat composition of other and self.


__init__(device=None, dtype=None, sparse=None, requires_grad=None)[source]

apply_to(tens)[source]

Create tensor from tens with configured device and dtype. See device and dtype.

Parameters

tens (Union[Tensor, ndarray, Image]) –

Return type

Tensor

classmethod is_sparse_smaller(tens)[source]

Given a tensor, return whether its sparse COO representation occupies less storage than the dense one. With \(s_{ind}, s_{val}\) the sizes in bit of one index resp. value entry, \(d\) the dimension of the tensor, and \(p\) the proportion of non-zero elements, the storage sizes are

\[\begin{split}\text{sparse size:}\quad p \cdot \text{numel} \cdot (d\cdot s_{ind} + s_{val}) \\ \text{dense size:}\quad \text{numel} \cdot s_{val}\end{split}\]

so the sparse representation is smaller exactly if

\[p < \frac {s_{val}} {d \cdot s_{ind} + s_{val}}\]
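The threshold can be evaluated with a short helper. This is a hypothetical illustration, not part of the library, assuming 32-bit values and 64-bit COO indices as listed in DTYPE_SIZES:

```python
# Hypothetical helper evaluating the sparse-vs-dense threshold; bit
# sizes as in DTYPE_SIZES, with int64 (64-bit) COO indices assumed.
def sparse_is_smaller(nonzero_proportion, ndim, val_bits=32, ind_bits=64):
    """True iff sparse COO storage beats dense storage."""
    return nonzero_proportion < val_bits / (ndim * ind_bits + val_bits)

# For a 2D float32 tensor the break-even point is 32/160 = 20% non-zeros:
print(sparse_is_smaller(0.1, ndim=2))  # True
print(sparse_is_smaller(0.5, ndim=2))  # False
```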

static to_sparse(tens, device=None, dtype=None, requires_grad=None)[source]

Convert dense tensor tens to sparse tensor. Scalars are not sparsified but returned as normal tensors.

Return type

Tensor
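A minimal sketch of the dense-to-sparse conversion using plain torch, with the documented scalar exception modeled explicitly (the guard here is an illustration, not the library's code):

```python
import torch

# Dense-to-sparse COO conversion as torch itself provides it; the
# helper documented above additionally leaves 0-dim (scalar) tensors
# as normal dense tensors.
dense = torch.tensor([[0.0, 1.0], [0.0, 0.0]])
sp = dense.to_sparse()
print(sp.is_sparse)                # True
print(sp.to_dense().equal(dense))  # True -- round-trip preserves values

scalar = torch.tensor(3.0)
out = scalar if scalar.dim() == 0 else scalar.to_sparse()
print(out.is_sparse)  # False -- scalars are not sparsified
```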

classmethod to_tens(tens, device=None, dtype=None, sparse=None, requires_grad=None)[source]

See apply_to and __init__.

DTYPE_SIZES: Dict[torch.dtype, int] = {torch.bool: 1, torch.uint8: 8, torch.int8: 8, torch.int16: 16, torch.float16: 16, torch.bfloat16: 16, torch.int32: 32, torch.float32: 32, torch.complex32: 32, torch.int64: 64, torch.float64: 64, torch.complex64: 64, torch.complex128: 128}
device: torch.device

The device to move tensors to.

dtype: torch.dtype

The dtype created tensors shall have.

requires_grad: Optional[bool]

Whether the new tensor should require grad.

property settings: Dict[str, Any]

Settings.

sparse: Optional[bool]

Whether the tensor should be sparse or dense or dynamically choose the smaller one (option ‘smallest’). No modification is made if set to None.
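The three modes of the sparse setting can be sketched as a hypothetical dispatch (function name and defaults are illustrative assumptions, not the library's implementation):

```python
import torch

# Hypothetical sketch: True/False force sparse/dense, 'smallest'
# compares the two storage sizes (float32 values and 64-bit COO
# indices assumed), and None applies no modification.
def apply_sparse_setting(tens, sparse=None, val_bits=32, ind_bits=64):
    if sparse is None:
        return tens
    if sparse == 'smallest':
        p = (tens != 0).float().mean().item()  # proportion of non-zeros
        sparse = p < val_bits / (tens.dim() * ind_bits + val_bits)
    if sparse:
        return tens if tens.is_sparse else tens.to_sparse()
    return tens.to_dense() if tens.is_sparse else tens

mostly_zero = torch.zeros(10, 10)
mostly_zero[0, 0] = 1.0  # p = 0.01 < 32/160 = 0.2, so sparse wins
print(apply_sparse_setting(mostly_zero, sparse='smallest').is_sparse)  # True
```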