PTCache

class hybrid_learning.datasets.caching.PTCache(cache_root=None, device=None, sparse='smallest', dtype=None, before_put=None, after_load=None)[source]

Bases: FileCache

File cache that uses torch's saving and loading mechanisms. All objects are moved to the given device during loading. For further details see the super class.

Note

File sizes may become quite large for larger tensors. Consider a file cache that applies compression if saving/loading times or storage space become a problem.
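As an illustration of the put/load round trip this cache family provides, here is a minimal dependency-free sketch. It uses pickle in place of torch.save()/torch.load() (which PTCache actually uses), and all function names besides the documented behavior are illustrative stand-ins, not the library's API:

```python
import os
import pickle
import tempfile

# Illustrative stand-in for PTCache's '.pt' ending; pickle is used here
# only so the sketch runs without torch.
FILE_ENDING = ".pkl"

def put(cache_root, descriptor, obj):
    """Store obj under cache_root, using the descriptor as file name."""
    path = os.path.join(cache_root, descriptor + FILE_ENDING)
    with open(path, "wb") as f:
        pickle.dump(obj, f)

def load(cache_root, descriptor):
    """Load the object stored for descriptor, or None if not cached."""
    path = os.path.join(cache_root, descriptor + FILE_ENDING)
    if not os.path.exists(path):
        return None
    with open(path, "rb") as f:
        return pickle.load(f)

with tempfile.TemporaryDirectory() as root:
    put(root, "sample_0", [1.0, 2.0, 3.0])
    restored = load(root, "sample_0")   # round trip succeeds
    missing = load(root, "sample_1")    # uncached descriptor yields None
```

In the real PTCache, the stored objects are tensors, and load additionally moves them to the configured device.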

Public Data Attributes:

FILE_ENDING

The file ending to append to descriptors to get the file path.

Inherited from FileCache

FILE_ENDING

The file ending to append to descriptors to get the file path.

Public Methods:

put_file(filepath, obj)

Save obj to filepath using torch.save().

load_file(filepath)

Load obj from filepath using torch.load().

Inherited from FileCache

put(descriptor, obj)

Store obj under the cache root using put_file().

load(descriptor)

Load object from file descriptor + FILE_ENDING under cache root.

clear()

Remove all files from cache root.

descriptors()

Provide paths of all cached files with ending stripped and relative to cache root.

descriptor_to_fp(descriptor)

Return the file path of the cache file for a given descriptor.

put_file(filepath, obj)

Save obj to filepath using torch.save().

load_file(filepath)

Load obj from filepath using torch.load().

Inherited from Cache

put(descriptor, obj)

Store obj under the cache root using put_file().

load(descriptor)

Load object from file descriptor + FILE_ENDING under cache root.

put_batch(descriptors, objs)

Store a batch of objs in this cache using the corresponding descriptors.

load_batch(descriptors[, return_none_if])

Load a batch of objects.

clear()

Remove all files from cache root.

descriptors()

Provide paths of all cached files with ending stripped and relative to cache root.

as_dict()

Return a dict with all cached descriptors and objects.

wrap(getitem[, descriptor_map])

Add this cache to the deterministic function getitem (which should have no side effects).
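Conceptually, wrapping a deterministic getitem means results are looked up in the cache first and only computed (and stored) on a miss. The dict-backed cache and the names below are illustrative stand-ins for the actual implementation:

```python
calls = {"count": 0}
cache = {}  # stand-in for a Cache instance

def getitem(i):
    """A deterministic, side-effect-free function to be cached."""
    calls["count"] += 1
    return i * i

def wrapped(i, descriptor_map=str):
    """Look up the cache first; compute and store only on a miss."""
    key = descriptor_map(i)
    if key in cache:
        return cache[key]
    obj = getitem(i)
    cache[key] = obj
    return obj

first = wrapped(4)   # miss: computes and stores the result
second = wrapped(4)  # hit: served from the cache, getitem not called again
```

Because getitem must be deterministic and side-effect-free, serving the cached value is indistinguishable from recomputing it.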

Special Methods:

__init__([cache_root, device, sparse, ...])

Init.

__repr__()

Return repr(self).

Inherited from FileCache

__init__([cache_root, device, sparse, ...])

Init.

__repr__()

Return repr(self).

Inherited from Cache

__repr__()

Return repr(self).

__add__(other)

Return a (cascaded) cache which will first look up self, then other, with the default sync mode.

__radd__(other)

Return a (cascaded) cache which will first look up other, then self, with the default sync mode.
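The lookup order of such a cascade can be sketched with two dict-backed stand-in caches: a load tries the first cache and falls back to the second. (Only the lookup order is sketched here; the actual sync-mode behavior of the cascade is not reproduced.)

```python
first, second = {}, {}  # stand-ins for two Cache instances

def cascaded_load(descriptor):
    """Look up `first`, then fall back to `second` (the order of self + other)."""
    if descriptor in first:
        return first[descriptor]
    return second.get(descriptor)

first["a"] = "from-first"
second["a"] = "from-second"
second["b"] = "from-second"

hit_first = cascaded_load("a")   # found in the first cache, second is shadowed
hit_second = cascaded_load("b")  # falls back to the second cache
miss = cascaded_load("c")        # in neither cache
```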


__init__(cache_root=None, device=None, sparse='smallest', dtype=None, before_put=None, after_load=None)[source]

Init.

Parameters
__repr__()[source]

Return repr(self).

load_file(filepath)[source]

Load obj from filepath using torch.load() and move it to device before returning. (Note that the loaded tensors may be sparse.)

Parameters

filepath (str) –

Return type

Optional[Tensor]

put_file(filepath, obj)[source]

Save obj to filepath using torch.save(). Move obj to device before saving.

Parameters
FILE_ENDING = '.pt'

The file ending to append to descriptors to get the file path. See descriptor_to_fp(). This is the standard file ending for torch.save().
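The descriptor-to-path mapping implied by FILE_ENDING and descriptor_to_fp() can be sketched as follows; only the '.pt' ending is documented, so the exact joining logic is an assumption:

```python
import os

FILE_ENDING = ".pt"  # documented ending used by PTCache

def descriptor_to_fp(cache_root, descriptor):
    """Map a descriptor to its cache file path (assumed: root / descriptor + ending)."""
    return os.path.join(cache_root, descriptor + FILE_ENDING)

path = descriptor_to_fp("/tmp/cache", "img_001")
```

Conversely, descriptors() strips this ending again when listing cached entries relative to the cache root.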

after_load: ToTensor

The transformation to call on loaded tensors.

before_put: ToTensor

The transformation to call to obtain a tensor with desired properties for saving.

device: Union[str, torch.device]

The device to load elements to. See load_file() and put_file().