PTCache
- class hybrid_learning.datasets.caching.PTCache(cache_root=None, device=None, sparse='smallest', dtype=None, before_put=None, after_load=None)[source]
Bases: FileCache

File cache that uses the torch saving and loading mechanism. All objects are moved to the given device during loading. For further details see the super class.

Note
The file sizes may become quite large for larger tensors. Consider a file cache applying compression if saving/loading times or storage space become a problem.
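To illustrate the mechanism described above, here is a minimal standalone sketch (assuming torch is installed and using a temporary directory as the hypothetical cache root; this is not the library's actual code) of storing a tensor as a `.pt` file and loading it back onto a device:

```python
import os
import tempfile

import torch

# Sketch of the caching mechanism described above: objects are written as
# .pt files via torch.save() and read back via torch.load(), then moved
# to the target device. Cache root and descriptor are illustrative.
cache_root = tempfile.mkdtemp()
device = torch.device("cpu")

obj = torch.arange(6, dtype=torch.float32).reshape(2, 3)

# put: the descriptor plus the '.pt' file ending under the cache root
filepath = os.path.join(cache_root, "sample_0.pt")
torch.save(obj, filepath)

# load: read the file back and move the tensor to the device
restored = torch.load(filepath).to(device)
assert torch.equal(restored, obj)
```

The actual `PTCache` wraps exactly this pattern behind `put(descriptor, obj)` and `load(descriptor)`.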
Public Data Attributes:
FILE_ENDING: The file ending to append to descriptors to get the file path.
Inherited from FileCache:
FILE_ENDING: The file ending to append to descriptors to get the file path.
Public Methods:
put_file(filepath, obj): Save obj to filepath using torch.save().
load_file(filepath): Load obj from filepath using torch.load().
Inherited from FileCache:
put(descriptor, obj): Store obj under the cache root using put_file().
load(descriptor): Load object from file descriptor + FILE_ENDING under cache root.
clear(): Remove all files from cache root.
descriptors(): Provide paths of all cached files with ending stripped and relative to cache root.
descriptor_to_fp(descriptor): Return the file path of the cache file for a given descriptor.
put_file(filepath, obj): Save obj to filepath using torch.save().
load_file(filepath): Load obj from filepath using torch.load().
Inherited from Cache:
put(descriptor, obj): Store obj under the cache root using put_file().
load(descriptor): Load object from file descriptor + FILE_ENDING under cache root.
put_batch(descriptors, objs): Store a batch of objs in this cache using according descriptors.
load_batch(descriptors[, return_none_if]): Load a batch of objects.
clear(): Remove all files from cache root.
descriptors(): Provide paths of all cached files with ending stripped and relative to cache root.
as_dict(): Return a dict with all cached descriptors and objects.
wrap(getitem[, descriptor_map]): Add this cache to the deterministic function getitem (which should have no side effects).
Special Methods:
__init__([cache_root, device, sparse, ...]): Init.
__repr__(): Return repr(self).
Inherited from FileCache:
__init__([cache_root, device, sparse, ...]): Init.
__repr__(): Return repr(self).
Inherited from Cache:
__repr__(): Return repr(self).
__add__(other): Return a (cascaded) cache which will first lookup self, then other, with default sync mode.
__radd__(other): Return a (cascaded) cache which will first lookup other, then self, with default sync mode.
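The cascaded lookup behind `__add__`/`__radd__` can be sketched with a toy in-memory cache (an illustration only; `DictCache` and `cascaded_load` are hypothetical stand-ins, not part of the library):

```python
class DictCache:
    """Toy in-memory cache standing in for Cache, for illustration only."""

    def __init__(self):
        self._store = {}

    def put(self, descriptor, obj):
        self._store[descriptor] = obj

    def load(self, descriptor):
        return self._store.get(descriptor)


def cascaded_load(first, second, descriptor):
    """Mirror (first + second).load(): consult `first`, fall back to `second`."""
    obj = first.load(descriptor)
    return obj if obj is not None else second.load(descriptor)


a, b = DictCache(), DictCache()
b.put("x", 42)
assert cascaded_load(a, b, "x") == 42  # found via the second cache
```

With `__radd__` the lookup order is simply reversed.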
- __init__(cache_root=None, device=None, sparse='smallest', dtype=None, before_put=None, after_load=None)[source]
Init.
- Parameters
cache_root (Optional[str]) – see cache_root
sparse (Optional[Union[bool, str]]) – sparse option of the default before_put
dtype (Optional[dtype]) – dtype option of the default before_put
before_put (Optional[Callable[[Any], Tensor]]) – see before_put; overrides sparse and dtype
after_load (Optional[Callable[[Any], Tensor]]) – see after_load
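Conceptually, the `sparse` and `dtype` options configure the default `before_put` transformation. The following sketch of such a transformation is an assumption about its behavior, not the library's actual code (in particular, the `'smallest'` default presumably picks whichever layout is smaller; only the boolean case is shown):

```python
import torch

def default_before_put(obj, sparse=True, dtype=torch.float16):
    """Hypothetical sketch of a before_put built from the sparse/dtype
    options: convert the object to a tensor, cast it to the requested
    dtype, and switch to a sparse layout if requested."""
    t = torch.as_tensor(obj)
    if dtype is not None:
        t = t.to(dtype)
    if sparse:
        t = t.to_sparse()
    return t

t = default_before_put([[0.0, 1.0], [0.0, 2.0]])
assert t.is_sparse and t.dtype == torch.float16
```

Passing an explicit `before_put` callable overrides both options, as stated above.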
- load_file(filepath)[source]
Load obj from filepath using torch.load() and move it to device before returning. (Note that the tensors may be sparse.)
- put_file(filepath, obj)[source]
Save obj to filepath using torch.save(). Move obj to device before saving.
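The put_file/load_file pair can be sketched for a sparse tensor (a standalone illustration with torch, not the methods' actual source; `weights_only=False` restores torch.load's legacy behavior for sparse layouts). As noted for load_file above, the tensor comes back in the layout it was saved in, so it may need `.to_dense()` before use:

```python
import os
import tempfile

import torch

# Illustrative file path; the real cache derives it from a descriptor.
filepath = os.path.join(tempfile.mkdtemp(), "mask.pt")
dense = torch.tensor([[0.0, 1.0], [0.0, 0.0]])

# put_file: save a sparse version of the tensor
torch.save(dense.to_sparse(), filepath)

# load_file: read the file and move the result to the device
loaded = torch.load(filepath, weights_only=False).to("cpu")

assert loaded.is_sparse
assert torch.equal(loaded.to_dense(), dense)
```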
- FILE_ENDING = '.pt'
The file ending to append to descriptors to get the file path. See descriptor_to_fp(). This is the standard for torch.save().
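The descriptor-to-file-path mapping can be sketched as follows (a stdlib-only stand-in: the real descriptor_to_fp() is a method on the cache instance taking only the descriptor, while cache_root is made explicit here for the sketch):

```python
import os

FILE_ENDING = ".pt"  # the standard file ending for torch.save(), as above

def descriptor_to_fp(cache_root: str, descriptor: str) -> str:
    """Illustrative stand-in for the descriptor-to-file-path mapping:
    append FILE_ENDING to the descriptor, relative to the cache root."""
    return os.path.join(cache_root, descriptor + FILE_ENDING)

fp = descriptor_to_fp("/tmp/cache", "img/sample_0")
assert fp == "/tmp/cache/img/sample_0.pt"
```

Stripping this ending again yields the descriptors returned by descriptors().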
- before_put: ToTensor
The transformation to call to obtain a tensor with desired properties for saving.
- device: Union[str, torch.device]
The device to load elements to. See load_file() and put_file().