PTCache
- class hybrid_learning.datasets.caching.PTCache(cache_root=None, device=None, sparse='smallest', dtype=None, before_put=None, after_load=None)[source]
Bases: FileCache
File cache that uses the torch saving and loading mechanism. All objects are moved to the given device during loading. For further details see the super class.
Note: The file sizes may become quite large for large tensors. Consider a file cache that applies compression if saving/loading times or storage space become a problem.
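PTCache serializes via torch.save()/torch.load(); as a rough illustration of the descriptor-to-file caching pattern it inherits from FileCache, here is a minimal stand-in that uses pickle instead of torch. All names below are illustrative sketches of the pattern, not the library's actual implementation:

```python
import os
import pickle
import tempfile

class MiniFileCache:
    """Illustrative stand-in for the FileCache pattern (not the real class)."""
    FILE_ENDING = ".pkl"  # PTCache uses '.pt' together with torch.save()

    def __init__(self, cache_root):
        self.cache_root = cache_root
        os.makedirs(cache_root, exist_ok=True)

    def descriptor_to_fp(self, descriptor):
        # File path = descriptor + FILE_ENDING, relative to the cache root
        return os.path.join(self.cache_root, descriptor + self.FILE_ENDING)

    def put(self, descriptor, obj):
        with open(self.descriptor_to_fp(descriptor), "wb") as f:
            pickle.dump(obj, f)

    def load(self, descriptor):
        fp = self.descriptor_to_fp(descriptor)
        if not os.path.exists(fp):
            return None  # cache miss
        with open(fp, "rb") as f:
            return pickle.load(f)

cache = MiniFileCache(tempfile.mkdtemp())
cache.put("sample_0", {"label": 1})
restored = cache.load("sample_0")
```

The real PTCache additionally moves loaded tensors to its configured device, which pickle cannot illustrate.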
Public Data Attributes:
FILE_ENDING    The file ending to append to descriptors to get the file path.
Inherited from FileCache:
FILE_ENDING    The file ending to append to descriptors to get the file path.
Public Methods:
put_file(filepath, obj)    Save obj to filepath using torch.save().
load_file(filepath)    Load obj from filepath using torch.load().
Inherited from FileCache:
put(descriptor, obj)    Store obj under the cache root using put_file().
load(descriptor)    Load object from file descriptor + FILE_ENDING under cache root.
clear()    Remove all files from cache root.
descriptors()    Provide paths of all cached files with ending stripped and relative to cache root.
descriptor_to_fp(descriptor)    Return the file path of the cache file for a given descriptor.
put_file(filepath, obj)    Save obj to filepath using torch.save().
load_file(filepath)    Load obj from filepath using torch.load().
Inherited from Cache:
put(descriptor, obj)    Store obj under the cache root using put_file().
load(descriptor)    Load object from file descriptor + FILE_ENDING under cache root.
put_batch(descriptors, objs)    Store a batch of objs in this cache using the according descriptors.
load_batch(descriptors[, return_none_if])    Load a batch of objects.
clear()    Remove all files from cache root.
descriptors()    Provide paths of all cached files with ending stripped and relative to cache root.
as_dict()    Return a dict with all cached descriptors and objects.
wrap(getitem[, descriptor_map])    Add this cache to the deterministic function getitem (which should have no side effects).
Special Methods:
__init__([cache_root, device, sparse, ...])    Init.
__repr__()    Return repr(self).
Inherited from FileCache:
__init__([cache_root, device, sparse, ...])    Init.
__repr__()    Return repr(self).
Inherited from Cache:
__repr__()    Return repr(self).
__add__(other)    Return a (cascaded) cache which will first lookup self, then other, with default sync mode.
__radd__(other)    Return a (cascaded) cache which will first lookup other, then self, with default sync mode.
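The wrap() method listed above memoizes a deterministic getitem function through the cache. A hedged sketch of that pattern, with illustrative names and a plain dict standing in for the cache:

```python
def wrap_with_cache(getitem, cache, descriptor_map=str):
    """Memoize a deterministic, side-effect-free getitem through a cache.

    Illustrative sketch only: the real Cache.wrap() stores results under
    descriptor_map(index) and serves them from the cache on later calls.
    """
    def cached_getitem(index):
        descriptor = descriptor_map(index)
        obj = cache.get(descriptor)      # try the cache first
        if obj is None:
            obj = getitem(index)         # compute on a cache miss
            cache[descriptor] = obj      # store for subsequent calls
        return obj
    return cached_getitem

calls = []
def expensive_getitem(i):
    calls.append(i)
    return i * i

memo = {}
fast = wrap_with_cache(expensive_getitem, memo)
first, second = fast(3), fast(3)
```

Since getitem must be deterministic and side-effect free, serving the second call from the cache is observationally identical to recomputing it.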
- __init__(cache_root=None, device=None, sparse='smallest', dtype=None, before_put=None, after_load=None)[source]
Init.
- Parameters
cache_root (Optional[str]) – see cache_root
device (Optional[Union[str, torch.device]]) – see device
sparse (Optional[Union[bool, str]]) – sparse option of the default before_put
dtype (Optional[dtype]) – dtype option of the default before_put
before_put (Optional[Callable[[Any], Tensor]]) – see before_put; overrides sparse and dtype
after_load (Optional[Callable[[Any], Tensor]]) – see after_load
- load_file(filepath)[source]
Load obj from filepath using torch.load(). Move it to device before returning. (Note that the tensors may be sparse.)
- put_file(filepath, obj)[source]
Save obj to filepath using torch.save(). Move obj to device before saving.
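put_file() and load_file() boil down to torch.save() and torch.load() with a device move. Assuming PyTorch is installed, the mechanism looks roughly like this (a sketch of the documented behavior, not the library's exact code):

```python
import os
import tempfile
import torch

device = "cpu"  # PTCache would use its configured device attribute

def put_file(filepath, obj):
    # Move the tensor to the target device, then serialize with torch.save()
    torch.save(obj.to(device), filepath)

def load_file(filepath):
    # map_location places the deserialized tensors on the target device
    return torch.load(filepath, map_location=device)

fp = os.path.join(tempfile.mkdtemp(), "sample" + ".pt")
put_file(fp, torch.ones(2, 2))
restored = load_file(fp)
```

For tensors moved across devices (e.g. saved from CUDA), map_location is what keeps loading from failing on machines without the original device.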
- FILE_ENDING = '.pt'
The file ending to append to descriptors to get the file path. See descriptor_to_fp(). This is the standard for torch.save().
- before_put: ToTensor
The transformation to call to obtain a tensor with desired properties for saving.
- device: Union[str, torch.device]
The device to load elements to. See load_file() and put_file().
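Finally, the cascaded cache returned by __add__/__radd__ looks caches up in order, with the first hit winning. A minimal sketch of that lookup order (illustrative names, with plain dicts standing in for the caches; not the library's implementation):

```python
class MiniCascade:
    """Illustrative: query the first cache, fall back to the second."""
    def __init__(self, first, second):
        self.caches = [first, second]

    def load(self, descriptor):
        for cache in self.caches:
            obj = cache.get(descriptor)
            if obj is not None:
                return obj  # first hit wins
        return None  # miss in all caches

fast, slow = {"a": 1}, {"a": 99, "b": 2}
cascade = MiniCascade(fast, slow)
```

With `self + other`, self plays the role of the first cache; `__radd__` reverses the order.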