TensorDictCache
class hybrid_learning.datasets.caching.TensorDictCache(sparse=False, thread_safe=None)[source]

Bases: DictCache

In-memory cache specifically for torch tensors. Unlike a normal DictCache, it takes care to move a torch.Tensor to CPU before saving it to the shared memory, since sharing of CUDA tensors between sub-processes is currently not supported.

Note
Do not expect speed improvements if CUDA-based tensors are to be cached: copying tensors from and to CPU is quite costly, and comparable to, if not less efficient than, loading from file. Consider using a PTCache in such cases.
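A minimal usage sketch (the descriptor string and the equality check are illustrative, not part of the API):

    import torch
    from hybrid_learning.datasets.caching import TensorDictCache

    cache = TensorDictCache()
    tens = torch.rand(3, 4)            # works the same with a CUDA tensor
    cache.put("sample_0", tens)        # a CUDA tensor would be moved to CPU here
    restored = cache.load("sample_0")
    assert torch.allclose(restored, tens.cpu())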
Public Methods:
put(descriptor, obj)
    Store torch obj under key descriptor in an in-memory cache.
load(descriptor)
    Load and densify tensors from the in-memory cache.
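The sparse option together with the "densify" wording above suggests that mostly-zero tensors such as segmentation masks can be held in torch's sparse format; a sketch under that assumption:

    import torch
    from hybrid_learning.datasets.caching import TensorDictCache

    sparse_cache = TensorDictCache(sparse=True)
    mask = torch.zeros(64, 64)
    mask[10, 20] = 1.0
    sparse_cache.put("mask_0", mask)    # assumed to be stored sparsely
    dense_again = sparse_cache.load("mask_0")
    assert not dense_again.is_sparse    # densified on load
    assert torch.equal(dense_again, mask)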
Inherited from DictCache:
clear()
    Empty the cache dict.
descriptors()
    Return the keys (descriptors) of the cache dict.
Inherited from Cache:
put_batch(descriptors, objs)
    Store a batch of objs in this cache under the corresponding descriptors.
load_batch(descriptors[, return_none_if])
    Load a batch of objects (see the batch sketch after this list).
as_dict()
    Return a dict with all cached descriptors and objects.
wrap(getitem[, descriptor_map])
    Add this cache to the deterministic function getitem (which should have no side effects); see the wrap sketch after this list.
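A sketch of the inherited batch interface; it assumes load_batch returns the objects in the order of the given descriptors:

    import torch
    from hybrid_learning.datasets.caching import TensorDictCache

    cache = TensorDictCache()
    keys = ["img_0", "img_1"]
    vals = [torch.zeros(2, 2), torch.ones(2, 2)]
    cache.put_batch(keys, vals)
    imgs = cache.load_batch(keys)       # assumed: one tensor per key, in order
    print(sorted(cache.descriptors()))  # ['img_0', 'img_1']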
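And a sketch of wrap(), assuming that without a descriptor_map the call argument itself serves as cache descriptor; expensive_getitem is a hypothetical stand-in for e.g. a dataset's __getitem__:

    import torch
    from hybrid_learning.datasets.caching import TensorDictCache

    def expensive_getitem(idx: int) -> torch.Tensor:
        # hypothetical deterministic, side-effect-free loader
        return torch.full((2, 2), float(idx))

    cache = TensorDictCache()
    cached_getitem = cache.wrap(expensive_getitem)
    first = cached_getitem(3)   # computed and stored in the cache
    second = cached_getitem(3)  # assumed to be served from the cache
    assert torch.equal(first, second)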
Special Methods:

__init__([sparse, thread_safe])
    Init.
Inherited from Cache:
__repr__()
    Return repr(self).
__add__(other)
    Return a (cascaded) cache which will first look up self, then other, with default sync mode.
__radd__(other)
    Return a (cascaded) cache which will first look up other, then self, with default sync mode.
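A sketch of cascading via the + operator; the second operand stands in for a slower cache such as the file-backed PTCache mentioned in the note above:

    from hybrid_learning.datasets.caching import TensorDictCache

    fast = TensorDictCache()
    slow = TensorDictCache()  # stand-in for a slower cache such as a PTCache
    cascade = fast + slow     # a load() on cascade tries fast first, then slow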