Given descriptors, load model outputs either from the cache or generate them from input_batch.
descriptors (List[str]) – descriptors identifying the model outputs to load or generate.
input_batch (Any) – input batch from which outputs are generated when they are not found in the cache.
model (Callable[[Any], Union[Tensor, Dict[str, Tensor]]]) – model applied to input_batch; returns a tensor or a dictionary of named tensors.
cache (Optional[Cache]) – cache from which previously computed outputs are loaded, if available.
device (Optional[Union[str, device]]) – device used when generating the outputs.
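A minimal sketch of the cache-or-generate behaviour described above. The function name get_model_outputs and the dict-backed stand-in for Cache are assumptions for illustration; they are not part of the documented API.

```python
from typing import Any, Callable, Dict, List, Optional, Union

import torch
from torch import Tensor


def get_model_outputs(  # hypothetical name, not given in this section
    descriptors: List[str],
    input_batch: Any,
    model: Callable[[Any], Union[Tensor, Dict[str, Tensor]]],
    cache: Optional[Dict[str, Tensor]] = None,  # dict used as a stand-in for Cache
    device: Optional[Union[str, torch.device]] = None,
) -> Dict[str, Tensor]:
    """Return the requested outputs, reading from the cache when possible."""
    outputs: Dict[str, Tensor] = {}
    missing: List[str] = []

    # First try to satisfy each descriptor from the cache.
    for name in descriptors:
        if cache is not None and name in cache:
            outputs[name] = cache[name]
        else:
            missing.append(name)

    # Generate the remaining outputs from the input batch.
    if missing:
        with torch.no_grad():
            generated = model(input_batch)
        if isinstance(generated, Tensor):
            # A bare tensor is treated as the output for a single descriptor.
            generated = {missing[0]: generated}
        for name in missing:
            out = generated[name]
            if device is not None:
                out = out.to(device)
            outputs[name] = out
            if cache is not None:
                cache[name] = out  # store for later calls

    return outputs
```

Used this way, repeated calls with the same descriptors hit the cache and skip the forward pass; only uncached descriptors trigger a call to model.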