loader

hybrid_learning.concepts.train_eval.train_eval_funs.loader(data=None, *, batch_size=None, shuffle=False, device=None, model=None, num_workers=None)[source]

Prepare and return a torch data loader with device-dependent multi-processing settings.
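A minimal usage sketch (the TensorDataset toy data, the device choice, and the worker count below are purely illustrative):

    import torch
    from torch.utils.data import TensorDataset
    from hybrid_learning.concepts.train_eval.train_eval_funs import loader

    # Purely illustrative toy dataset: 100 samples with 4 features each
    data = TensorDataset(torch.randn(100, 4), torch.randint(0, 2, (100,)))

    # Shuffled training loader with 4 worker processes, on GPU if available:
    train_loader = loader(
        data, batch_size=16, shuffle=True,
        device=torch.device("cuda" if torch.cuda.is_available() else "cpu"),
        num_workers=4)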

Note

If num_workers > 0 is used, the dataset must be pickleable, including its caches and all transformations. This is usually no problem, except for datasets or transformations holding references to models: these become unpicklable once the model's forward method has been called with gradients enabled, i.e. outside a with torch.set_grad_enabled(False): ... block. Make sure this is not the case.
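The following hypothetical ModelTransform sketches the safe pattern: its forward calls are wrapped so that the transformation, and with it the dataset, stays pickleable:

    import torch

    class ModelTransform:
        """Hypothetical transformation holding a model reference."""

        def __init__(self, model: torch.nn.Module):
            self.model = model

        def __call__(self, x: torch.Tensor) -> torch.Tensor:
            # Disable gradients for the forward pass; a forward call with
            # gradients enabled may render the held model unpicklable and
            # thus break multi-process loading (num_workers > 0):
            with torch.set_grad_enabled(False):
                return self.model(x)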

Parameters
  • data (Optional[Union[torch.utils.data.Dataset, BaseDataset]]) – the data to obtain a loader for

  • batch_size (Optional[int]) – the batch size to apply

  • shuffle (bool) – whether the loader should shuffle the data; typically, training data is shuffled and evaluation data is not

  • device (Optional[device]) – the desired device to work on (determines whether to pin memory); defaults to cuda if it is available

  • model (Optional[Module]) – if device is not given but model is, the device is determined from model

  • num_workers (Optional[int]) – if a positive integer, multi-process data loading is done with this number of worker processes (otherwise, blocking single-process data loading is used)
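How these parameters interact can be summarized in a minimal sketch; this is not the actual implementation, merely an illustration of the documented behavior using standard torch.utils.data.DataLoader keywords:

    import torch
    from torch.utils.data import DataLoader

    def loader_sketch(data, *, batch_size=None, shuffle=False,
                      device=None, model=None, num_workers=None):
        """Hypothetical re-implementation for illustration only."""
        # If no device is given, fall back to the model's device,
        # then to cuda if available:
        if device is None and model is not None:
            device = next(model.parameters()).device
        if device is None:
            device = torch.device(
                "cuda" if torch.cuda.is_available() else "cpu")
        # Memory pinning only pays off for CUDA devices:
        pin_memory = device.type == "cuda"
        # Positive num_workers enables multi-process loading;
        # 0 means a blocking single process:
        workers = num_workers if num_workers and num_workers > 0 else 0
        return DataLoader(data,
                          batch_size=batch_size if batch_size is not None else 1,
                          shuffle=shuffle,
                          num_workers=workers,
                          pin_memory=pin_memory)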

Returns

a data loader for the given data
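The returned loader is iterated like any other torch data loader; passing non_blocking=True below assumes memory pinning was enabled for a CUDA device (train_loader as in the usage sketch above):

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    for inputs, targets in train_loader:
        # With pinned memory, non_blocking=True allows asynchronous
        # host-to-device copies:
        inputs = inputs.to(device, non_blocking=True)
        targets = targets.to(device, non_blocking=True)
        ...  # forward pass, loss computation, etc.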