evaluate

hybrid_learning.concepts.train_eval.train_eval_funs.evaluate(model, kpi_fns, val_loader, prefix='val', callbacks=None, callback_context=None, ensemble_count=None)[source]

Evaluate the model with respect to the KPI functions in kpi_fns (e.g. loss and metrics) on the data from val_loader. The reduction method for the KPI values is the mean. The device used is the one the model lies on (see device_of()). Distributed models are not supported.

Parameters
  • model (Module) – the model to evaluate; its forward call must return a single tensor or a sequence of tensors

  • kpi_fns (Dict[str, Callable]) – dictionary with KPI IDs and evaluation functions for the KPIs to evaluate

  • val_loader (torch.utils.data.DataLoader) – data loader with data to evaluate on

  • prefix (str) – prefix to prepend to KPI names for the final pandas.Series naming

  • callbacks (Optional[List[Mapping[CallbackEvents, Callable]]]) – callbacks to feed with callback context after each batch and after finishing evaluation

  • callback_context (Optional[Dict[str, Any]]) – dict with any additional context to be handed over to the callbacks as keyword arguments

  • ensemble_count (Optional[int]) – if set to a value > 0, treat the output of the model as ensemble_count outputs stacked in dim 0

Returns

Series of all KPI values in the format: {<KPI-name>: <KPI value as float>}

Return type

Series
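The evaluation flow described above (apply each KPI function per batch, mean-reduce over batches, prepend the prefix to the KPI names) can be sketched without any framework dependencies. This is a simplified illustration, not the library's implementation: the name evaluate_sketch, the underscore separator between prefix and KPI name, and the plain-dict return value (the real function returns a pandas.Series and handles devices, callbacks, and ensembles) are all assumptions made for illustration.

```python
def evaluate_sketch(model, kpi_fns, val_loader, prefix="val"):
    """Simplified sketch of per-batch KPI evaluation with mean reduction.

    Hypothetical helper, not part of hybrid_learning: the separator
    between prefix and KPI name and the dict return type are assumptions.
    """
    totals = {name: 0.0 for name in kpi_fns}
    num_batches = 0
    for inputs, targets in val_loader:
        outputs = model(inputs)
        # Each KPI function maps (model output, target) to a scalar.
        for name, fn in kpi_fns.items():
            totals[name] += float(fn(outputs, targets))
        num_batches += 1
    # Mean reduction over batches; prefix is prepended to each KPI name.
    return {f"{prefix}_{name}": total / num_batches
            for name, total in totals.items()}
```

A minimal usage example with an identity "model" and a toy accuracy KPI: `evaluate_sketch(lambda x: x, {"acc": lambda out, tgt: float(out == tgt)}, [(1, 1), (2, 3)])` averages the per-batch accuracies 1.0 and 0.0 into 0.5 under the key "val_acc".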