auc_for

hybrid_learning.experimentation.fuzzy_exp.fuzzy_exp_eval.auc_for(metrics_pd, model_key, logic_type, formula, formula_name_col='formula_attrs', x='false_positive_rate', y='recall', other_metrics={'f1score': 'F1', 'precision': 'precision', 'recall': 'recall'}, other_by='img_mon_thresh', other_at=(0.1, 0.5, 0.9), precision=3)

Collect the area under the curve (AUC) of the x-y plot for the given experiment series. The output dictionary may contain further values at specific points on the curve: the values of the other_metrics are collected at the values other_at of the setting other_by (by default, F1 score, precision, and recall at the values 0.1, 0.5, and 0.9 of the threshold img_mon_thresh).
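The AUC computation itself is not spelled out here; the following is a minimal sketch under stated assumptions, not the library's implementation. It assumes a trapezoidal-rule integral over the x-sorted curve and nearest-row sampling of the other_metrics at each other_at value; the filtering of metrics_pd down to the given model_key, logic_type, and formula is omitted:

    import numpy as np
    import pandas as pd

    def auc_sketch(metrics_pd: pd.DataFrame, x='false_positive_rate',
                   y='recall', other_metrics=None,
                   other_by='img_mon_thresh', other_at=(0.1, 0.5, 0.9),
                   precision=3):
        """Sketch of the AUC-plus-samples collection (assumptions above)."""
        other_metrics = other_metrics or {'f1score': 'F1',
                                          'precision': 'precision',
                                          'recall': 'recall'}
        # Sort by x so the trapezoidal rule integrates a proper curve:
        curve = metrics_pd.sort_values(x)
        out = {'auc': float(np.trapz(curve[y], curve[x])),
               'num_points': len(curve)}
        # Sample each other metric at the row whose other_by value is
        # closest to the requested sampling point:
        for val in other_at:
            row = metrics_pd.iloc[(metrics_pd[other_by] - val).abs().argmin()]
            for col, pretty in other_metrics.items():
                out[f'{pretty}@{round(val, precision)}'] = float(row[col])
        return out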

Parameters
  • metrics_pd (DataFrame) – DataFrame with metric results (see get_metrics())

  • model_key (str) – the model key and directory name

  • logic_type (str) – the logic type

  • formula (str) – the formula specifier

  • x (str) – column with x-values in the x-y-plot

  • y (str) – column with y-values in the x-y-plot

  • other_metrics (Dict[str, str]) – dictionary whose keys are column names of further metrics to sample point values from; the dict values are pretty names thereof, used as keys in the output dict

  • formula_name_col (str) – column holding the formula specifiers to match against formula

  • precision (int) – display precision (number of decimal places) used to format the other_by values into the output dict keys for the other_metrics values

  • other_by (str) – column of the setting at whose values the other_metrics are sampled (by default the threshold img_mon_thresh)

  • other_at (Sequence[float]) – values of other_by at which to sample the other_metrics

Returns

dictionary of the form {'auc': float, 'num_points': int, '<other_metric>@<other_by_value>': float}

Return type

Dict[str, float]
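
A hedged usage example: the metrics DataFrame is assumed to come from get_metrics(), and the concrete model_key, logic_type, and formula values below are placeholders, not names from the library:

    from hybrid_learning.experimentation.fuzzy_exp.fuzzy_exp_eval import auc_for

    # metrics_pd: DataFrame from get_metrics(); assumed to hold the columns
    # 'false_positive_rate', 'recall', 'f1score', 'precision',
    # 'img_mon_thresh', and 'formula_attrs'
    result = auc_for(metrics_pd, model_key='mask_rcnn',
                     logic_type='lukasiewicz', formula='pedestrian_formula')

    result['auc']          # area under the recall-over-FPR curve
    result['num_points']   # number of curve points integrated over
    result['F1@0.5']       # F1 score sampled at img_mon_thresh == 0.5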