BalancedPenaltyReducedFocalLoss

class hybrid_learning.concepts.train_eval.kpis.batch_kpis.BalancedPenaltyReducedFocalLoss(factor_pos_class=0.5, alpha=2, beta=4)[source]

Bases: Module

Balanced version of the penalty reduced focal loss from CenterNet. Formula (with focal loss parameters \(\alpha,\beta\) and balancing factor \(b\in[0,1]\)):

\[L(M, M^{pred}) = - \frac{1}{\#(M \equiv 1)} \cdot \left( b\cdot \sum_{xy,\, M_{xy} \equiv 1} (1-M_{xy}^{pred})^\alpha \log(M_{xy}^{pred}) + (1-b)\cdot \sum_{xy,\, M_{xy} < 1} (1-M_{xy})^\beta (M_{xy}^{pred})^\alpha \log(1-M_{xy}^{pred}) \right)\]
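
For illustration, a minimal sketch of this computation as a standalone function (the helper name, the clamping guard against \(\log(0)\), and the assumption of probability-valued heatmaps in \((0,1)\) are not taken from the library):

    import torch

    def balanced_penalty_reduced_focal_loss(pred, target,
                                            factor_pos_class=0.5,
                                            alpha=2, beta=4, eps=1e-6):
        # Guard against log(0); pred is expected to hold probabilities in (0, 1).
        pred = pred.clamp(eps, 1 - eps)
        pos_mask = (target == 1).float()   # pixels with M_xy == 1
        neg_mask = 1.0 - pos_mask          # pixels with M_xy < 1

        # Positive-class term: (1 - M^pred)^alpha * log(M^pred)
        pos_term = (1 - pred) ** alpha * torch.log(pred) * pos_mask
        # Negative-class term: (1 - M)^beta * (M^pred)^alpha * log(1 - M^pred)
        neg_term = (1 - target) ** beta * pred ** alpha * torch.log(1 - pred) * neg_mask

        num_pos = pos_mask.sum().clamp(min=1)  # #(M == 1), guarded against 0
        b = factor_pos_class
        return -(b * pos_term.sum() + (1 - b) * neg_term.sum()) / num_pos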

Public Data Attributes:

settings

Settings dict to reproduce the instance

Inherited from Module

dump_patches

This allows better BC support for load_state_dict().

T_destination

alias of TypeVar('T_destination', bound=Mapping[str, Tensor])

Public Methods:

forward(inputs, targets)

PyTorch forward method.

Inherited from Module

forward(inputs, targets)

PyTorch forward method.

register_buffer(name, tensor[, persistent])

Adds a buffer to the module.

register_parameter(name, param)

Adds a parameter to the module.

add_module(name, module)

Adds a child module to the current module.

get_submodule(target)

Returns the submodule given by target if it exists, otherwise throws an error.

get_parameter(target)

Returns the parameter given by target if it exists, otherwise throws an error.

get_buffer(target)

Returns the buffer given by target if it exists, otherwise throws an error.

apply(fn)

Applies fn recursively to every submodule (as returned by .children()) as well as self.

cuda([device])

Moves all model parameters and buffers to the GPU.

xpu([device])

Moves all model parameters and buffers to the XPU.

cpu()

Moves all model parameters and buffers to the CPU.

type(dst_type)

Casts all parameters and buffers to dst_type.

float()

Casts all floating point parameters and buffers to float datatype.

double()

Casts all floating point parameters and buffers to double datatype.

half()

Casts all floating point parameters and buffers to half datatype.

bfloat16()

Casts all floating point parameters and buffers to bfloat16 datatype.

to_empty(*, device)

Moves the parameters and buffers to the specified device without copying storage.

to(*args, **kwargs)

Moves and/or casts the parameters and buffers.

register_backward_hook(hook)

Registers a backward hook on the module.

register_full_backward_hook(hook)

Registers a backward hook on the module.

register_forward_pre_hook(hook)

Registers a forward pre-hook on the module.

register_forward_hook(hook)

Registers a forward hook on the module.

state_dict([destination, prefix, keep_vars])

Returns a dictionary containing a whole state of the module.

load_state_dict(state_dict[, strict])

Copies parameters and buffers from state_dict into this module and its descendants.

parameters([recurse])

Returns an iterator over module parameters.

named_parameters([prefix, recurse])

Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.

buffers([recurse])

Returns an iterator over module buffers.

named_buffers([prefix, recurse])

Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.

children()

Returns an iterator over immediate children modules.

named_children()

Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.

modules()

Returns an iterator over all modules in the network.

named_modules([memo, prefix, remove_duplicate])

Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.

train([mode])

Sets the module in training mode.

eval()

Sets the module in evaluation mode.

requires_grad_([requires_grad])

Change if autograd should record operations on parameters in this module.

zero_grad([set_to_none])

Sets gradients of all model parameters to zero.

share_memory()

See torch.Tensor.share_memory_()

extra_repr()

Set the extra representation of the module

Special Methods:

__init__([factor_pos_class, alpha, beta])

Initializes internal Module state, shared by both nn.Module and ScriptModule.

__repr__()

Return repr(self).

Inherited from Module

__init__([factor_pos_class, alpha, beta])

Initializes internal Module state, shared by both nn.Module and ScriptModule.

__call__(*input, **kwargs)

Call self as a function.

__setstate__(state)

__getattr__(name)

__setattr__(name, value)

Implement setattr(self, name, value).

__delattr__(name)

Implement delattr(self, name).

__repr__()

Return repr(self).

__dir__()

Default dir() implementation.


__init__(factor_pos_class=0.5, alpha=2, beta=4)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

Parameters
  • factor_pos_class – balancing factor \(b \in [0,1]\) applied to the positive-class term; \((1-b)\) is applied to the negative-class term

  • alpha (float) – focal loss exponent for pixels of the predicted mask

  • beta (float) – focal loss exponent for pixels of the target mask

__repr__()[source]

Return repr(self).

Return type

str

forward(inputs, targets)[source]

PyTorch forward method.

Parameters
  • inputs (Tensor) – tensor of shape (batch, channels, height, width)

  • targets (Tensor) – tensor of the same shape as inputs

Return type

Tensor
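
A plausible usage sketch (module path and constructor signature taken from above; the batch and spatial sizes are only an example):

    import torch
    from hybrid_learning.concepts.train_eval.kpis.batch_kpis import \
        BalancedPenaltyReducedFocalLoss

    loss_fn = BalancedPenaltyReducedFocalLoss(factor_pos_class=0.5, alpha=2, beta=4)
    preds = torch.rand(8, 1, 64, 64)    # (batch, channels, height, width), values in [0, 1)
    targets = torch.rand(8, 1, 64, 64)  # same shape as preds
    loss = loss_fn(preds, targets)      # dispatches to forward(), returning a Tensor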

alpha: float

Focal loss hyper-parameter: exponent applied to pixels of the predicted mask.

beta: float

Focal loss hyper-parameter: exponent applied to pixels of the target mask.

factor_pos_class

Balancing factor \(b\) applied to the positive class; \((1-b)\) is applied to the negative class.

property settings: Dict[str, Any]

Settings dict to reproduce the instance
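
Assuming the dict maps constructor argument names to their current values (an assumption, not verified against the source), an instance could be re-created as:

    # Hypothetical round-trip based on the docstring above.
    clone = BalancedPenaltyReducedFocalLoss(**loss_fn.settings)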

training: bool