gluoncv.loss

Custom losses. Losses are subclasses of gluon.loss.Loss, which is itself a HybridBlock.

FocalLoss Focal Loss for imbalanced classification.
SSDMultiBoxLoss Single-Shot Multibox Object Detection Loss.

API Reference

Custom losses. Losses are subclasses of gluon.loss.Loss, which is itself a HybridBlock.

class gluoncv.loss.FocalLoss(axis=-1, alpha=0.25, gamma=2, sparse_label=True, from_logits=False, batch_axis=0, weight=None, num_class=None, eps=1e-12, size_average=True, **kwargs)[source]

Focal Loss for imbalanced classification. Focal loss was described in https://arxiv.org/abs/1708.02002.
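
As defined in that paper, the loss for the probability \(p_t\) assigned to the true class is \(FL(p_t) = -\alpha_t (1 - p_t)^{\gamma} \log(p_t)\); the alpha and gamma parameters below control the shape of this curve.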

Parameters:
  • axis (int, default -1) – The axis to sum over when computing softmax and entropy.
  • alpha (float, default 0.25) – The alpha parameter that controls the loss curve.
  • gamma (float, default 2) – The gamma parameter that controls the loss curve.
  • sparse_label (bool, default True) – Whether label is an integer array instead of probability distribution.
  • from_logits (bool, default False) – Whether input is a log probability (usually from log_softmax) instead of unnormalized class scores.
  • batch_axis (int, default 0) – The axis that represents mini-batch.
  • weight (float or None) – Global scalar weight for loss.
  • num_class (int) – Number of classification categories. It is required if sparse_label is True.
  • eps (float) – Eps to avoid numerical issue.
  • size_average (bool, default True) – If True, will take mean of the output loss on every axis except batch_axis.
  • Inputs
    • pred: the prediction tensor, where the batch_axis dimension ranges over batch size and axis dimension ranges over the number of classes.
    • label: the truth tensor. When sparse_label is True, label’s shape should be pred’s shape with the axis dimension removed. i.e. for pred with shape (1,2,3,4) and axis = 2, label’s shape should be (1,2,4) and values should be integers between 0 and 2. If sparse_label is False, label’s shape must be the same as pred and values should be floats in the range [0, 1].
    • sample_weight: element-wise weighting tensor. Must be broadcastable to the same shape as label. For example, if label has shape (64, 10) and you want to weigh each sample in the batch separately, sample_weight should have shape (64, 1).
  • Outputs
    • loss: loss tensor with shape (batch_size,). Dimensions other than batch_axis are averaged out.
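
A minimal usage sketch (assumes mxnet and gluoncv are installed; the shapes, class count, and label values below are illustrative, not required by the API):

    import mxnet as mx
    from gluoncv.loss import FocalLoss

    # num_class is required because sparse_label defaults to True
    loss_fn = FocalLoss(num_class=10, sparse_label=True)

    # pred: (batch_size, num_class) class scores; label: (batch_size,) integer class ids
    pred = mx.nd.random.normal(shape=(4, 10))
    label = mx.nd.array([1, 0, 3, 7])

    loss = loss_fn(pred, label)  # shape (4,): one averaged loss value per sample
    print(loss)
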
hybrid_forward(F, pred, label, sample_weight=None)[source]

Loss forward

class gluoncv.loss.SSDMultiBoxLoss(negative_mining_ratio=3, rho=1.0, lambd=1.0, **kwargs)[source]

Single-Shot Multibox Object Detection Loss.

Note

Since cross-device synchronization is required to compute batch-wise statistics, this loss is slightly slower than a non-synchronized version. However, we find it leads to better converged model performance.

Parameters:
  • negative_mining_ratio (float, default is 3) – Ratio of negative vs. positive samples.
  • rho (float, default is 1.0) – Threshold for trimmed mean estimator. This is the smooth parameter for the L1-L2 transition.
  • lambd (float, default is 1.0) – Relative weight between classification and box regression loss. The overall loss is computed as \(L = loss_{class} + \lambda \times loss_{loc}\).
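
A minimal sketch of driving this loss with single-device inputs, as training scripts typically do (the shapes, the number of classes, and the matched-anchor markup are illustrative assumptions):

    import mxnet as mx
    from gluoncv.loss import SSDMultiBoxLoss

    mbox_loss = SSDMultiBoxLoss(negative_mining_ratio=3)

    batch_size, num_anchors, num_classes = 2, 100, 21  # 20 foreground classes + background
    # Each argument is a list with one entry per device; a single-device list here.
    cls_preds = [mx.nd.random.normal(shape=(batch_size, num_anchors, num_classes))]
    box_preds = [mx.nd.random.normal(shape=(batch_size, num_anchors, 4))]
    cls_targets = [mx.nd.zeros((batch_size, num_anchors))]  # 0 marks background anchors
    cls_targets[0][:, :5] = 1.0                              # pretend a few anchors matched class 1
    box_targets = [mx.nd.zeros((batch_size, num_anchors, 4))]

    sum_loss, cls_loss, box_loss = mbox_loss(
        cls_preds, box_preds, cls_targets, box_targets)
    print(sum_loss[0], cls_loss[0], box_loss[0])  # one per-device NDArray in each list
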
forward(cls_pred, box_pred, cls_target, box_target)[source]

Compute loss for the entire batch across devices.