Installing and Using Focal Loss with TensorFlow 2.x (Code Snippets)

一颗小树x     2022-12-10


Preface

This article shows how to use the Focal Loss function conveniently in TensorFlow 2.x; the package can be installed via pip and is straightforward to call.

1. Installation

Method 1: install directly via pip

pip install focal-loss

Current version: focal-loss 0.0.7

Supported Python versions: Python 3.6, 3.7, and 3.8
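
After installation, a quick way to confirm the package is importable and to check which version was installed (a minimal sanity check; the package exposes a __version__ string, which the setup.py shown below also reads):

python -c "import focal_loss; print(focal_loss.__version__)"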

Method 2: install from source

Source repository: https://github.com/artemmavrin/focal-loss

I chose the latest release at the time, 0.0.7, and downloaded the source code as focal-loss-master.zip.

Before installing, let's look at the install script, setup.py, to see which libraries it depends on:

"""Setup script."""

import os
import pathlib
import re

from setuptools import setup, find_packages

# Set the environment variable TF_CPU (to anything) to use tensorflow-cpu
_TENSORFLOW_CPU = os.environ.get('TF_CPU', None)

# TensorFlow package name and version
_TENSORFLOW = 'tensorflow' if _TENSORFLOW_CPU is None else 'tensorflow-cpu'
_MIN_TENSORFLOW_VERSION = '2.2'
_TENSORFLOW += f'>={_MIN_TENSORFLOW_VERSION}'

# Directory of this setup.py file
_HERE = pathlib.Path(__file__).parent


def _resolve_path(*parts):
    """Get a filename from a list of path components, relative to this file."""
    return _HERE.joinpath(*parts).absolute()


def _read(*parts):
    """Read a file's contents into a string."""
    filename = _resolve_path(*parts)
    return filename.read_text()


__INIT__ = _read('src', 'focal_loss', '__init__.py')


def _get_package_variable(name):
    pattern = rf'^{name} = [\'"](?P<value>[^\'"]*)[\'"]'
    match = re.search(pattern, __INIT__, flags=re.M)
    if match:
        return match.group('value')
    raise RuntimeError(f'Cannot find variable {name}')


setup(
    name=_get_package_variable('__package__'),
    version=_get_package_variable('__version__'),
    description=_get_package_variable('__description__'),
    url=_get_package_variable('__url__'),
    author=_get_package_variable('__author__'),
    author_email=_get_package_variable('__author_email__'),
    long_description=_read('README.rst'),
    long_description_content_type='text/x-rst',
    packages=find_packages('src', exclude=['*.tests']),
    package_dir={'': 'src'},
    license='Apache 2.0',
    classifiers=[
        'Development Status :: 1 - Planning',
        'Intended Audience :: Developers',
        'Intended Audience :: Education',
        'Intended Audience :: Science/Research',
        'License :: OSI Approved :: Apache Software License',
        'Operating System :: OS Independent',
        'Programming Language :: Python :: 3',
        'Programming Language :: Python :: 3.6',
        'Programming Language :: Python :: 3.7',
        'Programming Language :: Python :: 3.8',
        'Topic :: Scientific/Engineering',
        'Topic :: Scientific/Engineering :: Artificial Intelligence',
        'Topic :: Scientific/Engineering :: Mathematics',
        'Topic :: Software Development',
        'Topic :: Software Development :: Libraries',
        'Topic :: Software Development :: Libraries :: Python Modules',
    ],
    install_requires=[
        _TENSORFLOW,
    ],
    extras_require={
        # The 'dev' extra is for development, including running tests and
        # generating documentation
        'dev': [
            'numpy',
            'scipy',
            'matplotlib',
            'seaborn',
            'pytest',
            'coverage',
            'sphinx',
            'sphinx_rtd_theme',
        ],
    },
    zip_safe=False,
)

We can see that it requires TensorFlow 2.2 or later and, per the classifiers, supports Python 3.6, 3.7, and 3.8.

The optional 'dev' extra (used for development, tests, and documentation) pulls in these libraries:

            'numpy',
            'scipy',
            'matplotlib',
            'seaborn',
            'pytest',
            'coverage',
            'sphinx',
            'sphinx_rtd_theme',

If you already have TensorFlow 2.2 or later installed, you can comment out the tensorflow requirement (and any already-installed libraries such as numpy or scipy in the 'dev' extra); that way, installing focal-loss skips checking whether those libraries exist and whether their versions match.
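
The setup script also looks at the TF_CPU environment variable (the os.environ.get('TF_CPU', None) line above): if it is set to anything, the dependency becomes tensorflow-cpu instead of tensorflow. A minimal sketch of setting it before the install command below (the value 1 is arbitrary, since only the presence of the variable matters):

# Linux / macOS
export TF_CPU=1

# Windows (cmd)
set TF_CPU=1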

Then activate your TensorFlow development environment, change into the focal-loss-master directory, and run the following command to install from source:

python setup.py install

Wait for the installation to finish.
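
Alternatively (a minimal equivalent, assuming you are still inside the focal-loss-master directory), recent pip versions can install straight from the source tree, which avoids invoking setup.py by hand:

pip install .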

2. Using Focal Loss

Usage notes:

  • Binary Focal Loss: BinaryFocalLoss (use like tf.keras.losses.BinaryCrossentropy)

  • Multiclass Focal Loss: SparseCategoricalFocalLoss (use like tf.keras.losses.SparseCategoricalCrossentropy)

Example 1: binary Focal Loss

# Typical tf.keras API usage
import tensorflow as tf
from focal_loss import BinaryFocalLoss

model = tf.keras.Model(...)
model.compile(
    optimizer=...,
    loss=BinaryFocalLoss(gamma=2),  # Used here like a tf.keras loss
    metrics=...,
)
history = model.fit(...)

Example 2: multiclass Focal Loss

# Typical tf.keras API usage
import tensorflow as tf
from focal_loss import SparseCategoricalFocalLoss

model = tf.keras.Model(...)
model.compile(
    optimizer=...,
    loss=SparseCategoricalFocalLoss(gamma=2),  # Used here like a tf.keras loss
    metrics=...,
)
history = model.fit(...)

The constructor parameters are summarized in the source code below: BinaryFocalLoss takes the focusing parameter gamma plus the optional pos_weight, from_logits, and label_smoothing arguments, while SparseCategoricalFocalLoss takes gamma plus the optional class_weight and from_logits arguments; both also accept the usual tf.keras.losses.Loss keyword arguments such as name and reduction.
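
For instance (a minimal sketch based on the constructor signatures in the source shown below; the specific gamma, pos_weight, class_weight, and label_smoothing values are arbitrary illustrations, not recommendations):

import tensorflow as tf
from focal_loss import BinaryFocalLoss, SparseCategoricalFocalLoss

# Binary case: the model outputs raw logits and the positive class is re-weighted
binary_loss = BinaryFocalLoss(
    gamma=2,              # focusing parameter
    pos_weight=0.25,      # alpha weight applied to the positive class
    from_logits=True,     # y_pred are logits rather than probabilities
    label_smoothing=0.1,  # squeeze the 0/1 labels toward 0.5
)

# Multiclass case: per-class weights for a 3-class problem
multiclass_loss = SparseCategoricalFocalLoss(
    gamma=2,
    class_weight=[1.0, 2.0, 2.0],  # up-weight the rarer classes
    from_logits=True,
)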

3. A Look at the Source Code

The binary version:

"""Binary focal loss implementation."""
#  ____  __    ___   __   __      __     __   ____  ____
# (  __)/  \  / __) / _\ (  )    (  )   /  \ / ___)/ ___)
#  ) _)(  O )( (__ /    \/ (_/\  / (_/\(  O )\___ \\___ \
# (__)  \__/  \___)\_/\_/\____/  \____/ \__/ (____/(____/

from functools import partial

import tensorflow as tf

from .utils.validation import check_bool, check_float

_EPSILON = tf.keras.backend.epsilon()


def binary_focal_loss(y_true, y_pred, gamma, *, pos_weight=None,
                      from_logits=False, label_smoothing=None):
    r"""Focal loss function for binary classification.

    This loss function generalizes binary cross-entropy by introducing a
    hyperparameter :math:`\gamma` (gamma), called the *focusing parameter*,
    that allows hard-to-classify examples to be penalized more heavily relative
    to easy-to-classify examples.

    The focal loss [1]_ is defined as

    .. math::

        L(y, \hat{p})
        = -\alpha y \left(1 - \hat{p}\right)^\gamma \log(\hat{p})
        - (1 - y) \hat{p}^\gamma \log(1 - \hat{p})

    where

    *   :math:`y \in \{0, 1\}` is a binary class label,
    *   :math:`\hat{p} \in [0, 1]` is an estimate of the probability of the
        positive class,
    *   :math:`\gamma` is the *focusing parameter* that specifies how much
        higher-confidence correct predictions contribute to the overall loss
        (the higher the :math:`\gamma`, the higher the rate at which
        easy-to-classify examples are down-weighted).
    *   :math:`\alpha` is a hyperparameter that governs the trade-off between
        precision and recall by weighting errors for the positive class up or
        down (:math:`\alpha=1` is the default, which is the same as no
        weighting),

    The usual weighted binary cross-entropy loss is recovered by setting
    :math:`\gamma = 0`.

    Parameters
    ----------
    y_true : tensor-like
        Binary (0 or 1) class labels.

    y_pred : tensor-like
        Either probabilities for the positive class or logits for the positive
        class, depending on the `from_logits` parameter. The shapes of `y_true`
        and `y_pred` should be broadcastable.

    gamma : float
        The focusing parameter :math:`\gamma`. Higher values of `gamma` make
        easy-to-classify examples contribute less to the loss relative to
        hard-to-classify examples. Must be non-negative.

    pos_weight : float, optional
        The coefficient :math:`\alpha` to use on the positive examples. Must be
        non-negative.

    from_logits : bool, optional
        Whether `y_pred` contains logits or probabilities.

    label_smoothing : float, optional
        Float in [0, 1]. When 0, no smoothing occurs. When positive, the binary
        ground truth labels `y_true` are squeezed toward 0.5, with larger values
        of `label_smoothing` leading to label values closer to 0.5.

    Returns
    -------
    :class:`tf.Tensor`
        The focal loss for each example (assuming `y_true` and `y_pred` have the
        same shapes). In general, the shape of the output is the result of
        broadcasting the shapes of `y_true` and `y_pred`.

    Warnings
    --------
    This function does not reduce its output to a scalar, so it cannot be passed
    to :meth:`tf.keras.Model.compile` as a `loss` argument. Instead, use the
    wrapper class :class:`~focal_loss.BinaryFocalLoss`.

    Examples
    --------

    This function computes the per-example focal loss between a label and
    prediction tensor:

    >>> import numpy as np
    >>> from focal_loss import binary_focal_loss
    >>> loss = binary_focal_loss([0, 1, 1], [0.1, 0.7, 0.9], gamma=2)
    >>> np.set_printoptions(precision=3)
    >>> print(loss.numpy())
    [0.001 0.032 0.001]

    Below is a visualization of the focal loss between the positive class and
    predicted probabilities between 0 and 1. Note that as :math:`\gamma`
    increases, the losses for predictions closer to 1 get smoothly pushed to 0.

    .. plot::
        :include-source:
        :align: center

        import numpy as np
        import matplotlib.pyplot as plt

        from focal_loss import binary_focal_loss

        ps = np.linspace(0, 1, 100)
        gammas = (0, 0.5, 1, 2, 5)

        plt.figure()
        for gamma in gammas:
            loss = binary_focal_loss(1, ps, gamma=gamma)
            label = rf'$\gamma$={gamma}'
            if gamma == 0:
                label += ' (cross-entropy)'
            plt.plot(ps, loss, label=label)
        plt.legend(loc='best', frameon=True, shadow=True)
        plt.xlim(0, 1)
        plt.ylim(0, 4)
        plt.xlabel(r'Probability of positive class $\hat{p}$')
        plt.ylabel('Loss')
        plt.title(r'Plot of focal loss $L(1, \hat{p})$ for different $\gamma$',
                  fontsize=14)
        plt.show()

    Notes
    -----
    A classifier often estimates the positive class probability :math:`\hat{p}`
    by computing a real-valued *logit* :math:`\hat{y} \in \mathbb{R}` and
    applying the *sigmoid function* :math:`\sigma : \mathbb{R} \to (0, 1)`
    defined by

    .. math::

        \sigma(t) = \frac{1}{1 + e^{-t}}, \qquad (t \in \mathbb{R}).

    That is, :math:`\hat{p} = \sigma(\hat{y})`. In this case, the focal loss
    can be written as a function of the logit :math:`\hat{y}` instead of the
    predicted probability :math:`\hat{p}`:

    .. math::

        L(y, \hat{y})
        = -\alpha y \left(1 - \sigma(\hat{y})\right)^\gamma
        \log(\sigma(\hat{y}))
        - (1 - y) \sigma(\hat{y})^\gamma \log(1 - \sigma(\hat{y})).

    This is the formula that is computed when specifying `from_logits=True`.
    However, this formula is not very numerically stable if implemented
    directly; for example, there are multiple log and sigmoid computations
    involved. Instead, we use some tricks to rewrite it in the more numerically
    stable form

    .. math::

        L(y, \hat{y})
        = (1 - y) \hat{p}^\gamma \hat{y}
        + \left(\alpha y \hat{q}^\gamma + (1 - y) \hat{p}^\gamma\right)
        \left(\log(1 + e^{-|\hat{y}|}) + \max\{-\hat{y}, 0\}\right),

    where :math:`\hat{p} = \sigma(\hat{y})` and :math:`\hat{q} = 1 - \hat{p}`
    denote the estimates of the probabilities of the positive and negative
    classes, respectively.

    Indeed, starting with the observations that

    .. math::

        \log(\sigma(\hat{y}))
        = \log\left(\frac{1}{1 + e^{-\hat{y}}}\right)
        = -\log(1 + e^{-\hat{y}})

    and

    .. math::

        \log(1 - \sigma(\hat{y}))
        = \log\left(\frac{e^{-\hat{y}}}{1 + e^{-\hat{y}}}\right)
        = -\hat{y} - \log(1 + e^{-\hat{y}}),

    we obtain

    .. math::

        \begin{aligned}
        L(y, \hat{y})
        &= -\alpha y \hat{q}^\gamma \log(\sigma(\hat{y}))
        - (1 - y) \hat{p}^\gamma \log(1 - \sigma(\hat{y})) \\
        &= \alpha y \hat{q}^\gamma \log(1 + e^{-\hat{y}})
        + (1 - y) \hat{p}^\gamma \left(\hat{y} + \log(1 + e^{-\hat{y}})\right)\\
        &= (1 - y) \hat{p}^\gamma \hat{y}
        + \left(\alpha y \hat{q}^\gamma + (1 - y) \hat{p}^\gamma\right)
        \log(1 + e^{-\hat{y}}).
        \end{aligned}

    Note that if :math:`\hat{y} < 0`, then the exponential term
    :math:`e^{-\hat{y}}` could become very large. In this case, we can instead
    observe that

    .. math::

        \begin{align*}
        \log(1 + e^{-\hat{y}})
        &= \log(1 + e^{-\hat{y}}) + \hat{y} - \hat{y} \\
        &= \log(1 + e^{-\hat{y}}) + \log(e^{\hat{y}}) - \hat{y} \\
        &= \log(1 + e^{\hat{y}}) - \hat{y}.
        \end{align*}

    Moreover, the :math:`\hat{y} < 0` and :math:`\hat{y} \geq 0` cases can be
    unified by writing

    .. math::

        \log(1 + e^{-\hat{y}})
        = \log(1 + e^{-|\hat{y}|}) + \max\{-\hat{y}, 0\}.

    Thus, we arrive at the numerically stable formula shown earlier.

    References
    ----------
    .. [1] T. Lin, P. Goyal, R. Girshick, K. He and P. Dollár. Focal loss for
        dense object detection. IEEE Transactions on Pattern Analysis and
        Machine Intelligence, 2018.
        (`DOI <https://doi.org/10.1109/TPAMI.2018.2858826>`__)
        (`arXiv preprint <https://arxiv.org/abs/1708.02002>`__)

    See Also
    --------
    :meth:`~focal_loss.BinaryFocalLoss`
        A wrapper around this function that makes it a
        :class:`tf.keras.losses.Loss`.
    """
    # Validate arguments
    gamma = check_float(gamma, name='gamma', minimum=0)
    pos_weight = check_float(pos_weight, name='pos_weight', minimum=0,
                             allow_none=True)
    from_logits = check_bool(from_logits, name='from_logits')
    label_smoothing = check_float(label_smoothing, name='label_smoothing',
                                  minimum=0, maximum=1, allow_none=True)

    # Ensure predictions are a floating point tensor; converting labels to a
    # tensor will be done in the helper functions
    y_pred = tf.convert_to_tensor(y_pred)
    if not y_pred.dtype.is_floating:
        y_pred = tf.dtypes.cast(y_pred, dtype=tf.float32)

    # Delegate per-example loss computation to helpers depending on whether
    # predictions are logits or probabilities
    if from_logits:
        return _binary_focal_loss_from_logits(labels=y_true, logits=y_pred,
                                              gamma=gamma,
                                              pos_weight=pos_weight,
                                              label_smoothing=label_smoothing)
    else:
        return _binary_focal_loss_from_probs(labels=y_true, p=y_pred,
                                             gamma=gamma, pos_weight=pos_weight,
                                             label_smoothing=label_smoothing)


@tf.keras.utils.register_keras_serializable()
class BinaryFocalLoss(tf.keras.losses.Loss):
    r"""Focal loss function for binary classification.

    This loss function generalizes binary cross-entropy by introducing a
    hyperparameter called the *focusing parameter* that allows hard-to-classify
    examples to be penalized more heavily relative to easy-to-classify examples.

    This class is a wrapper around :class:`~focal_loss.binary_focal_loss`. See
    the documentation there for details about this loss function.

    Parameters
    ----------
    gamma : float
        The focusing parameter :math:`\gamma`. Must be non-negative.

    pos_weight : float, optional
        The coefficient :math:`\alpha` to use on the positive examples. Must be
        non-negative.

    from_logits : bool, optional
        Whether model prediction will be logits or probabilities.

    label_smoothing : float, optional
        Float in [0, 1]. When 0, no smoothing occurs. When positive, the binary
        ground truth labels are squeezed toward 0.5, with larger values of
        `label_smoothing` leading to label values closer to 0.5.

    **kwargs : keyword arguments
        Other keyword arguments for :class:`tf.keras.losses.Loss` (e.g., `name`
        or `reduction`).

    Examples
    --------

    An instance of this class is a callable that takes a tensor of binary ground
    truth labels `y_true` and a tensor of model predictions `y_pred` and returns
    a scalar tensor obtained by reducing the per-example focal loss (the default
    reduction is a batch-wise average).

    >>> from focal_loss import BinaryFocalLoss
    >>> loss_func = BinaryFocalLoss(gamma=2)
    >>> loss = loss_func([0, 1, 1], [0.1, 0.7, 0.9])  # A scalar tensor
    >>> print(f'Mean focal loss: {loss.numpy():.3f}')
    Mean focal loss: 0.011

    Use this class in the :mod:`tf.keras` API like any other binary
    classification loss function class found in :mod:`tf.keras.losses` (e.g.,
    :class:`tf.keras.losses.BinaryCrossentropy`):

    .. code-block:: python

        # Typical usage
        model = tf.keras.Model(...)
        model.compile(
            optimizer=...,
            loss=BinaryFocalLoss(gamma=2),  # Used here like a tf.keras loss
            metrics=...,
        )
        history = model.fit(...)

    See Also
    --------
    :meth:`~focal_loss.binary_focal_loss`
        The function that performs the focal loss computation, taking a label
        tensor and a prediction tensor and outputting a loss.
    """

    def __init__(self, gamma, *, pos_weight=None, from_logits=False,
                 label_smoothing=None, **kwargs):
        # Validate arguments
        gamma = check_float(gamma, name='gamma', minimum=0)
        pos_weight = check_float(pos_weight, name='pos_weight', minimum=0,
                                 allow_none=True)
        from_logits = check_bool(from_logits, name='from_logits')
        label_smoothing = check_float(label_smoothing, name='label_smoothing',
                                      minimum=0, maximum=1, allow_none=True)

        super().__init__(**kwargs)
        self.gamma = gamma
        self.pos_weight = pos_weight
        self.from_logits = from_logits
        self.label_smoothing = label_smoothing

    def get_config(self):
        """Returns the config of the layer.

        A layer config is a Python dictionary containing the configuration of a
        layer. The same layer can be re-instantiated later (without its trained
        weights) from this configuration.

        Returns
        -------
        dict
            This layer's config.
        """
        config = super().get_config()
        config.update(gamma=self.gamma, pos_weight=self.pos_weight,
                      from_logits=self.from_logits,
                      label_smoothing=self.label_smoothing)
        return config

    def call(self, y_true, y_pred):
        """Compute the per-example focal loss.

        This method simply calls :meth:`~focal_loss.binary_focal_loss` with the
        appropriate arguments.

        Parameters
        ----------
        y_true : tensor-like
            Binary (0 or 1) class labels.

        y_pred : tensor-like
            Either probabilities for the positive class or logits for the
            positive class, depending on the `from_logits` attribute. The shapes
            of `y_true` and `y_pred` should be broadcastable.

        Returns
        -------
        :class:`tf.Tensor`
            The per-example focal loss. Reduction to a scalar is handled by
            this layer's :meth:`~focal_loss.BinaryFocalLoss.__call__` method.
        """
        return binary_focal_loss(y_true=y_true, y_pred=y_pred, gamma=self.gamma,
                                 pos_weight=self.pos_weight,
                                 from_logits=self.from_logits,
                                 label_smoothing=self.label_smoothing)


# Helper functions below


def _process_labels(labels, label_smoothing, dtype):
    """Pre-process a binary label tensor, maybe applying smoothing.

    Parameters
    ----------
    labels : tensor-like
        Tensor of 0's and 1's.

    label_smoothing : float or None
        Float in [0, 1]. When 0, no smoothing occurs. When positive, the binary
        ground truth labels `y_true` are squeezed toward 0.5, with larger values
        of `label_smoothing` leading to label values closer to 0.5.

    dtype : tf.dtypes.DType
        Desired type of the elements of `labels`.

    Returns
    -------
    tf.Tensor
        The processed labels.
    """
    labels = tf.dtypes.cast(labels, dtype=dtype)
    if label_smoothing is not None:
        labels = (1 - label_smoothing) * labels + label_smoothing * 0.5
    return labels


def _binary_focal_loss_from_logits(labels, logits, gamma, pos_weight,
                                   label_smoothing):
    """Compute focal loss from logits using a numerically stable formula.

    Parameters
    ----------
    labels : tensor-like
        Tensor of 0's and 1's: binary class labels.

    logits : tf.Tensor
        Logits for the positive class.

    gamma : float
        Focusing parameter.

    pos_weight : float or None
        If not None, losses for the positive class will be scaled by this
        weight.

    label_smoothing : float or None
        Float in [0, 1]. When 0, no smoothing occurs. When positive, the binary
        ground truth labels `y_true` are squeezed toward 0.5, with larger values
        of `label_smoothing` leading to label values closer to 0.5.

    Returns
    -------
    tf.Tensor
        The loss for each example.
    """
    labels = _process_labels(labels=labels, label_smoothing=label_smoothing,
                             dtype=logits.dtype)

    # Compute probabilities for the positive class
    p = tf.math.sigmoid(logits)

    # Without label smoothing we can use TensorFlow's built-in per-example cross
    # entropy loss functions and multiply the result by the modulating factor.
    # Otherwise, we compute the focal loss ourselves using a numerically stable
    # formula below
    if label_smoothing is None:
        # The labels and logits tensors' shapes need to be the same for the
        # built-in cross-entropy functions. Since we want to allow broadcasting,
        # we do some checks on the shapes and possibly broadcast explicitly
        # Note: tensor.shape returns a tf.TensorShape, whereas tf.shape(tensor)
        # returns an int tf.Tensor; this is why both are used below
        labels_shape = labels.shape
        logits_shape = logits.shape
        if not labels_shape.is_fully_defined() or labels_shape != logits_shape:
            labels_shape = tf.shape(labels)
            logits_shape = tf.shape(logits)
            shape = tf.broadcast_dynamic_shape(labels_shape, logits_shape)
            labels = tf.broadcast_to(labels, shape)
            logits = tf.broadcast_to(logits, shape)
        if pos_weight is None:
            loss_func = tf.nn.sigmoid_cross_entropy_with_logits
        else:
            loss_func = partial(tf.nn.weighted_cross_entropy_with_logits,
                                pos_weight=pos_weight)
        loss = loss_func(labels=labels, logits=logits)
        modulation_pos = (1 - p) ** gamma
        modulation_neg = p ** gamma
        mask = tf.dtypes.cast(labels, dtype=tf.bool)
        modulation = tf.where(mask, modulation_pos, modulation_neg)
        return modulation * loss

    # Terms for the positive and negative class components of the loss
    pos_term = labels * ((1 - p) ** gamma)
    neg_term = (1 - labels) * (p ** gamma)

    # Term involving the log and ReLU
    log_weight = pos_term
    if pos_weight is not None:
        log_weight *= pos_weight
    log_weight += neg_term
    log_term = tf.math.log1p(tf.math.exp(-tf.math.abs(logits)))
    log_term += tf.nn.relu(-logits)
    log_term *= log_weight

    # Combine all the terms into the loss
    loss = neg_term * logits + log_term
    return loss


def _binary_focal_loss_from_probs(labels, p, gamma, pos_weight,
                                  label_smoothing):
    """Compute focal loss from probabilities.

    Parameters
    ----------
    labels : tensor-like
        Tensor of 0's and 1's: binary class labels.

    p : tf.Tensor
        Estimated probabilities for the positive class.

    gamma : float
        Focusing parameter.

    pos_weight : float or None
        If not None, losses for the positive class will be scaled by this
        weight.

    label_smoothing : float or None
        Float in [0, 1]. When 0, no smoothing occurs. When positive, the binary
        ground truth labels `y_true` are squeezed toward 0.5, with larger values
        of `label_smoothing` leading to label values closer to 0.5.

    Returns
    -------
    tf.Tensor
        The loss for each example.
    """
    # Predicted probabilities for the negative class
    q = 1 - p

    # For numerical stability (so we don't inadvertently take the log of 0)
    p = tf.math.maximum(p, _EPSILON)
    q = tf.math.maximum(q, _EPSILON)

    # Loss for the positive examples
    pos_loss = -(q ** gamma) * tf.math.log(p)
    if pos_weight is not None:
        pos_loss *= pos_weight

    # Loss for the negative examples
    neg_loss = -(p ** gamma) * tf.math.log(q)

    # Combine loss terms
    if label_smoothing is None:
        labels = tf.dtypes.cast(labels, dtype=tf.bool)
        loss = tf.where(labels, pos_loss, neg_loss)
    else:
        labels = _process_labels(labels=labels, label_smoothing=label_smoothing,
                                 dtype=p.dtype)
        loss = labels * pos_loss + (1 - labels) * neg_loss

    return loss
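
As a quick check of the equivalence derived in the docstring above (a minimal sketch with made-up numbers; when label_smoothing is not used, feeding logits with from_logits=True should agree with feeding sigmoid(logits) as probabilities, up to floating-point error):

import numpy as np
import tensorflow as tf
from focal_loss import binary_focal_loss

logits = tf.constant([-2.0, 0.5, 3.0])
labels = [0, 1, 1]

# Same loss computed along the two code paths shown above
loss_from_logits = binary_focal_loss(labels, logits, gamma=2, from_logits=True)
loss_from_probs = binary_focal_loss(labels, tf.math.sigmoid(logits), gamma=2)

# Expected to print True (agreement up to numerical tolerance)
print(np.allclose(loss_from_logits.numpy(), loss_from_probs.numpy(), atol=1e-6))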

The multiclass version:

"""Multiclass focal loss implementation."""
#    __                          _     _
#   / _|                        | |   | |
#  | |_    ___     ___    __ _  | |   | |   ___    ___   ___
#  |  _|  / _ \   / __|  / _` | | |   | |  / _ \  / __| / __|
#  | |   | (_) | | (__  | (_| | | |   | | | (_) | \__ \ \__ \
#  |_|    \___/   \___|  \__,_| |_|   |_|  \___/  |___/ |___/

import itertools
from typing import Any, Optional

import tensorflow as tf

_EPSILON = tf.keras.backend.epsilon()


def sparse_categorical_focal_loss(y_true, y_pred, gamma, *,
                                  class_weight: Optional[Any] = None,
                                  from_logits: bool = False, axis: int = -1
                                  ) -> tf.Tensor:
    r"""Focal loss function for multiclass classification with integer labels.

    This loss function generalizes multiclass softmax cross-entropy by
    introducing a hyperparameter called the *focusing parameter* that allows
    hard-to-classify examples to be penalized more heavily relative to
    easy-to-classify examples.

    See :meth:`~focal_loss.binary_focal_loss` for a description of the focal
    loss in the binary setting, as presented in the original work [1]_.

    In the multiclass setting, with integer labels :math:`y`, focal loss is
    defined as

    .. math::

        L(y, \hat{\mathbf{p}})
        = -\left(1 - \hat{p}_y\right)^\gamma \log(\hat{p}_y)

    where

    *   :math:`y \in \{0, \ldots, K - 1\}` is an integer class label (:math:`K`
        denotes the number of classes),
    *   :math:`\hat{\mathbf{p}} = (\hat{p}_0, \ldots, \hat{p}_{K-1})
        \in [0, 1]^K` is a vector representing an estimated probability
        distribution over the :math:`K` classes,
    *   :math:`\gamma` (gamma, not :math:`y`) is the *focusing parameter* that
        specifies how much higher-confidence correct predictions contribute to
        the overall loss (the higher the :math:`\gamma`, the higher the rate at
        which easy-to-classify examples are down-weighted).

    The usual multiclass softmax cross-entropy loss is recovered by setting
    :math:`\gamma = 0`.

    Parameters
    ----------
    y_true : tensor-like
        Integer class labels.

    y_pred : tensor-like
        Either probabilities or logits, depending on the `from_logits`
        parameter.

    gamma : float or tensor-like of shape (K,)
        The focusing parameter :math:`\gamma`. Higher values of `gamma` make
        easy-to-classify examples contribute less to the loss relative to
        hard-to-classify examples. Must be non-negative. This can be a
        one-dimensional tensor, in which case it specifies a focusing parameter
        for each class.

    class_weight: tensor-like of shape (K,)
        Weighting factor for each of the :math:`k` classes. If not specified,
        then all classes are weighted equally.

    from_logits : bool, optional
        Whether `y_pred` contains logits or probabilities.

    axis : int, optional
        Channel axis in the `y_pred` tensor.

    Returns
    -------
    :class:`tf.Tensor`
        The focal loss for each example.

    Examples
    --------

    This function computes the per-example focal loss between a one-dimensional
    integer label vector and a two-dimensional prediction matrix:

    >>> import numpy as np
    >>> from focal_loss import sparse_categorical_focal_loss
    >>> y_true = [0, 1, 2]
    >>> y_pred = [[0.8, 0.1, 0.1], [0.2, 0.7, 0.1], [0.2, 0.2, 0.6]]
    >>> loss = sparse_categorical_focal_loss(y_true, y_pred, gamma=2)
    >>> np.set_printoptions(precision=3)
    >>> print(loss.numpy())
    [0.009 0.032 0.082]

    Warnings
    --------
    This function does not reduce its output to a scalar, so it cannot be passed
    to :meth:`tf.keras.Model.compile` as a `loss` argument. Instead, use the
    wrapper class :class:`~focal_loss.SparseCategoricalFocalLoss`.

    References
    ----------
    .. [1] T. Lin, P. Goyal, R. Girshick, K. He and P. Dollár. Focal loss for
        dense object detection. IEEE Transactions on Pattern Analysis and
        Machine Intelligence, 2018.
        (`DOI <https://doi.org/10.1109/TPAMI.2018.2858826>`__)
        (`arXiv preprint <https://arxiv.org/abs/1708.02002>`__)

    See Also
    --------
    :meth:`~focal_loss.SparseCategoricalFocalLoss`
        A wrapper around this function that makes it a
        :class:`tf.keras.losses.Loss`.
    """
    # Process focusing parameter
    gamma = tf.convert_to_tensor(gamma, dtype=tf.dtypes.float32)
    gamma_rank = gamma.shape.rank
    scalar_gamma = gamma_rank == 0

    # Process class weight
    if class_weight is not None:
        class_weight = tf.convert_to_tensor(class_weight,
                                            dtype=tf.dtypes.float32)

    # Process prediction tensor
    y_pred = tf.convert_to_tensor(y_pred)
    y_pred_rank = y_pred.shape.rank
    if y_pred_rank is not None:
        axis %= y_pred_rank
        if axis != y_pred_rank - 1:
            # Put channel axis last for sparse_softmax_cross_entropy_with_logits
            perm = list(itertools.chain(range(axis),
                                        range(axis + 1, y_pred_rank), [axis]))
            y_pred = tf.transpose(y_pred, perm=perm)
    elif axis != -1:
        raise ValueError(
            f'Cannot compute sparse categorical focal loss with axis={axis} on '
            'a prediction tensor with statically unknown rank.')
    y_pred_shape = tf.shape(y_pred)

    # Process ground truth tensor
    y_true = tf.dtypes.cast(y_true, dtype=tf.dtypes.int64)
    y_true_rank = y_true.shape.rank

    if y_true_rank is None:
        raise NotImplementedError('Sparse categorical focal loss not supported '
                                  'for target/label tensors of unknown rank')

    reshape_needed = (y_true_rank is not None and y_pred_rank is not None and
                      y_pred_rank != y_true_rank + 1)
    if reshape_needed:
        y_true = tf.reshape(y_true, [-1])
        y_pred = tf.reshape(y_pred, [-1, y_pred_shape[-1]])

    if from_logits:
        logits = y_pred
        probs = tf.nn.softmax(y_pred, axis=-1)
    else:
        probs = y_pred
        logits = tf.math.log(tf.clip_by_value(y_pred, _EPSILON, 1 - _EPSILON))

    xent_loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=y_true,
        logits=logits,
    )

    y_true_rank = y_true.shape.rank
    probs = tf.gather(probs, y_true, axis=-1, batch_dims=y_true_rank)
    if not scalar_gamma:
        gamma = tf.gather(gamma, y_true, axis=0, batch_dims=y_true_rank)
    focal_modulation = (1 - probs) ** gamma
    loss = focal_modulation * xent_loss

    if class_weight is not None:
        class_weight = tf.gather(class_weight, y_true, axis=0,
                                 batch_dims=y_true_rank)
        loss *= class_weight

    if reshape_needed:
        loss = tf.reshape(loss, y_pred_shape[:-1])

    return loss


@tf.keras.utils.register_keras_serializable()
class SparseCategoricalFocalLoss(tf.keras.losses.Loss):
    r"""Focal loss function for multiclass classification with integer labels.

    This loss function generalizes multiclass softmax cross-entropy by
    introducing a hyperparameter :math:`\gamma` (gamma), called the
    *focusing parameter*, that allows hard-to-classify examples to be penalized
    more heavily relative to easy-to-classify examples.

    This class is a wrapper around
    :class:`~focal_loss.sparse_categorical_focal_loss`. See the documentation
    there for details about this loss function.

    Parameters
    ----------
    gamma : float or tensor-like of shape (K,)
        The focusing parameter :math:`\gamma`. Higher values of `gamma` make
        easy-to-classify examples contribute less to the loss relative to
        hard-to-classify examples. Must be non-negative. This can be a
        one-dimensional tensor, in which case it specifies a focusing parameter
        for each class.

    class_weight: tensor-like of shape (K,)
        Weighting factor for each of the :math:`k` classes. If not specified,
        then all classes are weighted equally.

    from_logits : bool, optional
        Whether model prediction will be logits or probabilities.

    **kwargs : keyword arguments
        Other keyword arguments for :class:`tf.keras.losses.Loss` (e.g., `name`
        or `reduction`).

    Examples
    --------

    An instance of this class is a callable that takes a rank-one tensor of
    integer class labels `y_true` and a tensor of model predictions `y_pred` and
    returns a scalar tensor obtained by reducing the per-example focal loss (the
    default reduction is a batch-wise average).

    >>> from focal_loss import SparseCategoricalFocalLoss
    >>> loss_func = SparseCategoricalFocalLoss(gamma=2)
    >>> y_true = [0, 1, 2]
    >>> y_pred = [[0.8, 0.1, 0.1], [0.2, 0.7, 0.1], [0.2, 0.2, 0.6]]
    >>> loss_func(y_true, y_pred)
    <tf.Tensor: shape=(), dtype=float32, numpy=0.040919524>

    Use this class in the :mod:`tf.keras` API like any other multiclass
    classification loss function class that accepts integer labels found in
    :mod:`tf.keras.losses` (e.g.,
    :class:`tf.keras.losses.SparseCategoricalCrossentropy`):

    .. code-block:: python

        # Typical usage
        model = tf.keras.Model(...)
        model.compile(
            optimizer=...,
            loss=SparseCategoricalFocalLoss(gamma=2),  # Used here like a tf.keras loss
            metrics=...,
        )
        history = model.fit(...)

    See Also
    --------
    :meth:`~focal_loss.sparse_categorical_focal_loss`
        The function that performs the focal loss computation, taking a label
        tensor and a prediction tensor and outputting a loss.
    """

    def __init__(self, gamma, class_weight: Optional[Any] = None,
                 from_logits: bool = False, **kwargs):
        super().__init__(**kwargs)
        self.gamma = gamma
        self.class_weight = class_weight
        self.from_logits = from_logits

    def get_config(self):
        """Returns the config of the layer.

        A layer config is a Python dictionary containing the configuration of a
        layer. The same layer can be re-instantiated later (without its trained
        weights) from this configuration.

        Returns
        -------
        dict
            This layer's config.
        """
        config = super().get_config()
        config.update(gamma=self.gamma, class_weight=self.class_weight,
                      from_logits=self.from_logits)
        return config

    def call(self, y_true, y_pred):
        """Compute the per-example focal loss.

        This method simply calls
        :meth:`~focal_loss.sparse_categorical_focal_loss` with the appropriate
        arguments.

        Parameters
        ----------
        y_true : tensor-like, shape (N,)
            Integer class labels.

        y_pred : tensor-like, shape (N, K)
            Either probabilities or logits, depending on the `from_logits`
            parameter.

        Returns
        -------
        :class:`tf.Tensor`
            The per-example focal loss. Reduction to a scalar is handled by
            this layer's
            :meth:`~focal_loss.SparseCategoricalFocalLoss.__call__` method.
        """
        return sparse_categorical_focal_loss(y_true=y_true, y_pred=y_pred,
                                             class_weight=self.class_weight,
                                             gamma=self.gamma,
                                             from_logits=self.from_logits)
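
Because the class is decorated with @tf.keras.utils.register_keras_serializable() and implements get_config, it round-trips through the Keras serialization machinery, which is what lets a model compiled with this loss be saved and reloaded. A minimal sketch (my own example, not from the repository; exact serialization formats vary slightly across TF 2.x versions):

import tensorflow as tf
from focal_loss import SparseCategoricalFocalLoss

loss_obj = SparseCategoricalFocalLoss(gamma=2, from_logits=True)

# Serialize to a config and rebuild the loss from it; no custom_objects
# argument is needed because the class is registered with Keras.
config = tf.keras.losses.serialize(loss_obj)
restored = tf.keras.losses.deserialize(config)
print(type(restored).__name__, restored.gamma, restored.from_logits)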

Corresponding source: focal-loss/_categorical_focal_loss.py at master · artemmavrin/focal-loss · GitHub
