




Readings on Activation Function

Parametric ReLU

In leaky ReLU the slope α of the negative part is a fixed hyperparameter; here α is instead a trainable parameter learned during training, which seems to work better than leaky ReLU. This extension of leaky ReLU is known as Parametric ReLU (PReLU); a short code sketch follows the list of properties below.

  • The parameter α is generally a number between 0 and 1, and is usually relatively small.

  • Has a slight advantage over Leaky ReLU because α is trainable rather than fixed.

  • Handles the dying-neuron problem, just as Leaky ReLU does.

  • f(x) is monotonic when α ≥ 0, and f′(x) is monotonic when α = 1.
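
  As a rough illustration, here is a minimal NumPy sketch of PReLU (not taken from any of the papers cited below; the function names and the example value of α are illustrative choices):

    import numpy as np

    def prelu(x, alpha):
        # Parametric ReLU: identity for positive inputs, slope alpha for
        # negative inputs. Unlike leaky ReLU, alpha is a trainable parameter
        # updated by backpropagation rather than a fixed constant.
        return np.where(x > 0, x, alpha * x)

    def prelu_grad_alpha(x):
        # Gradient of the output with respect to alpha, used to update alpha.
        return np.where(x > 0, 0.0, x)

    x = np.array([-2.0, -0.5, 0.0, 1.5])
    print(prelu(x, alpha=0.25))   # values: -0.5, -0.125, 0.0, 1.5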






    Swish

    • The Google Brain team has proposed a new activation function, named Swish, which is simply f(x) = x · sigmoid(x); see the sketch after this list.

    • Their experiments show that Swish tends to work better than ReLU on deeper models across a number of challenging data sets.

    • The curve of the Swish function is smooth and the function is differentiable at all points. This helps during model optimization and is considered one of the reasons Swish outperforms ReLU.

    • The Swish function is not monotonic: the value of the function may decrease even when the input increases.

    • The function is unbounded above and bounded below.
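
    A minimal NumPy sketch of Swish (the helper names are illustrative and not tied to any particular framework's implementation), showing the non-monotonic behaviour described above:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def swish(x):
        # Swish: f(x) = x * sigmoid(x). Smooth and differentiable everywhere,
        # unbounded above and bounded below (its minimum is roughly -0.28).
        return x * sigmoid(x)

    x = np.array([-5.0, -1.0, 0.0, 1.0, 5.0])
    print(swish(x))
    # swish(-5.0) is about -0.033 while swish(-1.0) is about -0.269, so the
    # output decreases even though the input increased: Swish is not monotonic.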






    Summary of readings (No. / Author, Reference, Title / Activation function / Equation or notes):

    1. Rethinking the Role of Activation Functions in Deep Convolutional Neural Networks for Image Classification
       Activation function: Sigmoid-ReLU

    2. Research on convolutional neural network based on improved ReLU piecewise activation function
       Activation function: Softsign-ReLU

    3. X-ray weld image classification using improved convolutional neural network
       Activation function: LReLU-Softplus

    4. Improved Convolutional Neural Network Based on Fast Exponentially Linear Unit Activation Function
       Activation function: FELU (from ELU)

    5. The Influence of the Activation Function in a Convolution Neural Network Model of Facial Expression Recognition
       Activation function: LS-ReLU

    6. A new Conv2D model with modified ReLU activation function for identification of disease type and severity in cucumber plant
       Activation function: M-ReLU (extension of ReLU)

    7. RSigELU: A nonlinear activation function for deep neural networks
       Activation function: RSigELU (a combination of ReLU, sigmoid, and ELU)

    8. Funnel Activation for Visual Recognition
       Activation function: FReLU (ReLU-PReLU)

    9. ALReLU: A different approach on Leaky ReLU activation function to improve Neural Networks Performance
       Activation function: ALReLU

    10. Parametric Deformable Exponential Linear Units for deep neural networks
        Activation function: PDELU

    11. Elastic exponential linear units for convolutional neural networks
        Activation function: EELU

    12. QReLU and m-QReLU: Two novel quantum activation functions to aid medical diagnostics
        Activation function: QReLU and m-QReLU
        Notes: A two-step quantum approach was applied to ReLU, taking the ReLU solution for positive values (R(z) = z for all z > 0) and the Leaky ReLU solution for negative values (R(z) = α·z for all z ≤ 0, where α = 0.01) as the starting point, which was then improved quantistically.

    13. A Dynamic ReLU on Neural Network
        Activation function: D-ReLU
        Notes: Proposes a modified ReLU, the D-ReLU, with a dynamic threshold that changes in real time according to the range of values in each batch.

    14. Improving Convolutional Neural Network Expression via Difference Exponentially Linear Units
        Activation function: DELU

    15. Algorithm Research on Improving Activation Function of Convolutional Neural Networks
        Activation function: Softplus-ReLU
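
    The D-ReLU entry (no. 13) only sketches the idea in words. Purely as a hypothetical illustration of a ReLU whose threshold depends on batch statistics (this is not the formulation from the paper; the function name dynamic_relu, the fraction k, and the threshold formula are all assumptions):

    import numpy as np

    def dynamic_relu(x, k=0.1):
        # Hypothetical batch-dependent ReLU: the cut-off is not fixed at 0 but
        # is recomputed per batch as a fraction k of the batch's value range
        # above the batch minimum (assumed form, for illustration only).
        lo, hi = x.min(), x.max()
        threshold = lo + k * (hi - lo)
        return np.where(x > threshold, x, 0.0)

    batch = np.array([-3.0, -0.5, 0.2, 1.0, 4.0])
    print(dynamic_relu(batch))   # threshold = -3.0 + 0.1 * 7.0 = -2.3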


