192. SiLU


SiLU (the sigmoid-weighted linear unit) was proposed as an activation function for neural network function approximation in reinforcement learning. It is defined as SiLU(x) = x · σ(x), where σ is the logistic sigmoid. dSiLU is the derivative of SiLU, dSiLU(x) = σ(x)(1 + x(1 − σ(x))), and it is used as an activation function in its own right.

dSiLU looks like a steeper version of the sigmoid that overshoots it, and it is proposed as a competitive alternative to the sigmoid.
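A minimal NumPy sketch of both functions, following the definitions above (the function names here are illustrative, not from the paper):

```python
import numpy as np

def sigmoid(x):
    # Logistic sigmoid: sigma(x) = 1 / (1 + exp(-x))
    return 1.0 / (1.0 + np.exp(-x))

def silu(x):
    # SiLU: the input scaled by its own sigmoid, x * sigma(x)
    return x * sigmoid(x)

def dsilu(x):
    # dSiLU: the derivative of SiLU, sigma(x) * (1 + x * (1 - sigma(x)))
    s = sigmoid(x)
    return s * (1.0 + x * (1.0 - s))

if __name__ == "__main__":
    xs = np.linspace(-6.0, 6.0, 7)
    print("x:    ", xs)
    print("SiLU: ", np.round(silu(xs), 4))
    print("dSiLU:", np.round(dsilu(xs), 4))
```

Evaluating dsilu on a range of inputs shows the "overshooting sigmoid" shape: unlike the sigmoid, it rises above 1 for moderately positive inputs and dips below 0 for moderately negative ones before flattening out.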

In the paper's experiments, both activations achieved better scores than their ReLU and sigmoid counterparts.

Reference: Elfwing, Uchibe, and Doya, "Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning" (2017).