Note

In the following, the cart and pol prefixes denote type A (cartesian) and type B (polar) activation functions, respectively, following the notation of [CIT2003-KUROE].

# TYPE A: Cartesian form

cart_sigmoid(z)

Called by passing the 'cart_sigmoid' string.

Applies the sigmoid function to both the real and imaginary parts of z:

$\frac{1}{1 + e^{-x}} + j \frac{1}{1 + e^{-y}}$

where

$z = x + j y$

Parameters: z – Tensor to be used as input of the activation function.
Returns: Tensor result of the applied activation function.
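As a sketch of the Type A pattern (a NumPy approximation, not the library's actual TensorFlow implementation), cart_sigmoid amounts to applying the real sigmoid componentwise:

```python
import numpy as np

def sigmoid(x):
    # Real-valued logistic sigmoid: 1 / (1 + e^{-x})
    return 1.0 / (1.0 + np.exp(-x))

def cart_sigmoid(z):
    # Type A (cartesian): apply the real activation independently to
    # the real and imaginary parts, then recombine into a complex value.
    return sigmoid(np.real(z)) + 1j * sigmoid(np.imag(z))

print(cart_sigmoid(0 + 0j))  # sigmoid(0) = 0.5 on both parts -> (0.5+0.5j)
```

The same componentwise recipe underlies every cart_* function below; only the real activation changes.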
cart_elu(z, alpha=0.1)

Applies the Exponential Linear Unit (ELU) to both the real and imaginary parts of z:

$\begin{split} x ,\quad x > 0 \\ \alpha * (e^{x} - 1) ,\quad x \le 0 \end{split}$

Parameters: z – Input tensor. alpha – A scalar, slope of the negative section.
Returns: Tensor result of the applied activation function.
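A minimal NumPy sketch of the piecewise rule above (the library itself operates on TensorFlow tensors):

```python
import numpy as np

def elu(x, alpha=0.1):
    # x for x > 0, alpha * (exp(x) - 1) otherwise
    return np.where(x > 0, x, alpha * np.expm1(x))

def cart_elu(z, alpha=0.1):
    # Apply the real ELU to the real and imaginary parts separately.
    return elu(np.real(z), alpha) + 1j * elu(np.imag(z), alpha)
```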
cart_exponential(z)

Applies the exponential activation to both the real and imaginary parts of z:

$e^x$
Parameters: z – Input tensor.
Returns: Tensor result of the applied activation function.
cart_hard_sigmoid(z)
Applies the hard sigmoid function to both the real and imaginary parts of z.

The hard sigmoid is faster to compute than the sigmoid activation. Hard sigmoid activation:

$\begin{split} 0 ,\quad x < -2.5 \\ 0.2 * x + 0.5 ,\quad -2.5 \le x \le 2.5 \\ 1 ,\quad x > 2.5 \end{split}$
Parameters: z – Input tensor.
Returns: Tensor result of the applied activation function.
cart_relu(z, alpha=0.0, max_value=None, threshold=0)
Applies the Rectified Linear Unit to both the real and imaginary parts of z.

With default values, the relu function returns element-wise max(x, 0).

Otherwise, it follows:

$\begin{split} f(x) = \textrm{max_value}, \quad \textrm{for} \quad x >= \textrm{max_value} \\ f(x) = x, \quad \textrm{for} \quad \textrm{threshold} <= x < \textrm{max_value} \\ f(x) = \alpha * (x - \textrm{threshold}), \quad \textrm{otherwise} \\\end{split}$
Parameters: z – Input tensor. alpha – A scalar, slope for values below the threshold. max_value – Saturation value; None means no upper cap. threshold – Value below which the slope alpha applies.
Returns: Tensor result of the applied activation function.
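The three-branch rule above can be sketched in NumPy as follows (a hedged approximation of the library's TensorFlow implementation; parameter names follow the signature above):

```python
import numpy as np

def relu(x, alpha=0.0, max_value=None, threshold=0.0):
    # Below the threshold: alpha * (x - threshold); otherwise identity,
    # optionally capped at max_value.
    out = np.where(x >= threshold, x, alpha * (x - threshold))
    if max_value is not None:
        out = np.minimum(out, max_value)
    return out

def cart_relu(z, alpha=0.0, max_value=None, threshold=0.0):
    # Componentwise on the real and imaginary parts (Type A).
    return (relu(np.real(z), alpha, max_value, threshold)
            + 1j * relu(np.imag(z), alpha, max_value, threshold))
```

With the defaults (alpha=0, no max_value, threshold=0) this reduces to element-wise max(x, 0) on each part.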
cart_leaky_relu(z, alpha=0.2, name=None)
Applies the Leaky Rectified Linear Unit [CIT2013-MAAS] (source) to both the real and imaginary parts of z.
Parameters: z – Input tensor. alpha – Slope of the activation function at x < 0. Default: 0.2. name – A name for the operation (optional).
Returns: Tensor result of the applied activation function.
cart_selu(z)

Applies the Scaled Exponential Linear Unit (SELU) [CIT2017-KLAMBAUER] (source) to both the real and imaginary parts of z.

The scaled exponential unit activation:

$\textrm{scale} * \textrm{elu}(x, \alpha)$
Parameters: z – Input tensor.
Returns: Tensor result of the applied activation function.
cart_softplus(z)

Applies the softplus activation function to both the real and imaginary parts of z. The softplus function:

$\log(e^{x} + 1)$

Parameters: z – Input tensor.
Returns: Tensor result of the applied activation function.
cart_softsign(z)

Applies the softsign activation function to both the real and imaginary parts of z. The softsign activation:

$\frac{x}{\lvert x \rvert + 1}$

Parameters: z – Input tensor.
Returns: Tensor result of the applied activation function.
cart_tanh(z)
Applies the hyperbolic tangent (tanh) activation function to both the real and imaginary parts of z.

The tanh activation:

$\tanh(x) = \frac{\sinh(x)}{\cosh(x)} = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$

The derivative of tanh is computed as $1 - \tanh^{2}(x)$, so it is fast to compute during backpropagation.

Parameters: z – Input tensor.
Returns: Tensor result of the applied activation function.
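The $1 - \tanh^{2}$ derivative claim is easy to check numerically (a NumPy sketch, not the library code):

```python
import numpy as np

def cart_tanh(z):
    # Type A tanh: componentwise on the real and imaginary parts.
    return np.tanh(np.real(z)) + 1j * np.tanh(np.imag(z))

# d/dx tanh(x) = 1 - tanh(x)**2, compared against a central
# finite difference at an arbitrary sample point.
x = 0.7
analytic = 1.0 - np.tanh(x) ** 2
h = 1e-6
numeric = (np.tanh(x + h) - np.tanh(x - h)) / (2 * h)
print(abs(analytic - numeric))  # small: the two derivatives agree
```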

# TYPE B: Polar form

pol_selu(z)

Applies Scaled Exponential Linear Unit (SELU) [CIT2017-KLAMBAUER] (source) to the absolute value of z, keeping the phase unchanged.

The scaled exponential unit activation:

$\textrm{scale} * \textrm{elu}(x, \alpha)$
Parameters: z – Input tensor.
Returns: Tensor result of the applied activation function.
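For contrast with the Type A pattern, a NumPy sketch of the polar rule (SELU constants as published in [CIT2017-KLAMBAUER]; the library itself operates on TensorFlow tensors):

```python
import numpy as np

# Fixed SELU constants from Klambauer et al. (2017).
SCALE = 1.0507009873554805
ALPHA = 1.6732632423543772

def selu(x):
    # scale * x for x > 0, scale * alpha * (exp(x) - 1) otherwise
    return SCALE * np.where(x > 0, x, ALPHA * np.expm1(x))

def pol_selu(z):
    # Type B (polar): activate the magnitude, keep the phase untouched.
    return selu(np.abs(z)) * np.exp(1j * np.angle(z))
```

Since $\lvert z \rvert \ge 0$, only the positive branch of SELU is ever exercised here; the activation simply rescales the magnitude while the phase passes through unchanged.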
 [CIT2003-KUROE] Y. Kuroe, M. Yoshida, and T. Mori, “On activation functions for complex-valued neural networks—existence of energy functions—,” Artificial Neural Networks and Neural Information Processing, ICANN/ICONIP 2003, Springer, Berlin, Heidelberg, 2003, pp. 985–992.
 [CIT2013-MAAS] A. L. Maas, A. Y. Hannun, and A. Y. Ng, “Rectifier Nonlinearities Improve Neural Network Acoustic Models,” 2013.
 [CIT2017-KLAMBAUER] G. Klambauer, T. Unterthiner, A. Mayr, and S. Hochreiter, “Self-Normalizing Neural Networks,” arXiv:1706.02515, Sep. 2017. Available: http://arxiv.org/abs/1706.02515.