Note

In the following, the cart or pol prefix indicates either Type A (Cartesian) or Type B (polar) activation functions, following the notation of [CIT2003-KUROE].

TYPE A: Cartesian form

cart_sigmoid(z)

Can be selected with the 'cart_sigmoid' string.

Applies the sigmoid function to both the real and imaginary parts of z.

\[\frac{1}{1 + e^{-x}} + j \frac{1}{1 + e^{-y}}\]

where

\[z = x + j y\]
Parameters: z – Tensor to be used as input of the activation function
Returns: Tensor result of the applied activation function
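
As an illustration of the Cartesian (Type A) pattern, here is a minimal sketch of how cart_sigmoid could be written with TensorFlow primitives. It assumes z is a complex tensor (e.g. tf.complex64) and is not taken from the library's actual implementation.

    import tensorflow as tf

    def cart_sigmoid_sketch(z: tf.Tensor) -> tf.Tensor:
        """Apply the real-valued sigmoid to Re(z) and Im(z) separately, then recombine."""
        x = tf.math.real(z)                    # x = Re(z)
        y = tf.math.imag(z)                    # y = Im(z)
        return tf.complex(tf.math.sigmoid(x),  # 1 / (1 + e^{-x})
                          tf.math.sigmoid(y))  # 1 / (1 + e^{-y})
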
cart_elu(z, alpha=0.1)

Applies the Exponential Linear Unit (ELU) to both the real and imaginary parts of z.

\[\begin{split} x ,\quad x > 0 \\ \alpha * (e^{x} - 1) ,\quad x \leq 0\end{split}\]
Parameters:
  • z – Input tensor.
  • alpha – A scalar, slope of negative section.
Returns:

Tensor result of the applied activation function
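
All of the Cartesian activations in this section follow the same recipe: apply a real-valued activation independently to the real and imaginary parts and recombine. A hedged sketch of that shared pattern, using cart_elu as the example; the helper name apply_cartesian is hypothetical and not part of the library:

    import tensorflow as tf

    def apply_cartesian(z, real_act, **kwargs):
        # Hypothetical helper: apply a real-valued activation to Re(z) and Im(z)
        # independently, then recombine into a complex tensor.
        return tf.complex(real_act(tf.math.real(z), **kwargs),
                          real_act(tf.math.imag(z), **kwargs))

    def cart_elu_sketch(z, alpha=0.1):
        # tf.keras.activations.elu computes x for x > 0 and alpha*(exp(x)-1) otherwise.
        return apply_cartesian(z, tf.keras.activations.elu, alpha=alpha)
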

cart_exponential(z)

Exponential activation function. Applies the exponential activation to both the real and imaginary parts of z:

\[e^x\]
Parameters: z – Input tensor.
Returns: Tensor result of the applied activation function
cart_hard_sigmoid(z)
Applies the hard sigmoid function to both the real and imaginary parts of z.

The hard sigmoid is faster to compute than the sigmoid activation. Hard sigmoid activation:

\[\begin{split} 0 ,\quad x < -2.5 \\ 1 ,\quad x > 2.5 \\ 0.2 * x + 0.5 ,\quad -2.5 \leq x \leq 2.5\end{split}\]
Parameters: z – Input tensor.
Returns: Tensor result of the applied activation function
cart_relu(z, alpha=0.0, max_value=None, threshold=0)
Applies Rectified Linear Unit to both the real and imaginary parts of z.

With default values, the relu function returns element-wise max(x, 0).

Otherwise, it follows:

\[\begin{split} f(x) = \textrm{max\_value}, \quad \textrm{for} \quad x \geq \textrm{max\_value} \\ f(x) = x, \quad \textrm{for} \quad \textrm{threshold} \leq x < \textrm{max\_value} \\ f(x) = \alpha * (x - \textrm{threshold}), \quad \textrm{otherwise} \\\end{split}\]
Parameters:
  • z – Input tensor.
  • alpha – A scalar, slope of negative section. Default: 0.0
  • max_value – Saturation value; outputs are capped at this value (None means no saturation).
  • threshold – Threshold value below which the slope alpha applies.
Returns:

Tensor result of the applied activation function
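
A sketch of how cart_relu's extra arguments could be forwarded to the real ReLU on each part, assuming tf.keras.activations.relu as the underlying real activation; this is not the library's verbatim code:

    import tensorflow as tf

    def cart_relu_sketch(z, alpha=0.0, max_value=None, threshold=0.0):
        # Forward alpha, max_value and threshold to the real ReLU on each part.
        def real_relu(x):
            return tf.keras.activations.relu(x, alpha=alpha,
                                             max_value=max_value,
                                             threshold=threshold)
        return tf.complex(real_relu(tf.math.real(z)),
                          real_relu(tf.math.imag(z)))
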
cart_leaky_relu(z, alpha=0.2, name=None)
Applies Leaky Rectified Linear Unit [CIT2013-MAAS] to both the real and imaginary parts of z.
Parameters:
  • z – Input tensor.
  • alpha – Slope of the activation function at x < 0. Default: 0.2
  • name – A name for the operation (optional).
Returns:

Tensor result of the applied activation function

cart_selu(z)

Applies Scaled Exponential Linear Unit (SELU) [CIT2017-KLAMBAUER] to both the real and imaginary parts of z.

The scaled exponential unit activation:

\[\textrm{scale} * \textrm{elu}(x, \alpha)\]

where \(\textrm{scale} \approx 1.0507\) and \(\alpha \approx 1.6733\) [CIT2017-KLAMBAUER].
Parameters: z – Input tensor.
Returns: Tensor result of the applied activation function
cart_softplus(z)

Applies the Softplus activation function to both the real and imaginary parts of z. The Softplus function:

\[\log(e^x + 1)\]
Parameters: z – Input tensor.
Returns: Tensor result of the applied activation function
cart_softsign(z)

Applies the Softsign activation function to both the real and imaginary parts of z. The softsign activation:

\[\frac{x}{\lvert x \rvert + 1}\]
Parameters: z – Input tensor.
Returns: Tensor result of the applied activation function
cart_tanh(z)
Applies the Hyperbolic Tangent (tanh) activation function to both the real and imaginary parts of z.

The tanh activation:

\[\tanh(x) = \frac{\sinh(x)}{\cosh(x)} = \frac{e^x - e^{-x}}{e^x + e^{-x}}.\]

The derivative of tanh is computed as \(1 - \tanh^2\), so it should be fast to compute for backprop.
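
For reference, this identity follows from the quotient rule applied to \(\sinh/\cosh\), together with \(\cosh^2(x) - \sinh^2(x) = 1\):

\[\frac{d}{dx}\tanh(x) = \frac{\cosh^2(x) - \sinh^2(x)}{\cosh^2(x)} = 1 - \tanh^2(x)\]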

Parameters: z – Input tensor.
Returns: Tensor result of the applied activation function
cart_softmax(z)
Applies the softmax function to both the real and imaginary parts of z.

The softmax activation function transforms the outputs so that all values are in range (0, 1) and sum to 1. It is often used as the activation for the last layer of a classification network because the result could be interpreted as a probability distribution. The softmax of x is calculated by exp(x)/tf.reduce_sum(exp(x)).

Parameters: z – Input tensor.
Returns: Tensor result of the applied activation function
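
Because softmax is applied to each part separately, the real parts of the output sum to 1 and the imaginary parts sum to 1 independently. A minimal sketch of this behaviour, assuming tf.nn.softmax as the underlying real activation and the softmax taken over the last axis:

    import tensorflow as tf

    def cart_softmax_sketch(z, axis=-1):
        # exp(x) / reduce_sum(exp(x)) over `axis`, applied to Re(z) and Im(z) separately.
        return tf.complex(tf.nn.softmax(tf.math.real(z), axis=axis),
                          tf.nn.softmax(tf.math.imag(z), axis=axis))
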

TYPE B: Polar form

pol_selu(z)

Applies Scaled Exponential Linear Unit (SELU) [CIT2017-KLAMBAUER] to the absolute value (modulus) of z, keeping the phase unchanged.

The scaled exponential unit activation:

\[\textrm{scale} * \textrm{elu}(x, \alpha)\]
Parameters: z – Input tensor.
Returns: Tensor result of the applied activation function
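
For contrast with the Cartesian form, here is a hedged sketch of the polar (Type B) pattern behind pol_selu: the real activation acts on the modulus |z| while the phase arg(z) is kept unchanged. This is an illustrative reconstruction, not the library's verbatim implementation.

    import tensorflow as tf

    def pol_selu_sketch(z):
        # Apply SELU to |z| and re-attach the original phase of z.
        modulus = tf.math.abs(z)       # |z|, a real tensor
        phase = tf.math.angle(z)       # arg(z)
        new_modulus = tf.keras.activations.selu(modulus)
        return tf.complex(new_modulus * tf.math.cos(phase),
                          new_modulus * tf.math.sin(phase))
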
[CIT2003-KUROE] Kuroe, Yasuaki, Mitsuo Yoshida, and Takehiro Mori. “On activation functions for complex-valued neural networks—existence of energy functions—.” Artificial Neural Networks and Neural Information Processing—ICANN/ICONIP 2003. Springer, Berlin, Heidelberg, 2003. 985-992.
[CIT2013-MAAS] A. L. Maas, A. Y. Hannun, and A. Y. Ng, “Rectifier Nonlinearities Improve Neural Network Acoustic Models,” 2013.
[CIT2017-KLAMBAUER] G. Klambauer, T. Unterthiner, A. Mayr, and S. Hochreiter, “Self-Normalizing Neural Networks,” arXiv:1706.02515 [cs, stat], Sep. 2017. Available: http://arxiv.org/abs/1706.02515.