The activation ops provide different types of nonlinearities for use in neural networks. These include:
- smooth nonlinearities (sigmoid, tanh, elu, softplus, and softsign)
- continuous but not everywhere differentiable functions (relu, relu6, and relu_x), and
- random regularization (dropout)
All activation ops apply componentwise, and produce a tensor of the same shape as the input tensor.
tf.nn.relu(features, name=None)
Computes rectified linear: max(features, 0).
tf.nn.relu6(features, name=None)
Computes Rectified Linear 6: min(max(features, 0), 6).
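For instance, applying these two ops to a small constant tensor (a minimal sketch; the input values are made up for illustration):

```python
import tensorflow as tf

features = tf.constant([-3.0, -1.0, 0.0, 2.0, 8.0])

y = tf.nn.relu(features)    # evaluates to [0., 0., 0., 2., 8.]
y6 = tf.nn.relu6(features)  # evaluates to [0., 0., 0., 2., 6.] (capped at 6)
```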
tf.nn.elu(features, name=None)
Computes exponential linear: exp(features) - 1 if features < 0, features otherwise.
See Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
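As a plain-NumPy sketch of the formula above (a reference implementation for illustration, not the TensorFlow op itself; the helper name is made up):

```python
import numpy as np

def elu_reference(features):
    # exp(features) - 1 where features < 0, the identity otherwise
    return np.where(features < 0, np.exp(features) - 1.0, features)

elu_reference(np.array([-2.0, 0.0, 3.0]))  # [~-0.865, 0., 3.]
```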
tf.nn.softplus(features, name=None)
Computes softplus: log(exp(features) + 1).
tf.nn.softsign(features, name=None)
Computes softsign: features / (abs(features) + 1).
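Both formulas can be written out directly in NumPy for reference (illustrative only; the helper names are made up here):

```python
import numpy as np

def softplus_reference(features):
    # log(exp(features) + 1)
    return np.log(np.exp(features) + 1.0)

def softsign_reference(features):
    # features / (abs(features) + 1)
    return features / (np.abs(features) + 1.0)

x = np.array([-1.0, 0.0, 1.0])
softplus_reference(x)  # [~0.313, ~0.693, ~1.313]
softsign_reference(x)  # [-0.5, 0., 0.5]
```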
tf.nn.dropout(x, keep_prob, noise_shape=None, seed=None, name=None)
With probability keep_prob, outputs the input element scaled up by 1 / keep_prob, otherwise outputs 0. The scaling is so that the expected sum is unchanged.
By default, each element is kept or dropped independently. If
noise_shape is specified, it must be broadcastable to the shape of x, and only dimensions with noise_shape[i] == shape(x)[i] will make independent decisions.
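A minimal usage sketch, assuming the keep_prob-style signature documented above (newer TensorFlow releases instead take a rate argument equal to 1 - keep_prob):

```python
import tensorflow as tf

x = tf.ones([4, 10])

# Keep each element independently with probability 0.5;
# kept elements are scaled up by 1 / 0.5 = 2.
dropped = tf.nn.dropout(x, keep_prob=0.5)

# noise_shape=[4, 1] broadcasts one keep/drop decision across each row,
# so every row comes out either all zeros or all 2s.
row_dropped = tf.nn.dropout(x, keep_prob=0.5, noise_shape=[4, 1])
```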
tf.nn.bias_add(value, bias, data_format=None, name=None)
This is (mostly) a special case of tf.add where bias is restricted to 1-D. Broadcasting is supported, so value may have any number of dimensions.
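For example (a sketch assuming the default data_format, where bias is matched against the last dimension of value):

```python
import tensorflow as tf

value = tf.ones([2, 3, 4])                 # any rank; last dimension has 4 channels
bias = tf.constant([0.1, 0.2, 0.3, 0.4])   # 1-D, one entry per channel

# bias is broadcast across the leading dimensions of value.
out = tf.nn.bias_add(value, bias)
```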
tf.nn.sigmoid(x, name=None)
Computes sigmoid of x element-wise. Specifically, y = 1 / (1 + exp(-x)).
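A NumPy sketch of the same formula, for reference (the helper name is illustrative):

```python
import numpy as np

def sigmoid_reference(x):
    # y = 1 / (1 + exp(-x))
    return 1.0 / (1.0 + np.exp(-x))

sigmoid_reference(np.array([-2.0, 0.0, 2.0]))  # [~0.119, 0.5, ~0.881]
```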
tf.nn.tanh(x, name=None)
Computes hyperbolic tangent of x element-wise.