TensorFlow API – Activation Function

Activation Functions The activation ops provide different types of nonlinearities for use in neural networks. These include: smooth nonlinearities (sigmoid, tanh, elu, softplus, and softsign); continuous but not everywhere differentiable functions (relu, relu6, and relu_x); and random regularization (dropout). All activation ops apply component-wise and produce a tensor of the same shape as the input. …
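As a minimal sketch (assuming the TensorFlow 2 `tf.nn` namespace; the input values are made up for illustration), each of these ops maps a tensor element-wise to an output of the same shape:

```python
import tensorflow as tf

# Example input; every activation below returns a tensor of the same shape.
x = tf.constant([-2.0, -0.5, 0.0, 0.5, 2.0])

print(tf.nn.sigmoid(x))   # smooth: squashes values into (0, 1)
print(tf.nn.tanh(x))      # smooth: squashes values into (-1, 1)
print(tf.nn.elu(x))       # smooth: exponential linear unit
print(tf.nn.softplus(x))  # smooth: log(1 + exp(x)), a soft relu
print(tf.nn.softsign(x))  # smooth: x / (1 + |x|)
print(tf.nn.relu(x))      # continuous but not differentiable at 0
print(tf.nn.relu6(x))     # relu clipped at 6
print(tf.nn.dropout(x, rate=0.5))  # random regularization: zeroes ~half the
                                   # elements, rescales the rest by 1/(1-rate)
```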

Machine_Learning_with_TensorFlow (6)

Reinforcement Learning All these examples can be unified under a general formulation: performing an action in a scenario can yield a reward. A more technical term for a scenario is a state, and we call the collection of all possible states a state-space. Performing an action causes the state to change. But the question is, … Continue reading Machine_Learning_with_TensorFlow (6)
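As a toy sketch of that formulation (the environment, states, and reward values below are hypothetical, invented purely for illustration), an agent repeatedly performs an action, the state changes, and a reward may be yielded:

```python
import random

# Hypothetical state-space: positions 0..4 on a line; 4 is the goal state.
STATE_SPACE = range(5)
ACTIONS = [-1, +1]  # step left or step right

def step(state, action):
    """Performing an action changes the state and may yield a reward."""
    next_state = min(max(state + action, 0), 4)
    reward = 1.0 if next_state == 4 else 0.0  # reward only at the goal state
    return next_state, reward

state = 0
total_reward = 0.0
for _ in range(10):
    action = random.choice(ACTIONS)      # a random policy picks an action
    state, reward = step(state, action)  # the action transitions the state
    total_reward += reward

print("final state:", state, "total reward:", total_reward)
```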