tf.nn.dropout() allows us to add a dropout layer to a TensorFlow model. In this tutorial, we will introduce how to use it.
Syntax
The tf.nn.dropout() function is defined as:
tf.nn.dropout(x, keep_prob, noise_shape=None, seed=None, name=None)
With probability keep_prob, it outputs each element of the input \(x\) scaled up by 1 / keep_prob; otherwise it outputs 0. The scaling keeps the expected sum of the output equal to that of the input.
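Among the other arguments, noise_shape controls how the random keep/drop mask is broadcast across the input. For example, noise_shape=[5, 1] samples one decision per row, so whole rows are kept or dropped together. A minimal sketch (TF 1.x, self-contained):

import tensorflow as tf

x = tf.ones([5, 5])
# One keep/drop decision per row, broadcast across all 5 columns:
# each row becomes either all 0 or all 1.25.
row_dropout = tf.nn.dropout(x, 0.8, noise_shape=[5, 1])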
We will use an example to help you understand it.
How to use tf.nn.dropout()?
We will create a 5*5 matrix filled with 1s.
import numpy as np
import tensorflow as tf

x = tf.Variable(tf.ones([5, 5]))
Then we will apply a dropout layer to \(x\) with keep_prob = 0.8:
inputs = tf.nn.dropout(x, 0.8)
Finally, we will print the dropout output and its sum.
input_sum = tf.reduce_sum(inputs)
init = tf.global_variables_initializer()  # tf.initialize_all_variables() is deprecated
with tf.Session() as sess:
    sess.run(init)
    print(sess.run(inputs))
    # Note: each sess.run() samples a new dropout mask, so this sum
    # is computed from a different mask than the tensor printed above.
    print(sess.run(input_sum))
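Note that this code uses the TensorFlow 1.x API. If you are on TensorFlow 2.x, tf.nn.dropout takes rate (the drop probability, equal to 1 - keep_prob) instead of keep_prob, and runs eagerly without a Session. A minimal sketch:

import tensorflow as tf  # assuming TensorFlow 2.x

x = tf.ones([5, 5])
# keep_prob = 0.8 corresponds to rate = 0.2 in TF 2.x.
inputs = tf.nn.dropout(x, rate=0.2)
print(inputs.numpy())
print(tf.reduce_sum(inputs).numpy())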
Run this code, and we will find the dropout output (\(x\) itself is unchanged) is:
[[1.25 1.25 1.25 1.25 1.25]
 [1.25 1.25 1.25 0.   1.25]
 [0.   1.25 1.25 0.   0.  ]
 [1.25 1.25 0.   1.25 1.25]
 [1.25 1.25 1.25 1.25 1.25]]
On average, 25*(1-0.8) = 5 elements are set to 0 in the output, and the remaining elements are scaled to 1/0.8 = 1.25. In this run exactly 5 were dropped, but because the mask is random, the number of zeros varies from run to run.
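As a quick check, we can average the output mean over many runs; it should stay close to 1.0, because the 1/keep_prob scaling compensates for the dropped elements. A minimal sketch, reusing the \(x\) defined above:

mean_op = tf.reduce_mean(tf.nn.dropout(x, 0.8))
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Each sess.run() samples a fresh dropout mask.
    means = [sess.run(mean_op) for _ in range(1000)]
    print(sum(means) / len(means))  # close to 1.0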
However, you should notice: applying a dropout layer can slow down model training. Meanwhile, you can also use L2 regularization in TensorFlow. Here is a tutorial:
Multi-layer Neural Network Implements L2 Regularization in TensorFlow – TensorFLow Tutorial
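For a quick impression, here is a minimal sketch of adding an L2 penalty with tf.nn.l2_loss (the weight matrix, data loss, and penalty strength below are hypothetical names for illustration):

import tensorflow as tf

w = tf.Variable(tf.ones([5, 5]))  # hypothetical weight matrix
data_loss = tf.reduce_sum(w)      # hypothetical data loss
l2_lambda = 0.01                  # hypothetical penalty strength
# tf.nn.l2_loss(w) computes sum(w ** 2) / 2
total_loss = data_loss + l2_lambda * tf.nn.l2_loss(w)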