Understand the Difference Between ‘SAME’ and ‘VALID’ Padding in Convolution Networks

August 9, 2020

When you use convolutional networks in deep learning, you may notice the 'SAME' and 'VALID' padding options. In this tutorial, we will explain the difference between them.

In the TensorFlow tf.nn.conv2d() function, the padding parameter can be either 'SAME' or 'VALID'.
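For instance, here is a minimal sketch of passing each value to tf.nn.conv2d(); the 5x5 input and 3x3 filter are just dummy tensors for illustration:

import tensorflow as tf

# Dummy NHWC input and HWIO filter, only to show the padding argument.
x = tf.ones([1, 5, 5, 1])   # batch = 1, 5x5 image, 1 channel
w = tf.ones([3, 3, 1, 1])   # 3x3 filter, 1 input channel, 1 output channel

same = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME')
valid = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='VALID')

print(same.shape)   # (1, 5, 5, 1) -- spatial size is preserved
print(valid.shape)  # (1, 3, 3, 1) -- output shrinks, no padding added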

The Difference Between ‘SAME’ and ‘VALID’ Padding

We will use an example to illustrate the difference between them.

Suppose input width = 13, filter width = 6, and stride = 5.

(Figure: 'VALID' vs 'SAME' padding for input width 13, filter width 6, stride 5.)

'VALID' only ever drops the right-most columns (or bottom-most rows).

'SAME' tries to pad evenly on the left and right (padding the right and bottom first), but if the number of columns to be added is odd, the extra column goes on the right, as in this example (the same logic applies vertically: there may be an extra row of zeros at the bottom).
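To make this concrete, here is a small Python sketch of the output-width and padding formulas behind this behavior (W, F and S are just shorthand for the input width, filter width and stride above):

import math

W, F, S = 13, 6, 5   # input width, filter width, stride from the example above

# 'VALID': no padding, windows that do not fit completely are dropped.
valid_out = (W - F) // S + 1                     # -> 2

# 'SAME': output width is ceil(W / S), zeros are padded to make that work.
same_out = math.ceil(W / S)                      # -> 3
pad_total = max((same_out - 1) * S + F - W, 0)   # -> 3 columns of zeros
pad_left = pad_total // 2                        # -> 1
pad_right = pad_total - pad_left                 # -> 2, the extra column goes right

print(valid_out, same_out, pad_left, pad_right)  # 2 3 1 2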

Here we will use an example to verify the difference between 'SAME' and 'VALID' padding.

import tensorflow as tf
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data

# Load MNIST (TensorFlow 1.x tutorial dataset)
mnist = input_data.read_data_sets("./MNIST_data/", one_hot=True)

# Placeholders for flattened 28x28 images and their one-hot labels
xs = tf.placeholder(tf.float32, [None, 784])
ys = tf.placeholder(tf.float32, [None, 10])

# Reshape to NHWC: [batch, 28, 28, 1]
x = tf.reshape(xs, [-1, 28, 28, 1])

# Two convolutions that differ only in padding
conv1 = tf.layers.conv2d(inputs=x, use_bias=True, filters=3, kernel_size=[3, 3], strides=2, padding='SAME')
conv2 = tf.layers.conv2d(inputs=x, use_bias=True, filters=3, kernel_size=[3, 3], strides=2, padding='VALID')

sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)

# Apply both convolutions to a single MNIST image and compare the output shapes
batch_xs, batch_ys = mnist.train.next_batch(1)
print("SAME:", np.shape(sess.run(conv1, feed_dict={xs: batch_xs, ys: batch_ys})))
print("VALID:", np.shape(sess.run(conv2, feed_dict={xs: batch_xs, ys: batch_ys})))

Running this code, we get:

SAME: (1, 14, 14, 3)

VALID: (1, 13, 13, 3)
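These shapes match the formulas above: with a 28x28 MNIST image, a 3x3 kernel and stride 2, 'SAME' gives ceil(28 / 2) = 14, while 'VALID' gives (28 - 3) // 2 + 1 = 13.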
