Implement Instance Normalization in TensorFlow – TensorFlow Tutorial

By | February 28, 2022

Instance Normalization was proposed in the paper Instance Normalization: The Missing Ingredient for Fast Stylization. It is also used in CycleGAN. In this tutorial, we will introduce how to implement it using TensorFlow.

Method 1: use tf.contrib.layers.instance_norm()

In TensorFlow 1.x, we can use tf.contrib.layers.instance_norm() to implement it.

This function is defined as:

tf.contrib.layers.instance_norm(
    inputs, center=True, scale=True, epsilon=1e-06, activation_fn=None,
    param_initializers=None, reuse=None, variables_collections=None,
    outputs_collections=None, trainable=True, data_format=DATA_FORMAT_NHWC,
    scope=None
)

We should notice:

inputs: A tensor with 2 or more dimensions, where the first dimension is batch_size. The normalization is over all but the last dimension if data_format is NHWC, and over all but the second dimension if data_format is NCHW. In other words, the mean and variance are computed per sample and per channel, over the spatial dimensions.
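A small NumPy sketch (not part of the original code) makes the reduction axes concrete for the NHWC case:

```python
import numpy as np

# an NHWC tensor: a batch of 4 images, 8x8 spatial size, 3 channels
x = np.random.randn(4, 8, 8, 3).astype(np.float32)

# instance normalization reduces over the spatial axes (1, 2) only,
# giving one mean/variance per (sample, channel) pair
mean = x.mean(axis=(1, 2), keepdims=True)
var = x.var(axis=(1, 2), keepdims=True)

print(mean.shape)  # (4, 1, 1, 3)
print(var.shape)   # (4, 1, 1, 3)
```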

This function is defined in tensorflow/contrib/layers/python/layers/normalization.py, where we can read the full implementation of instance normalization.

Method 2: Create a function to implement instance normalization

We can also write our own function to implement instance normalization. Here is an example:

# x is: batch_size * h * w * c
def instance_norm(x):
    with tf.variable_scope("instance_norm"):
        epsilon = 1e-6
        # per-instance mean and variance over the spatial dimensions (h, w)
        mean, var = tf.nn.moments(x, [1, 2], keep_dims=True)
        scale = tf.get_variable('scale', [x.get_shape()[-1]],
                                initializer=tf.ones_initializer())
        offset = tf.get_variable('offset', [x.get_shape()[-1]],
                                 initializer=tf.constant_initializer(0.0))
        out = scale * tf.div(x - mean, tf.sqrt(var + epsilon)) + offset
        return out
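Since tf.contrib was removed in TensorFlow 2.x, the arithmetic of the function above can also be checked with a plain NumPy sketch (instance_norm_np is a hypothetical name; scale and offset use the same ones/zeros initialization as the TF code):

```python
import numpy as np

def instance_norm_np(x, epsilon=1e-6):
    # x: batch_size * h * w * c, same layout as the TF function above
    mean = x.mean(axis=(1, 2), keepdims=True)
    var = x.var(axis=(1, 2), keepdims=True)
    scale = np.ones(x.shape[-1], dtype=x.dtype)    # matches tf.ones_initializer()
    offset = np.zeros(x.shape[-1], dtype=x.dtype)  # matches tf.constant_initializer(0.0)
    return scale * (x - mean) / np.sqrt(var + epsilon) + offset

x = np.random.uniform(-0.01, 0.01, size=(4, 64, 64, 3)).astype(np.float32)
y = instance_norm_np(x)
# each (sample, channel) slice now has ~zero mean and ~unit variance
print(np.abs(y.mean(axis=(1, 2))).max() < 1e-3)
```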

Note that scale and offset are initialized the same way as in tf.contrib.layers.instance_norm() (ones and zeros, respectively). We will use an example to compare the two methods.

First, we should set a random seed to keep the result stable.

import tensorflow as tf
import random
import os
import numpy as np
import seed_util

The seed_util library can be found in this tutorial:

A Beginner Guide to Get Stable Result in TensorFlow – TensorFlow Tutorial
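seed_util's exact contents are in the linked tutorial, not shown here; the usual TF 1.x seeding recipe it wraps looks roughly like this sketch (the function name set_seed is an assumption):

```python
import os
import random
import numpy as np

def set_seed(seed=2022):
    # pin every non-TF source of randomness; in TF 1.x you would
    # additionally call tf.set_random_seed(seed) on the graph
    os.environ['PYTHONHASHSEED'] = str(seed)
    random.seed(seed)
    np.random.seed(seed)

set_seed(2022)
a = np.random.rand(3)
set_seed(2022)
b = np.random.rand(3)
print(np.array_equal(a, b))  # True: same seed, same numbers
```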

Then we can use these two methods to compute instance normalization.

inputs = tf.Variable(tf.random_uniform([4, 64, 64, 3], -0.01, 0.01), name="inputs")

im = tf.contrib.layers.instance_norm(inputs)
im_2 = instance_norm(inputs)

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    print(sess.run(im))
    print(sess.run(im_2))

In this code, inputs has shape 4*64*64*3: you can regard it as a batch of 4 images of size 64*64 with 3 channels. Run this code, and we will get:

tf.contrib.layers.instance_norm
[[[[ 1.2823033   1.1130477   0.7026206 ]
   [-0.68963706  0.85418946 -1.2168176 ]
   [ 0.2991913   1.4322673   0.07365511]
   [ 1.8026736  -1.3883623   0.9475166 ]
   [ 0.8514857   0.25432152 -1.2900805 ]]

  ....

  [[-1.6274862   0.68731815  0.09268501]
   [-0.89866513 -0.84922534 -1.7221147 ]
   [ 1.5203117  -0.0474546  -0.6735875 ]
   [-0.07008501  1.3956294  -0.0356905 ]
   [ 0.09230857 -0.86366445 -0.48223162]]]]
instance_norm
[[[[ 1.2823032   1.1130476   0.7026206 ]
   [-0.689637    0.8541894  -1.2168176 ]
   [ 0.2991913   1.4322673   0.07365511]
   [ 1.8026736  -1.3883623   0.9475166 ]
   [ 0.8514857   0.25432152 -1.2900805 ]]
  ....

  [[-1.6274862   0.68731815  0.09268501]
   [-0.8986652  -0.8492254  -1.7221147 ]
   [ 1.5203117  -0.04745463 -0.67358744]
   [-0.07008501  1.3956294  -0.0356905 ]
   [ 0.09230857 -0.8636645  -0.4822316 ]]]]

From the result, we can see that our instance_norm() produces the same output as tf.contrib.layers.instance_norm(), up to floating-point rounding.
