Compute Kullback-Leibler Divergence in TensorFlow

May 29, 2019

Kullback-Leibler (KL) divergence measures how one probability distribution p(x) differs from another distribution q(x). It is often used in deep learning applications, and it is easy to compute in TensorFlow.
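For discrete distributions, the KL divergence is defined as:

D_KL(p || q) = Σ p(x) * log(p(x) / q(x))

where the sum runs over all values of x and p is the reference distribution.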

In this tutorial, we write an example that computes KL divergence in TensorFlow. You can follow it step by step to learn how to use this code in your own projects.

How to compute Kullback-Leibler Divergence in TensorFlow?

import numpy as np
import tensorflow as tf

def fixprob(att):
    # Add a tiny epsilon so no entry is exactly zero, renormalize each
    # row so it sums to 1, then clip the result for numerical safety.
    att = att + 1e-9
    _sum = tf.reduce_sum(att, axis=1, keepdims=True)
    att = att / _sum
    att = tf.clip_by_value(att, 1e-9, 1.0)
    return att

def kl(x, y):
    # Treat each row of x and y as a categorical distribution and
    # compute the row-wise KL divergence KL(x || y).
    x = fixprob(x)
    y = fixprob(y)
    X = tf.distributions.Categorical(probs=x)
    Y = tf.distributions.Categorical(probs=y)
    return tf.distributions.kl_divergence(X, Y)

# Two batches of example distributions; each row is one distribution.
# Not every row sums exactly to 1 (the second row of b sums to 1.1),
# so fixprob will renormalize them before the KL computation.
a = np.array([[0.05, 0.06, 0.9], [0.8, 0.15, 0.05]])
b = np.array([[0.7, 0.2, 0.1], [0.15, 0.9, 0.05]])
aa = tf.convert_to_tensor(a, tf.float32)
bb = tf.convert_to_tensor(b, tf.float32)


# Build the two KL ops. KL divergence is not symmetric, so
# kl(aa, bb) and kl(bb, aa) are different tensors.
kl_v = kl(aa, bb)
kl_v_2 = kl(bb, aa)
init = tf.global_variables_initializer() 
init_local = tf.local_variables_initializer()
with tf.Session() as sess:
    sess.run([init, init_local])
    np.set_printoptions(precision=4, suppress=True)

    k1, k2 = sess.run([kl_v, kl_v_2])

    print('k1=')
    print(k1)
    print('k2=')
    print(k2)
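To sanity-check the TensorFlow result, here is a minimal NumPy sketch of the same computation (the helper name np_kl is our own, not part of the script above); it mirrors fixprob and then applies the KL definition directly:

def np_kl(p, q, eps=1e-9):
    # Mirror fixprob: add epsilon and renormalize rows, then apply
    # KL(p || q) = sum(p * log(p / q)) along each row.
    p = (p + eps) / np.sum(p + eps, axis=1, keepdims=True)
    q = (q + eps) / np.sum(q + eps, axis=1, keepdims=True)
    return np.sum(p * np.log(p / q), axis=1)

print(np_kl(a, b))  # should be close to k1
print(np_kl(b, a))  # should be close to k2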

The result is (approximately, up to float32 rounding):

k1=
[1.7458 1.1657]
k2=
[1.8784 1.1424]

Notice that k1 and k2 differ: KL divergence is not symmetric, which is why we computed both directions.
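If you are on TensorFlow 2.x, tf.Session and tf.distributions have been removed. Below is a minimal sketch of the same computation using tf.keras.losses.kl_divergence, assuming a TensorFlow 2.x environment; note that this function clips its inputs but does not renormalize them, so we normalize the rows first:

import numpy as np
import tensorflow as tf  # assumes TensorFlow 2.x

a = np.array([[0.05, 0.06, 0.9], [0.8, 0.15, 0.05]])
b = np.array([[0.7, 0.2, 0.1], [0.15, 0.9, 0.05]])

# Normalize each row so it is a proper distribution.
a = a / a.sum(axis=1, keepdims=True)
b = b / b.sum(axis=1, keepdims=True)

k1 = tf.keras.losses.kl_divergence(a, b)  # KL(a || b), one value per row
k2 = tf.keras.losses.kl_divergence(b, a)  # KL(b || a)
print(k1.numpy())
print(k2.numpy())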
