Kullback-Leibler Divergence Becomes NaN or INF in TensorFlow – TensorFlow Example

May 29, 2019

When we compute the Kullback-Leibler divergence in TensorFlow, the result may be NaN or INF.

Here is an example:

import numpy as np
import tensorflow as tf

def kl(x, y):
    # Build two categorical distributions and compute KL(X || Y).
    X = tf.distributions.Categorical(probs=x)
    Y = tf.distributions.Categorical(probs=y)
    return tf.distributions.kl_divergence(X, Y)

# Note that the array a contains zero probabilities.
a = np.array([[0.05, 0.06, 0.9, 0], [0.8, 0.15, 0.05, 0]])
b = np.array([[0.7, 0.1, 0.1, 0.1], [0.15, 0.8, 0.05, 0.1]])

aa = tf.convert_to_tensor(a, tf.float32)
bb = tf.convert_to_tensor(b, tf.float32)

kl_v = kl(aa, bb)
kl_v_2 = kl(bb, aa)

init = tf.global_variables_initializer()
init_local = tf.local_variables_initializer()

with tf.Session() as sess:
    sess.run([init, init_local])
    np.set_printoptions(precision=4, suppress=True)

    k1, k2 = sess.run([kl_v, kl_v_2])

    print('k1=')
    print(k1)
    print('k2=')
    print(k2)

Running this code, the printed k1 and k2 contain NaN and INF values, because the array a contains zero probabilities and log(0) is -inf.

How can we solve this problem?

We should limit the values of x and y so that every probability lies strictly inside (0, 1); in other words, replace exact zeros with values slightly above the boundary before computing the KL divergence.
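Here is a minimal sketch of this fix (assuming TensorFlow 1.x; the epsilon value 1e-8 and the renormalization step are illustrative choices, not part of the original code). It clips every probability into (0, 1) before building the distributions, so log() never sees an exact zero:

import numpy as np
import tensorflow as tf

def kl(x, y):
    X = tf.distributions.Categorical(probs=x)
    Y = tf.distributions.Categorical(probs=y)
    return tf.distributions.kl_divergence(X, Y)

a = np.array([[0.05, 0.06, 0.9, 0], [0.8, 0.15, 0.05, 0]])
b = np.array([[0.7, 0.1, 0.1, 0.1], [0.15, 0.8, 0.05, 0.1]])

epsilon = 1e-8  # illustrative small constant

# Clip every probability into [epsilon, 1 - epsilon] so log(0) never occurs.
aa = tf.clip_by_value(tf.convert_to_tensor(a, tf.float32), epsilon, 1.0 - epsilon)
bb = tf.clip_by_value(tf.convert_to_tensor(b, tf.float32), epsilon, 1.0 - epsilon)

# Renormalize each row so the clipped probabilities still sum to 1.
aa = aa / tf.reduce_sum(aa, axis=1, keepdims=True)
bb = bb / tf.reduce_sum(bb, axis=1, keepdims=True)

kl_v = kl(aa, bb)
kl_v_2 = kl(bb, aa)

with tf.Session() as sess:
    np.set_printoptions(precision=4, suppress=True)
    k1, k2 = sess.run([kl_v, kl_v_2])
    print('k1=')
    print(k1)
    print('k2=')
    print(k2)

With this change, both k1 and k2 should be finite.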

You can check this example, which computes the KL value correctly.
