We often use the TensorFlow tf.concat() function to concatenate tensors; however, we may encounter TypeError: Tensors in list passed to ‘values’ of ‘ConcatV2’ Op have types [int32, float32] that don’t all match. Why does this error occur, and how can we fix it? In this tutorial, we will discuss both questions.
Why does this TypeError occur?
The reason is that the tensors passed to tf.concat() have different data types. Here is an example.
Create two tensors, x and y, where x is tf.int32 and y is tf.float32:
import numpy as np
import tensorflow as tf

x = tf.Variable(np.array([[1, 9, 3], [4, 5, 6]]), dtype=tf.int32)
y = tf.Variable(np.array([[1, 9, 3], [4, 5, 6]]), dtype=tf.float32)
Use the tf.concat() function to concatenate them:
z = tf.concat([x, y], axis=1)

init = tf.global_variables_initializer()
init_local = tf.local_variables_initializer()

with tf.Session() as sess:
    sess.run([init, init_local])
    print(sess.run([z]))
Run this Python code and you will get the following error: TypeError: Tensors in list passed to ‘values’ of ‘ConcatV2’ Op have types [int32, float32] that don’t all match.
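You can confirm the data type mismatch by printing the dtype of each tensor before concatenating (a quick check using the x and y defined above):

print(x.dtype)  # an int32-based dtype
print(y.dtype)  # a float32-based dtype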
How to fix this TypeError?
You should make sure that all tensors passed to tf.concat() have the same data type. We can use tf.cast() to convert a tensor's data type.
As in the example above, we can convert tensor x to tf.float32 using tf.cast().
Convert tensor x to tf.float32:
x = tf.cast(x, dtype=tf.float32)
Then use tf.concat() to concatenate tensors x and y again:
z = tf.concat([x, y], axis=1)
The tensor z is:
[array([[1., 9., 3., 1., 9., 3.],
       [4., 5., 6., 4., 5., 6.]], dtype=float32)]
You will find this TypeError is fixed.
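Putting the steps together, the full corrected script (TensorFlow 1.x) looks like this:

import numpy as np
import tensorflow as tf

x = tf.Variable(np.array([[1, 9, 3], [4, 5, 6]]), dtype=tf.int32)
y = tf.Variable(np.array([[1, 9, 3], [4, 5, 6]]), dtype=tf.float32)

# Cast x to tf.float32 so both tensors share the same data type
x = tf.cast(x, dtype=tf.float32)
z = tf.concat([x, y], axis=1)

init = tf.global_variables_initializer()
init_local = tf.local_variables_initializer()

with tf.Session() as sess:
    sess.run([init, init_local])
    print(sess.run([z]))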
What if you are using tf.concat() to concatenate the outputs of a BiLSTM and this TypeError occurs? How do you fix it?
In the code below, we concatenate the outputs of the forward and backward LSTM cells, and we set their dtype to tf.float32.
outputs, state = tf.nn.bidirectional_dynamic_rnn(
    cell_fw=tf.nn.rnn_cell.LSTMCell(hidden_size, forget_bias=1.0),
    cell_bw=tf.nn.rnn_cell.LSTMCell(hidden_size, forget_bias=1.0),
    inputs=inputs,
    initial_state_fw=initial_state_fw,
    initial_state_bw=initial_state_bw,
    sequence_length=sequence_length,
    dtype=tf.float32,
    time_major=False,
    scope=name + "_bistm"
)
# outputs: [batch_size, max_time, cell_fw.output_size]
outputs = tf.concat(outputs, 2)
Then you should check the data type of inputs and make sure it is tf.float32.
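Here is a minimal sketch of that check, assuming inputs is the tensor fed into tf.nn.bidirectional_dynamic_rnn in the snippet above:

print(inputs.dtype)

# If inputs is not tf.float32, cast it before building the BiLSTM,
# just as we did with tensor x above.
if inputs.dtype != tf.float32:
    inputs = tf.cast(inputs, dtype=tf.float32)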