tf.TensorArray is a class, not a function. Instantiating it adds a TensorArray operation to the graph:

```python
ta = tf.TensorArray(dtype=tf.float32, size=2)
```

Here, the TensorArray can hold 2 tensors. How do you get the size of a TensorArray? tf.TensorArray has a size() method, which looks like it returns the size of the object:

```python
ta = tf.TensorArray(dtype=tf.float32, size=2)
print(ta.size())
```

But the result is not 2, but

`Tensor("TensorArraySizeV3:0", shape=(), dtype=int32)`

In fact, TensorArray.size() constructs another operation, TensorArraySize, in the graph, which takes the TensorArray as its input.

This means you can only get the size of the TensorArray object when running the graph, by evaluating the size tensor:

```python
ta = tf.TensorArray(dtype=tf.float32, size=2)
tasize = ta.size()
with tf.Session():
    print(tasize.eval())
```

The output is 2, good! Since tf.TensorArray is a class rather than an ordinary function that returns a tensor, you cannot evaluate ta directly:

```python
ta = tf.TensorArray(dtype=tf.float32, size=2)
tasize = ta.size()
with tf.Session():
    print(ta.eval())
```

This will produce the following error:

`AttributeError: 'TensorArray' object has no attribute 'eval'`

Yes, tf.TensorArray has no eval() method, unlike an ordinary tensor object. So **how do you evaluate a TensorArray object**? You might try the following code to print the content of a TensorArray object:

```python
ta = tf.TensorArray(dtype=tf.float32, size=2)
tasize = ta.size()
with tf.Session():
    ta_output = tf.get_default_graph().get_tensor_by_name("TensorArray:0")
    print(ta_output.eval())
```

Unfortunately, you will get the following error:

`InternalError: ndarray was 1 bytes but TF_Tensor was 134 bytes`

Is it possible to display the tensors contained in a TensorArray? Looking at the methods tf.TensorArray provides, TensorArray.read seems to retrieve a tensor from the array. In fact, TensorArray.read creates another operation in the graph that reads from the TensorArray.

But the misery is not over. If you evaluate the tensor returned by TensorArray.read to print its value, as follows:

```python
ta = tf.TensorArray(dtype=tf.float32, size=2)
ta_read = ta.read(0)
with tf.Session():
    print(ta_read.eval())
```

You still get this error:

`InvalidArgumentError: TensorArray TensorArray_78: Could not read from TensorArray index 0. Furthermore, the element shape is not fully defined: <unknown>. It is possible you are working with a resizeable TensorArray and stop_gradients is not allowing the gradients to be written. If you set the full element_shape property on the forward TensorArray, the proper all-zeros tensor will be returned instead of incurring this error.`

This is understandable: we have not put any tensor in the array, so how could we read one out? Let's write a tensor to the array first using TensorArray.write. Note that TensorArray.write creates another set of operations that write a tensor to the TensorArray, but it does not execute them; you need to execute the operations in a session.

Now we have a node that writes to the tensor array and a node that reads from it. Is the code below OK?

```python
ta = tf.TensorArray(dtype=tf.float32, size=2)
ta_write = ta.write(0, [1.0, 2.0, 3.0])
ta_read = ta.read(0)
with tf.Session():
    ta_write.eval()
    print(ta_read.eval())
```

No, it is not okay. You will get the error:

`AttributeError: 'TFShouldUseWarningWrapper' object has no attribute 'eval'`

It turns out the return value of TensorArray.write is a TFShouldUseWarningWrapper object, not a tensor (TensorArray.read, by contrast, does return a tensor). Can we use the following code instead?

```python
ta = tf.TensorArray(dtype=tf.float32, size=2)
ta_write = ta.write(0, [1.0, 2.0, 3.0])
ta_read = ta.read(0)
with tf.Session():
    tf.get_default_graph().get_tensor_by_name(
        "TensorArrayWrite/TensorArrayWriteV3:0").eval()
    print(ta_read.eval())
```

The TensorArray.write part is OK now: you have successfully written the tensor [1.0, 2.0, 3.0] to the first slot of the tensor array. But the TensorArray.read problem comes back. The reason is that each evaluation in a session is a separate run of the graph; a later evaluation does not see the effects of a previous one, so the array is still empty when the read executes. You can use a control dependency to force the write operation to happen before the read operation:

```python
ta = tf.TensorArray(dtype=tf.float32, size=2)
ta_write = ta.write(0, [1.0, 2.0, 3.0])
with tf.control_dependencies(
        [tf.get_default_graph().get_tensor_by_name(
            "TensorArrayWrite/TensorArrayWriteV3:0")]):
    ta_read = ta.read(0)
with tf.Session():
    print(ta_read.eval())
```

Finally, we get the correct result: [1.0, 2.0, 3.0]! Here is the final computation graph, where there is a control-dependency edge between the write and the read operations.

It would be troublesome to add a control dependency between the read and the write operation by hand. Instead, you can call read on the returned value (ta_write) of TensorArray.write, which still refers to the original TensorArray:

```python
ta = tf.TensorArray(dtype=tf.float32, size=2)
ta_write = ta.write(0, [1.0, 2.0, 3.0])
ta_read = ta_write.read(0)
with tf.Session():
    print(ta_read.eval())
```

Now the read operation TensorArrayReadV3 is chained with the write operation TensorArrayWrite in the graph.

Now, every time you evaluate ta_read, the write operation is evaluated first. Here is an example of how to use TensorArray.
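A common use of TensorArray is accumulating per-iteration results inside tf.while_loop. The sketch below (written against the tf.compat.v1 API so it also runs under TensorFlow 2.x; the variable names are my own) writes the squares of 0..4 into a TensorArray and stacks them, relying on the same write-chaining shown above:

```python
import tensorflow as tf

# Use the graph-mode (v1) API so this sketch also runs under TensorFlow 2.x.
tf1 = tf.compat.v1
tf1.disable_eager_execution()

n = 5
ta = tf.TensorArray(dtype=tf.float32, size=n)

def cond(i, ta):
    return i < n

def body(i, ta):
    # Write the square of i into slot i and keep the RETURNED handle,
    # so each write is chained before the next iteration's operations.
    return i + 1, ta.write(i, tf.cast(i * i, tf.float32))

_, ta_final = tf1.while_loop(cond, body, [0, ta])
squares = ta_final.stack()

with tf1.Session() as sess:
    print(sess.run(squares))  # [ 0.  1.  4.  9. 16.]
```

Passing the TensorArray through the loop variables (rather than writing to a captured `ta`) is what keeps every write ordered before the final stack.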

Apart from TensorArray.write, you can also use TensorArray.scatter to store tensors in a tensor array.

```python
ta = tf.TensorArray(dtype=tf.float32, size=3)
ta_scatter = ta.scatter([0, 2], [[1.0, 2.0], [3.0, 4.0]])
ta_read = ta_scatter.read(2)
with tf.Session():
    print(ta_read.eval())  # [3.0, 4.0]
```

Here is the graph:

The execution of the TensorArrayScatter operation puts the two tensors [1.0, 2.0] and [3.0, 4.0] into positions 0 and 2 of the TensorArray.

Note that both TensorArrayScatter and TensorArrayWrite update the original TensorArray rather than producing a new data tensor.

TensorArray.stack() stacks the tensors contained in the tensor array into a single tensor with one higher rank. For example:

```python
ta = tf.TensorArray(dtype=tf.float32, size=3)
ta_scatter = ta.scatter([0, 2], [[1.0, 2.0], [3.0, 4.0]])
ta_stack = ta_scatter.stack()
with tf.Session():
    print(ta_stack.eval())  # [[1., 2.], [0., 0.], [3., 4.]]
```

It will produce the following compute graph:

Note that although we call stack on the object returned by TensorArray.scatter, the created TensorArrayStack operation uses the original TensorArray for the stacking. TensorArrayScatter only provides TensorArrayStack with a flow scalar, which orders TensorArrayStack after TensorArrayScatter. Note also that TensorArray.stack() returns a tensor that we can evaluate to get the stacked result.

TensorArray.unstack() is the counterpart of TensorArray.stack(): it un-stacks a tensor along its first dimension into lower-rank tensors and saves them to the current TensorArray. It implements this using a TensorArrayScatter operation.

```python
ta = tf.TensorArray(dtype=tf.float32, size=3)
ta_unstack = ta.unstack([[1., 2.], [3., 4.], [5., 6.]])
```

TensorArray.unstack() returns a TFShouldUseWarningWrapper object that cannot be evaluated to get the unstacked tensors. You need TensorArray.read to read the individual tensors in the tensor array.

```python
ta = tf.TensorArray(dtype=tf.float32, size=3)
ta_unstack = ta.unstack([[1., 2.], [3., 4.], [5., 6.]])
ta_read = ta_unstack.read(1)
```

Note that although we call read on the object returned by TensorArray.unstack(), the created TensorArrayReadV3 operation uses the original TensorArray as its input. TensorArrayUnstack only passes a flow scalar to TensorArrayReadV3 to order the read after the unstack.

To summarize: TensorArray.write, TensorArray.scatter, and TensorArray.unstack do not output new data tensors; they just update the original TensorArray, while TensorArray.stack outputs a new data tensor.
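The chaining behavior behind this summary can be sketched in plain Python. This is an analogy for intuition only, not TensorFlow's actual implementation, and the name ToyTensorArray is made up: every mutating call returns a new handle that shares the same storage but carries an advanced flow token, so reads that go through that handle are ordered after the mutation.

```python
# Pure-Python analogy of TensorArray's flow-token chaining.
# (Illustration only -- not TensorFlow's real implementation.)
class ToyTensorArray:
    def __init__(self, size, _storage=None, _flow=0):
        # All handles produced by write/scatter share the same storage list.
        self._storage = [None] * size if _storage is None else _storage
        self._flow = _flow  # stands in for the flow scalar

    def write(self, index, value):
        self._storage[index] = value
        # Return a NEW handle with an advanced flow token; reads made
        # through this handle happen after the write.
        return ToyTensorArray(len(self._storage), self._storage, self._flow + 1)

    def scatter(self, indices, values):
        for i, v in zip(indices, values):
            self._storage[i] = v
        return ToyTensorArray(len(self._storage), self._storage, self._flow + 1)

    def read(self, index):
        # Reads the shared storage (a new data value, not a handle).
        return self._storage[index]

    def stack(self):
        # stack also produces a new data value from the shared storage.
        return list(self._storage)

ta = ToyTensorArray(3)
ta2 = ta.scatter([0, 2], [[1.0, 2.0], [3.0, 4.0]])
print(ta2.read(2))   # [3.0, 4.0]
print(ta2.stack())   # [[1.0, 2.0], None, [3.0, 4.0]]
```

Just as in TensorFlow, write and scatter here return fresh handles over shared storage, while read and stack return plain values.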