
What's the difference between tf.placeholder and tf.Variable?

I'm a newbie to TensorFlow. I'm confused about the difference between tf.placeholder and tf.Variable. In my view, tf.placeholder is used for input data, and tf.Variable is used to store the state of data. That is all I know.

Could someone explain to me more in detail about their differences? In particular, when to use tf.Variable and when to use tf.placeholder?

Intuitively, you'll want gradients with respect to Variables, but not placeholders (whose values must always be provided).
A course like cs231n.stanford.edu can help if you're confused. I liked it a lot! Obviously there are others.

moffeltje

In short, you use tf.Variable for trainable variables such as weights (W) and biases (B) for your model.

import math  # needed for math.sqrt below

# IMAGE_PIXELS and hidden1_units are defined earlier in the tutorial.
weights = tf.Variable(
    tf.truncated_normal([IMAGE_PIXELS, hidden1_units],
                        stddev=1.0 / math.sqrt(float(IMAGE_PIXELS))),
    name='weights')

biases = tf.Variable(tf.zeros([hidden1_units]), name='biases')

tf.placeholder is used to feed actual training examples.

images_placeholder = tf.placeholder(tf.float32, shape=(batch_size, IMAGE_PIXELS))
labels_placeholder = tf.placeholder(tf.int32, shape=(batch_size,))

This is how you feed the training examples during the training:

for step in range(FLAGS.max_steps):  # xrange in the original Python 2 tutorial
    feed_dict = {
       images_placeholder: images_feed,
       labels_placeholder: labels_feed,
     }
    _, loss_value = sess.run([train_op, loss], feed_dict=feed_dict)

Your tf.Variables will be trained (modified) as a result of this training.

See more at https://www.tensorflow.org/versions/r0.7/tutorials/mnist/tf/index.html. (Examples are taken from the web page.)


What if I want to preprocess my image before feeding it in? (e.g. rescale the contrast). Do I now need a variable for this? If so, does it have any memory or speed implications?
Any preprocessing you do comes before feeding the data into the TensorFlow graph (i.e. the network), so that work doesn't technically require any TensorFlow tools. For example, a variable would be unnecessary: 1. because it is input data, which is passed through tf.placeholders (not variables) in the graph, and 2. because preprocessing occurs before the data is loaded into a placeholder for the current pass through the network.
Just wanted to note how much I appreciate this answer. The fact that there are far fewer upvotes on this answer than on the question just goes to show how instant-gratification-driven people can be, and how trendy tags like tensorflow and deep learning and AI are.
So this means: tf.Variable => updated during back-propagation; tf.placeholder => not updated during back-propagation. Right?
fabrizioM

The difference is that with tf.Variable you have to provide an initial value when you declare it. With tf.placeholder you don't provide an initial value; you specify it at run time with the feed_dict argument of Session.run.
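A minimal sketch of that difference (TF 1.x; the names are illustrative):

import tensorflow as tf

v = tf.Variable(0.0)            # an initial value is mandatory
p = tf.placeholder(tf.float32)  # no initial value; fed at run time

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # gives v its initial value
    print(sess.run(v))            # 0.0
    print(sess.run(p, {p: 3.0}))  # 3.0, supplied via feed_dict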


-1. While true, this misses the point. The more important difference is their role within TensorFlow. Variables are trained over time; placeholders are input data that doesn't change as your model trains (like input images, and class labels for those images). Like Sung Kim's answer says, you use variables for weights and biases in your model (though it's not limited to that: for style transfer, you optimize an image over time).
@ChrisAnderson could we say that this illustration is wrong?! youtu.be/MotG3XI2qSs?t=136
@ChrisAnderson Why does it matter what it was meant to be used for, if the differences are just one needs an initial value?
@Goldname It's not what it is "meant" to be used for - it's what is possible and not possible. They're totally different objects. They aren't interchangeable, and the differences are more than "one needs an initial value".
Francesco Boi

Since tensor computations are composed into graphs, it's better to interpret the two in terms of graphs.

Take, for example, simple linear regression:

WX+B=Y

where W and B stand for the weights and bias, X for the observations' inputs, and Y for the observations' outputs.

Obviously X and Y are of the same nature (manifest variables), which differs from that of W and B (latent variables). X and Y are values of the samples (observations) and hence need a place to be filled, while W and B are the weights and bias, Variables (whose previous values affect the later ones) in the graph, which should be trained using different X and Y pairs. We place different samples into the Placeholders to train the Variables.

We only need to save or restore the Variables (at checkpoints) to save or rebuild the graph with the code.
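A minimal sketch of such checkpointing, assuming the TF 1.x API (the path is illustrative):

import tensorflow as tf

W = tf.Variable(tf.zeros([2, 2]), name='W')
saver = tf.train.Saver()  # saves and restores Variables, not placeholders

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    path = saver.save(sess, '/tmp/model.ckpt')  # checkpoint the Variables
    saver.restore(sess, path)                   # restore them later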

Placeholders are mostly holders for the different datasets (for example training data or test data). However, Variables are trained in the training process for the specific task, i.e., to predict the outcome of the input or map the inputs to the desired labels. They remain the same until you retrain or fine-tune the model, using different (or the same) samples that are filled into the Placeholders, often through the feed dict. For instance:

 session.run(a_graph, feed_dict={a_placeholder_name: sample_values})

Placeholders are also passed as parameters when setting up models.

If you change the placeholders of a model (add, delete, or change their shape, etc.) in the middle of training, you can still reload the checkpoint without any other modifications. But if the variables of a saved model are changed, you should adjust the checkpoint accordingly to reload it and continue training (all variables defined in the graph should be available in the checkpoint).

To sum up, if the values are from the samples (observations you already have), you can safely make a placeholder to hold them, while if you need a parameter to be trained, use a Variable (simply put, set Variables for the values you want TF to obtain automatically).

In some interesting models, like a style-transfer model, the input pixels are going to be optimized and the normally-called model variables are fixed; then we should make the input (usually initialized randomly) a variable, as implemented in that link.

For more information please refer to this simple and illustrative doc.


James

TL;DR

Variables

For parameters to learn

Values can be derived from training

Initial values are required (often random)

Placeholders

Allocated storage for data (such as for image pixel data during a feed)

Initial values are not required (but can be set, see tf.placeholder_with_default)
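A quick sketch of that default-value case (TF 1.x):

import tensorflow as tf

# The default is used when nothing is fed; a fed value overrides it.
x = tf.placeholder_with_default(tf.constant([1.0, 2.0]), shape=[2])
y = x * 10

with tf.Session() as sess:
    print(sess.run(y))                   # [10. 20.] -- default used
    print(sess.run(y, {x: [3.0, 4.0]}))  # [30. 40.] -- fed value used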


nbro

The most obvious difference between tf.Variable and tf.placeholder is that

you use variables to hold and update parameters. Variables are in-memory buffers containing tensors. They must be explicitly initialized and can be saved to disk during and after training. You can later restore saved values to exercise or analyze the model.

Initialization of the variables is done with sess.run(tf.global_variables_initializer()). Also, while creating a variable, you need to pass a Tensor as its initial value to the Variable() constructor, and when you create a variable, you always know its shape.

On the other hand, you can't update a placeholder. Placeholders also should not be initialized, but because they are a promise to have a tensor, you need to feed a value into them: sess.run(<op>, {a: <some_val>}). And finally, in comparison to a variable, a placeholder might not know its shape: you can either provide parts of the dimensions or provide nothing at all.
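For example (a small sketch of the shape options, TF 1.x):

import tensorflow as tf

full = tf.placeholder(tf.float32, shape=[32, 784])    # fully specified
part = tf.placeholder(tf.float32, shape=[None, 784])  # batch size left open
free = tf.placeholder(tf.float32)                     # no shape constraint at all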

There are other differences:

the values inside the variable can be updated during optimizations

variables can be shared, and can be non-trainable

the values inside the variable can be stored after training

when the variable is created, 3 ops are added to a graph (variable op, initializer op, ops for the initial value)

placeholder is a function, Variable is a class (hence the uppercase)

when you use TF in a distributed environment, variables are stored in a special place (parameter server) and are shared between the workers.

The interesting part is that not only placeholders can be fed: you can feed a value to a Variable and even to a constant.
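A minimal sketch of feeding non-placeholders (TF 1.x):

import tensorflow as tf

v = tf.Variable(1.0)
c = tf.constant(2.0)
out = v + c

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(out))                      # 3.0
    print(sess.run(out, {v: 10.0, c: 20.0}))  # 30.0, both overridden for this run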


nbro

Adding to others' answers, they also explain it very well in this MNIST tutorial on the TensorFlow website:

We describe these interacting operations by manipulating symbolic variables. Let's create one:

x = tf.placeholder(tf.float32, [None, 784])

x isn't a specific value. It's a placeholder, a value that we'll input when we ask TensorFlow to run a computation. We want to be able to input any number of MNIST images, each flattened into a 784-dimensional vector. We represent this as a 2-D tensor of floating-point numbers, with a shape [None, 784]. (Here None means that a dimension can be of any length.)

We also need the weights and biases for our model. We could imagine treating these like additional inputs, but TensorFlow has an even better way to handle it: Variable. A Variable is a modifiable tensor that lives in TensorFlow's graph of interacting operations. It can be used and even modified by the computation. For machine learning applications, one generally has the model parameters be Variables.

W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))

We create these Variables by giving tf.Variable the initial value of the Variable: in this case, we initialize both W and b as tensors full of zeros. Since we are going to learn W and b, it doesn't matter very much what they initially are.


Hi, thank you for your answer! In the example you give, we have x with shape [batch size, features], the weights going from the input to the first layer of size [features, hidden units], and the biases [hidden units]. So my question is: how do we multiply them together? If we do tf.matmul(x, w) then we are going to get [batch size, hidden units], and we cannot add b to it, since it has shape [hidden units].
M.Gorner explains all this in his slideshows "Learn TensorFlow and deep learning, without a Ph.D." better than I could ever do here in this comment. So, please allow me to refer to this slide: docs.google.com/presentation/d/…
eyllanesc

TensorFlow uses three types of containers to store/execute the process:

Constants: hold typical (fixed) data.
Variables: values that will be changed, by functions such as the cost function.
Placeholders: training/testing data that will be passed into the graph.
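A minimal sketch of the three (TF 1.x):

import tensorflow as tf

c = tf.constant(3.0)            # constant: fixed data
v = tf.Variable(1.0)            # variable: updated during training
p = tf.placeholder(tf.float32)  # placeholder: data fed in at run time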


Nabeel Ahmed

Example snippet:

import numpy as np
import tensorflow as tf

### Model parameters ###
# dtype must be passed by keyword: tf.Variable's second positional argument is `trainable`
W = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)

### Model input and output ###
x = tf.placeholder(tf.float32)
linear_model = W * x + b
y = tf.placeholder(tf.float32)

### loss ###
loss = tf.reduce_sum(tf.square(linear_model - y)) # sum of the squares

### optimizer ###
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)

### training data ###
x_train = [1,2,3,4]
y_train = [0,-1,-2,-3]

### training loop ###
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init) # reset values to wrong
for i in range(1000):
  sess.run(train, {x:x_train, y:y_train})

As the name says, a placeholder is a promise to provide a value later, i.e.

Variables are simply the training parameters (W (matrix), b (bias)), the same as the normal variables you use in your day-to-day programming, which the trainer updates/modifies on each run/step.

A placeholder doesn't require an initial value: when you created x and y above, TF didn't allocate any memory for them. Instead, later, when you feed the placeholders in sess.run() using feed_dict, TensorFlow allocates appropriately sized memory for them (x and y). This unconstrainedness allows us to feed data of any size and shape.

In a nutshell:

Variable - a parameter you want the trainer (i.e. GradientDescentOptimizer) to update after each step.

Placeholder demo -

a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
adder_node = a + b  # + provides a shortcut for tf.add(a, b)

Execution:

print(sess.run(adder_node, {a: 3, b:4.5}))
print(sess.run(adder_node, {a: [1,3], b: [2, 4]}))

resulting in the output

7.5
[ 3.  7.]

In the first case, 3 and 4.5 will be passed to a and b respectively, and then to adder_node, outputting 7.5. In the second case, lists are fed and added element-wise: 1 and 2 first, then 3 and 4 (a and b), giving [3. 7.].

Relevant reads:

tf.placeholder doc.

tf.Variable doc.

Variable VS placeholder.


empty

Variables

A TensorFlow variable is the best way to represent shared, persistent state manipulated by your program. Variables are manipulated via the tf.Variable class. Internally, a tf.Variable stores a persistent tensor. Specific operations allow you to read and modify the values of this tensor. These modifications are visible across multiple tf.Sessions, so multiple workers can see the same values for a tf.Variable. Variables must be initialized before use.

Example:

x = tf.Variable(3, name="x")
y = tf.Variable(4, name="y")
f = x*x*y + y + 2

This creates a computation graph. The variables (x and y) can be initialized and the function (f) evaluated in a tensorflow session as follows:

with tf.Session() as sess:
    x.initializer.run()
    y.initializer.run()
    result = f.eval()

print(result)  # 42

Placeholders

A placeholder is a node (same as a variable) whose value can be initialized in the future. These nodes basically output the value assigned to them at runtime. A placeholder node can be created using tf.placeholder(), to which you can provide arguments such as the type of the tensor and/or its shape. Placeholders are extensively used for representing the training dataset in a machine learning model, as the training dataset keeps changing.

Example:

A = tf.placeholder(tf.float32, shape=(None, 3))
B = A + 5

Note: 'None' for a dimension means 'any size'.

with tf.Session() as sess:
    B_val_1 = B.eval(feed_dict={A: [[1, 2, 3]]})
    B_val_2 = B.eval(feed_dict={A: [[4, 5, 6], [7, 8, 9]]})

print(B_val_1)
# [[6. 7. 8.]]
print(B_val_2)
# [[ 9. 10. 11.]
#  [12. 13. 14.]]

References:

https://www.tensorflow.org/guide/variables
https://www.tensorflow.org/api_docs/python/tf/placeholder
O'Reilly: Hands-On Machine Learning with Scikit-Learn & Tensorflow


Muhammad Usman

Think of a Variable in TensorFlow as the normal variables we use in programming languages: we initialize variables, and we can modify them later. A placeholder, on the other hand, doesn't require an initial value. A placeholder simply allocates a block of memory for future use. Later, we can use feed_dict to feed data into the placeholder. By default, a placeholder has an unconstrained shape, which allows you to feed tensors of different shapes in a session. You can constrain the shape by passing the optional shape argument, as I have done below.

import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, (3, 4))
y = x + 2

sess = tf.Session()
print(sess.run(y))  # will cause an error: x was never fed

s = np.random.rand(3, 4)
print(sess.run(y, feed_dict={x: s}))

While doing a machine learning task, most of the time we are unaware of the number of rows, but (let's assume) we do know the number of features or columns. In that case, we can use None.

x = tf.placeholder(tf.float32, shape=(None,4))

Now, at run time we can feed any matrix with 4 columns and any number of rows.
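For instance (a small sketch):

import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=(None, 4))
y = x + 2

with tf.Session() as sess:
    print(sess.run(y, feed_dict={x: np.ones((2, 4))}))  # 2 rows work
    print(sess.run(y, feed_dict={x: np.ones((5, 4))}))  # so do 5 rows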

Also, placeholders are used for input data (they are kind of variables which we use to feed our model), whereas Variables are parameters such as weights that we train over time.


Jitesh Mohite

Placeholder:

A placeholder is simply a variable that we will assign data to at a later date. It allows us to create our operations and build our computation graph without needing the data. In TensorFlow terminology, we then feed data into the graph through these placeholders. Initial values are not required, but defaults can be set with tf.placeholder_with_default. We have to provide values at runtime, like:

a = tf.placeholder(tf.int16)  # placeholder, no initial value
b = tf.placeholder(tf.int16)  # placeholder, no initial value
add = tf.add(a, b)

and use them through a session, like:

sess.run(add, feed_dict={a: 2, b: 3})  # these values are assigned at runtime

Variable:

A TensorFlow variable is the best way to represent shared, persistent state manipulated by your program. Variables are manipulated via the tf.Variable class. A tf.Variable represents a tensor whose value can be changed by running ops on it.

Example : tf.Variable("Welcome to tensorflow!!!")
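A small sketch of changing a Variable's value by running an op on it (TF 1.x; names are illustrative):

import tensorflow as tf

counter = tf.Variable(0, name='counter')
increment = tf.assign_add(counter, 1)  # an op that modifies the variable

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(3):
        print(sess.run(increment))  # 1, 2, 3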


Tensorflow Support

Tensorflow 2.0 Compatible Answer: The concept of Placeholders, tf.placeholder, will not be available in Tensorflow 2.x (>= 2.0) by default, as the default execution mode is Eager Execution.

However, we can still use them in Graph Mode (by disabling Eager Execution).

The equivalent command for a TF Placeholder in version 2.x is tf.compat.v1.placeholder.

The equivalent command for a TF Variable in version 2.x is tf.Variable, and if you want to migrate code from 1.x to 2.x, the equivalent command is

tf.compat.v2.Variable.
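A minimal sketch of that compat usage in TF 2.x:

import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # placeholders require graph mode

x = tf.compat.v1.placeholder(tf.float32, shape=[None])
y = x * 2

with tf.compat.v1.Session() as sess:
    print(sess.run(y, feed_dict={x: [1.0, 2.0]}))  # [2. 4.]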

Please refer to this Tensorflow Page for more information about Tensorflow Version 2.0.

Please refer to the Migration Guide for more information about migrating from version 1.x to 2.x.


Ali Salehi

Think of a computation graph. In such a graph, we need an input node to pass our data to the graph; those nodes should be defined as Placeholders in TensorFlow.

Do not think of it as a general program in Python. You can write a Python program and do all the things that people explained in other answers just with Variables, but for computation graphs in TensorFlow, to feed your data into the graph, you need to define those nodes as Placeholders.


Z.Wei

For TF V1:

A Constant has an initial value and it won't change during the computation.
A Variable has an initial value and it can change during the computation (so it's good for parameters).
A Placeholder has no initial value and it won't change during the computation (so it's good for inputs such as prediction instances).

For TF V2, it's the same, but they try to hide Placeholder (Graph Mode is not preferred).
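A sketch of the TF 2.x style, where function arguments play the role placeholders used to (names are illustrative):

import tensorflow as tf

w = tf.Variable(2.0)  # trainable state persists across calls

@tf.function
def predict(x):  # x takes the place of the old placeholder
    return w * x

print(predict(tf.constant(3.0)).numpy())  # 6.0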


Matias Molinas

In TensorFlow, a variable is just another tensor (like tf.constant or tf.placeholder). It just so happens that variables can be modified by the computation. tf.placeholder is used for inputs that will be provided externally to the computation at run-time (e.g. training data). tf.Variable is used for inputs that are part of the computation and are going to be modified by the computation (e.g. weights of a neural network).