TensorFlow
- TensorFlow programs are usually structured into a construction phase and an execution phase.
- construction phase: assembles variables, etc. into computation graph
- execution phase: uses a session to execute ops in the computation graph
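A minimal sketch (same era API as the examples below) that makes the two phases explicit; the names a, b, and total are illustrative:

import tensorflow as tf

# Construction phase: define ops; nothing is computed yet
a = tf.constant(2.0)
b = tf.constant(3.0)
total = tf.add(a, b)

# Execution phase: a session actually runs the graph
with tf.Session() as sess:
    print(sess.run(total))  # 5.0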
# Set up an interactive session within python
sess = tf.InteractiveSession()
ta = tf.zeros((2, 2))
print(ta.eval())  # to see values in the matrix, use eval()
# Add two numbers and print the results
a = tf.constant(5.0)
b = tf.constant(6.0)
c = a * b
with tf.Session() as sess:
    print(sess.run(c))
    print(c.eval())
# Create a counter
state = tf.Variable(0, name="counter")
new_value = tf.add(state, tf.constant(1))
update = tf.assign(state, new_value)
with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    print(sess.run(state))
    for _ in range(3):
        sess.run(update)
        print(sess.run(state))  # fetch variable state from the computation graph
- tf.convert_to_tensor() is convenient, but it does not scale
- Use tf.placeholder variables - dummy nodes that provide entry points for data into the computation graph
- feed_dict - maps from tf.placeholder vars (or their names) to Python data (numpy arrays, lists, etc.); see the shaped-placeholder sketch after the next example
# Multiply two floats
input1 = tf.placeholder(tf.float32)
input2 = tf.placeholder(tf.float32)
output = tf.mul(input1, input2)
with tf.Session() as sess:
    # use feed_dict to put data into the computation graph;
    # [output] fetches the value of output from the computation graph
    print(sess.run([output], feed_dict={input1: [7.0], input2: [2.0]}))
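The placeholders above declare no shape; declaring one lets TensorFlow check fed data at run time. A small sketch (the names x, total, and data are illustrative) feeding a numpy array, as the feed_dict bullet above describes:

import numpy as np

x = tf.placeholder(tf.float32, shape=(3, 2))    # shape is enforced when fed
total = tf.reduce_sum(x)
with tf.Session() as sess:
    data = np.ones((3, 2), dtype=np.float32)    # numpy array matching the declared shape
    print(sess.run(total, feed_dict={x: data})) # 6.0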
- tf.variable_scope() - provides simple name-spacing to avoid clashes
- tf.get_variable() - creates/accesses variables from within a variable scope
- tf.get_variable_scope().reuse_variables() - reuses weights at each time step in RNNs; needed to avoid memory blowup (see the sketch after the example below)
with tf.variable_scope("foo"):
with tf.variable_scope("bar"):
v = tf.get_variable("v", [1])
assert v.name == "foo/bar/v:0"
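The reuse_variables() call from the bullet above can be shown in a short sketch (the scope name and shapes are illustrative): once reuse is set on the current scope, tf.get_variable() returns the existing variable instead of creating a new one, which is how RNNs share weights across time steps:

with tf.variable_scope("rnn"):
    W1 = tf.get_variable("W", [4, 4])       # creates rnn/W
    tf.get_variable_scope().reuse_variables()
    W2 = tf.get_variable("W", [4, 4])       # reuses rnn/W rather than allocating a copy
assert W1 is W2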
- TensorFlow is finicky about shapes, so learn to reshape tensors (for broadcasting)
n_samples = 1000
batch_size = 100
X_data = np.reshape(X_data, (n_samples, 1))
X = tf.placeholder(tf.float32, shape=(batch_size, 1))
y = tf.placeholder(tf.float32, shape=(batch_size, 1))  # targets, needed for the loss below
# Define variables to be learned
with tf.variable_scope("linear-regression"):
    W = tf.get_variable("weights", (1, 1), initializer=tf.random_normal_initializer())
    b = tf.get_variable("bias", (1,), initializer=tf.constant_initializer(0.0))
    y_pred = tf.matmul(X, W) + b
    # mean sum of squared error loss
    loss = tf.reduce_sum((y - y_pred)**2 / n_samples)
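To actually learn W and b, the loss would be handed to an optimizer and run repeatedly. A minimal sketch of the training loop, where the learning rate, step count, and mini-batch sampling are illustrative, and X_data/y_data are assumed to be numpy arrays of shape (n_samples, 1):

opt = tf.train.GradientDescentOptimizer(learning_rate=0.001)
train_op = opt.minimize(loss)
with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    for _ in range(500):
        # sample a random mini-batch of rows (illustrative)
        indices = np.random.choice(n_samples, batch_size)
        sess.run(train_op, feed_dict={X: X_data[indices], y: y_data[indices]})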