
Multiple TS for the same target variable #7

Open
carloandrea opened this issue Jun 1, 2017 · 3 comments
carloandrea commented Jun 1, 2017

Hi there, thank you for this awesome tutorial, it is of great interest for my master's research work!

I was trying to adapt the code to my problem: for each user I want to classify, I have three time series of the same length. First I edited load_data and sample_batch so that they return t1_train, t1_val, t1_test, t2_train, t2_val, t2_test, t3_train, t3_val, t3_test, y_train, y_val, y_test and t1_batch, t2_batch, t3_batch, y_batch.

The Model itself is where I'm sure I'm doing something wrong; so far I've edited your code this way:

```python
self.input1 = tf.placeholder(tf.float32, [None, sl], name='input_1')
self.input2 = tf.placeholder(tf.float32, [None, sl], name='input_2')
self.input3 = tf.placeholder(tf.float32, [None, sl], name='input_3')
self.labels = tf.placeholder(tf.int64, [None], name='labels')
self.keep_prob = tf.placeholder(tf.float32, name='Drop_out_keep_prob')

with tf.name_scope("LSTM_setup") as scope:
    def single_cell1():
        return tf.contrib.rnn.DropoutWrapper(LSTMCell(hidden_size), output_keep_prob=self.keep_prob)

    cell1 = tf.contrib.rnn.MultiRNNCell([single_cell1() for _ in range(num_layers)])
    initial_state1 = cell1.zero_state(self.batch_size, tf.float32)

    def single_cell2():
        return tf.contrib.rnn.DropoutWrapper(LSTMCell(hidden_size), output_keep_prob=self.keep_prob)

    cell2 = tf.contrib.rnn.MultiRNNCell([single_cell2() for _ in range(num_layers)])
    initial_state2 = cell2.zero_state(self.batch_size, tf.float32)

    def single_cell3():
        return tf.contrib.rnn.DropoutWrapper(LSTMCell(hidden_size), output_keep_prob=self.keep_prob)

    cell3 = tf.contrib.rnn.MultiRNNCell([single_cell3() for _ in range(num_layers)])
    initial_state3 = cell3.zero_state(self.batch_size, tf.float32)

input_list1 = tf.unstack(tf.expand_dims(self.input1, axis=2), axis=1)
input_list2 = tf.unstack(tf.expand_dims(self.input2, axis=2), axis=1)
input_list3 = tf.unstack(tf.expand_dims(self.input3, axis=2), axis=1)

outputs1, _ = core_rnn.static_rnn(cell1, input_list1, dtype=tf.float32)
outputs2, _ = core_rnn.static_rnn(cell2, input_list2, dtype=tf.float32)
outputs3, _ = core_rnn.static_rnn(cell3, input_list3, dtype=tf.float32)

output = tf.concat([outputs1[-1], outputs2[-1], outputs3[-1]], 0)
```

When I try to train the network, this is the error I get:

```
ValueError: Attempt to have a second RNNCell use the weights of a variable scope that already has weights: 'rnn/multi_rnn_cell/cell_0/lstm_cell'; and the cell was not constructed as LSTMCell(..., reuse=True). To share the weights of an RNNCell, simply reuse it in your second calculation, or create a new one with the argument reuse=True.
```

Would you have any suggestion or tip for me to get it working? Thank you in advance!
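
(For readers hitting the same ValueError: all three static_rnn calls build their LSTM weights under the same default 'rnn' variable scope, which is exactly what the error complains about. A minimal sketch of one workaround, assuming the same TF 1.x contrib API used above, is to give each call its own scope; Rob's answer below sidesteps the issue entirely by merging the series into a single RNN input.)

```python
# Sketch, not from the thread: pass a distinct scope to each static_rnn call
# so the three stacks create separate weights instead of colliding under the
# default 'rnn' variable scope.
outputs1, _ = core_rnn.static_rnn(cell1, input_list1, dtype=tf.float32, scope='rnn1')
outputs2, _ = core_rnn.static_rnn(cell2, input_list2, dtype=tf.float32, scope='rnn2')
outputs3, _ = core_rnn.static_rnn(cell3, input_list3, dtype=tf.float32, scope='rnn3')
```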

RobRomijnders commented Jun 1, 2017 via email

carloandrea (Author) commented:

Good evening Rob, and thank you for your clear answer. I was thinking the same: the three RNNs would know nothing about each other, and that seems wrong to me.
The only doubt I have left is whether I should concatenate the time series by columns, giving length 3·sl, or by rows, giving 3·N samples. In the latter case, should I also make y_train three times longer?
Thank you again!

RobRomijnders (Owner) commented:

This should help you:

```python
import tensorflow as tf
sl = 63
x1 = tf.placeholder(tf.float32, [None, sl], name="x1")
x2 = tf.placeholder(tf.float32, [None, sl], name="x2")
x3 = tf.placeholder(tf.float32, [None, sl], name="x3")
x123 = tf.stack([x1, x2, x3])           # stacks along a new leading dimension
x123t = tf.transpose(x123, [1, 0, 2])   # transpose to whatever layout you need
print(x123)
print(x123t)
>>> Tensor("stack_1:0", shape=(3, ?, 63), dtype=float32)
>>> Tensor("transpose_1:0", shape=(?, 3, 63), dtype=float32)
```
