
tensorflow server signature #27

Open
nidhikamath91 opened this issue Oct 16, 2018 · 17 comments

@nidhikamath91

I am trying to serve the model over TensorFlow Serving and I have created the signature below, but it doesn't seem to work. Please help me @pskrunner14

encode_seqs = tf.placeholder(dtype=tf.int64, shape=[batch_size, None], name="encode_seqs")
decode_seqs = tf.placeholder(dtype=tf.int64, shape=[batch_size, None], name="decode_seqs")

# Inference Data Placeholders

encode_seqs2 = tf.placeholder(dtype=tf.int64, shape=[1, None], name="encode_seqs")
decode_seqs2 = tf.placeholder(dtype=tf.int64, shape=[1, None], name="decode_seqs")

export_path_base = './export_base/'
export_path = os.path.join(
    tf.compat.as_bytes(export_path_base),
    tf.compat.as_bytes(str(1)))
print('Exporting trained model to', export_path)
builder = tf.saved_model.builder.SavedModelBuilder(export_path)

    classification_inputs = tf.saved_model.utils.build_tensor_info(
        encode_seqs)
    classification_outputs_classes = tf.saved_model.utils.build_tensor_info(
        decode_seqs)
    #classification_outputs_scores = tf.saved_model.utils.build_tensor_info(loss)

    classification_signature = (
        tf.saved_model.signature_def_utils.build_signature_def(
            inputs={
                tf.saved_model.signature_constants.CLASSIFY_INPUTS:
                    classification_inputs
            },
            outputs={
                tf.saved_model.signature_constants.CLASSIFY_OUTPUT_CLASSES:
                    classification_outputs_classes,
                #tf.saved_model.signature_constants.CLASSIFY_OUTPUT_SCORES:
                    #classification_outputs_scores
            },
            method_name=tf.saved_model.signature_constants.CLASSIFY_METHOD_NAME))

    tensor_info_x = tf.saved_model.utils.build_tensor_info(encode_seqs2)
    tensor_info_y = tf.saved_model.utils.build_tensor_info(decode_seqs2)

    prediction_signature = (
        tf.saved_model.signature_def_utils.build_signature_def(
            inputs={'issue': tensor_info_x},
            outputs={'solution': tensor_info_y},
            method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME))

    builder.add_meta_graph_and_variables(
        sess, [tf.saved_model.tag_constants.SERVING],
        signature_def_map={
            'predict_solution':
                prediction_signature,
            tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
                classification_signature,
        },
        main_op=tf.tables_initializer(),
        strip_default_attrs=True)

    builder.save()

    print('Done exporting!')

I have the signatures below:

C:\Users\d074437\PycharmProjects\seq2seq>saved_model_cli show --dir ./export_base/1 --all

MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['predict_solution']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['issue'] tensor_info:
        dtype: DT_INT64
        shape: (1, -1)
        name: encode_seqs_1:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['solution'] tensor_info:
        dtype: DT_INT64
        shape: (1, -1)
        name: decode_seqs_1:0
  Method name is: tensorflow/serving/predict

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['inputs'] tensor_info:
        dtype: DT_INT64
        shape: (32, -1)
        name: encode_seqs:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['classes'] tensor_info:
        dtype: DT_INT64
        shape: (32, -1)
        name: decode_seqs:0
  Method name is: tensorflow/serving/classify

But when I try to run it, I get the error below:

C:\Users\d074437\PycharmProjects\seq2seq>saved_model_cli run --dir ./export_base --tag_set serve --signature_def predict_solution --inputs='this is the text'
usage: saved_model_cli [-h] [-v] {show,run,scan} ...
saved_model_cli: error: unrecognized arguments: is the text'

@pskrunner14
Contributor

@nidhikamath91 sorry, I'm not very familiar with TensorFlow Serving. You'd be better off posting this on their issue tracker. Although this looks like a CLI argument error.
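
For reference, one likely cause of that "unrecognized arguments" error: Windows cmd doesn't strip single quotes, so 'this is the text' is split on spaces, and --inputs expects a key=filename pair (a .npy/.npz or pickle file) rather than raw text anyway. A hedged sketch of a quick test with --input_exprs instead, assuming the query has already been mapped to int64 token ids (the ids below are made up) and pointing --dir at the version directory used in the show command above:

    saved_model_cli run --dir ./export_base/1 --tag_set serve --signature_def predict_solution --input_exprs "issue=np.array([[4, 8, 15]], dtype=np.int64)"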

@nidhikamath91
Author

Thanks. But could you help me with how to create placeholders for the input data and use them in data.py?

E.g. a placeholder for the input query.

@pskrunner14
Contributor

@nidhikamath91 according to what I could gather from the MNIST example on TF Serving, I think your tensor_info_y needs to be the score outputs, or in this case the softmax prediction, defined here:

...
y = tf.nn.softmax(net.outputs)
...
...
tensor_info_y = tf.saved_model.utils.build_tensor_info(y)
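
If that's the route taken, the prediction signature would be built from the softmax tensor rather than from the decode_seqs2 placeholder. A rough sketch along those lines (not a verified export; it just mirrors the names used in the snippets above):

    y = tf.nn.softmax(net.outputs)  # softmax over the inference network's outputs

    tensor_info_x = tf.saved_model.utils.build_tensor_info(encode_seqs2)
    tensor_info_y = tf.saved_model.utils.build_tensor_info(y)

    prediction_signature = tf.saved_model.signature_def_utils.build_signature_def(
        inputs={'issue': tensor_info_x},
        outputs={'solution': tensor_info_y},
        method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME)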

@nidhikamath91
Author

nidhikamath91 commented Oct 16, 2018 via email

@pskrunner14
Contributor

pskrunner14 commented Oct 16, 2018

@nidhikamath91 your def of x looks fine to me. However, I'm not sure this is going to work, since during inference we feed the encoder state to the decoder and then feed the decoded sequence ids from the previous time steps to the decoder one by one until it outputs the end_id, so AFAIK you'll need to find a workaround for that. You should take a look at the inference method here and see what works for you. Best of luck!

@nidhikamath91
Author

nidhikamath91 commented Oct 16, 2018 via email

@pskrunner14
Contributor

@nidhikamath91 we manually convert the input query into token ids that are fed into the encoder as encode_seqs2, and then feed the encoder state to the decoder to decode time step by time step, as I explained above. There's manual unrolling involved, so I'm not sure how you'll get your desired output from just the query. As I said, I'm not familiar enough with TF Serving to help you with that, and it is beyond the scope of this example.
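
For concreteness, the manual preprocessing described above boils down to something like the sketch below. It runs in plain Python outside the graph (word2idx, unk_id, sess, net_rnn and encode_seqs2 are assumed from the example's inference setup), which is why it doesn't translate directly into a single served signature:

    import numpy as np

    def query_to_ids(query, word2idx, unk_id):
        # lowercase and split the raw query, then map each token to its vocabulary id
        tokens = query.lower().split()
        ids = [word2idx.get(t, unk_id) for t in tokens]
        return np.array([ids], dtype=np.int64)  # shape (1, None) to match encode_seqs2

    seed_ids = query_to_ids("this is the text", word2idx, unk_id)
    state = sess.run(net_rnn.final_state_encode, {encode_seqs2: seed_ids})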

@pskrunner14
Contributor

@nidhikamath91 apparently TF serving doesn't support stateful models.

tensorflow/serving#724

@nidhikamath91
Author

nidhikamath91 commented Oct 16, 2018 via email

@nidhikamath91
Author

nidhikamath91 commented Oct 16, 2018 via email

@pskrunner14
Contributor

@nidhikamath91 I don't think that's possible since it's an autoencoding RNN application.

@nidhikamath91
Author

Hello,

So I was thinking of the solution below; tell me what you think about it.

I will create an inference graph with issue and solution placeholders and then serve the model.

encode_seqs2 = tf.placeholder(dtype=tf.int64, shape=[1, None], name="encode_seqs")
decode_seqs2 = tf.placeholder(dtype=tf.int64, shape=[1, None], name="decode_seqs")

issue = tf.placeholder(dtype=tf.string, shape=[1, None], name="issue")
solution = tf.placeholder(dtype=tf.string, shape=[1, None], name="solution")

table = tf.contrib.lookup.index_table_from_file(vocabulary_file=str(word2idx.keys()), num_oov_buckets=0)
seed_id = table.lookup(issue)

state = sess.run(net_rnn.final_state_encode, {encode_seqs2: seed_id})
# Decode, feed start_id and get first word [https://github.com/zsdonghao/tensorlayer/blob/master/example/tutorial_ptb_lstm_state_is_tuple.py]
o, state = sess.run([y, net_rnn.final_state_decode],
                    {net_rnn.initial_state_decode: state,
                     decode_seqs2: [[start_id]]})
w_id = tl.nlp.sample_top(o[0], top_k=3)
w = idx2word[w_id]
# Decode and feed state iteratively
sentence = [w]
for _ in range(30):  # max sentence length
    o, state = sess.run([y, net_rnn.final_state_decode],
                        {net_rnn.initial_state_decode: state,
                         decode_seqs2: [[w_id]]})
    w_id = tl.nlp.sample_top(o[0], top_k=2)
    w = idx2word[w_id]
    if w_id == end_id:
        break
    sentence = sentence + [w]
return sentence

But I am getting the error below:

TypeError: The value of a feed cannot be a tf.Tensor object. Acceptable feed values include Python scalars, strings, lists, numpy ndarrays, or TensorHandles.For reference, the tensor object was Tensor("hash_table_Lookup:0", shape=(1, ?), dtype=int64) which was passed to the feed with key Tensor("encode_seqs_1:0", shape=(1, ?), dtype=int64).

How do I proceed? @pskrunner14

@pskrunner14
Contributor

@nidhikamath91 you can't feed tensors into input placeholders; just convert them to numpy arrays before doing so.
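
In other words, the lookup result has to be materialized as a numpy array before it can be fed back in. A minimal sketch, assuming the issue placeholder and seed_id tensor from the snippet above, and that the lookup table has been initialized:

    sess.run(tf.tables_initializer())  # initialize the vocabulary lookup table first

    # run the lookup to get concrete int64 values for the query tokens
    seed_id_values = sess.run(seed_id, {issue: [["this", "is", "the", "text"]]})

    # feed the numpy array (not the tensor) into the inference placeholder
    state = sess.run(net_rnn.final_state_encode, {encode_seqs2: seed_id_values})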

@nidhikamath91
Author

nidhikamath91 commented Oct 30, 2018 via email

@pskrunner14
Contributor

@nidhikamath91 yes, you could try constructing the graph in such a way that you only need to feed the seed id into the issue placeholder at runtime, which in turn passes the lookup tensor to the net RNN directly, bypassing encode_seqs2 entirely.
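
A rough sketch of that idea; build_inference_net and vocab.txt here are hypothetical stand-ins, since the example's actual model-construction code would need to accept an arbitrary input tensor for this to work:

    # the string placeholder is the only thing fed at serving time
    issue = tf.placeholder(dtype=tf.string, shape=[1, None], name="issue")

    # in-graph lookup from words to ids; vocab.txt is a hypothetical one-token-per-line file
    table = tf.contrib.lookup.index_table_from_file(vocabulary_file="vocab.txt", num_oov_buckets=1)
    seed_id = table.lookup(issue)  # int64 tensor of shape (1, None)

    # hypothetical: construct the inference network directly on the lookup output
    # instead of on the encode_seqs2 placeholder, so no intermediate feed is needed
    net_rnn = build_inference_net(input_ids=seed_id)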

@nidhikamath91
Author

nidhikamath91 commented Oct 30, 2018 via email
