Can't convert/import tensorflow .pb model #16
Could you provide more details? I downloaded the pretrained model 20180402-114759, but I cannot find a tensor named "input:0" in the file "20180402-114759.pb".
Hi @shishaochen, I'll try to export to a SavedModel (I don't know how to do it yet, but I'll search) and then I'll post the results here. Thanks for your time!
@rgsousa88 There is an example at https://github.com/Microsoft/samples-for-ai/blob/master/projects/StyleTransfer/StyleTransferTraining/src/train.py#L218, or you can simply call tf.saved_model.simple_save.
Hi @shishaochen, First off, thanks for your suggestion. I've exported the .pb file to a SavedModel as you suggested, using the following script.
When I tried to import the new SavedModel using the Import tool, I got the following exception:

mlscoring.exporter.exception.ExportException: 204:'Tensor input:0 with fully unknown shape not supported for serving in APIs'

I'm not sure why this is happening with this model... What do you think? Would generating the SavedModel from the metagraph and checkpoint files be the right way to do this? Thanks for your time.
@rgsousa88 Currently, models with input nodes of unknown shape are not supported. But you can work around the limitation by reconstructing the inference graph with input nodes of explicit shape:

```python
import facenet  # Source file in the FaceNet Git repository
import os
import shutil
import tensorflow as tf

if __name__ == '__main__':
    image_batch = tf.placeholder(tf.float32, shape=(None, 160, 160, 3))  # Explicitly set the image shape
    is_training = tf.constant(False)
    facenet.load_model('20180402-114759.pb', input_map={'input': image_batch, 'phase_train': is_training})
    embeddings = tf.get_default_graph().get_tensor_by_name("embeddings:0")
    export_dir = 'export'
    if os.path.isdir(export_dir):
        shutil.rmtree(export_dir)
    with tf.Session() as sess:
        tf.summary.FileWriter(export_dir, sess.graph)  # Visualize the graph using TensorBoard
        tf.saved_model.simple_save(sess, export_dir, inputs={'image_batch': image_batch}, outputs={'embeddings': embeddings})
```

Then you can create the Model Inference Library project by importing the exported SavedModel file.
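The "fully unknown shape" rejection above can be caught before exporting by checking each input tensor's shape yourself. A minimal sketch of that check; the helper name and the exact rule (unknown batch dimension allowed, everything else concrete) are my own assumptions about what serving tools typically accept, not part of FaceNet or the Import tool:

```python
def shape_is_serving_friendly(dims):
    """Return True if a tensor shape is concrete enough for serving.

    dims is a list of dimension sizes (int, or None for an unknown
    dimension); dims itself being None means even the rank is unknown,
    as with a bare tf.placeholder(tf.float32).
    """
    if dims is None:  # fully unknown shape -- exactly the case rejected above
        return False
    # Allow an unknown batch dimension; all inner dimensions must be ints.
    return all(isinstance(d, int) for d in dims[1:])

# The workaround above uses (None, 160, 160, 3): batch unknown, rest fixed.
print(shape_is_serving_friendly([None, 160, 160, 3]))   # True
print(shape_is_serving_friendly(None))                  # False: fully unknown
print(shape_is_serving_friendly([None, None, None, 3])) # False: inner dims unknown
```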
@shishaochen Thanks a lot for your help and advice. I'm able to load the model and create the Model Inference Library. I'll look into how to include this library in a UWP project and test the model's behavior inside my test application. Thank you one more time!
@rgsousa88 I have to say the Microsoft.ML.Scoring library the project consumes has native DLLs, so it is hard to build it as a UWP library before a new release. However, there are a couple of ways to work around it.
UWP support for the Model Inference Library is on our roadmap, and you may be able to try it in the future.
@shishaochen Thanks for your hints. I've tried to convert the SavedModel (created using your script above) to an ONNX model using Convert Model in the AI Tools menu, but it failed. I thought that if I was able to import a model and create a Model Inference Library, I would be able to convert it too, but I was wrong... I'll look into the other two options, but the best case would be converting the SavedModel to an ONNX model... Anyway, I'm very grateful for your support and suggestions. If you're curious, the error I got when trying to convert is below:

System.Exception: "path_to_saved_model"\saved_model.pb is not a valid TensorFlow model.

One more time, thanks for your time and help.
@shishaochen, I've tried to use the .meta and checkpoint files to convert the model. It failed too... In this case, I got a different error, which I'm posting below. Are there any limitations on which TensorFlow models the AI Tools Convert option supports? I mean, is converting TensorFlow models limited to certain operations or architectures? If so, it would be good to state these limitations or constraints in the documentation. I'd appreciate your thoughts... Thanks in advance. The error I mentioned:

(Traceback infos...)
ValueError: graph_def is invalid at node 'InceptionResnetV1/Conv2d_1a_3x3/BatchNorm/cond_1/AssignMovingAvg/Switch': Input tensor 'InceptionResnetV1/Conv2d_1a_3x3/BatchNorm/moving_mean:0' Cannot convert a tensor of type float32 to an input of type float32_ref.
@rgsousa88 The model converter backend is a Python package named tf2onnx. It is still in preview and may not cover all models.
Hi @shishaochen, I tried to convert using the tf2onnx scripts a few weeks ago, but I didn't have any success... I'll check what I can do with the other two options... Anyway, I'm grateful for your attention and help.
Hi @rgsousa88, I hope this might help you. The error you mentioned before,

ValueError: graph_def is invalid at node 'InceptionResnetV1/Conv2d_1a_3x3/BatchNorm/cond_1/AssignMovingAvg/Switch': Input tensor ...

might be due to training-time batch-norm update ops (the AssignMovingAvg nodes) left in the graph.

One practical approach: first, download the Inception ResNet v1 code as well as the pretrained model from the project page. Second, write the script below to reload the model (with phase_train=False) and transform the graph.
```python
import tensorflow as tf
import inception_resnet_v1

data_input = tf.placeholder(name='input', dtype=tf.float32, shape=[None, 299, 299, 3])
output, _ = inception_resnet_v1.inference(data_input, keep_probability=0.8, phase_train=False, bottleneck_layer_size=512)
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    saver = tf.train.Saver()
    saver.restore(sess, '/Users/kit/Downloads/20180408-102900/model-20180408-102900.ckpt-90')
    path = '/Users/kit/Downloads/'
    tf.train.write_graph(sess.graph, '/Users/kit/Downloads', 'imagenet_facenet.pb', as_text=False)
    save_path = saver.save(sess, path + "imagenet_facenet.ckpt")
    print("Model saved in file: %s" % save_path)
```

Then, you can use tfonnx_freeze_graph.py to freeze the graph:

python -m tfonnx_freeze_graph --input_graph=/Users/kit/Downloads/imagenet_facenet.pb --input_binary=true --input_names=input:0 --output_node_names=InceptionResnetV1/Bottleneck/BatchNorm/batchnorm/add_1 --input_checkpoint=/Users/kit/Downloads/imagenet_facenet.ckpt --output_graph=frozen.pb
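Before running the freeze step, it helps to confirm that the --output_node_names value actually exists in the graph, since passing a name that isn't there fails with an AssertionError. A small hypothetical helper (my own, not part of the freeze tooling) that works on any list of node names, e.g. collected with `[n.name for n in graph_def.node]`:

```python
import difflib

def check_output_node(node_names, target):
    """Return (True, [target]) if target is a node in the graph, else
    (False, closest_matches) so a near-miss name is easy to spot."""
    if target in node_names:
        return True, [target]
    return False, difflib.get_close_matches(target, node_names, n=3, cutoff=0.3)

# Hypothetical node list: the requested add_1 node is absent, but a
# similarly named FusedBatchNorm node exists.
names = [
    'input',
    'InceptionResnetV1/Conv2d_1a_3x3/BatchNorm/FusedBatchNorm',
    'InceptionResnetV1/Bottleneck/BatchNorm/FusedBatchNorm',
]
ok, candidates = check_output_node(names, 'InceptionResnetV1/Bottleneck/BatchNorm/batchnorm/add_1')
print(ok)  # False
print(candidates)
```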
Hi @rgsousa88, you can also try MMdnn:

pip install -U git+https://github.com/Microsoft/MMdnn.git@master

In order to convert the above model to ONNX:

mmconvert -sf tensorflow -in /Users/kit/Downloads/imagenet_facenet.ckpt.meta -iw /Users/kit/Downloads/imagenet_facenet.ckpt -df onnx -om /Users/kit/Downloads/facenet.onnx --dstNodeName InceptionResnetV1/Bottleneck/BatchNorm/batchnorm/add_1

I get the following result, and some of the layers are skipped:

```
Parse file [/Users/kit/Downloads/imagenet_facenet.ckpt.meta] with binary format successfully.
Tensorflow model file [/Users/kit/Downloads/imagenet_facenet.ckpt.meta] loaded successfully.
Tensorflow checkpoint file [/Users/kit/Downloads/imagenet_facenet.ckpt] loaded successfully. [490] variables loaded.
Tensorflow has not supported operator [Slice] with name [InceptionResnetV1/Logits/Flatten/Slice].
Tensorflow has not supported operator [Slice] with name [InceptionResnetV1/Logits/Flatten/Slice_1].
Tensorflow has not supported operator [Prod] with name [InceptionResnetV1/Logits/Flatten/Prod].
Tensorflow has not supported operator [ExpandDims] with name [InceptionResnetV1/Logits/Flatten/ExpandDims].
IR network structure is saved as [22d65258880149e8b78ffc636043fb4e.json].
IR network structure is saved as [22d65258880149e8b78ffc636043fb4e.pb].
IR weights are saved as [22d65258880149e8b78ffc636043fb4e.npy].
Parse file [22d65258880149e8b78ffc636043fb4e.pb] with binary format successfully.
Warning: Graph Construct a self-loop node InceptionResnetV1/Logits/Flatten/Slice. Ignored.
Warning: Graph Construct a self-loop node InceptionResnetV1/Logits/Flatten/ExpandDims. Ignored.
OnnxEmitter has not supported operator [Shape].
InceptionResnetV1/Logits/Flatten/Shape
Target network code snippet is saved as [22d65258880149e8b78ffc636043fb4e.py].
Target weights are saved as [22d65258880149e8b78ffc636043fb4e.npy].
ONNX model file is saved as [/Users/kit/Downloads/facenet.onnx], generated by [22d65258880149e8b78ffc636043fb4e.py] and [22d65258880149e8b78ffc636043fb4e.npy].
```

Hoping this might help!
Hi @JiahaoYao, I've followed the steps you described above, and it was not possible to freeze the graph due to the error below:

AssertionError: InceptionResnetV1/Bottleneck/BatchNorm/batchnorm/add_1 is not in graph

In the command you suggested, this parameter is passed as an output_node_name, but inspecting the graph (using Netron) there is no output node with this name. The output node, or final node, is InceptionResnetV1/Bottleneck/BatchNorm/FusedBatchNorm:0. But, if I understand correctly, the scripts and commands above only convert the body of FaceNet, not the whole thing. In that case, I won't be able to use the converted model (in ONNX format) as the original one was designed, to extract facial features and perform face recognition, will I? Anyway, I'm grateful for your attention and help.
Hi @rgsousa88, when I run

```python
import tensorflow as tf
import inception_resnet_v1

data_input = tf.placeholder(name='input', dtype=tf.float32, shape=[None, 299, 299, 3])
output, _ = inception_resnet_v1.inference(data_input, keep_probability=0.8, phase_train=False, bottleneck_layer_size=512)
print(output.op.name)
```

I get the name of the output node. For the frozen graph, it works for me.
Finally, on the official website I only find Inception ResNet v1, and I bet it is the main part of FaceNet. Your understanding is correct: in the example, I just convert up to the last node of Inception ResNet. And I think it might be converted up to the very last node in the graph, which might fulfill the face recognition.
Why isn't there a simple way to load the pre-trained model for this? Something like what Keras has: load_model('model.h5').
@sid-sundrani If you want to import a TensorFlow SavedModel or ONNX model, you only need to select the model path in our Import Model Wizard. Otherwise, you need to provide the model serving interface, as a TensorFlow checkpoint does not have such fields for us to extract.
google.protobuf.message.DecodeError: Error parsing message |
Hi all, I got the same error when I exported LPRNet (https://github.com/opencv/openvino_training_extensions/tree/develop/tensorflow_toolkit/lpr). If I use export.py to convert ckpt -> pb and then use the .pb for inference, it returns nothing.
Has anyone been able to convert the model to SavedModel or .onnx? |
Hi everyone,
I'm facing some errors when I try to convert or import a TensorFlow .pb model using the AI Tools converter or Import Model. The message I get is this:
KeyError: "The name 'input:0' refers to a Tensor which does not exist. The operation, 'input', does not exist in the graph."
I'm passing the correct internal names for input and output tensors (I'm able to load and use the same model inside a Python script). I've visualized my model using Netron and 'input' exists in the graph.
The model is available here.
Thanks in advance.
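When a tool insists that 'input' does not exist, a useful first step is to dump the graph's operation names and look for the Placeholder ops yourself. A sketch assuming TensorFlow 1.x and a frozen .pb; both helper names are my own, and the filtering logic is kept as plain Python over (name, op_type) pairs so it works on any node list:

```python
def likely_inputs(nodes):
    """Given (name, op_type) pairs from a GraphDef, return the names of
    Placeholder ops -- the usual candidates behind an 'input:0' tensor."""
    return [name for name, op_type in nodes if op_type == 'Placeholder']

def dump_inputs(pb_path):
    """Load a frozen GraphDef and print candidate input names.
    Requires TensorFlow 1.x; defined here but not executed."""
    import tensorflow as tf
    graph_def = tf.GraphDef()
    with open(pb_path, 'rb') as f:
        graph_def.ParseFromString(f.read())
    print(likely_inputs([(n.name, n.op) for n in graph_def.node]))

# Pure-Python demonstration on a toy node list:
toy = [('input', 'Placeholder'), ('phase_train', 'Placeholder'), ('embeddings', 'Identity')]
print(likely_inputs(toy))  # ['input', 'phase_train']
```

If the printed list is empty or uses different names than expected, the tensor name passed to the importer (here 'input:0') needs to match one of these names plus an output index.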