Hi all,

first of all, thanks for sharing so much content on your model.
I want to run squeezeDet inference inside a C++ project (roscpp). For this I need to load the model from the model checkpoint and model data, which is done using the function `ReadBinaryProto` of the TensorFlow C++ API. Afterwards I create a TensorFlow session and run it to do the inference.
This works fine if I use the model checkpoints that you provided for the demo.
Input tensors: `image_input`
Output tensors: `bbox/trimming/bbox`, `probability/score`, `probability_class_idx`
(I also needed to define a tensor to feed a value for `keep_prob` to make it work.)
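For reference, this is a minimal sketch of the C++ loading/inference path described above. It assumes the model is available as a binary GraphDef file and that the placeholder and output op names match the ones listed above; the helper name, the graph path, and the exact tensor names are placeholders that need to be checked against the actual graph:

```cpp
#include <memory>
#include <string>
#include <utility>
#include <vector>

#include "tensorflow/core/framework/graph.pb.h"
#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/platform/env.h"
#include "tensorflow/core/public/session.h"

// Loads a serialized GraphDef, creates a session, and runs one inference.
// `image_tensor` is expected to be a float tensor of shape
// [1, IMAGE_HEIGHT, IMAGE_WIDTH, 3] (batch size 1), already preprocessed.
tensorflow::Status RunSqueezeDet(const std::string& graph_path,
                                 const tensorflow::Tensor& image_tensor,
                                 std::vector<tensorflow::Tensor>* outputs) {
  // Load the binary protobuf holding the graph definition.
  tensorflow::GraphDef graph_def;
  tensorflow::Status status = tensorflow::ReadBinaryProto(
      tensorflow::Env::Default(), graph_path, &graph_def);
  if (!status.ok()) return status;

  // Create a session and register the graph with it.
  std::unique_ptr<tensorflow::Session> session(
      tensorflow::NewSession(tensorflow::SessionOptions()));
  status = session->Create(graph_def);
  if (!status.ok()) return status;

  // keep_prob (dropout) has to be fed explicitly; 1.0 disables dropout at inference.
  tensorflow::Tensor keep_prob(tensorflow::DT_FLOAT, tensorflow::TensorShape());
  keep_prob.scalar<float>()() = 1.0f;

  // Tensor names taken from the list above; verify them against the graph.
  return session->Run(
      {{"image_input", image_tensor}, {"keep_prob", keep_prob}},
      {"bbox/trimming/bbox", "probability/score", "probability_class_idx"},
      {}, outputs);
}
```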
However, for inference I wanted to set the batch size to 1. So I modified the provided demo.py: it sets the batch size to 1 and, after loading the model, saves it to disk again using the TensorFlow saver.
After saving, I can load the modified model inside my C++ program and build it. At runtime I do not get any errors, but at the point where the inference is called the execution hangs indefinitely without throwing any errors.
So my question is: @BichenWuUCB, how did you save the model you provide for use with demo.py? Why does my approach of simply using the TensorFlow saver not work?
Here is the modified code of the image_demo() function from demo.py:
```python
def image_demo():
  """Detect image."""

  assert FLAGS.demo_net == 'squeezeDet' or FLAGS.demo_net == 'squeezeDet+', \
      'Selected neural net architecture not supported: {}'.format(FLAGS.demo_net)

  with tf.Graph().as_default():
    # Load model
    if FLAGS.demo_net == 'squeezeDet':
      mc = kitti_squeezeDet_config()
      # set batch size to 1 for inference
      mc.BATCH_SIZE = 1
      # model parameters will be restored from checkpoint
      mc.LOAD_PRETRAINED_MODEL = False
      model = SqueezeDet(mc, FLAGS.gpu)
    elif FLAGS.demo_net == 'squeezeDet+':
      mc = kitti_squeezeDetPlus_config()
      mc.BATCH_SIZE = 1
      mc.LOAD_PRETRAINED_MODEL = False
      model = SqueezeDetPlus(mc, FLAGS.gpu)

    saver = tf.train.Saver(model.model_params)

    with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:
      saver.restore(sess, FLAGS.checkpoint)

      # Save graph metadata and checkpoint
      saver.save(sess, './data/out/checkpoint.ckpt')

      for f in glob.iglob(FLAGS.input_path):
        im = cv2.imread(f)
        im = im.astype(np.float32, copy=False)
        im = cv2.resize(im, (mc.IMAGE_WIDTH, mc.IMAGE_HEIGHT))
        input_image = im - mc.BGR_MEANS

        # Detect
        det_boxes, det_probs, det_class = sess.run(
            [model.det_boxes, model.det_probs, model.det_class],
            feed_dict={model.image_input: [input_image]})

        # Filter
        final_boxes, final_probs, final_class = model.filter_prediction(
            det_boxes[0], det_probs[0], det_class[0])

        keep_idx = [idx for idx in range(len(final_probs))
                    if final_probs[idx] > mc.PLOT_PROB_THRESH]
        final_boxes = [final_boxes[idx] for idx in keep_idx]
        final_probs = [final_probs[idx] for idx in keep_idx]
        final_class = [final_class[idx] for idx in keep_idx]

        # TODO(bichen): move this color dict to configuration file
        cls2clr = {
            'car': (255, 191, 0),
            'cyclist': (0, 191, 255),
            'pedestrian': (255, 0, 191)
        }

        # Draw boxes
        _draw_box(
            im, final_boxes,
            [mc.CLASS_NAMES[idx] + ': (%.2f)' % prob
             for idx, prob in zip(final_class, final_probs)],
            cdict=cls2clr,
        )

        file_name = os.path.split(f)[1]
        out_file_name = os.path.join(FLAGS.out_dir, 'out_' + file_name)
        cv2.imwrite(out_file_name, im)
        print('Image detection output saved to {}'.format(out_file_name))
```
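Since the Python code above resizes the image and subtracts `mc.BGR_MEANS` before feeding it, the C++ side has to apply the same preprocessing. A rough sketch of how that might look with OpenCV is below; `MakeImageInput` is just a hypothetical helper, and the width, height, and mean values passed in have to match `mc.IMAGE_WIDTH`, `mc.IMAGE_HEIGHT`, and `mc.BGR_MEANS` from the config:

```cpp
#include <opencv2/opencv.hpp>

#include "tensorflow/core/framework/tensor.h"

// Mirrors the preprocessing in image_demo(): resize to (width, height),
// convert to float, subtract the BGR means, and copy the result into a
// [1, height, width, 3] float tensor (batch size 1).
tensorflow::Tensor MakeImageInput(const cv::Mat& bgr_image,
                                  int width, int height,
                                  const cv::Scalar& bgr_means) {
  cv::Mat resized, float_img;
  cv::resize(bgr_image, resized, cv::Size(width, height));
  resized.convertTo(float_img, CV_32FC3);
  float_img -= bgr_means;  // same as "im - mc.BGR_MEANS" in demo.py

  tensorflow::Tensor input(tensorflow::DT_FLOAT,
                           tensorflow::TensorShape({1, height, width, 3}));
  auto mapped = input.tensor<float, 4>();
  for (int y = 0; y < height; ++y) {
    const cv::Vec3f* row = float_img.ptr<cv::Vec3f>(y);
    for (int x = 0; x < width; ++x) {
      for (int c = 0; c < 3; ++c) {
        // Keep BGR channel order, as demo.py feeds the cv2 image directly.
        mapped(0, y, x, c) = row[x][c];
      }
    }
  }
  return input;
}
```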