
training question #238

Open
jiansheng1993 opened this issue Nov 8, 2019 · 5 comments

Comments

@jiansheng1993

OutOfRangeError (see above for traceback): RandomShuffleQueue '_1_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 8, current size 0)
[[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_FLOAT, DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/device:CPU:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]


How do I deal with this issue when I train the model on KITTI?

@jiansheng1993
Author

Still no answer?

@syxing2018

OutOfRangeError (see above for traceback): RandomShuffleQueue '_1_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 8, current size 0)
[[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_FLOAT, DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/device:CPU:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]

How do I deal with this issue when I train the model on KITTI?

I have encountered the same problem. Maybe you can check whether the image suffix in "kitti_train_files.txt" is "jpg", while the suffix of the KITTI images is actually "png".
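
In case it helps, here is a minimal sketch (not from the repo) that checks whether every path listed in the training file list actually exists on disk. data_root and list_path are placeholders for your own setup, and it assumes each line of the list holds space-separated relative image paths:

```python
import os

data_root = "/path/to/kitti"         # placeholder: your KITTI root directory
list_path = "kitti_train_files.txt"  # placeholder: path to the training file list

missing = []
with open(list_path) as f:
    for line in f:
        # Each line is assumed to hold space-separated relative image paths.
        for rel_path in line.split():
            if not os.path.isfile(os.path.join(data_root, rel_path)):
                missing.append(rel_path)

print("{} listed files were not found on disk".format(len(missing)))
for p in missing[:10]:
    print(p)  # e.g. entries ending in ".jpg" while the files on disk are ".png"
```

Any path printed here is a candidate for the jpg/png suffix mismatch described above.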

@qiutzh

qiutzh commented Apr 28, 2020

@jiansheng1993 Hi, have you solved the problem? I have run into the same error.

@yamnaben

Hi, I have encountered the same problem. Is there any solution?

@benjaminkeltjens

benjaminkeltjens commented Jan 10, 2021

@jiansheng1993 @qiutzh @yamnaben I had the same problem and resolved it, though I'm not sure the fix is universal. The error means that the queue wants to dequeue elements but cannot, which usually points to something wrong with how the training files are requested (so check the file names in your training image txt file).

In the end, the problem was on line 100 of monodepth_dataloader.py:

path_length = string_length_tf(image_path)[0]

If you evaluate this variable through sess.run, you will see that it does not work as expected. It should be changed to:

path_length = tf.size(tf.string_split([image_path],""))

This gives the correct integer when evaluated through sess.run.
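
For anyone who wants to verify the replacement, here is a minimal sketch (assuming TensorFlow 1.x; the path string is just a placeholder) that evaluates it through sess.run:

```python
import tensorflow as tf

# Placeholder path string; substitute one of your own KITTI image paths.
image_path = tf.constant("2011_09_26/image_02/data/0000000000.png")

# Replacement suggested above: counts the characters in the path string.
path_length = tf.size(tf.string_split([image_path], ""))

with tf.Session() as sess:
    print(sess.run(path_length))  # prints the number of characters in the path
```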

Hopefully this works for you!
