predict does not give the same result as evaluate #12
Comments
Do you mind sharing your code for testing? Did you try loading the saved checkpoint directly?
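One quick way to do that check, sketched below for a TF1-style Estimator checkpoint (the checkpoint path and the variable name `fc/weights` are placeholders, not names from this repo), is to read the checkpoint file directly from disk and look at the stored tensors, bypassing the model code entirely:

```python
import tensorflow as tf

# Locate the most recent checkpoint in the training directory (path is a placeholder).
ckpt_path = tf.train.latest_checkpoint("/path/to/checkpoint_dir")

# Open the checkpoint file directly, without rebuilding the graph.
reader = tf.train.load_checkpoint(ckpt_path)

# List every variable stored in the checkpoint together with its shape,
# to find the exact name of the fully-connected weights.
for name, shape in reader.get_variable_to_shape_map().items():
    print(name, shape)

# Read one tensor straight from disk; "fc/weights" is a hypothetical name.
fc_weight = reader.get_tensor("fc/weights")
print(fc_weight.mean(), fc_weight.std())
```

If the tensor read this way already looks wrong, the problem is in saving; if it looks right, the problem is in how the predict graph restores it.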
```python
def predict_tfrecords_by_checkpoint_image(checkpoint_dir, in_file_pattern, to_file):
    ...

def record_image_placeholder_input_fn(meta_file, default_batch_size=None):
    ...
```
Actually, I rewrote the model. During training it runs normally and I can also evaluate it: after 20k training steps the accuracy is about 0.7. But after the model is saved, when I run predict again the accuracy is only about 0.1. While debugging this, I fetched the last weights saved in the model and then restored the model, but the restored weights were not the same.
When I run predict, the result is not the same as evaluate, and the difference is large. I fetched fc_weight, saved it to the model, and restored it from the model, but the weight is not the same. Maybe something is wrong with the model.
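A minimal sketch of that comparison, assuming a TF1-style graph (`build_predict_graph()` and the variable name `fc/weights` are hypothetical placeholders for the real prediction-graph builder and weight name), is to compare the value stored on disk with the value the predict graph actually holds after restore:

```python
import numpy as np
import tensorflow as tf

ckpt_path = tf.train.latest_checkpoint("/path/to/checkpoint_dir")  # placeholder path

# Value of the weight as stored on disk.
stored = tf.train.load_checkpoint(ckpt_path).get_tensor("fc/weights")

graph = tf.Graph()
with graph.as_default():
    build_predict_graph()  # hypothetical: rebuilds the prediction graph
    saver = tf.train.Saver()
    with tf.Session(graph=graph) as sess:
        saver.restore(sess, ckpt_path)
        # Value of the same weight inside the restored prediction graph.
        restored = sess.run(graph.get_tensor_by_name("fc/weights:0"))

# False here means the prediction graph is not picking up the trained weights,
# e.g. the variables are created under a different name/scope so the Saver maps
# them to the wrong checkpoint entries, or they get re-initialized after restore.
print(np.allclose(stored, restored))
```

A mismatch like this usually points to the training and prediction graphs building the variables under different names or scopes, rather than to the checkpoint itself being corrupted.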