-
Same issue! I tried using `read_config = tfds.ReadConfig(shuffle_seed=42)`, but with no positive result. Also, I didn't apply any shuffle method to the test_data.
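A minimal sketch of loading the test split deterministically, assuming the TFDS `food101` dataset used in the milestone project (variable names are illustrative):

```python
import tensorflow_datasets as tfds

# With shuffle_files=False the file (and example) order stays fixed, so a
# ReadConfig shuffle_seed should not be needed for the evaluation split at all.
(train_data, test_data), ds_info = tfds.load(
    "food101",
    split=["train", "validation"],  # TFDS Food101 uses "validation" as its test split
    as_supervised=True,             # yields (image, label) tuples
    shuffle_files=False,            # keep the source file order fixed
    with_info=True,
)
```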
-
@donalhill Since the labels in the dataset are not one-hot encoded (they are sparse integers, 0-100), there is no point in calling `argmax()` when getting the `y_labels`.
Try that, and also make sure you aren't loading from TFDS with `shuffle_files=True`. If it is True, it will affect the order.
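A sketch of that suggestion, assuming `model` is the trained model and `test_data` is the batched, unshuffled test dataset loaded with `as_supervised=True`:

```python
import numpy as np
from sklearn.metrics import accuracy_score

# The labels are already sparse integers in [0, 100], so take them as-is (no argmax).
y_labels = np.concatenate([labels.numpy() for _, labels in test_data])

# The model outputs per-class probabilities, so argmax is only needed on the predictions.
pred_probs = model.predict(test_data)
pred_classes = pred_probs.argmax(axis=1)

# This only lines up if test_data is not reshuffled between the two passes above.
print(accuracy_score(y_labels, pred_classes))
```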
-
Hi, I was facing the same issue. I would suggest checking the line where you are importing the TensorFlow dataset. Hope this works out for you.
-
I had the same problem when using the notebook 07 template: `model.evaluate` reported an accuracy of 68%, but after unbatching the test data and predicting the classes, the accuracy score was just 0.005. It was a huge difference, and I spent a lot of time solving this issue.
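In case it helps anyone hitting the same thing, one way to rule out ordering problems is to collect the labels and predictions in a single pass over the same dataset. A sketch, where `model` and `test_data` stand in for the trained model and the preprocessed, batched test set:

```python
import numpy as np

y_labels, pred_classes = [], []
for images, labels in test_data:              # single pass over the batched test set
    probs = model.predict(images, verbose=0)  # predict on this batch only
    pred_classes.append(probs.argmax(axis=1))
    y_labels.append(labels.numpy())

y_labels = np.concatenate(y_labels)
pred_classes = np.concatenate(pred_classes)
print((y_labels == pred_classes).mean())      # should be close to the model.evaluate accuracy
```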
-
Hi there,
I have been working on the Food Vision milestone project. Having trained my full Food101 model with fine-tuning and early stopping, I ran `model.evaluate` on the test data and I get an accuracy score close to DeepFood. When I try to evaluate the accuracy with scikit-learn's `accuracy_score` instead, the score comes out at around 1% (so basically a random guess).

It's as if the true labels and predictions aren't matching, but I don't think there should be any shuffling that differs between the two, since I am using `test_data` when calculating both `pred_classes` and `y_labels`. I have tried setting `shuffle_files=False` when loading the data with `tfds.load`, and there is no shuffling applied to the test data in my preprocessing pipeline. Any help to understand this would be great!
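For reference, a simplified sketch of the two evaluation paths (illustrative names only, not my exact notebook code):

```python
import numpy as np
from sklearn.metrics import accuracy_score

# 1) Keras evaluation on the batched, unshuffled test set.
loss, keras_acc = model.evaluate(test_data)

# 2) scikit-learn evaluation on what should be the same examples in the same order.
pred_probs = model.predict(test_data)
pred_classes = pred_probs.argmax(axis=1)                           # predicted class per example
y_labels = np.array([label.numpy() for _, label in test_data.unbatch()])
sklearn_acc = accuracy_score(y_true=y_labels, y_pred=pred_classes)
```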
Cheers,
Donal