
How do I download the model in Ultralytics HUB? Encountered a downloading error. #878

Closed
1 task done
KDLPro opened this issue Oct 13, 2024 · 19 comments
Assignees
Labels
app Issue related to Ultralytics HUB App question A HUB question that does not involve a bug

Comments

@KDLPro

KDLPro commented Oct 13, 2024

Search before asking

Question

Right now, I'm training a model with the help of Ultralytics HUB. Here's the progress so far...

image

However, I encounter issues when continuing from the previous checkpoint. Earlier, the notebook was disconnected from the HUB, so I tried restarting it:

image
image

The checkpoint saved in local storage is at epoch 68, though. How do I solve this?

Additional

No response

@KDLPro KDLPro added the question A HUB question that does not involve a bug label Oct 13, 2024
@UltralyticsAssistant UltralyticsAssistant added the app Issue related to Ultralytics HUB App label Oct 13, 2024
@UltralyticsAssistant
Member

👋 Hello @KDLPro, thank you for raising an issue about the Ultralytics HUB 🚀! Please check out our HUB Docs for more detailed information:

It sounds like you might be encountering some issues with the model checkpoint. If this is a 🐛 Bug Report, please provide screenshots and steps to reproduce the problem to help us assist you better. Providing a Minimum Reproducible Example (MRE) will be very helpful.

If this is a ❓ Question, share as much relevant information as possible about your dataset, model, and environment.

This is an automated response, and an Ultralytics engineer will also assist you soon. We appreciate your patience and understanding! 😊

@sergiuwaxmann sergiuwaxmann self-assigned this Oct 14, 2024
@sergiuwaxmann
Member

@KDLPro I just checked your model and it looks like you can resume training from epoch 95. Can you try resuming again?

@KDLPro
Author

KDLPro commented Oct 14, 2024

Oh, I just resumed the model and it finished! Still, it was weird that Roboflow was waiting for a connection.
image

Maybe due to this?
image

@sergiuwaxmann
Member

@KDLPro Something is indeed strange with this model.
Do you mind training it again (creating a new model)?

@KDLPro
Author

KDLPro commented Oct 14, 2024

I'll try when I have time. But I do have results for all 100 epochs. Honestly, the app has problems when the system is disconnected and then reconnected.

@KDLPro
Author

KDLPro commented Oct 14, 2024

Here are the results, by the way, @sergiuwaxmann:

image
image

@KDLPro
Author

KDLPro commented Oct 14, 2024

It might be a bug or something related to the Internet connection, @sergiuwaxmann.

@sergiuwaxmann
Member

@KDLPro Indeed, we noticed this as well and will try to improve this feature.
Resume works reliably with our Cloud Training, or when the environment doesn't change (e.g., local training resumed from the same environment).
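For anyone resuming locally, a minimal sketch (assuming the default `runs/detect/<run>/weights/last.pt` layout; the helper name and the commented-out `YOLO` usage are illustrative, not an official API):

```python
from pathlib import Path
from typing import Optional


def find_last_checkpoint(root: str = "runs/detect") -> Optional[Path]:
    """Return the most recently modified last.pt under root, if any."""
    candidates = sorted(
        Path(root).glob("*/weights/last.pt"),
        key=lambda p: p.stat().st_mtime,
    )
    return candidates[-1] if candidates else None


# Hypothetical usage (requires the ultralytics package):
# from ultralytics import YOLO
# ckpt = find_last_checkpoint()
# if ckpt is not None:
#     YOLO(str(ckpt)).train(resume=True)
```

Resuming from the same environment this way avoids the mismatch described above.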

@KDLPro
Author

KDLPro commented Oct 14, 2024

I see. Apparently the model is also missing from my library.

@KDLPro
Author

KDLPro commented Oct 14, 2024

Also, how do I calculate model accuracy?

@sergiuwaxmann
Member

@KDLPro What do you mean the model is missing from your library? Model mAP is shown in the model list.

@KDLPro
Author

KDLPro commented Oct 14, 2024

image
It's missing; however, I can access it directly through the link.

@KDLPro
Author

KDLPro commented Oct 14, 2024

Ah, so mAP is equivalent to model accuracy?

@pderrenger
Member

Hello @KDLPro! Yes, mAP (mean Average Precision) is often used as a measure of model accuracy in object detection tasks. It evaluates how well the model predicts bounding boxes and classifies objects.
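To make "how well the model predicts bounding boxes" concrete, here is a minimal, library-free sketch of the IoU (intersection over union) overlap measure that mAP is built on; the `(x1, y1, x2, y2)` box format is an assumption for illustration:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# At mAP@50, a prediction counts as correct when its IoU with a
# ground-truth box of the same class is at least 0.5.
```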

If you're experiencing issues with your model not appearing in the library, try refreshing the page or clearing your cache. If the problem persists, it might be a temporary glitch. You can still access it directly through the link, which is good.

Feel free to reach out if you have more questions or need further assistance. 😊

@KDLPro
Author

KDLPro commented Oct 14, 2024

Gotcha, but which is more commonly used when talking about model accuracy alone? Since there's mAP50 and mAP50-95, do I have to calculate model accuracy by averaging the two values, @pderrenger?

@pderrenger
Member

Hello @KDLPro! When discussing model accuracy in object detection, mAP@50 and mAP@50-95 are both important metrics:

  • mAP@50: This measures precision and recall at an IoU threshold of 0.5. It's often used for a quick assessment of model performance.
  • mAP@50-95: This is a more comprehensive metric, averaging mAP across multiple IoU thresholds (from 0.5 to 0.95 in increments of 0.05). It provides a more detailed view of the model's accuracy.

Typically, mAP@50-95 is considered a more robust measure of accuracy as it evaluates the model's performance across various levels of overlap. You don't need to average the two; instead, use them to understand different aspects of your model's performance.
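A tiny numeric sketch of the relationship (the per-threshold AP values below are made up purely for illustration): mAP@50-95 is the mean of AP over the ten IoU thresholds 0.50, 0.55, …, 0.95, of which mAP@50 is just the first entry.

```python
# The ten IoU thresholds, 0.50 to 0.95 in steps of 0.05.
thresholds = [0.50 + 0.05 * i for i in range(10)]

# Hypothetical AP at each threshold for one class; AP drops as the
# overlap requirement tightens.
ap_at = {t: round(0.90 - 0.06 * i, 2) for i, t in enumerate(thresholds)}

map50 = ap_at[0.50]                           # mAP@50: AP at IoU 0.5 only
map50_95 = sum(ap_at.values()) / len(ap_at)   # mAP@50-95: mean over all ten
```

Note that `map50_95` is not the average of the two reported numbers; it already averages across all thresholds, which is why the two metrics are reported separately.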

If you have further questions, feel free to ask! 😊

@KDLPro
Author

KDLPro commented Oct 15, 2024

Gotcha, thank you very much!

@KDLPro KDLPro closed this as completed Oct 15, 2024
@KDLPro KDLPro reopened this Oct 15, 2024
@KDLPro
Author

KDLPro commented Oct 15, 2024

Also, I have one final question. How would I be able to improve the performance of the model given that there is underfitting in the object loss and minor overfitting in the box loss?

@ultralytics ultralytics deleted a comment from KDLPro Oct 15, 2024
@ultralytics ultralytics deleted a comment from pderrenger Oct 15, 2024
@pderrenger
Member

Hello! To improve your model's performance, especially with underfitting in object loss and minor overfitting in box loss, consider the following strategies:

  1. Data Augmentation: Enhance your dataset with techniques like flipping, rotation, and scaling to increase diversity and help the model generalize better.

  2. Learning Rate Adjustment: Experiment with different learning rates. A learning rate that's too high can cause underfitting, while a lower rate might help the model converge better.

  3. Regularization: Implement techniques like dropout or L2 regularization to reduce overfitting.

  4. Model Architecture: Try using a more complex model if your current one is too simple, or simplify it if it's too complex for your dataset.

  5. More Data: If possible, increase the size of your training dataset to provide more examples for the model to learn from.

  6. Hyperparameter Tuning: Adjust other hyperparameters such as batch size, optimizer, and epochs to find the optimal configuration.

Feel free to experiment with these suggestions and see which combination works best for your specific case. If you have more questions, just let me know! 😊
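As a concrete starting point for points 1–3 and 6 above, here is a hedged sketch of training overrides. The specific values are illustrative, not recommendations, and should be tuned per dataset; the argument names follow the Ultralytics train settings:

```python
# Illustrative override values -- tune for your own dataset.
overrides = dict(
    epochs=150,           # train longer to address underfitting
    batch=16,
    lr0=0.005,            # lower initial learning rate
    weight_decay=0.0005,  # L2-style regularization against overfitting
    fliplr=0.5,           # augmentation: horizontal-flip probability
    degrees=10.0,         # augmentation: random rotation range
    scale=0.5,            # augmentation: random scaling gain
)

# Hypothetical usage (requires the ultralytics package and your data.yaml):
# from ultralytics import YOLO
# YOLO("yolov8n.pt").train(data="data.yaml", **overrides)
```

Change one knob at a time and compare the resulting loss curves, so you can attribute any improvement to a specific adjustment.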
