
pretrained model evaluation? and train from scratch #358

Closed
leeyegy opened this issue Jan 22, 2021 · 3 comments · Fixed by #359

Comments


leeyegy commented Jan 22, 2021

I have downloaded some pretrained models and tested them. However, I got some weird results, as follows:

| scope  | mIoU | mAcc | aAcc |
| ------ | ---- | ---- | ---- |
| global | 0.07 | xxx  | xxx  |

(dataset: ADE20K)

The command I used for testing is as follows:
CUDA_VISIBLE_DEVICES=4 python tools/test.py configs/deeplabv3/deeplabv3_r50-d8_512x512_80k_ade20k.py checkpoints/deeplabv3/deeplabv3_r50-d8_512x512_80k_ade20k_20200614_185028-0bb3f844.pth --eval mIoU

Also, I have tried training a model on ADE20K using a single GPU, and when I tested that model I got similarly weird results (very low mIoU).
The command I used for testing is as follows:

CUDA_VISIBLE_DEVICES=4 python tools/test.py configs/resnest/deeplabv3_s101-d8_512x512_160k_ade20k.py work_dirs/deeplabv3_s101-d8_512x512_160k_ade20k/latest.pth --eval mIoU

Any suggestions would be appreciated. :)
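
For reference, here is a quick single-image sanity check (a minimal sketch, assuming the mmseg 0.x Python API and that the config/checkpoint pair above actually match; the image path is just a placeholder for any ADE20K validation image):

```python
# Minimal sketch, assuming the mmsegmentation 0.x API; the image path below is
# only a placeholder for any ADE20K validation image.
from mmseg.apis import inference_segmentor, init_segmentor

config_file = 'configs/deeplabv3/deeplabv3_r50-d8_512x512_80k_ade20k.py'
checkpoint_file = 'checkpoints/deeplabv3/deeplabv3_r50-d8_512x512_80k_ade20k_20200614_185028-0bb3f844.pth'

# Build the model and load the downloaded weights.
model = init_segmentor(config_file, checkpoint_file, device='cuda:0')

# Predict on a single validation image; the result is a list with one H x W label map.
result = inference_segmentor(model, 'data/ade/ADEChallengeData2016/images/validation/ADE_val_00000001.jpg')
print(result[0].shape, result[0].min(), result[0].max())
```

If the predictions look reasonable here but tools/test.py still reports near-zero mIoU, the problem is more likely in the evaluation path than in the weights themselves.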

@zhangxiaobaibai

I also have this problem.

@zhangxiaobaibai

I can't even get normal results when using the official pretrained models.

@yamengxi
Collaborator

Sorry for the bug; it is fixed here.

aravind-h-v pushed a commit to aravind-h-v/mmsegmentation that referenced this issue Mar 27, 2023
* Initial support for mps in Stable Diffusion pipeline.

* Initial "warmup" implementation when using mps.

* Make some deterministic tests pass with mps.

* Disable training tests when using mps.

* SD: generate latents in CPU then move to device.

This is especially important when using the mps device, because
generators are not supported there. See for example
pytorch/pytorch#84288.

In addition, the other pipelines seem to use the same approach: generate
the random samples then move to the appropriate device.

After this change, generating an image in MPS produces the same result
as when using the CPU, if the same seed is used.
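
(A rough illustration of that approach, with made-up shapes and names rather than the pipeline's actual code:)

```python
# Rough sketch of "generate on CPU, then move to the device"; the latent shape
# and variable names here are illustrative, not the pipeline's actual code.
import torch

device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
generator = torch.Generator(device="cpu").manual_seed(0)  # mps generators were unsupported

latents = torch.randn((1, 4, 64, 64), generator=generator, device="cpu")
latents = latents.to(device)  # the same seed now gives the same image on CPU and MPS
```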

* Remove prints.

* Pass AutoencoderKL test_output_pretrained with mps.

Sampling from `posterior` must be done in CPU.

* Style

* Do not use torch.long for log op in mps device.

* Perform incompatible padding ops in CPU.

UNet tests now pass.
See pytorch/pytorch#84535
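
(A generic sketch of that kind of CPU fallback; the helper name and padding mode are illustrative assumptions, not the actual change:)

```python
# Generic sketch of a CPU fallback for a padding op that mps did not support at
# the time; the helper name and padding mode are illustrative assumptions.
import torch
import torch.nn.functional as F

def pad_with_cpu_fallback(x: torch.Tensor, pad, mode: str = "reflect") -> torch.Tensor:
    if x.device.type == "mps":
        # Run the unsupported op on CPU, then move the result back to the device.
        return F.pad(x.cpu(), pad, mode=mode).to(x.device)
    return F.pad(x, pad, mode=mode)
```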

* Style: fix import order.

* Remove unused symbols.

* Remove MPSWarmupMixin, do not apply automatically.

We do apply warmup in the tests, but not during normal use.
This adopts some PR suggestions by @patrickvonplaten.

* Add comment for mps fallback to CPU step.

* Add README_mps.md for mps installation and use.

* Apply `black` to modified files.

* Restrict README_mps to SD, show measures in table.

* Make PNDM indexing compatible with mps.

Addresses open-mmlab#239.

* Do not use float64 when using LDMScheduler.

Fixes open-mmlab#358.
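
(For context, the mps backend has no float64 kernels, so scheduler tensors have to stay in float32 there. A trivial illustration, not the scheduler's actual code:)

```python
# Trivial illustration (not the scheduler's actual code): pick float32 on mps,
# since that backend has no float64 support.
import torch

device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
dtype = torch.float32 if device.type == "mps" else torch.float64
timesteps = torch.arange(1000, dtype=dtype, device=device)
```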

* Fix typo identified by @patil-suraj

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* Adapt example to new output style.

* Restore 1:1 results reproducibility with CompVis.

However, mps latents need to be generated in CPU because generators
don't work in the mps device.

* Move PyTorch nightly to requirements.

* Adapt `test_scheduler_outputs_equivalence` to MPS.

* mps: skip training tests instead of ignoring silently.

* Make VQModel tests pass on mps.

* mps ddim tests: warmup, increase tolerance.

* ScoreSdeVeScheduler indexing made mps compatible.

* Make ldm pipeline tests pass using warmup.

* Style

* Simplify casting as suggested in PR.

* Add Known Issues to readme.

* `isort` import order.

* Remove _mps_warmup helpers from ModelMixin.

And just make changes to the tests.

* Skip tests using unittest decorator for consistency.

* Remove temporary var.

* Remove spurious blank space.

* Remove unused symbol.

* Remove README_mps.

Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

aravind-h-v pushed a commit to aravind-h-v/mmsegmentation that referenced this issue Mar 27, 2023
* Fix LMS scheduler indexing in `add_noise` open-mmlab#358.

* Fix DDIM and DDPM indexing with mps device.

* Verify format is PyTorch before using `.to()`

sibozhang pushed a commit to sibozhang/mmsegmentation that referenced this issue Mar 22, 2024
…pen-mmlab#358)

* add unittest on repr for LoadHVULabel and SampleFrames in loading.py

* add repr unittest for the functions in loading.py
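
(A rough illustration of such a repr test, assuming mmaction2-style transform classes; the constructor arguments are illustrative:)

```python
# Rough illustration of a __repr__ unit test for a loading transform; assumes
# the mmaction2-style SampleFrames class, with illustrative constructor arguments.
from mmaction.datasets.pipelines import SampleFrames

def test_sample_frames_repr():
    transform = SampleFrames(clip_len=8, frame_interval=2, num_clips=1)
    # The transform's repr should identify the class and echo its key arguments.
    assert repr(transform).startswith('SampleFrames(')
    assert 'clip_len=8' in repr(transform)
```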