ModuleNotFoundError: No module named 'torchtext.legacy' #8

Closed
chbfiv opened this issue Nov 24, 2022 · 15 comments

@chbfiv

chbfiv commented Nov 24, 2022

I walked through the README and got this error. I didn't use conda to install PyTorch, though; I might try that instead.

!python scripts/txt2img.py --prompt "a professional photograph of an astronaut riding a horse" --ckpt models/ldm/768-v-ema.ckpt --config configs/stable-diffusion/v2-inference-v.yaml --H 768 --W 768 
Traceback (most recent call last):
  File "scripts/txt2img.py", line 11, in <module>
    from pytorch_lightning import seed_everything
  File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/__init__.py", line 20, in <module>
    from pytorch_lightning import metrics  # noqa: E402
  File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/metrics/__init__.py", line 15, in <module>
    from pytorch_lightning.metrics.classification import (  # noqa: F401
  File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/metrics/classification/__init__.py", line 14, in <module>
    from pytorch_lightning.metrics.classification.accuracy import Accuracy  # noqa: F401
  File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/metrics/classification/accuracy.py", line 18, in <module>
    from pytorch_lightning.metrics.utils import deprecated_metrics, void
  File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/metrics/utils.py", line 29, in <module>
    from pytorch_lightning.utilities import rank_zero_deprecation
  File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/utilities/__init__.py", line 18, in <module>
    from pytorch_lightning.utilities.apply_func import move_data_to_device  # noqa: F401
  File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/utilities/apply_func.py", line 31, in <module>
    from torchtext.legacy.data import Batch
ModuleNotFoundError: No module named 'torchtext.legacy'

https://colab.research.google.com/drive/10jKS9pAB2bdN3SHekZzoKzm4jo2F4W1Q?usp=sharing
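
A quick way to confirm the root cause on the runtime (a minimal diagnostic sketch, not from the original repro): old pytorch_lightning releases do `from torchtext.legacy.data import Batch`, but torchtext removed the legacy submodule in 0.12, so that import can only succeed on older torchtext.

# Hypothetical diagnostic: check whether torchtext still ships the legacy submodule.
import importlib.util
import torchtext

print("torchtext version:", torchtext.__version__)
print("torchtext.legacy present:",
      importlib.util.find_spec("torchtext.legacy") is not None)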

@FrancescoSaverioZuppichini

same

@woctezuma

Ditto after running the following on Google Colab:

%cd /content
!git clone https://github.com/Stability-AI/stablediffusion.git
%cd /content/stablediffusion
%pip install -q -r requirements.txt

!wget https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/main/512-base-ema.ckpt

!python scripts/txt2img.py \
 --config configs/stable-diffusion/v2-inference.yaml \
 --ckpt 512-base-ema.ckpt \
 --prompt "a professional photograph of an astronaut riding a horse" 

@woctezuma

woctezuma commented Nov 24, 2022

If you run:

%pip install -q torchtext==0.9

then you get a different error:

/usr/local/lib/python3.7/dist-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: libtorch_cuda_cu.so: cannot open shared object file: No such file or directory
  warn(f"Failed to load image Python extension: {e}")
Traceback (most recent call last):
  File "scripts/txt2img.py", line 10, in <module>
    from torchvision.utils import make_grid
  File "/usr/local/lib/python3.7/dist-packages/torchvision/__init__.py", line 7, in <module>
    from torchvision import models
  File "/usr/local/lib/python3.7/dist-packages/torchvision/models/__init__.py", line 18, in <module>
    from . import quantization
  File "/usr/local/lib/python3.7/dist-packages/torchvision/models/quantization/__init__.py", line 3, in <module>
    from .mobilenet import *
  File "/usr/local/lib/python3.7/dist-packages/torchvision/models/quantization/mobilenet.py", line 1, in <module>
    from .mobilenetv2 import *  # noqa: F401, F403
  File "/usr/local/lib/python3.7/dist-packages/torchvision/models/quantization/mobilenetv2.py", line 6, in <module>
    from torch.ao.quantization import QuantStub, DeQuantStub
ModuleNotFoundError: No module named 'torch.ao'

If you run:

%pip install -q torchtext==0.10

then you get a different error:

ModuleNotFoundError: No module named 'torch.ao.quantization'

If you run:

%pip install -q torchtext==0.11

then you get a different error:

ImportError: cannot import name 'QuantStub' from 'torch.ao.quantization' (/usr/local/lib/python3.7/dist-packages/torch/ao/quantization/__init__.py)

If you run:

%pip install -q torchtext==0.12

then you get the error:

/usr/local/lib/python3.7/dist-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: libtorch_cuda_cu.so: cannot open shared object file: No such file or directory
  warn(f"Failed to load image Python extension: {e}")
Traceback (most recent call last):
  File "scripts/txt2img.py", line 11, in <module>
    from pytorch_lightning import seed_everything
  File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/__init__.py", line 20, in <module>
    from pytorch_lightning import metrics  # noqa: E402
  File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/metrics/__init__.py", line 15, in <module>
    from pytorch_lightning.metrics.classification import (  # noqa: F401
  File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/metrics/classification/__init__.py", line 14, in <module>
    from pytorch_lightning.metrics.classification.accuracy import Accuracy  # noqa: F401
  File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/metrics/classification/accuracy.py", line 18, in <module>
    from pytorch_lightning.metrics.utils import deprecated_metrics, void
  File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/metrics/utils.py", line 29, in <module>
    from pytorch_lightning.utilities import rank_zero_deprecation
  File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/utilities/__init__.py", line 18, in <module>
    from pytorch_lightning.utilities.apply_func import move_data_to_device  # noqa: F401
  File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/utilities/apply_func.py", line 31, in <module>
    from torchtext.legacy.data import Batch
ModuleNotFoundError: No module named 'torchtext.legacy'

@0xdevalias

0xdevalias commented Nov 24, 2022

Googling the error message No module named 'torchtext.legacy':

https://www.datasciencelearner.com/modulenotfounderror-no-module-named-torchtext-legacy-solved/

The above incorrect imports work properly in lower versions of torchtext (0.10.0 or lower), because those versions have the same directory structure. We will use the pip package manager to downgrade the torchtext module. Here is the command for that:

pip install torchtext==0.10.0
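
For context, the downgrade works because torchtext up to 0.11 still shipped the torchtext.legacy package; 0.12 removed it. Newer pytorch-lightning releases drop the hard dependency, but the compatible-import pattern looks roughly like this (a sketch of the pattern, not the exact upstream code):

# Guarded import that tolerates both old and new torchtext layouts.
try:
    from torchtext.legacy.data import Batch  # torchtext 0.9 - 0.11
except ImportError:
    try:
        from torchtext.data import Batch  # torchtext <= 0.8
    except ImportError:
        Batch = None  # torchtext >= 0.12: the legacy Batch class was removed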

Potentially related:


If you run:

%pip install -q torchtext==0.10

then you get a different error:

ModuleNotFoundError: No module named 'torch.ao.quantization'

Potentially related/helpful:


Here are the relevant releases between those versions:

@0xdevalias

0xdevalias commented Nov 24, 2022

https://github.com/pytorch/text#installation

The following are the corresponding torchtext versions and supported Python versions.

PyTorch version | torchtext version | Supported Python version
--------------- | ----------------- | ------------------------
nightly build   | main              | >=3.7, <=3.10
1.13.0          | 0.14.0            | >=3.7, <=3.10
1.12.0          | 0.13.0            | >=3.7, <=3.10
1.11.0          | 0.12.0            | >=3.6, <=3.9
1.10.0          | 0.11.0            | >=3.6, <=3.9
1.9.1           | 0.10.1            | >=3.6, <=3.9
1.9             | 0.10              | >=3.6, <=3.9
1.8.1           | 0.9.1             | >=3.6, <=3.9
1.8             | 0.9               | >=3.6, <=3.9
1.7.1           | 0.8.1             | >=3.6, <=3.9
1.7             | 0.8               | >=3.6, <=3.8
1.6             | 0.7               | >=3.6, <=3.8
1.5             | 0.6               | >=3.5, <=3.8
1.4             | 0.5               | 2.7, >=3.5, <=3.8
0.4 and below   | 0.2.3             | 2.7, >=3.5, <=3.8

Based on that, it looks like you presumably want to be using one of these combinations:

PyTorch version | torchtext version | Supported Python version
--------------- | ----------------- | ------------------------
1.10.0          | 0.11.0            | >=3.6, <=3.9
1.9.1           | 0.10.1            | >=3.6, <=3.9

@woctezuma What version of PyTorch are you using alongside torchtext 0.10? As I suspect that will likely be the source of your ModuleNotFoundError: No module named 'torch.ao.quantization' error.
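
A quick way to answer that on a Colab runtime (a small diagnostic sketch; it just prints the installed versions to compare against the table above):

# Print installed versions of the relevant packages.
import torch
import torchtext
import pytorch_lightning

print("torch:", torch.__version__)
print("torchtext:", torchtext.__version__)
print("pytorch_lightning:", pytorch_lightning.__version__)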

@FrancescoSaverioZuppichini

FrancescoSaverioZuppichini commented Nov 24, 2022

Guys, how do you test and release software? What's your DevOps process? It looks like there was no testing before the announcement. I know people in the DS world avoid SE best practices. Let me know if you need help in the future with testing and with giving the community correct instructions, to really make ML more open.

@backnotprop

The issue isn't torchtext; it's your version of pytorch-lightning. Here's the most recent version that should work:

!pip install pytorch-lightning==1.8.3.post0
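
After upgrading, a quick sanity check (hypothetical, not from the thread) that the problematic import path is gone; pytorch-lightning 1.8 no longer routes its utilities through torchtext.legacy:

# Should succeed regardless of which torchtext version is installed.
import pytorch_lightning
from pytorch_lightning import seed_everything

print(pytorch_lightning.__version__)  # expect 1.8.3.post0
seed_everything(42)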

@woctezuma

woctezuma commented Nov 24, 2022

Yes, #30 (for newer versions of pytorch-lightning and torchmetrics) and #10 (for invisible-watermark) make the code run on Colab. However, there is a memory overflow on Colab, it seems. 😰 Maybe xformers are required.

@backnotprop

@woctezuma try the High-RAM runtime type if you have it:

In Colab : Runtime-> Change runtime type -> Runtime shape -> High-RAM

Also, for xformers there are precompiled versions you can grab if you don't want to wait:
https://github.com/TheLastBen/fast-stable-diffusion/tree/main/precompiled

@woctezuma

woctezuma commented Nov 24, 2022

Thanks! I will look into these options!

Edit:

In Colab : Runtime-> Change runtime type -> Runtime shape -> High-RAM

I don't see this in the free version of Google Colab.

@woctezuma

woctezuma commented Nov 25, 2022

I don't think it is possible to run this script on the free version of Google Colab. Thanks for the help though.

For reference, this is what my Google Colab code looked like:

%cd /content
!git clone https://github.com/Stability-AI/stablediffusion.git
%cd /content/stablediffusion

!curl https://raw.githubusercontent.com/backnotprop/stablediffusion/backnotprop-patch-pytorch_lighting/requirements.txt -O
%pip install -q -r requirements.txt

%pip install -q invisible-watermark

!nvidia-smi #T4
%pip install https://github.com/TheLastBen/fast-stable-diffusion/raw/main/precompiled/T4/xformers-0.0.13.dev0-py3-none-any.whl

!wget https://huggingface.co/stabilityai/stable-diffusion-2-base/resolve/main/512-base-ema.ckpt

!python scripts/txt2img.py \
 --config configs/stable-diffusion/v2-inference.yaml \
 --ckpt 512-base-ema.ckpt \
 --n_samples 1 --H 256 --W 256 \
 --prompt "a professional photograph of an astronaut riding a horse" 

However, the good news is that the model is being integrated into Hugging Face's Diffusers.
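
If you only need inference on Colab, that route may be much simpler. A minimal sketch, assuming a recent diffusers release and the stabilityai/stable-diffusion-2-base checkpoint on the Hugging Face Hub (pip install diffusers transformers accelerate):

# Minimal Diffusers sketch for the 512-base model (assumption: recent diffusers API).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-base", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

image = pipe("a professional photograph of an astronaut riding a horse").images[0]
image.save("astronaut.png")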

@0xdevalias

In Colab : Runtime-> Change runtime type -> Runtime shape -> High-RAM

I don't see this in the free version of Google Colab.

For reference: (screenshots of the Colab runtime-type settings)

@chbfiv
Author

chbfiv commented Nov 26, 2022

Thanks! This resolved it, together with the additional Colab recommendations above:

!pip install pytorch-lightning==1.8.3.post0

@thusinh1969

Guys, how do you test and release software? What's your DevOps process? It looks like there was no testing before the announcement. I know people in the DS world avoid SE best practices. Let me know if you need help in the future with testing and with giving the community correct instructions, to really make ML more open.

Same. I found this Facebook package released in a hilariously terrible way! Facebook should NEVER release such a product to the public, even for a test like this. Sorry for being so upset about this.
