is too long for context length #26

Open
NoteToSelfFindGoodNickname opened this issue Feb 2, 2023 · 1 comment

@NoteToSelfFindGoodNickname

venv "C:\Users\tomwe\sdwebui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Commit hash: 79d802b48a70d2c7e4ca56639833171ac6996714
Installing requirements for Web UI
Installing requirements for Batch Face Swap

Initializing Riffusion

#######################################################################################################
Initializing Dreambooth
If submitting an issue on github, please provide the below text for debugging purposes:

Python revision: 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Dreambooth revision: c2269b8585d994efa31c6582fc19a890253c804e
SD-WebUI revision: 79d802b48a70d2c7e4ca56639833171ac6996714

Checking Dreambooth requirements...
[+] bitsandbytes version 0.35.0 installed.
[+] diffusers version 0.10.2 installed.
[+] transformers version 4.25.1 installed.
[+] xformers version 0.0.14.dev0 installed.
[+] torch version 1.12.1+cu113 installed.
[+] torchvision version 0.13.1+cu113 installed.

#######################################################################################################

loading Smart Crop reqs from C:\Users\tomwe\sdwebui\extensions\sd_smartprocess\requirements.txt
Checking Smart Crop requirements.

Launching Web UI with arguments: --xformers --precision full --no-half
SD-Webui API layer loaded
Checkpoint digi_angewomon.safetensors not found; loading fallback dreamlike-photoreal-2.0.ckpt
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Loading weights [fc52756a74] from C:\Users\tomwe\sdwebui\models\Stable-diffusion\dreamlike-photoreal-2.0.ckpt
Applying xformers cross attention optimization.
Textual inversion embeddings loaded(5): CUserstomweDesktopauswahl, ie18, ie19, ie24, ie25
Model loaded in 2.8s (0.2s create model, 1.1s load weights).
[tag-editor] Settings has been read from config.json
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
Nothing to do.
Nothing to do.
Loading captioning models...
Loading CLIP interrogator...
Loading CLIP model from ViT-H-14/laion2b_s32b_b79k
Loading BLIP model...
load checkpoint from https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_large_caption.pth
Loading CLIP model...
Loaded CLIP model and data in 10.24 seconds.
Loading YOLOv5 interrogator...
Using cache found in C:\Users\tomwe/.cache\torch\hub\ultralytics_yolov5_master
YOLOv5 2023-1-24 Python-3.10.6 torch-1.12.1+cu113 CUDA:0 (NVIDIA GeForce RTX 3090, 24576MiB)

Fusing layers...
YOLOv5m6 summary: 378 layers, 35704908 parameters, 0 gradients
Adding AutoShape...
Preprocessing...
0%| | 0/177 [00:00<?, ?it/s]Processed: '(1.jpg - None)
1%|▍ | 1/177 [00:03<10:21, 3.53s/it]Processed: '(10.jpg - None)
2%|█▍ | 3/177 [00:05<05:00, 1.73s/it]Processed: '(11.jpg - None)
3%|██▎ | 5/177 [00:07<04:04, 1.42s/it]Processed: '(12.jpg - None)
4%|███▏ | 7/177 [00:10<03:39, 1.29s/it]Processed: '(13.jpg - None)
5%|████▏ | 9/177 [00:12<03:24, 1.21s/it]Processed: '(14.jpg - None)
7%|█████▍ | 12/177 [00:17<03:56, 1.43s/it]
Traceback (most recent call last):
File "C:\Users\tomwe\sdwebui\extensions\sd_smartprocess\smartprocess.py", line 289, in preprocess
im_data = crop_clip.get_center(img, prompt=short_caption)
File "C:\Users\tomwe\sdwebui\extensions\sd_smartprocess\clipcrop.py", line 100, in get_center
text_encoded = model.encode_text(clip.tokenize(prompt).to(device))
File "C:\Users\tomwe\sdwebui\venv\lib\site-packages\clip\clip.py", line 234, in tokenize
raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
RuntimeError: Input a woman is getting her pussy licked by a woman who is brushing her pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy is too long for context length 77
Loading captioning models...
Loading CLIP interrogator...
Loading CLIP model from ViT-H-14/laion2b_s32b_b79k
Loading BLIP model...
load checkpoint from https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_large_caption.pth
Loading CLIP model...
Loaded CLIP model and data in 10.24 seconds.
Loading YOLOv5 interrogator...
Using cache found in C:\Users\tomwe/.cache\torch\hub\ultralytics_yolov5_master
YOLOv5 2023-1-24 Python-3.10.6 torch-1.12.1+cu113 CUDA:0 (NVIDIA GeForce RTX 3090, 24576MiB)

Fusing layers...
YOLOv5m6 summary: 378 layers, 35704908 parameters, 0 gradients
Adding AutoShape...
Preprocessing...
0%| | 0/177 [00:00<?, ?it/s]Processed: '(1.jpg - None)
1%|▍ | 1/177 [00:03<10:18, 3.51s/it]Processed: '(10.jpg - None)
2%|█▍ | 3/177 [00:07<06:32, 2.25s/it]Processed: '(11.jpg - None)
3%|██▎ | 5/177 [00:10<05:52, 2.05s/it]Processed: '(12.jpg - None)
4%|███▏ | 7/177 [00:14<05:30, 1.94s/it]Processed: '(13.jpg - None)
5%|████▏ | 9/177 [00:18<05:17, 1.89s/it]Processed: '(14.jpg - None)
7%|█████▍ | 12/177 [00:25<05:52, 2.14s/it]
Traceback (most recent call last):
File "C:\Users\tomwe\sdwebui\extensions\sd_smartprocess\smartprocess.py", line 289, in preprocess
im_data = crop_clip.get_center(img, prompt=short_caption)
File "C:\Users\tomwe\sdwebui\extensions\sd_smartprocess\clipcrop.py", line 100, in get_center
text_encoded = model.encode_text(clip.tokenize(prompt).to(device))
File "C:\Users\tomwe\sdwebui\venv\lib\site-packages\clip\clip.py", line 234, in tokenize
raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
RuntimeError: Input a woman is getting her pussy licked by a woman who is brushing her pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy is too long for context length 77
Loading captioning models...
Loading CLIP interrogator...
Loading CLIP model from ViT-H-14/laion2b_s32b_b79k
Loading BLIP model...
load checkpoint from https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_large_caption.pth
Loading CLIP model...
Loaded CLIP model and data in 9.47 seconds.
Loading YOLOv5 interrogator...
Using cache found in C:\Users\tomwe/.cache\torch\hub\ultralytics_yolov5_master
YOLOv5 2023-1-24 Python-3.10.6 torch-1.12.1+cu113 CUDA:0 (NVIDIA GeForce RTX 3090, 24576MiB)

Fusing layers...
YOLOv5m6 summary: 378 layers, 35704908 parameters, 0 gradients
Adding AutoShape...
Preprocessing...
0%| | 0/177 [00:00<?, ?it/s]Processed: '(1.jpg - None)
1%|▍ | 1/177 [00:03<10:22, 3.54s/it]Processed: '(10.jpg - None)
2%|█▍ | 3/177 [00:07<06:38, 2.29s/it]Processed: '(11.jpg - None)
3%|██▎ | 5/177 [00:10<05:47, 2.02s/it]Processed: '(12.jpg - None)
4%|███▏ | 7/177 [00:14<05:24, 1.91s/it]Processed: '(13.jpg - None)
5%|████▏ | 9/177 [00:17<05:06, 1.83s/it]Processed: '(14.jpg - None)
7%|█████▍ | 12/177 [00:25<05:47, 2.11s/it]
Traceback (most recent call last):
File "C:\Users\tomwe\sdwebui\extensions\sd_smartprocess\smartprocess.py", line 289, in preprocess
im_data = crop_clip.get_center(img, prompt=short_caption)
File "C:\Users\tomwe\sdwebui\extensions\sd_smartprocess\clipcrop.py", line 100, in get_center
text_encoded = model.encode_text(clip.tokenize(prompt).to(device))
File "C:\Users\tomwe\sdwebui\venv\lib\site-packages\clip\clip.py", line 234, in tokenize
raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
RuntimeError: Input a woman is getting her pussy licked by a woman who is brushing her pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy is too long for context length 77
Loading captioning models...
Loading CLIP interrogator...
Loading CLIP model from ViT-L-14/openai
Loading BLIP model...
load checkpoint from https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_large_caption.pth
Loading CLIP model...
Loaded CLIP model and data in 4.95 seconds.
Loading YOLOv5 interrogator...
Using cache found in C:\Users\tomwe/.cache\torch\hub\ultralytics_yolov5_master
YOLOv5 2023-1-24 Python-3.10.6 torch-1.12.1+cu113 CUDA:0 (NVIDIA GeForce RTX 3090, 24576MiB)

Fusing layers...
YOLOv5m6 summary: 378 layers, 35704908 parameters, 0 gradients
Adding AutoShape...
Preprocessing...
0%| | 0/177 [00:00<?, ?it/s]Processed: '(1.jpg - None)
1%|▍ | 1/177 [00:02<06:50, 2.33s/it]Processed: '(10.jpg - None)
2%|█▍ | 3/177 [00:04<04:01, 1.39s/it]Processed: '(11.jpg - None)
3%|██▎ | 5/177 [00:06<03:33, 1.24s/it]Processed: '(12.jpg - None)
4%|███▏ | 7/177 [00:08<03:16, 1.15s/it]Processed: '(13.jpg - None)
5%|████▏ | 9/177 [00:10<03:06, 1.11s/it]Processed: '(14.jpg - None)
7%|█████▍ | 12/177 [00:15<03:28, 1.26s/it]
Traceback (most recent call last):
File "C:\Users\tomwe\sdwebui\extensions\sd_smartprocess\smartprocess.py", line 289, in preprocess
im_data = crop_clip.get_center(img, prompt=short_caption)
File "C:\Users\tomwe\sdwebui\extensions\sd_smartprocess\clipcrop.py", line 100, in get_center
text_encoded = model.encode_text(clip.tokenize(prompt).to(device))
File "C:\Users\tomwe\sdwebui\venv\lib\site-packages\clip\clip.py", line 234, in tokenize
raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
RuntimeError: Input a woman is getting her pussy licked by a woman who is brushing her pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy pussy is too long for context length 77
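The failing call is clip.tokenize() inside the Smart Process extension's clipcrop.py: OpenAI CLIP encodes every caption into a fixed 77-token context, and the degenerate BLIP caption (the same word repeated dozens of times) overflows it, so tokenize() raises instead of cropping. Below is a minimal sketch of a workaround, not the extension's own code: the openai/CLIP package seen in the traceback (venv\lib\site-packages\clip\clip.py) accepts truncate=True, which clips an over-long prompt to the context window and keeps the end-of-text token rather than raising. The model choice and caption here are placeholders.

import torch
import clip  # openai/CLIP, the same package that raises in the traceback

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-L/14", device=device)  # placeholder model choice

caption = "a woman is brushing her hair " * 20  # stand-in for an over-long BLIP caption
# truncate=True cuts the prompt down to the 77-token context instead of raising
tokens = clip.tokenize(caption, truncate=True).to(device)
with torch.no_grad():
    text_encoded = model.encode_text(tokens)
print(text_encoded.shape)  # torch.Size([1, 768]) for ViT-L/14

The same flag could presumably be applied to the clip.tokenize(prompt) call at clipcrop.py line 100, or the caption could be shortened before it reaches the cropper.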

@NoteToSelfFindGoodNickname (Author)
I did not create these repeated "pussy" tags myself; they were generated by Smart Preprocess because I used the cropping option.
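For reference, the overflow can also be detected before cropping. A minimal sketch (a hypothetical helper, not part of the extension) that uses only the BPE tokenizer bundled with the same openai/CLIP package:

from clip.simple_tokenizer import SimpleTokenizer

_tokenizer = SimpleTokenizer()

def fits_clip_context(caption: str, context_length: int = 77) -> bool:
    # Two slots are reserved for the start-of-text and end-of-text tokens,
    # so the caption itself may use at most context_length - 2 BPE tokens.
    return len(_tokenizer.encode(caption)) <= context_length - 2

print(fits_clip_context("a woman brushing her hair"))  # True
print(fits_clip_context("tag " * 100))                 # False: mimics a repeating BLIP caption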
