AttributeError: 'NoneType' object has no attribute 'to'. What should I do? Thanks a lot. #580

Closed
chimelea666 opened this issue Nov 16, 2024 · 0 comments


17:33:18-247491 INFO Training started with config file: F:\lora-scripts-v1.10.0\config\autosave\20241116-173318.toml
17:33:18-247491 INFO Task 3c7688bb-72ce-4607-b60c-42900dacbf2b created
2024-11-16 17:33:25 INFO Loading settings from F:\lora-scripts-v1.10.0\config\autosave\20241116-173318.toml... train_util.py:4435
INFO F:\lora-scripts-v1.10.0\config\autosave\20241116-173318 train_util.py:4454
2024-11-16 17:33:25 INFO Checking the state dict: Diffusers or BFL, dev or schnell flux_utils.py:62
INFO t5xxl_max_token_length: 512 flux_train_network.py:152
You are using the default legacy behaviour of the <class 'transformers.models.t5.tokenization_t5.T5Tokenizer'>. This is expected, and simply means that the legacy (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set legacy=False. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in huggingface/transformers#24565
2024-11-16 17:33:28 INFO Using DreamBooth method. train_network.py:315
INFO prepare images. train_util.py:1956
INFO get image size from name of cache files train_util.py:1873
100%|██████████| 38/38 [00:00<?, ?it/s]
INFO set image size from cache files: 38/38 train_util.py:1901
INFO found directory F:\lora-scripts\train\10_nml_xl\5_zkz contains 38 image files train_util.py:1903
read caption: 100%|██████████| 38/38 [00:00<00:00, 2437.77it/s]
WARNING No caption file found for 1 image. Training will continue without a caption for this image; if class tokens exist, they will be used. train_util.py:1934
WARNING F:\lora-scripts\train\10_nml_xl\5_zkz\ComfyUI_03235_.png train_util.py:1941
INFO 190 train images with repeating. train_util.py:1997
INFO 0 reg images. train_util.py:2000
WARNING no regularization images found train_util.py:2005
INFO [Dataset 0] config_util.py:567
batch_size: 1
resolution: (768, 896)
enable_bucket: False
network_multiplier: 1.0

                           [Subset 0 of Dataset 0]
                             image_dir: "F:\lora-scripts\train\10_nml_xl\5_zkz"
                             image_count: 38
                             num_repeats: 5
                             shuffle_caption: False
                             keep_tokens: 1
                             keep_tokens_separator:
                             caption_separator: ,
                             secondary_separator: None
                             enable_wildcard: False
                             caption_dropout_rate: 0.0
                             caption_dropout_every_n_epoches: 0
                             caption_tag_dropout_rate: 0.0
                             caption_prefix: None
                             caption_suffix: None
                             color_aug: False
                             flip_aug: True
                             face_crop_aug_range: None
                             random_crop: False
                             token_warmup_min: 1
                             token_warmup_step: 0
                             alpha_mask: False
                             custom_attributes: {}
                             is_reg: False
                             class_tokens: zkz
                             caption_extension: .txt


INFO [Dataset 0] config_util.py:573
INFO loading image sizes. train_util.py:923
100%|██████████| 38/38 [00:00<?, ?it/s]
INFO prepare dataset train_util.py:948
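For reference, the [Dataset 0] / [Subset 0] dump above corresponds to a dataset config block roughly like the following (a minimal sketch in sd-scripts' TOML dataset-config schema; only values visible in the log are filled in, and the actual autosaved file at config\autosave\20241116-173318.toml may differ):

```toml
# Reconstructed from the logged [Dataset 0] / [Subset 0] values (illustrative only)
[general]
enable_bucket = false          # log: enable_bucket: False

[[datasets]]
batch_size = 1                 # log: batch_size: 1
resolution = [768, 896]        # log: resolution: (768, 896)

  [[datasets.subsets]]
  image_dir = 'F:\lora-scripts\train\10_nml_xl\5_zkz'
  num_repeats = 5              # 38 images x 5 repeats = 190 train images
  class_tokens = 'zkz'
  caption_extension = '.txt'
  keep_tokens = 1
  shuffle_caption = false
  flip_aug = true
```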
INFO preparing accelerator train_network.py:369
accelerator device: cuda
INFO Checking the state dict: Diffusers or BFL, dev or schnell flux_utils.py:62
INFO Building Flux model dev from BFL checkpoint flux_utils.py:120
INFO Loading state dict from F:/lora-scripts-v1.10.0/sd-models/flux1-dev.safetensors flux_utils.py:137
INFO Loaded Flux: flux_utils.py:156
INFO Building CLIP flux_utils.py:176
INFO Loading state dict from E:/comfyui-auto/models/clip/flux_clip_l.safetensors flux_utils.py:269
INFO Loaded CLIP: flux_utils.py:272
INFO Loading state dict from E:/comfyui-auto/models/clip/t5xxl_fp16.safetensors flux_utils.py:317
INFO Loaded T5xxl: flux_utils.py:320
INFO Building AutoEncoder flux_utils.py:163
INFO Loading state dict from E:/comfyui-auto/models/vae/flux-vae-bf16.safetensors flux_utils.py:168
INFO Loaded AE: flux_utils.py:171
import network module: networks.lora_flux
2024-11-16 17:33:29 INFO [Dataset 0] train_util.py:2480
INFO caching latents with caching strategy. train_util.py:1048
INFO caching latents... train_util.py:1093
100%|██████████| 38/38 [00:00<00:00, 2427.26it/s]
INFO move vae and unet to cpu to save memory flux_train_network.py:205
INFO move text encoders to gpu flux_train_network.py:213
2024-11-16 17:33:36 INFO [Dataset 0] train_util.py:2502
INFO caching Text Encoder outputs with caching strategy. train_util.py:1227
INFO checking cache validity... train_util.py:1238
100%|██████████| 38/38 [00:00<00:00, 2431.15it/s]
INFO no Text Encoder outputs to cache train_util.py:1265
INFO move t5XXL back to cpu flux_train_network.py:253
2024-11-16 17:33:39 INFO move vae and unet back to original device flux_train_network.py:258
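The cpu/gpu shuffling in the lines above is the usual cache-then-offload pattern: each frozen model is only resident on the GPU while its outputs are being computed for the cache. A minimal PyTorch sketch of the idea (illustrative only; `model` and `batches` are hypothetical stand-ins, not sd-scripts APIs):

```python
import torch

def cache_with_offload(model, batches, device="cuda"):
    """Move a frozen model to the GPU, run it once over all batches to
    build a cache, then park it back on the CPU to free VRAM."""
    model.to(device)
    cache = []
    with torch.no_grad():
        for batch in batches:
            # keep cached outputs on the CPU so VRAM holds only the model
            cache.append(model(batch.to(device)).cpu())
    model.to("cpu")            # free VRAM for the next model in the pipeline
    torch.cuda.empty_cache()   # release the allocator's cached blocks
    return cache
```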
INFO create LoRA network. base dim (rank): 32, alpha: 16 lora_flux.py:594
INFO neuron dropout: p=None, rank dropout: p=None, module dropout: p=None lora_flux.py:595
INFO train all blocks only lora_flux.py:605
INFO create LoRA for Text Encoder 1: lora_flux.py:741
2024-11-16 17:33:40 INFO create LoRA for Text Encoder 1: 72 modules. lora_flux.py:744
INFO create LoRA for FLUX all blocks: 304 modules. lora_flux.py:765
INFO enable LoRA for text encoder: 72 modules lora_flux.py:911
INFO enable LoRA for U-Net: 304 modules lora_flux.py:916
FLUX: Gradient checkpointing enabled. CPU offload: False
prepare optimizer, data loader etc.
INFO Text Encoder 1 (CLIP-L): 72 modules, LR 1e-05 lora_flux.py:1018
INFO use AdamW optimizer | {} train_util.py:4788
override steps. steps for 40 epochs: 7600
enable full fp16 training.
running training
num train images * repeats: 190
num reg images: 0
num batches per epoch: 190
num epochs: 40
batch size per device: 1
gradient accumulation steps: 1
total optimization steps: 7600
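(The step count is straightforward arithmetic: 190 images-with-repeats × 40 epochs ÷ batch size 1 ÷ gradient accumulation 1 = 7600 optimization steps, matching the override message above.)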
steps: 0%| | 0/7600 [00:00<?, ?it/s]
2024-11-16 17:35:07 INFO unet dtype: torch.float16, device: cuda:0 train_network.py:1084
INFO text_encoder [0] dtype: torch.float16, device: cuda:0 train_network.py:1090
INFO text_encoder [1] dtype: torch.float16, device: cpu train_network.py:1090

epoch 1/40
2024-11-16 17:35:57 INFO epoch is incremented. current_epoch: 0, epoch: 1 train_util.py:715
(the line above is repeated 8 times in the log)
Traceback (most recent call last):
  File "F:\lora-scripts-v1.10.0\scripts\dev\flux_train_network.py", line 564, in <module>
    trainer.train(args)
  File "F:\lora-scripts-v1.10.0\scripts\dev\train_network.py", line 1165, in train
    encoded_text_encoder_conds = [c.to(weight_dtype) for c in encoded_text_encoder_conds]
  File "F:\lora-scripts-v1.10.0\scripts\dev\train_network.py", line 1165, in <listcomp>
    encoded_text_encoder_conds = [c.to(weight_dtype) for c in encoded_text_encoder_conds]
AttributeError: 'NoneType' object has no attribute 'to'
steps: 0%| | 0/7600 [00:52<?, ?it/s]
17:36:01-764187 ERROR Training failed
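The traceback shows that one entry of `encoded_text_encoder_conds` is `None` when the `.to(weight_dtype)` cast on line 1165 runs. That is consistent with the log: the T5-XXL outputs are cached and the encoder itself is parked on the CPU, so its slot in the conditioning list can legitimately still be `None` at that point. A guard of the following shape avoids the crash (a minimal sketch, assuming `None` entries should simply pass through; current sd-scripts handles this case, so updating lora-scripts/sd-scripts may fix it without a manual patch):

```python
# train_network.py, around line 1165: tolerate None entries, which occur
# when a text encoder's outputs come from the cache and the encoder is
# kept off the GPU (here: T5-XXL on CPU with full fp16 training enabled).
encoded_text_encoder_conds = [
    c.to(weight_dtype) if c is not None else None
    for c in encoded_text_encoder_conds
]
```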
