Important links:

Dataset link:

s3://instance.segmentation.aravind-1/every_dream_2_train_dataset.cpio
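
A minimal sketch for fetching and unpacking the dataset, assuming the AWS CLI is installed and configured with credentials for this bucket (the cpio and zstd packages installed below suggest the archive may also ship zstd-compressed; decompress with zstd -d first in that case):

aws s3 cp 's3://instance.segmentation.aravind-1/every_dream_2_train_dataset.cpio' .
cpio -idv < 'every_dream_2_train_dataset.cpio'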

EveryDream2 code:

https://github.com/victorchall/EveryDream2trainer.git

Env activation parts (for later sessions, after the one-time setup below):

Install basic dependencies:

sudo apt-get install cpio zstd aria2

Activate the env:

. /opt/anaconda/bin/activate
conda activate everydream
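
A quick sanity check that the intended interpreter is active:

which python    # should resolve inside the everydream env prefix
python --version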

Create directory and CD to it:

DIR="${HOME}/every_dream_2_setup"
mkdir -pv -- "${DIR}"
cd "${DIR}"

Env setup parts:

Create and activate the env:

. /opt/anaconda/bin/activate
conda create -n everydream
conda activate everydream

Create directory and CD to it:

DIR="${HOME}/every_dream_2_setup"
mkdir -pv -- "${DIR}"
cd "${DIR}"

Install basic dependencies:

conda install \
    cython \
    ipython \
    jupyter \
    jupyterlab \
    matplotlib \
    nbconvert \
    numpy \
    opencv \
    pandas \
    python=3.10 \
    scikit-learn \
    scikit-learn-intelex \
    scipy \
    tqdm \
    pyqt \
    jinja2 \
;

Install conda-forge dependencies:

conda install -c conda-forge \
    termcolor \
    streamlit \
    pudb \
;

Install pip dependencies:

pip install -U \
    'accelerate' \
    'aiohttp' \
    'alive-progress' \
    'bitsandbytes' \
    'colorama' \
    'diffusers' \
    'ftfy' \
    'keyboard' \
    'lion-pytorch' \
    'matplotlib' \
    'OmegaConf' \
    'pynvml' \
    'pyre-extensions' \
    'pytorch-lightning' \
    'tensorboard' \
    'torch==1.13.1' \
    'torchaudio==0.13.1' \
    'torchvision==0.14.1' \
    'tqdm' \
    'transformers' \
    'triton' \
    'wandb' \
    'watchdog' \
    'xformers' \
;
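
Before moving on, it may be worth verifying that the pinned torch build can see the GPU and that xformers imports cleanly; a minimal check:

python -c 'import torch, xformers; print(torch.__version__, torch.cuda.is_available())'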

Clone the repo:

git clone 'https://github.com/victorchall/EveryDream2trainer.git'

CD to the dir and download the base models:

cd 'EveryDream2trainer'
aria2c -c -x16 'https://huggingface.co/panopstor/EveryDream/resolve/main/sd_v1-5_vae.ckpt'
aria2c -c -x16 'https://huggingface.co/stabilityai/stable-diffusion-2-1/resolve/main/v2-1_768-nonema-pruned.ckpt'
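
Each checkpoint is several GB; aria2c -c makes the downloads resumable if interrupted. To confirm they completed intact, compare sizes and hashes against the corresponding Hugging Face file pages (expected hashes not reproduced here):

ls -lh -- *.ckpt
sha256sum -- *.ckpt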

Prepare diffusers:

Clone the repo:

cd "${HOME}"
mkdir -pv -- 'huggingface'
cd 'huggingface'
git clone 'https://github.com/huggingface/diffusers.git'

Install diffusers:

cd 'diffusers'
pip install -e .
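
A quick check that the editable install is the one being imported:

python -c 'import diffusers; print(diffusers.__version__, diffusers.__file__)'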

Convert the downloaded checkpoints to Diffusers format:

SD1.5

Using the original diffusers script:

python \
    "${HOME}/huggingface/diffusers/scripts/convert_original_stable_diffusion_to_diffusers.py" \
    --checkpoint_path "${HOME}/every_dream_2_setup/EveryDream2trainer/sd_v1-5_vae.ckpt" \
    --image_size 512 \
    --dump_path "${HOME}/every_dream_2_setup/EveryDream2trainer/sd_v1-5_vae.dir" \
;

SD2.1

Using the original diffusers script:

python \
    "${HOME}/huggingface/diffusers/scripts/convert_original_stable_diffusion_to_diffusers.py" \
    --checkpoint_path "${HOME}/every_dream_2_setup/EveryDream2trainer/v2-1_768-nonema-pruned.ckpt" \
    --image_size 768 \
    --dump_path "${HOME}/every_dream_2_setup/EveryDream2trainer/v2-1_768-nonema-pruned.dir" \
;
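
To confirm a conversion worked, the dump_path folder should load as a regular Diffusers pipeline; a minimal sketch (CPU-only load, just to validate the folder layout):

python -c "from diffusers import StableDiffusionPipeline; pipe = StableDiffusionPipeline.from_pretrained('${HOME}/every_dream_2_setup/EveryDream2trainer/v2-1_768-nonema-pruned.dir'); print(type(pipe.unet).__name__)"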

Run the EveryDream2 trainer:

Working example:

cd "${HOME}/every_dream_2_setup/EveryDream2trainer"

python train.py \
    '--resume_ckpt' "${HOME}/every_dream_2_setup/EveryDream2trainer/v2-1_768-nonema-pruned.dir" \
    '--max_epochs' '50' \
    '--data_root' "${HOME}/every_dream_2_setup/EveryDream2trainer/input" \
    '--logdir=log.dir' \
    '--lr_scheduler' 'constant' \
    '--project_name' 'owhx2' \
    '--batch_size' '6' \
    '--sample_steps' '500' \
    '--lr' '2e-7' \
    '--ckpt_every_n_minutes' '20' \
    '--useadam8bit' \
;
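
Per the EveryDream2 docs, --data_root images take their captions either from the image filename or from a matching .txt sidecar file; a hypothetical minimal layout (owhx2 stands in for whatever concept the captions teach):

input/
    a_photo_of_owhx2.jpg        # caption taken from the filename
    owhx2_in_a_forest.jpg
    owhx2_in_a_forest.txt       # sidecar caption overrides the filename

Training can be monitored with tensorboard --logdir log.dir, matching the --logdir flag above.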

Allowed options (argparse definitions from train.py):

("--batch_size", type=int, default=2, help="Batch size (def: 2)")
("--ckpt_every_n_minutes", type=int, default=None, help="Save checkpoint every n minutes (def: disabled) (ex: 20)")
("--clip_grad_norm", type=float, default=None, help="Clip gradient norm (def: disabled) (ex: 1.5), useful if loss=nan?")
("--clip_skip", type=int, default=0, help="Train using penultimate layer (def: 0) (2 is 'penultimate')", choices=[0, 1, 2, 3, 4])
("--cond_dropout", type=float, default=0.04, help="Conditional drop out as decimal 0.0-1.0, see docs for more info (def: 0.04)")
("--data_root", type=str, default="input", help="folder where your training images are")
("--disable_amp", action="store_true", default=False, help="disables automatic mixed precision (def: False)")
("--disable_textenc_training", action="store_true", default=False, help="disables training of text encoder (def: False)")
("--disable_unet_training", action="store_true", default=False, help="disables training of unet (def: False) NOT RECOMMENDED")
("--disable_xformers", action="store_true", default=False, help="disable xformers, may reduce performance (def: False)")
("--flip_p", type=float, default=0.0, help="probability of flipping image horizontally (def: 0.0) use 0.0 to 1.0, ex 0.5, not good for specific faces!")
("--gpuid", type=int, default=0, help="id of gpu to use for training, (def: 0) (ex: 1 to use GPU_ID 1)")
("--gradient_checkpointing", action="store_true", default=False, help="enable gradient checkpointing to reduce VRAM use, may reduce performance (def: False)")
("--grad_accum", type=int, default=1, help="Gradient accumulation factor (def: 1), (ex, 2)")
("--logdir", type=str, default="logs", help="folder to save logs to (def: logs)")
("--log_step", type=int, default=25, help="How often to log training stats, def: 25, recommend default!")
("--lowvram", action="store_true", default=False, help="automatically overrides various args to support 12GB gpu")
("--lr", type=float, default=None, help="Learning rate, if using scheduler is maximum LR at top of curve")
("--lr_decay_steps", type=int, default=0, help="Steps to reach minimum LR, default: automatically set")
("--lr_scheduler", type=str, default="constant", help="LR scheduler, (default: constant)", choices=["constant", "linear", "cosine", "polynomial"])
("--lr_warmup_steps", type=int, default=None, help="Steps to reach max LR during warmup (def: 0.02 of lr_decay_steps), non-functional for constant")
("--max_epochs", type=int, default=300, help="Maximum number of epochs to train for")
("--notebook", action="store_true", default=False, help="disables keypresses and uses tqdm.notebook for jupyter notebook (def: False)")
("--optimizer_config", default="optimizer.json", help="Path to a JSON configuration file for the optimizer.  Default is 'optimizer.json'")
("--project_name", type=str, default="myproj", help="Project name for logs and checkpoints, ex. 'tedbennett', 'superduperV1'")
("--resolution", type=int, default=512, help="resolution to train", choices=supported_resolutions)
("--resume_ckpt", type=str, required=not ('resume_ckpt' in args), default="sd_v1-5_vae.ckpt", help="The checkpoint to resume from, either a local .ckpt file, a converted Diffusers format folder, or a Huggingface.co repo id such as stabilityai/stable-diffusion-2-1 ")
("--run_name", type=str, required=False, default=None, help="Run name for wandb (child of project name), and comment for tensorboard, (def: None)")
("--sample_prompts", type=str, default="sample_prompts.txt", help="Text file with prompts to generate test samples from, or JSON file with sample generator settings (default: sample_prompts.txt)")
("--sample_steps", type=int, default=250, help="Number of steps between samples (def: 250)")
("--save_ckpt_dir", type=str, default=None, help="folder to save checkpoints to (def: root training folder)")
("--save_every_n_epochs", type=int, default=None, help="Save checkpoint every n epochs (def: disabled)")
("--save_ckpts_from_n_epochs", type=int, default=0, help="Only save checkpoints starting at N epochs (def: 0, disabled)")
("--save_full_precision", action="store_true", default=False, help="save ckpts at full FP32")
("--save_optimizer", action="store_true", default=False, help="saves optimizer state with ckpt, useful for resuming training later")
("--scale_lr", action="store_true", default=False, help="automatically scale up learning rate based on batch size and grad accumulation (def: False)")
("--seed", type=int, default=555, help="seed used for samples and shuffling, use -1 for random")
("--shuffle_tags", action="store_true", default=False, help="randomly shuffles CSV tags in captions, for booru datasets")
("--useadam8bit", action="store_true", default=False, help="deprecated, use --optimizer_config and optimizer.json instead")
("--wandb", action="store_true", default=False, help="enable wandb logging instead of tensorboard, requires env var WANDB_API_KEY")
("--validation_config", default=None, help="Path to a JSON configuration file for the validator.  Default is no validation.")
("--write_schedule", action="store_true", default=False, help="write schedule of images and their batches to file (def: False)")
("--rated_dataset", action="store_true", default=False, help="enable rated image set training, to train less often on lower-rated images through the epochs")
("--rated_dataset_target_dropout_percent", type=int, default=50, help="how many images (in percent) should be included in the last epoch (Default 50)")
("--zero_frequency_noise_ratio", type=float, default=0.02, help="adds zero frequency noise, for improving contrast (def: 0.02) use 0.0 to 0.15")
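
The --sample_prompts file referenced above is plain text with one prompt per line; a hypothetical example matching the owhx2 project:

a photo of owhx2
owhx2 standing in a forest, golden hour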

Convert the final ckpt to Hugging Face Diffusers format:

An alternative invocation using the script bundled with EveryDream2trainer, kept commented out for reference:

# python \
#     '/home/asd/every_dream_2_setup/EveryDream2trainer/utils/convert_original_stable_diffusion_to_diffusers.py' \
#     '--scheduler_type' 'ddim' \
#     '--image_size' '512' \
#     '--checkpoint_path' '/home/asd/every_dream_2_setup/every_dream_output/last-myproj-ep49-gs10450.ckpt' \
#     '--prediction_type' 'epsilon' \
#     '--upcast_attn' 'False' \
#     '--dump_path' '/home/asd/every_dream_2_setup/every_dream_output/last-myproj-ep49-gs10450.dir' \
# ;


Using the original diffusers script:

python \
    "${HOME}/huggingface/diffusers/scripts/convert_original_stable_diffusion_to_diffusers.py" \
    --checkpoint_path "${HOME}/every_dream_2_setup/every_dream_output/last-myproj-ep49-gs10450.ckpt" \
    --image_size 512 \
    --dump_path "${HOME}/every_dream_2_setup/every_dream_output/last-myproj-ep49-gs10450.dir" \
;

# Add --from_safetensors above if the checkpoint was saved as a .safetensors file.
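
To sanity-check the converted folder end to end, it can be loaded as a pipeline and used to render a single image; a minimal sketch (assumes a CUDA GPU; the prompt is illustrative):

python - <<'EOF'
import os
import torch
from diffusers import StableDiffusionPipeline

# Load the converted Diffusers folder produced by the script above.
model_dir = os.path.expanduser(
    '~/every_dream_2_setup/every_dream_output/last-myproj-ep49-gs10450.dir')
pipe = StableDiffusionPipeline.from_pretrained(
    model_dir, torch_dtype=torch.float16).to('cuda')

image = pipe('a photo of owhx2').images[0]  # illustrative prompt
image.save('sample.png')
EOF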

Finalizing:

Save the buffer and tangle the files:

(save-buffer) 
(org-babel-tangle)
