How can I run this without Docker? #4

shaolinseed opened this issue Aug 19, 2021 · 1 comment

@shaolinseed

Hi there, for my final project at university I would like to create a GUI for VQGAN+CLIP. I would like to run it locally on my machine without using Docker, and connect CUDA to a Python application rather than to a Jupyter notebook. Is this possible?

Thanks

@sborquez
Owner

sborquez commented Aug 19, 2021

I don't recommend doing it without Docker. First, you could look into how to run GUI apps from Docker, and then write your own Dockerfile that builds on top of my container.

But if you do want to run it without Docker, I recommend reading the Dockerfile and following its instructions to install the dependencies:

FROM tensorflow/tensorflow:latest-gpu-jupyter
RUN apt update -y
RUN apt install -y ffmpeg
RUN git clone https://github.com/openai/CLIP
RUN git clone https://github.com/CompVis/taming-transformers
RUN pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
RUN pip install ftfy regex tqdm omegaconf pytorch-lightning
RUN pip install imageio imageio-ffmpeg pandas seaborn kornia einops==0.3.0 transformers==4.3.1
RUN apt install -y task-spooler
RUN git clone https://github.com/sborquez/VQGAN_CLIP_docker.git
RUN chmod u+x VQGAN_CLIP_docker/start_jupyter_notebook.sh
RUN chmod u+x VQGAN_CLIP_docker/enqueue_generate_images.sh
RUN mkdir -p "/root/.cache/torch/hub/checkpoints"
RUN curl "https://download.pytorch.org/models/vgg16-397923af.pth" -o "/root/.cache/torch/hub/checkpoints/vgg16-397923af.pth"
WORKDIR "/tf/VQGAN_CLIP_docker"

Make sure to use the correct version of CUDA (the torch==1.9.0+cu111 wheels above are built for CUDA 11.1).
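
A quick way to verify the match is to check what the installed torch build reports (a minimal sketch; run it in the same Python environment where you installed the dependencies):

import torch

# The "+cu111" wheel tag should match the CUDA version torch reports here.
print("torch version:", torch.__version__)         # e.g. 1.9.0+cu111
print("built for CUDA:", torch.version.cuda)       # e.g. 11.1
print("GPU available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))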

It is possible to create a new GUI app without Jupyter. Just use this function as inspiration 😉:

# Imports this excerpt relies on (assumed to match the notebook's own imports).
import os
from pathlib import Path

import imageio
import numpy as np
import torch
from torch import optim
from torch.nn import functional as F
from torchvision import transforms
from torchvision.transforms import functional as TF
from PIL import Image
from tqdm import tqdm
import clip

# Helper functions and classes (to_experiment_name, load_vqgan_model, MakeCutouts,
# Prompt, parse_prompt, resize_image, synth) are defined elsewhere in the
# repository's notebook.

def generate_images(
        prompts, model, outputs_folder, models_folder, iterations=300, image_prompts=[],
        noise_prompt_seeds=[], noise_prompt_weights=[], size=[300, 300],
        init_image=None, init_weight=0., clip_model='ViT-B/32',
        step_size=0.1, cutn=64, cut_pow=1., display_freq=5, seed=None,
        overwrite=False
):
    # Create <outputs_folder>/<experiment_name>/steps/ for the intermediate frames.
    model_name = model
    experiment_name = to_experiment_name(prompts)
    experiment_folder = Path(outputs_folder) / experiment_name
    os.makedirs(experiment_folder, exist_ok=overwrite)
    os.makedirs(experiment_folder / "steps", exist_ok=overwrite)

    # Load the VQGAN generator and the CLIP perceptor.
    vqgan_config = Path(models_folder) / f'{model_name}.yaml'
    vqgan_checkpoint = Path(models_folder) / f'{model_name}.ckpt'
    device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
    print('Using device:', device)
    model = load_vqgan_model(vqgan_config, vqgan_checkpoint).to(device)
    perceptor = clip.load(clip_model, jit=False)[0].eval().requires_grad_(False).to(device)

    # Derive the latent grid size from the requested output size.
    cut_size = perceptor.visual.input_resolution
    e_dim = model.quantize.e_dim
    f = 2**(model.decoder.num_resolutions - 1)
    make_cutouts = MakeCutouts(cut_size, cutn, cut_pow=cut_pow)
    n_toks = model.quantize.n_e
    toksX, toksY = size[0] // f, size[1] // f
    sideX, sideY = toksX * f, toksY * f
    z_min = model.quantize.embedding.weight.min(dim=0).values[None, :, None, None]
    z_max = model.quantize.embedding.weight.max(dim=0).values[None, :, None, None]

    if seed is not None:
        torch.manual_seed(seed)

    # Initialize the latent z from an init image, or from random codebook tokens.
    if init_image:
        pil_image = Image.open(init_image).convert('RGB')
        pil_image = pil_image.resize((sideX, sideY), Image.LANCZOS)
        z, *_ = model.encode(TF.to_tensor(pil_image).to(device).unsqueeze(0) * 2 - 1)
    else:
        one_hot = F.one_hot(torch.randint(n_toks, [toksY * toksX], device=device), n_toks).float()
        z = one_hot @ model.quantize.embedding.weight
        z = z.view([-1, toksY, toksX, e_dim]).permute(0, 3, 1, 2)
    z_orig = z.clone()
    z.requires_grad_(True)
    opt = optim.Adam([z], lr=step_size)
    normalize = transforms.Normalize(mean=[0.48145466, 0.4578275, 0.40821073],
                                     std=[0.26862954, 0.26130258, 0.27577711])

    # Encode text prompts, image prompts, and noise prompts with CLIP.
    pMs = []
    for prompt in prompts:
        txt, weight, stop = parse_prompt(prompt)
        embed = perceptor.encode_text(clip.tokenize(txt).to(device)).float()
        pMs.append(Prompt(embed, weight, stop).to(device))
    for prompt in image_prompts:
        path, weight, stop = parse_prompt(prompt)
        img = resize_image(Image.open(path).convert('RGB'), (sideX, sideY))
        batch = make_cutouts(TF.to_tensor(img).unsqueeze(0).to(device))
        embed = perceptor.encode_image(normalize(batch)).float()
        pMs.append(Prompt(embed, weight, stop).to(device))
    for seed, weight in zip(noise_prompt_seeds, noise_prompt_weights):
        gen = torch.Generator().manual_seed(seed)
        embed = torch.empty([1, perceptor.visual.output_dim]).normal_(generator=gen)
        pMs.append(Prompt(embed, weight).to(device))

    @torch.no_grad()
    def checkin(z, i, losses, iterations, experiment_folder):
        # Save the current decoded image as progress.png.
        out = synth(z, model)
        TF.to_pil_image(out[0].cpu()).save(experiment_folder / 'progress.png')

    def ascend_txt(pMs, z, i, init_weight, experiment_folder):
        # Decode z, score CLIP cutouts against every prompt, and save the step frame.
        out = synth(z, model)
        iii = perceptor.encode_image(normalize(make_cutouts(out))).float()
        result = []
        if init_weight:
            result.append(F.mse_loss(z, z_orig) * init_weight / 2)
        for prompt in pMs:
            result.append(prompt(iii))
        img = out.mul(255).clamp(0, 255)[0].cpu().detach().numpy().astype(np.uint8)
        img = np.transpose(img, (1, 2, 0))
        imageio.imwrite(experiment_folder / 'steps' / f"{str(i).zfill(3)}.png", img)
        return result

    def train(pMs, z, i, init_weight, display_freq, iterations, experiment_folder):
        # One Adam step, then clamp z to the codebook's value range.
        opt.zero_grad()
        lossAll = ascend_txt(pMs, z, i, init_weight, experiment_folder)
        if i % display_freq == 0:
            checkin(z, i, lossAll, iterations, experiment_folder)
        loss = sum(lossAll)
        loss.backward()
        opt.step()
        with torch.no_grad():
            z.copy_(z.maximum(z_min).minimum(z_max))

    try:
        for i in tqdm(range(iterations), total=iterations, desc="Training"):
            train(pMs, z, i, init_weight, display_freq, iterations, experiment_folder)
    except KeyboardInterrupt:
        print("Aborted")
    return i
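
For example, a GUI callback (or a plain script) could invoke it like this. This is a minimal sketch: the folder paths and the model name vqgan_imagenet_f16_16384 are illustrative assumptions, and the matching .yaml and .ckpt files must already be downloaded into the models folder.

# Hypothetical example call; adjust folders and model name to your setup.
last_step = generate_images(
    prompts=["a watercolor painting of a lighthouse at dawn"],
    model="vqgan_imagenet_f16_16384",  # expects models/<name>.yaml and models/<name>.ckpt
    outputs_folder="outputs",
    models_folder="models",
    iterations=300,
    size=[400, 400],
    seed=42,
)
print("stopped at step", last_step)  # progress.png and steps/*.png land in the experiment folder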
