
Image to image #25

Merged (20 commits) on Nov 10, 2022
add
jackalcooper committed Nov 10, 2022
commit 0312e43bada169009179f6ce0b3998ae1d51bd4e
28 changes: 12 additions & 16 deletions tests/test_pipelines_oneflow.py
@@ -1176,25 +1176,21 @@ def test_stable_diffusion_memory_chunking(self):

         # make attention efficient
         pipe.enable_attention_slicing()
-        generator = torch.Generator(device=torch_device)
-        generator.manual_seed(0)
-        with og_torch.autocast(torch_device):
-            with torch.autocast(torch_device):
-                output_chunked = pipe(
-                    [prompt], generator=generator, guidance_scale=7.5, num_inference_steps=10, output_type="numpy"
-                )
-                image_chunked = output_chunked.images
+        generator = torch.Generator(device=torch_device).manual_seed(0)
+        with torch.autocast(torch_device):
+            output_chunked = pipe(
+                [prompt], generator=generator, guidance_scale=7.5, num_inference_steps=10, output_type="numpy"
+            )
+            image_chunked = output_chunked.images
 
         # disable chunking
         pipe.disable_attention_slicing()
-        generator = torch.Generator(device=torch_device)
-        generator.manual_seed(0)
-        with og_torch.autocast(torch_device):
-            with torch.autocast(torch_device):
-                output = pipe(
-                    [prompt], generator=generator, guidance_scale=7.5, num_inference_steps=10, output_type="numpy"
-                )
-                image = output.images
+        generator = torch.Generator(device=torch_device).manual_seed(0)
+        with torch.autocast(torch_device):
+            output = pipe(
+                [prompt], generator=generator, guidance_scale=7.5, num_inference_steps=10, output_type="numpy"
+            )
+            image = output.images
 
         # make sure that more than 3.75 GB is allocated
         mem_bytes = torch.cuda.max_memory_allocated()
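One part of this change collapses the two-line generator setup into a single chained call. That works because `torch.Generator.manual_seed` returns the generator itself, and seeding the same way twice makes sampling reproducible, which is why the chunked and non-chunked runs in the test can be compared. A minimal sketch of that pattern (using plain `torch.randn` on CPU as a stand-in for the pipeline call, not the actual `pipe(...)` invocation from the diff):

```python
import torch

# manual_seed returns the generator, so construction and seeding chain into one line
g1 = torch.Generator().manual_seed(0)
g2 = torch.Generator().manual_seed(0)

# identical seeds produce identical samples, making two runs comparable
a = torch.randn(3, generator=g1)
b = torch.randn(3, generator=g2)
assert torch.equal(a, b)
```

The test relies on this property: each of the two runs re-seeds its generator with `0` so that any difference between `image_chunked` and `image` comes from attention slicing alone, not from the random latents.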