Hang and crash when building computational graph #46

Open
jerr060599 opened this issue Oct 11, 2022 · 0 comments

Hi,

Diffvg appears to have issues when multiple calls to RenderFunction.apply are added to the same computational graph.
If render is called a second time while the output of a previous call is still part of the graph, it first hangs and then eventually crashes.

It does not print any errors to the console. It only hangs before crashing.
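
In case it helps, I think the failing pattern reduces to something like the sketch below. This is a rough, minimal illustration rather than the exact code I ran (the polygon coordinates here are arbitrary); the full reproduction script is further down.

import pydiffvg
import torch

render = pydiffvg.RenderFunction.apply

# A single polygon whose points require gradients, so each render becomes part of the graph.
points = torch.tensor([[100.0, 100.0], [156.0, 100.0],
					   [156.0, 156.0], [100.0, 156.0]], requires_grad = True)
polygon = pydiffvg.Polygon(points = points, is_closed = True)
group = pydiffvg.ShapeGroup(shape_ids = torch.tensor([0]),
							fill_color = torch.tensor([1.0, 1.0, 1.0, 1.0]))

loss = 0.0
for k in range(2):
	scene_args = pydiffvg.RenderFunction.serialize_scene(
					256, 256, [polygon], [group],
					output_type = pydiffvg.OutputType.sdf)
	# The first call is fine; the second call, made while the first render's
	# output is still attached to the graph, is where the hang shows up.
	img = render(256, 256, 2, 2, k, None, *scene_args)
	loss = loss + img.pow(2).sum()
loss.backward()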

Here is a simple script that demonstrates the behavior.
It is a simple polygon-matching example, except with two keyframes.
It tries to match the two keyframes to a slowly rotating square.

import pydiffvg
import torch
import skimage
import numpy as np
import math

# Use GPU if available
pydiffvg.set_use_gpu(torch.cuda.is_available())

canvas_width, canvas_height = 256, 256

# Initialize some target frames
targets = []
render = pydiffvg.RenderFunction.apply

for i in range(8):
	points = torch.tensor([[128 + 60 * math.cos(0.1 * i + 0.0 * math.pi), 128 + 60 * math.sin(0.1 * i + 0.0 * math.pi)],
						   [128 + 60 * math.cos(0.1 * i + 0.5 * math.pi), 128 + 60 * math.sin(0.1 * i + 0.5 * math.pi)],
						   [128 + 60 * math.cos(0.1 * i + 1.0 * math.pi), 128 + 60 * math.sin(0.1 * i + 1.0 * math.pi)],
						   [128 + 60 * math.cos(0.1 * i + 1.5 * math.pi), 128 + 60 * math.sin(0.1 * i + 1.5 * math.pi)]])
	polygon = pydiffvg.Polygon(points = points, is_closed = True)
	shapes = [polygon]
	polygon_group = pydiffvg.ShapeGroup(shape_ids = torch.tensor([0]),
									fill_color = torch.tensor([1.0, 1.0, 1.0, 1.0]))
	shape_groups = [polygon_group]
	scene_args = pydiffvg.RenderFunction.serialize_scene(\
					canvas_width, canvas_height, shapes, shape_groups, 
					output_type = pydiffvg.OutputType.sdf)
	img = render(256, # width
			 256, # height
			 2,   # num_samples_x
			 2,   # num_samples_y
			 0,   # seed
			 None, # background_image
			 *scene_args)
	img = img / 256
	pydiffvg.imwrite(img.cpu(), 'results/test2/target_{}.png'.format(i), gamma=2.2)
	targets.append(img.clone())

# Set up the scene
# Normalize points so the learning rate is easier to tune
keyframes = []
keyframes.append(torch.tensor([[(128 + 20) / 256.0, (128 - 20) / 256.0],
							   [(128 + 20) / 256.0, (128 + 20) / 256.0],
							   [(128 - 20) / 256.0, (128 + 20) / 256.0],
							   [(128 - 20) / 256.0, (128 - 20) / 256.0]],
							  requires_grad = True))
keyframes.append(torch.tensor([[(128 + 30) / 256.0, (128 - 30) / 256.0],
							   [(128 + 30) / 256.0, (128 + 30) / 256.0],
							   [(128 - 30) / 256.0, (128 + 30) / 256.0],
							   [(128 - 30) / 256.0, (128 - 30) / 256.0]],
							  requires_grad = True))

polygon.points = keyframes[0] * 256
scene_args = pydiffvg.RenderFunction.serialize_scene(\
				canvas_width, canvas_height, shapes, shape_groups,
				output_type = pydiffvg.OutputType.sdf)
img = render(256, # width
			 256, # height
			 2,   # num_samples_x
			 2,   # num_samples_y
			 1,   # seed
			 None, # background_image
			 *scene_args)
img = img / 256
pydiffvg.imwrite(img.cpu(), 'results/test2/init.png', gamma=2.2)

# Optimizer. This is so nice. Much cleaner than in C++.
optimizer = torch.optim.Adam(keyframes, lr=1e-2)

# Iterate and optimize
for t in range(100):
	print('iteration:', t)
	# Reset gradients for this iteration
	optimizer.zero_grad()

	loss = torch.tensor(0.0, requires_grad = True)
	
	# Render current scene
	polygon.points = keyframes[0] * 256
	scene_args = pydiffvg.RenderFunction.serialize_scene(\
					canvas_width, canvas_height, shapes, shape_groups,
					output_type = pydiffvg.OutputType.sdf)
	img = render(256,   # width
			 	 256,   # height
			 	 2,	 # num_samples_x
			 	 2,	 # num_samples_y
			 	 t+1,   # seed
			 	 None,
			 	 *scene_args)
	img = img / 256
	pydiffvg.imwrite(img.cpu(), 'results/test2/iter_{}.png'.format(t), gamma=2.2)

	# Compute loss
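	# Each render() in this loop adds another RenderFunction.apply node to a graph
	# that already contains the render above; this is the situation described at the top.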
	for i, tar in enumerate(targets):
		k = i / (len(targets) - 1)
		print('loss_{}'.format(k))

		polygon.points = ((1 - k) * keyframes[0] + k * keyframes[1]) * 256
		scene_args = pydiffvg.RenderFunction.serialize_scene(\
						canvas_width, canvas_height, shapes, shape_groups,
						output_type = pydiffvg.OutputType.sdf)
		print('scene_{}'.format(k))
		
		img = render(256,   # width
			 		 256,   # height
			 		 2,	 # num_samples_x
			 		 2,	 # num_samples_y
			 		 t+1,   # seed
			 		 None,
			 		 *scene_args)
		print('render_{}'.format(k))

		img = img / 256
		loss = loss + (img - tar).pow(2).sum()
		print('fin_{}'.format(k))

	print('loss:', loss.item())
	loss.backward()

	# Take a gradient descent step.
	optimizer.step()

# Render the final result.
for i, tar in enumerate(targets):
	k = i / (len(targets) - 1)

	polygon.points = ((1 - k) * keyframes[0] + k * keyframes[1]) * 256
	scene_args = pydiffvg.RenderFunction.serialize_scene(\
					canvas_width, canvas_height, shapes, shape_groups,
					output_type = pydiffvg.OutputType.sdf)
	img = render(256,   # width
			 	 256,   # height
			 	 2,	 # num_samples_x
			 	 2,	 # num_samples_y
			 	 i+1,   # seed
			 	 None,
			 	 *scene_args)
	img = img / 256
	pydiffvg.imwrite(img.cpu(), 'results/test2/final_{}.png'.format(i), gamma=2.2)

from subprocess import call
call(["ffmpeg", "-framerate", "24", "-i",
	"results/test2/iter_%d.png", "-vb", "20M",
	"results/test2/out.mp4"])

I am using Windows 11 and an NVIDIA GPU.
