This repository has been archived by the owner on Nov 15, 2022. It is now read-only.

Add support for permute #130

Open
cpuhrsch opened this issue Apr 15, 2020 · 6 comments

@cpuhrsch
Contributor

aten reference: permute.

Semantics here are limited if we're to maintain the view functionality of PyTorch.

A user may permute tensor dimensions or nestedtensor dimensions, but not both at once. Permutations of per-tensor dimensions are simply done with a map operation. Permutations of nestedtensor dimensions require us to implement a "rotation" within the tree. This is a useful operation in general and may live in the csrc/utils folder.
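To illustrate the per-tensor case, a minimal sketch, assuming it reduces to mapping torch.Tensor.permute over the constituents (the helper name permute_tensor_dims is hypothetical):

import torch
from nestedtensor import nested_tensor

def permute_tensor_dims(nt, dims):
    # dims indexes only the per-tensor dimensions of each constituent;
    # the leading nested dimension stays in place.
    return nested_tensor([t.permute(*dims) for t in nt.unbind()])

nt = nested_tensor([torch.randn(1, 4, 4), torch.randn(1, 8, 8)])
print(permute_tensor_dims(nt, (1, 2, 0)).nested_size())
# NestedSize([torch.Size([4, 4, 1]), torch.Size([8, 8, 1])])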

@cpuhrsch cpuhrsch self-assigned this Apr 15, 2020
@cpuhrsch cpuhrsch added the enhancement New feature or request label Apr 15, 2020
@JCBrouwer

JCBrouwer commented Jan 14, 2022

In a similar vein, I would also very much appreciate transpose!

Specifically, I'd like to do my_nested_tensor.transpose(0, 1), which I guess would require the same type of "rotation" as the full permute?

@cpuhrsch
Contributor Author

@JCBrouwer - transpose should already be implemented and available.

@JCBrouwer

JCBrouwer commented Jan 15, 2022

Hmm, I've installed using both:
pip install git+https://github.com/pytorch/nestedtensor and pip install git+https://github.com/pytorch/nestedtensor@nightly
(versions nestedtensor==0.1.4+4e21fd6 and nestedtensor==0.1.4+aa1519a respectively)

But when I run:

import torch
from nestedtensor import nested_tensor
timesteps = 785
noises = [torch.randn((timesteps, 1, size, size)) for size in [4, 8, 8, 16, 16, 32, 32, 64, 64]]
print(nested_tensor(noises).shape)
print(nested_tensor(noises).transpose(0, 1).shape)  # or torch.transpose(..., 0, 1)

I get:

(9, 785, 1, None, None)
Traceback (most recent call last):
  File "/home/jcbgb/anaconda3/envs/mauaudio/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/jcbgb/anaconda3/envs/mauaudio/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/hans/code/maua/maua/audiovisual/generate.py", line 79, in <module>
    video, (audio, sr) = generate_audiovisal_from_patch(
  File "/home/jcbgb/anaconda3/envs/mauaudio/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
    return func(*args, **kwargs)
  File "/home/hans/code/maua/maua/audiovisual/generate.py", line 45, in generate_audiovisal_from_patch
    synthesizer_inputs = patch.process_synthesizer_inputs(mapped_inputs)
  File "/home/hans/code/maua/maua/audiovisual/patches/examples/stylegan2.py", line 53, in process_synthesizer_inputs
    noises = self.synthesizer.make_noise_pyramid(noise)
  File "/home/hans/code/maua/maua/GAN/wrappers/stylegan2.py", line 192, in make_noise_pyramid
    print(nested_tensor(noises).transpose(0, 1).shape)
  File "/home/jcbgb/anaconda3/envs/mauaudio/lib/python3.8/site-packages/nestedtensor/nested/nested.py", line 227, in _wrapped_fn
    result = getattr(self._impl, name)(*impl_args, **impl_kwargs)
RuntimeError: Transposition of nested dimensions is not implemented yet.

@cpuhrsch
Contributor Author

cpuhrsch commented Jan 16, 2022

Oh, I see. The first dimension is a nested dimension. Perhaps you want nested_tensor(noises).transpose(1, 2). This will transpose the first and second dimensions of each constituent.

noises = [torch.randn((timesteps, 1, size, size)) for size in [4, 8, 8, 16, 16, 32, 32, 64, 64]]
print(nested_tensor(noises).shape)
print(nested_tensor(noises).transpose(1, 2).shape) 

will print

(9, 785, 1, None, None)
(9, 1, 785, None, None)
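For comparison, a minimal sketch, assuming this per-tensor transpose reduces to mapping torch.Tensor.transpose over the constituents:

# Equivalent to nested_tensor(noises).transpose(1, 2):
mapped = nested_tensor([t.transpose(0, 1) for t in nested_tensor(noises).unbind()])
print(mapped.shape)  # (9, 1, 785, None, None), matching the output above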

Otherwise, could you describe the behavior you expect? It's possible to transpose a nested and a dense dimension, but it'll require multiple nested dimensions, which we currently don't support.

@JCBrouwer

I have multiple inputs to a network of the form [timesteps, some, shape]. I'd like to be able to transpose the noises tensor so that it has the shape [785, 9, 1, None, None].

This way I can easily distribute the inputs like:

dataset = TensorDataset(latents, noises, truncations)
dataloader = FancyDistributedDataloader(dataset, **kwargs)

or

with torch.multiprocessing.Pool() as pool:
    for result in pool.map_async(inference_fn, zip(latents, noises, truncations)).get():
        ...

While it might be possible to rewrite the above to use something like the following, that can be a lot of extra effort depending on what's consuming my no-longer-zipped inputs (i.e. for the TensorDataset example it would be pretty easy to override __getitem__, but for the map_async example I'd need an extra for-loop like the one below to prepare the zipped list).

for idx in range(n):
    lat, noi, trunc = latents[idx], noises[:, idx], truncations[idx]
    ...

@cpuhrsch
Contributor Author

cpuhrsch commented Jan 17, 2022

I see. For now you'll unfortunately need to do this via unbind and zip.

nested_tensor(noises).unbind(1) gets you a list of NestedTensors of length timesteps.

Each entry has nested_size

NestedSize([
        torch.Size([1, 4, 4]),
        torch.Size([1, 8, 8]),
        torch.Size([1, 8, 8]),
        torch.Size([1, 16, 16]),
        torch.Size([1, 16, 16]),
        torch.Size([1, 32, 32]),
        torch.Size([1, 32, 32]),
        torch.Size([1, 64, 64]),
        torch.Size([1, 64, 64])
])

You can now unbind each of these NestedTensors again and get a plain list of constituent Tensors. Then you can zip these lists together, stack the resulting tuples and construct a new NestedTensor.
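A minimal sketch of that sequence, assuming the noises list from earlier in the thread (timesteps = 785 as in the printed shape):

import torch
from nestedtensor import nested_tensor

timesteps = 785
noises = [torch.randn((timesteps, 1, size, size)) for size in [4, 8, 8, 16, 16, 32, 32, 64, 64]]

# One NestedTensor per timestep, each with the nested_size shown above:
per_timestep = nested_tensor(noises).unbind(1)

# Unbind each again into plain lists of Tensors, zip by pyramid level,
# stack the resulting tuples, and rebuild a NestedTensor:
lists = [list(nt.unbind()) for nt in per_timestep]
levels = [torch.stack(level) for level in zip(*lists)]
rebuilt = nested_tensor(levels)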

I don't think this will buy you much, however, in comparison to doing this in plain Python first and then constructing a NestedTensor.

EDIT:

Actually I think your case might not be representable without more nested dimensions altogether.

You want a list of length 785 with entries each of length 9, where each entry contains Tensors of variable sizes. However, you can currently only construct a NestedTensor from a list of Tensors. That means you'd need to concatenate or stack each entry's Tensors, and they're all of different shapes.

I think without introducing more than one nested dimension we can't help you here yet.

However, there's nothing preventing you from maintaining a list of NestedTensors. You just have to loop over them, which is inefficient.
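A minimal sketch of that workaround, reusing the noises list from above (inference_fn stands in for whatever consumes each timestep's noises):

# One NestedTensor per timestep; indexing n[t] drops the timestep axis,
# leaving the variable-size (1, size, size) constituents:
per_timestep = [nested_tensor([n[t] for n in noises]) for t in range(timesteps)]
for nt_t in per_timestep:
    inference_fn(nt_t)  # looping over timesteps is the inefficient part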
