
Unify output tensor allocation across all callers and devices #321

Closed
wants to merge 18 commits

Conversation

NicolasHug (Member) commented Oct 30, 2024:

This PR unifies the output tensor frame allocation within a single util: allocateEmptyHWCTensorForStream.

All high-level decoding entry-points are now responsible for allocating that output tensor.

All lower-level decoding entry-points now accept and expect this pre-allocated tensor to be passed. These low-level entry-points do not allocate anything that gets returned.

TODO, as a follow-up: try to fold the preAllocatedOutputTensor parameter within the rawOutput struct.

facebook-github-bot added the "CLA Signed" label (managed by the Meta Open Source bot) on Oct 30, 2024
NicolasHug changed the title from "[WIP] Output tensor allocation must now happen in high-level entry-points" to "Output tensor allocation must now happen in high-level entry-points" on Oct 31, 2024
NicolasHug changed the title to "Unify output tensor allocation across all callers and devices" on Oct 31, 2024
NicolasHug marked this pull request as ready for review on October 31, 2024 at 14:12
ptsSeconds(torch::empty({numFrames}, {torch::kFloat64})),
durationSeconds(torch::empty({numFrames}, {torch::kFloat64})) {}
torch::Tensor VideoDecoder::allocateEmptyHWCTensorForStream(
int streamIndex,
NicolasHug (Member, Author):

Please double check this:

To allocate a tensor, we need its height and width. To get those, we need a streamIndex[1]. To get the streamIndex, we need either:

  • for it to be passed as a parameter, or
  • for the "NoDemux" functions, at least a RawOutput.

[1] well, not strictly true, there are some cases where it's not needed, but the whole point of this PR is to unify that logic.

Contributor:

In the most generic case this isn't true because a single stream can have frames with different heights and widths:

#312

But ignoring that fact, to get the frame dimensions before decoding, you do need the stream_index.

}
}

VideoDecoder::BatchDecodedOutput VideoDecoder::allocateBatchDecodedOutput(
NicolasHug (Member, Author):

The previous BatchDecodedOutput constructor wasn't a member of VideoDecoder, and so I got compilation errors because we're calling allocateEmptyHWCTensorForStream which itself is a member method (because it accesses attributes blahblahblah).

So I had to make the "constructor" a member itself.

ahmadsharif1 (Contributor) commented Oct 31, 2024:

I understand that, but by making it a member we let it access a lot more information than it strictly needs. It only needs the height, width, and device, not all the other state in VideoDecoder. What's worse, it can accidentally change the state of the VideoDecoder because the method isn't const.

I think it may be better to pass in the height and width after looking it up in the caller.

Contributor:

A general principle I like to apply is that classes should know how to create correct objects; ideally, a constructed object is always valid. That means classes should have constructors, and the constructor is responsible for making the object valid. So I think we should keep the BatchDecodedOutput explicit constructor. If VideoDecoder then needs to construct BatchDecodedOutput objects using some of its own internal state, VideoDecoder should figure out what's relevant to BatchDecodedOutput and pass it in.

We may need a VideoDecoder::allocatedBatchDecodedOutput() method, but if we do, it should call BatchDecodedOutput's explicit constructor. What this could potentially look like:

VideoDecoder::BatchDecodedOutput::BatchDecodedOutput(
  int64_t numFrames,
  torch::Tensor frames)
  : frames(frames),
  ptsSeconds(torch::empty({numFrames}, {torch::kFloat64})),
  durationSeconds(torch::empty({numFrames}, {torch::kFloat64})) {}

VideoDecoder::BatchDecodedOutput VideoDecoder::allocateBatchDecodedOutput(
  int streamIndex,
  int64_t numFrames) {
  return BatchDecodedOutput(
    numFrames,
    allocateEmptyHWCTensorForStream(streamIndex, numFrames)
  );
}

We can take advantage of tensors having reasonable move-like semantics.

ahmadsharif1 (Contributor) left a review:

#312 is closely related to this PR, hence I am bringing it up. You can probably address that next.

For this PR, for non-batch functions we should allocate the tensor using the frame's own width and height. That way users can get all the frames in a video whose stream has varying dimensions by calling non-batch functions.

For batch functions we can use streamIndex and if the user wants to get frames that are of different sizes, we can either return an error (for now) or resize them later.


auto output = convertAVFrameToDecodedOutput(rawOutput);
output.frame = MaybePermuteHWC2CHW(output.streamIndex, output.frame);
auto streamIndex = rawOutput.streamIndex;
auto preAllocatedOutputTensor = allocateEmptyHWCTensorForStream(streamIndex);
Contributor:

In light of #312, it would be more correct to use rawOutput.frame's dimensions rather than the stream's dimensions.

output.frame = MaybePermuteHWC2CHW(output.streamIndex, output.frame);
auto rawOutput = getNextRawDecodedOutputNoDemux();
auto streamIndex = rawOutput.streamIndex;
auto preAllocatedOutputTensor = allocateEmptyHWCTensorForStream(streamIndex);
Contributor:

Ditto

// --------------------------------------------------------------------------
// Tensor (frames) manipulation APIs
// --------------------------------------------------------------------------
torch::Tensor MaybePermuteHWC2CHW(int streamIndex, torch::Tensor& hwcTensor);
Contributor:

This doesn't need to be in the header at all.

It should just be a utility function in an anonymous namespace in the .cpp file. You can pass it the height and width.

The same goes for the function below.

NicolasHug (Member, Author) commented Nov 4, 2024:

I've discussed this PR with both of you, @ahmadsharif1 @scotts, and the conclusion is that bubbling up the tensor allocation might not be the best move at this time, because it would change where we retrieve height and width from.

I'll follow-up soon with some alternative improvements. I want to at least document why the existing code is the way it is, and hopefully make the logic a bit clearer via dedicated utils.

@NicolasHug NicolasHug closed this Nov 4, 2024
Labels: CLA Signed (managed by the Meta Open Source bot)
4 participants