Unify output tensor allocation across all callers and devices #321
Conversation
      ptsSeconds(torch::empty({numFrames}, {torch::kFloat64})),
      durationSeconds(torch::empty({numFrames}, {torch::kFloat64})) {}

torch::Tensor VideoDecoder::allocateEmptyHWCTensorForStream(
    int streamIndex,
Please double check this:
To allocate a tensor, we need its height and width. To get those, we need a streamIndex [1]. To get the streamIndex, we need either:
- it to be passed in as a parameter, or
- for the "NoDemux" functions, at least a RawOutput.
[1] Well, not strictly true; there are some cases where it's not needed, but the whole point of this PR is to unify that logic.
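A minimal sketch of what such a util could look like, assuming the decoder keeps per-stream metadata exposing height, width, and the output device (streamInfos_, its fields, and the exact signature below are illustrative, not the PR's actual code):
torch::Tensor VideoDecoder::allocateEmptyHWCTensorForStream(
    int streamIndex,
    int64_t numFrames) {
  // Hypothetical per-stream metadata lookup; the real member and field
  // names may differ.
  const auto& streamInfo = streamInfos_[streamIndex];
  return torch::empty(
      {numFrames, streamInfo.height, streamInfo.width, 3},
      torch::TensorOptions().dtype(torch::kUInt8).device(streamInfo.device));
}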
In the most generic case this isn't true, because a single stream can have frames with different heights and widths. But ignoring that fact, to get the frame dimensions before decoding you do need the stream_index.
}
}

VideoDecoder::BatchDecodedOutput VideoDecoder::allocateBatchDecodedOutput(
The previous BatchDecodedOutput constructor wasn't a member of VideoDecoder, so I got compilation errors: it calls allocateEmptyHWCTensorForStream, which is itself a member method (because it accesses member attributes). So I had to make the "constructor" a member itself.
I understand that, but by making it a member we are letting it access a lot more information than it strictly needs. It only needs height, width, and device, not all the other stuff in VideoDecoder. What's worse is that it can accidentally change the state of the VideoDecoder, because it's not const.
I think it may be better to pass in the height and width after looking them up in the caller.
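A sketch of that alternative, as a plain helper in the .cpp file; the name allocateEmptyHWCTensor and its parameter list are illustrative, not taken from the PR:
namespace {
// Only receives what it needs, so it cannot read or mutate any other
// VideoDecoder state.
torch::Tensor allocateEmptyHWCTensor(
    int height,
    int width,
    const torch::Device& device,
    int64_t numFrames) {
  return torch::empty(
      {numFrames, height, width, 3},
      torch::TensorOptions().dtype(torch::kUInt8).device(device));
}
} // namespace
The caller would look up height and width from its stream metadata (or from the decoded frame) and pass them in, which also keeps the helper trivially testable.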
A general principle I like to apply is that classes should know how to create correct objects, and ideally, constructed objects should be valid. That means classes should have constructors, and the constructor is responsible for making the object valid. So I think we should keep the BatchDecodedOutput explicit constructor. If VideoDecoder then needs to construct BatchDecodedOutput objects using some of its own internal state, then VideoDecoder should figure out what's relevant to BatchDecodedOutput and pass it in.
We may need a VideoDecoder::allocateBatchDecodedOutput() method, but if we do, it should call BatchDecodedOutput's explicit constructor. What this could potentially look like:
VideoDecoder::BatchDecodedOutput::BatchDecodedOutput(
int64_t numFrames,
torch::Tensor frames)
: frames(frames),
ptsSeconds(torch::empty({numFrames}, {torch::kFloat64})),
durationSeconds(torch::empty({numFrames}, {torch::kFloat64})) {}
VideoDecoder::BatchDecodedOutput VideoDecoder::allocateBatchDecodedOutput(
int streamIndex,
int64_t numFrames) {
return BatchDecodedOutput(
numFrames,
allocateEmptyHWCTensorForStream(streamIndex, numFrames)
);
}
We can take advantage of the fact that tensors have reasonable move-like semantics.
#312 is closely related to this PR, hence I am bringing it up. You can probably address that next.
For this PR, for non-batch functions we should allocate the tensor using the frame's own width and height. That way users can get all the frames in a video whose stream has varying dimensions by calling the non-batch functions.
For batch functions we can use streamIndex, and if the user wants to get frames that are of different sizes, we can either return an error (for now) or resize them later.
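For the error option, a minimal sketch of the kind of check the batch path could do against the pre-allocated output; the function name and tensor layouts are assumptions:
void checkFrameMatchesBatchOutput(
    const torch::Tensor& frame, // single decoded frame, [H, W, C]
    const torch::Tensor& batchOutput) { // pre-allocated [N, H, W, C]
  TORCH_CHECK(
      frame.size(0) == batchOutput.size(1) &&
          frame.size(1) == batchOutput.size(2),
      "Streams with varying frame dimensions are not supported by the "
      "batch APIs yet; use the non-batch APIs instead.");
}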
auto output = convertAVFrameToDecodedOutput(rawOutput);
output.frame = MaybePermuteHWC2CHW(output.streamIndex, output.frame);
auto streamIndex = rawOutput.streamIndex;
auto preAllocatedOutputTensor = allocateEmptyHWCTensorForStream(streamIndex);
In light of #312, it would be more correct to use rawOutput.frame's dimensions, and not the stream's dimensions.
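A rough sketch of that idea, assuming the decoded AVFrame is reachable from rawOutput; the helper name is illustrative:
// Size the output from the decoded frame itself rather than from the
// stream-level metadata.
torch::Tensor allocateHWCTensorForFrame(
    const AVFrame* avFrame,
    const torch::Device& device) {
  return torch::empty(
      {avFrame->height, avFrame->width, 3},
      torch::TensorOptions().dtype(torch::kUInt8).device(device));
}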
output.frame = MaybePermuteHWC2CHW(output.streamIndex, output.frame);
auto rawOutput = getNextRawDecodedOutputNoDemux();
auto streamIndex = rawOutput.streamIndex;
auto preAllocatedOutputTensor = allocateEmptyHWCTensorForStream(streamIndex);
Ditto
// --------------------------------------------------------------------------
// Tensor (frames) manipulation APIs
// --------------------------------------------------------------------------
torch::Tensor MaybePermuteHWC2CHW(int streamIndex, torch::Tensor& hwcTensor);
This doesn't need to be in the header at all.
This should just be a utility function in an anonymous namespace in the cpp file. You can pass it the height and width.
Same goes for the function below.
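A sketch of that move; the boolean parameter is an assumption about how the caller would express the desired dimension order:
namespace {
torch::Tensor maybePermuteHWC2CHW(torch::Tensor& hwcTensor, bool wantCHW) {
  // Assuming a batched [N, H, W, C] layout: permute to [N, C, H, W] only
  // when the caller asked for channels-first output.
  return wantCHW ? hwcTensor.permute({0, 3, 1, 2}) : hwcTensor;
}
} // anonymous namespace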
I've discussed this PR with both of you @ahmadsharif1 @scotts, and the conclusion is that bubbling up the tensor allocation might not be the best move at this time, because it would change the source of where we retrieve height and width from. I'll follow up soon with some alternative improvements. I want to at least document why the existing code is the way it is, and hopefully make the logic a bit clearer via dedicated utils.
This PR unifies the output frame tensor allocation within a single util: allocateEmptyHWCTensorForStream.
All high-level decoding entry-points are now responsible for allocating that output tensor.
All lower-level decoding entry-points now accept and expect this pre-allocated tensor to be passed in. These low-level entry-points do not allocate anything that gets returned.
TODO, as a follow-up: try to fold the preAllocatedOutputTensor parameter into the rawOutput struct.
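One possible shape for that follow-up, purely as a sketch (the real struct's name and remaining fields may differ):
// Requires <optional>. Carrying the pre-allocated output alongside the raw
// decoded data lets low-level entry-points take a single argument.
struct RawDecodedOutput {
  int streamIndex = -1;
  // ...decoded frame data already carried by the existing struct...
  std::optional<torch::Tensor> preAllocatedOutputTensor;
};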