Fix rocAL Tensor build issues PR 1 - Augmentation nodes and OpenVX fix #24
Conversation
fiona-gladwin commented on Jul 3, 2023
- Modify the augmentation nodes in rocAL to fix build issues
- Change references from Image to Tensor
- Comment out the vxExtrpp calls wrt batchPD (the correct vxExtrpp calls for tensor augmentations will be introduced in a future PR)
- Change the API to get the width and height from TensorInfo (see the sketch after this list)
- Change the names of the APIs in the OpenVX extensions tensor augmentation files wrt the latest changes in the open-source TOT develop branch
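For context on the width/height bullet above, here is a minimal self-contained sketch of the idea. `TensorInfoSketch` and its accessors are invented for illustration and are not the actual rocAL `TensorInfo` interface; the point is only that augmentation nodes now read output dimensions from the output tensor's metadata instead of the old Image/ImageInfo object.

```cpp
#include <array>
#include <cstddef>
#include <iostream>

// Illustrative stand-in for rocAL's TensorInfo (invented for this sketch).
// Assumes an NHWC-style max shape: {batch, height, width, channels}.
struct TensorInfoSketch {
    std::array<std::size_t, 4> max_dims;
    std::size_t max_width()  const { return max_dims[2]; }
    std::size_t max_height() const { return max_dims[1]; }
};

int main() {
    TensorInfoSketch out_info{{8, 224, 224, 3}};
    // In the real augmentation nodes this would look roughly like
    //   dst_width  = _outputs[0]->info().max_width();
    //   dst_height = _outputs[0]->info().max_height();
    // (member names assumed, not taken from the rocAL source).
    std::cout << "dst_width="  << out_info.max_width()
              << " dst_height=" << out_info.max_height() << '\n';
    return 0;
}
```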
Change name of RPP handle related API
Fix build issue wrt dst_width and dst_height API
Comment out the vxExtrpp call; replace the API wrt tensor
@@ -45,7 +45,7 @@ void ColorTwistBatchNode::create_node()
     _hue.create_array(_graph , VX_TYPE_FLOAT32, _batch_size);
     _sat.create_array(_graph , VX_TYPE_FLOAT32, _batch_size);

-    _node = vxExtrppNode_ColorTwistbatchPD(_graph->get(), _inputs[0]->handle(), _src_roi_width, _src_roi_height, _outputs[0]->handle(), _alpha.default_array(), _beta.default_array(), _hue.default_array(), _sat.default_array(), _batch_size);/*A temporary fix for time being*/
+    // _node = vxExtrppNode_ColorTwistbatchPD(_graph->get(), _inputs[0]->handle(), _src_roi_width, _src_roi_height, _outputs[0]->handle(), _alpha.default_array(), _beta.default_array(), _hue.default_array(), _sat.default_array(), _batch_size);/*A temporary fix for time being*/
why is this commented out?
What is the temporary fix?
@rrawther all of it is commented out. I think it is so that the branch will build? Only the tensor-implemented augmentations seem to call the RPP kernels. @fiona-gladwin, is that what you are trying to do here?
We have commented out the RPP calls for the batchPD augmentations so that the existing tensor branch can be built. If we converted all of the augmentations in this PR, the change set would be much larger and harder to review.
As discussed, we will raise subsequent PRs to convert the rest of the batchPD augmentations to Tensor.
Are the TOT MIVisionX changes merged into this?
@rrawther Seems OK to me. Let me know when I can merge this PR.