RMN PR #32
Conversation
vx_scalar NBATCHSIZE = vxCreateScalar(vxGetContext((vx_reference)graph), VX_TYPE_UINT32, &nbatchSize);
vx_reference params[] = {
(vx_reference) pSrc,
(vx_reference) srcImgWidth,
Please align with other params
Some lines are indented with tabs and some with spaces. Please convert everything consistently to either tabs or spaces.
/// \param p_mirror
/// \return
extern "C" RocalImage ROCAL_API_CALL rocalResizeMirrorNormalize(RocalContext p_context, RocalImage p_input,
unsigned dest_width, unsigned dest_height,
Please align params
void create_node() override;
void update_node() override;
private:
vx_array _dst_roi_width , _dst_roi_height ;
Please remove the additional spaces in this line. There is an extra space before:
- _dst_roi_width
- ,
- ;
rmn_node->init(mean,std_dev,mirror);
// TODO: Uncomment the below lines once RMN meta node is added to ToT
// if (context->master_graph->meta_data_graph())
// context->master_graph->meta_add_node<ResizeMirrorNormalizeMetaNode,ResizeMirrorNormalizeNode>(rmn_node);
Why was the meta node support not added?
How are we updating the metadata currently, then?
This PR is just for adding the RMN augmentation support. As it currently stands, there is no support for reading metadata for RMN.
rocAL/rocAL_pybind/amd/rocal/fn.py (outdated)
@@ -283,6 +283,30 @@ def resize(*inputs, bytes_per_sample_hint=0, image_type=0, interp_type=1, mag_fi
resized_image = b.Resize(Pipeline._current_pipeline._handle ,*(kwargs_pybind.values()))
return (resized_image)

def resize_mirror_normalize(*inputs, bytes_per_sample_hint=0, resize_min=0, resize_max=0,
image_type=0, mean=[0.0], mirror=1, output_dtype=types.FLOAT, output_layout=types.NCHW, pad_output=False,
preserve=False, seed=1, std=[1.0], device=None):
Please align params
rocAL/rocAL_pybind/examples/tf_petsTrainingExample/requirements.sh (outdated, resolved)
rocAL/rocAL_pybind/examples/tf_petsTrainingExample/download_and_preprocess_dataset.sh (outdated, resolved)
@@ -10,5 +16,5 @@ Use the instructions in the [docker section](../../../../../docker) to build the

To run this example for the first run or subsequent runs, just execute:
```
python3.9 train_withROCAL_withTFRecordReader.py
python3 train_withROCAL_withTFRecordReader.py
Please check if these changes are needed in this PR.
A change from ToT not reflected in Shobana's fork
@@ -133,7 +133,7 @@ def __next__(self):
if(self.loader._name == "labelReader"):
if(self.loader._oneHotEncoding == True):
self.loader.GetOneHotEncodedLabels(self.labels, self.device)
self.labels_tensor = self.labels.view(-1, self.bs, self.loader._numOfClasses).long()
self.labels_tensor = self.labels.reshape(-1, self.bs, self.loader._numOfClasses)
Why was this changed?
A change from ToT not reflected in Shobana's fork
for(uint i = 0; i < _batch_size; i++)
{
int min_size = 800;
How did we arrive at this min size and max size?
Please add a comment stating if this is needed explicitly for MaskRCNN
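For reference, the 800/1333 pair matches the shorter-/longer-side limits used by the common MaskRCNN detection resize (e.g. torchvision's defaults). Below is a minimal, self-contained sketch of that rule; names such as compute_output_size, out_w, and out_h are illustrative, not this PR's exact code.

```
#include <algorithm>
#include <cmath>
#include <cstdio>

// Scale the shorter side to min_size unless that would push the longer side
// past max_size; in that case shrink the target so the longer side lands on
// max_size. This mirrors the logic visible in the PR's per-image loop.
static void compute_output_size(int w, int h, int min_size, int max_size,
                                int &out_w, int &out_h) {
    float min_original_size = static_cast<float>(std::min(w, h));
    float max_original_size = static_cast<float>(std::max(w, h));
    int size = min_size;
    if (max_original_size / min_original_size * size > max_size)
        size = static_cast<int>(std::round(max_size * min_original_size / max_original_size));
    if (((w <= h) && (w == size)) || ((h <= w) && (h == size))) {
        out_w = w;   // shorter side already equals the target, keep dimensions
        out_h = h;
    } else if (w < h) {
        out_w = size;  // width is the shorter side
        out_h = static_cast<int>(std::round(size * h / static_cast<float>(w)));
    } else {
        out_h = size;  // height is the shorter side
        out_w = static_cast<int>(std::round(size * w / static_cast<float>(h)));
    }
}

int main() {
    int out_w, out_h;
    compute_output_size(1920, 1080, 800, 1333, out_w, out_h);
    std::printf("%d x %d\n", out_w, out_h);  // prints 1333 x 750: longer side clamped
    return 0;
}
```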
if(w < h)
{
ow = size;
Please change to better variable names, like out_w or output_w.
int w = src_roi_width[i];
int h = src_roi_height[i];
int size = min_size;
int ow, oh;
Please remove additional space
int size = min_size;
int ow, oh;

float min_original_size = float(std::min(w, h));
Please do the type cast as below
float min_original_size = (float)std::min(w, h);
@@ -110,6 +110,7 @@ extern "C" SHARED_PUBLIC vx_node VX_API_CALL vxExtrppNode_remap(vx_graph graph,
extern "C" SHARED_PUBLIC vx_node VX_API_CALL vxExtrppNode_ResizebatchPD(vx_graph graph,vx_image pSrc,vx_array srcImgWidth,vx_array srcImgHeight,vx_image pDst,vx_array dstImgWidth,vx_array dstImgHeight,vx_uint32 nbatchSize);
extern "C" SHARED_PUBLIC vx_node VX_API_CALL vxExtrppNode_ResizeCropbatchPD(vx_graph graph,vx_image pSrc,vx_array srcImgWidth,vx_array srcImgHeight,vx_image pDst,vx_array dstImgWidth,vx_array dstImgHeight,vx_array x1,vx_array y1,vx_array x2,vx_array y2,vx_uint32 nbatchSize);
extern "C" SHARED_PUBLIC vx_node VX_API_CALL vxExtrppNode_ResizeCropMirrorPD(vx_graph graph,vx_image pSrc,vx_array srcImgWidth,vx_array srcImgHeight,vx_image pDst,vx_array dstImgWidth,vx_array dstImgHeight,vx_array x1,vx_array y1,vx_array x2,vx_array y2, vx_array mirrorFlag, vx_uint32 nbatchSize);
extern "C" SHARED_PUBLIC vx_node VX_API_CALL vxExtrppNode_ResizeMirrorNormalizebatchPD(vx_graph graph,vx_image pSrc,vx_array srcImgWidth,vx_array srcImgHeight,vx_image pDst,vx_array dstImgWidth,vx_array dstImgHeight,vx_array mean, vx_array std_dev, vx_array flip, vx_scalar chnShift,vx_uint32 nbatchSize);
Please either add a space after every comma or remove the spaces; it is mixed here.
else if (df_image == VX_DF_IMAGE_RGB)
{
// status = rppi_resize_mirror_normalize_u8_pkd3_batchPD_gpu((void *)data->hip_pSrc, data->srcDimensions, data->maxSrcDimensions, (void *)data->hip_pDst, data->dstDimensions, data->maxDstDimensions, data->mean, data->std_dev, data->mirror, data->chnShift, data->nbatchSize, data->rppHandle);
status = rppt_resize_mirror_normalize_gpu((void *)data->hip_pSrc, data->srcDescPtr, (void *)data->hip_pDst, data->dstDescPtr, data->d_dstImgSize, RpptInterpolationType::BILINEAR, data->mean, data->std_dev, data->mirror, data->d_roiTensorPtrSrc, data->roiType, data->rppHandle);
@SundarRajan28 If we are internally using the rppt call, shouldn't this be named like ResizeTensor? Why have we named it RMNBatchPD everywhere?
static vx_status VX_CALLBACK refreshResizeMirrorNormalizebatchPD(vx_node node, const vx_reference *parameters, vx_uint32 num, ResizeMirrorNormalizebatchPDLocalData *data)
{
vx_status status = VX_SUCCESS;
STATUS_ERROR_CHECK(vxCopyArrayRange((vx_array)parameters[6], 0, data->nbatchSize*3, sizeof(vx_float32),data->mean, VX_READ_ONLY, VX_MEMORY_TYPE_HOST));
Please add a space before and after the * operator. Also add a space after the comma that follows sizeof(datatype) in lines 63-69.
STATUS_ERROR_CHECK(vxCopyArrayRange((vx_array)parameters[2], 0, data->nbatchSize, sizeof(Rpp32u),data->srcBatch_height, VX_READ_ONLY, VX_MEMORY_TYPE_HOST));
STATUS_ERROR_CHECK(vxCopyArrayRange((vx_array)parameters[4], 0, data->nbatchSize, sizeof(Rpp32u),data->dstBatch_width, VX_READ_ONLY, VX_MEMORY_TYPE_HOST));
STATUS_ERROR_CHECK(vxCopyArrayRange((vx_array)parameters[5], 0, data->nbatchSize, sizeof(Rpp32u),data->dstBatch_height, VX_READ_ONLY, VX_MEMORY_TYPE_HOST));
for(int i = 0; i < data->nbatchSize; i++){
Space before opening bracket is required here
hipMemcpy(data->d_roiTensorPtrSrc, data->roiTensorPtrSrc, data->nbatchSize * sizeof(RpptROI), hipMemcpyHostToDevice);
#endif
}
if(data->device_type == AGO_TARGET_AFFINITY_CPU) {
Better to add as an else if condition
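A small sketch of the suggested control flow, with the affinity check reduced to an illustrative enum (the PR's data struct and HIP guard macro are not reproduced here):

```
#include <cstdio>

// Illustrative only: make the CPU branch an "else if" of the GPU branch so
// the two mutually exclusive affinity checks read as a single decision.
enum Affinity { AFFINITY_GPU, AFFINITY_CPU };

static void refresh(Affinity device_type) {
    if (device_type == AFFINITY_GPU) {
        // device-side copies (e.g. the hipMemcpy of the ROI tensor) go here
        std::printf("GPU refresh\n");
    } else if (device_type == AFFINITY_CPU) {
        // host-side refresh goes here
        std::printf("CPU refresh\n");
    }
}

int main() {
    refresh(AFFINITY_CPU);
    return 0;
}
```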
STATUS_ERROR_CHECK(vxQueryParameter(input_param, VX_PARAMETER_ATTRIBUTE_REF, &input, sizeof(vx_image)));
STATUS_ERROR_CHECK(vxQueryImage(input, VX_IMAGE_ATTRIBUTE_FORMAT, &df_image, sizeof(df_image)));
if(df_image != VX_DF_IMAGE_U8 && df_image != VX_DF_IMAGE_RGB)
{
Move bracket to previous line
/* if (df_image == VX_DF_IMAGE_U8 ){
status = rppi_crop_mirror_normalize_u8_pln1_batchPD_gpu((void *)data->cl_pSrc,data->srcDimensions,data->maxSrcDimensions,(void *)data->cl_pDst,data->dstDimensions,data->maxDstDimensions,data->start_x,data->start_y, data->mean, data->std_dev, data->mirror, data->chnShift ,data->nbatchSize,data->rppHandle);
}
else if(df_image == VX_DF_IMAGE_RGB) {
Please check with Shobana; if we are not using these, it is better to remove them.
data->mirror = (vx_uint32 *)malloc(sizeof(vx_uint32) * data->nbatchSize);
data->srcDimensions = (RppiSize *)malloc(sizeof(RppiSize) * data->nbatchSize);
data->dstDimensions = (RppiSize *)malloc(sizeof(RppiSize) * data->nbatchSize);
data->srcBatch_width = (Rpp32u *)malloc(sizeof(Rpp32u) * data->nbatchSize);
There are many params here which are not passed to the RPP call. Please check whether we are going to retain BatchPD or Tensor and make changes accordingly.
auto input = static_cast<Image*>(p_input);
auto mirror = static_cast<IntParam *>(p_mirror);
for(unsigned i = 0; i < mean.size(); i++)
{
Move the braces to the previous line; please follow the Google coding guidelines.
return output;
}
Remove this line
std::vector<float> _mean;
std::vector<float> _std_dev;
ParameterVX<int> _mirror;
constexpr static int MIRROR_RANGE [2] = {0, 1};
Remove extra spaces before MIRROR_RANGE
_dst_roi_height = vxCreateArray(vxGetContext((vx_reference)_graph->get()), VX_TYPE_UINT32, _batch_size);

vx_status width_status, height_status;
Remove the extra lines in between.
{
int min_size = 800;
int max_size = 1333;
Remove extra line
Can you please change the branch you have created the PR against to develop?
I have updated https://github.com/shobana-mcw/MIVisionX/tree/develop with the open-source changes.
STATUS_ERROR_CHECK(vxCopyArrayRange((vx_array)parameters[4], 0, data->nbatchSize, sizeof(Rpp32u), data->dstBatch_width, VX_READ_ONLY, VX_MEMORY_TYPE_HOST));
STATUS_ERROR_CHECK(vxCopyArrayRange((vx_array)parameters[5], 0, data->nbatchSize, sizeof(Rpp32u), data->dstBatch_height, VX_READ_ONLY, VX_MEMORY_TYPE_HOST));
for(int i = 0; i < data->nbatchSize; i++) {
data->srcDimensions[i].width = data->roiTensorPtrSrc[i].xywhROI.roiWidth = data->srcBatch_width[i];
I don't think data->srcDimensions is used. Check whether all these variables are used; if not, remove them.
Removed the unused variables from the file
std_dev_vx[3 * i] = _std_dev[0];
std_dev_vx[3 * i + 1] = _std_dev[1];
std_dev_vx[3 * i + 2] = _std_dev[2];
}
Can we change this to mem_copy?
mean_vx and std_dev_vx are vectors of size BS * 3, while _mean and _std_dev are vectors of size 3. If we opt for memcpy, it still needs to be done over a loop, so this seems to be the cleaner option.
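A self-contained sketch of the two options being compared, using a hypothetical batch_size and illustrative mean values; it shows why the memcpy variant still ends up looping over the batch:

```
#include <cstring>
#include <vector>

int main() {
    const unsigned batch_size = 4;                         // stand-in for BS
    std::vector<float> _mean = {0.485f, 0.456f, 0.406f};   // 3 per-channel values (illustrative)
    std::vector<float> mean_vx(batch_size * 3);            // BS * 3 values expected by the node

    // Option 1: the element-wise loop currently in the PR.
    for (unsigned i = 0; i < batch_size; i++) {
        mean_vx[3 * i]     = _mean[0];
        mean_vx[3 * i + 1] = _mean[1];
        mean_vx[3 * i + 2] = _mean[2];
    }

    // Option 2: memcpy of the 3-float source per image -- still a loop over
    // the batch, which is why it is not obviously cleaner than option 1.
    for (unsigned i = 0; i < batch_size; i++)
        std::memcpy(&mean_vx[3 * i], _mean.data(), 3 * sizeof(float));

    return 0;
}
```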
size = int(round(max_size * min_original_size / max_original_size));

if (((w <= h) && (w == size)) || ((h <= w) && (h == size)))
{
Was this sorted out?
auto mirror = static_cast<IntParam *>(p_mirror);
for(unsigned i = 0; i < mean.size(); i++) {
mean[i] = 0;
std_dev[i] = 1;
Why are we hardcoding the mean and std_dev values?
In the current image pipeline, the mean and std values were not being passed to the RMN calls; instead 0 and 1 were being passed. In the case of CMN, we directly pass 0 and 1 to the CMN node init. But the RMN Tensor call needs mean and std as vectors, so we are storing 0 and 1 in the vectors being passed to the RMN node.