Python type mismatch #1082

Open
wants to merge 31 commits into v3_develop

Conversation

borbrudar

Added an extra argument to addTensor to allow users to specify the underlying tensor data type.

Collaborator

@moratom moratom left a comment

I think this will still behave a bit unexpectedly due to how the C++ API works.

The int version, I believe, gets collapsed to uint8_t, for example.

Let's add tests (see nndata_tensor_test.py) to catch all the various edge cases.

borbrudar (Author) commented Jul 31, 2024

True, but that's due to the internals of how addTensor works, specifically:

        if(std::is_integral<_Ty>::value) {
            for(uint32_t i = 0; i < tensor.size(); i++) {
                vecData->data()[i + offset] = (uint8_t)tensor.data()[i];
            }
        } else {
            for(uint32_t i = 0; i < tensor.size(); i++) {
                *(uint16_t*)(&vecData->data()[2 * i + offset]) = fp32_to_fp16(tensor.data()[i]);
            }
        }

Tests from nndata_tensor_test.py seem to pass with the modified signatures, although the optional parameter doesn't appear to be optional.
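
For illustration (a standalone sketch, not the library's code), the integral branch above narrows every element to a single byte, so values outside 0–255 wrap around:

    #include <cstdint>
    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int> tensor{1, 255, 256, 300, -1};
        std::vector<std::uint8_t> stored(tensor.size());
        // Same cast as the integral branch above: each int is narrowed to uint8_t.
        for(std::size_t i = 0; i < tensor.size(); i++) {
            stored[i] = static_cast<std::uint8_t>(tensor[i]);
        }
        for(auto v : stored) std::cout << int(v) << ' ';  // prints: 1 255 0 44 255
        return 0;
    }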

Collaborator

@moratom moratom left a comment

Let's remove all of the specialized functions and instead do it like this:

  • By default we try to infer the type from the tensor
  • We allow overriding the type with a function argument

I believe we can do this in both C++ and Python.

And please add tests as well.
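
A minimal sketch of what that could look like (the inferDataType helper and the default-argument shape are assumptions, not the final API; assumes <type_traits> is included):

    // Hypothetical helper mapping C++ element types to dai::TensorInfo::DataType.
    template <typename _Ty>
    constexpr dai::TensorInfo::DataType inferDataType() {
        if(std::is_same<_Ty, std::uint8_t>::value) return dai::TensorInfo::DataType::U8F;
        if(std::is_same<_Ty, std::int8_t>::value) return dai::TensorInfo::DataType::I8;
        if(std::is_same<_Ty, std::uint16_t>::value) return dai::TensorInfo::DataType::FP16;
        if(std::is_same<_Ty, float>::value) return dai::TensorInfo::DataType::FP32;
        return dai::TensorInfo::DataType::INT;
    }

    // Hypothetical unified addTensor: the type is inferred by default and can be
    // overridden explicitly via the dataType argument.
    template <typename _Ty>
    NNData& addTensor(const std::string& name,
                      const xt::xarray<_Ty>& tensor,
                      dai::TensorInfo::DataType dataType = inferDataType<_Ty>()) {
        // ... conversion and storage keyed off dataType ...
        return *this;
    }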

Comment on lines 269 to 271
size_t sConvertedData = tensor.size();
if(dataType == dai::TensorInfo::DataType::FP16) sConvertedData *= 2;
else if(dataType == dai::TensorInfo::DataType::FP32 || dataType == dai::TensorInfo::DataType::INT) sConvertedData *= 4;
Collaborator

Let's do a switch here and cover all cases, so we can visually see that nothing is missing.
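
A sketch of that switch (the per-element sizes follow the original if/else; the exact enumerator list should be taken from dai::TensorInfo::DataType):

    size_t sConvertedData = tensor.size();
    switch(dataType) {
        case dai::TensorInfo::DataType::U8F:
        case dai::TensorInfo::DataType::I8:
            sConvertedData *= 1;
            break;
        case dai::TensorInfo::DataType::FP16:
            sConvertedData *= 2;
            break;
        case dai::TensorInfo::DataType::FP32:
        case dai::TensorInfo::DataType::INT:
            sConvertedData *= 4;
            break;
        // with no default case, any enumerator missing here shows up as a
        // compiler warning under -Wswitch
    }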

Author
Amended.

@borbrudar (Author)

Reworked; however, there are limitations to type conversions from Python to C++ and vice versa. For example, fp16 is not supported in pybind, so we have to cast it to a different type before we can pass it to C++ (I've chosen to upcast it to fp32). Similarly, fp64 is not supported in C++, so I downcast it to fp32.
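
As a standalone illustration of the fp64 path described above (not the PR's binding code, just the idea):

    #include <vector>

    // Downcast a Python float64 buffer to fp32 before it is handed to addTensor;
    // precision loss beyond float32 is accepted.
    std::vector<float> downcastFp64ToFp32(const std::vector<double>& src) {
        std::vector<float> dst;
        dst.reserve(src.size());
        for(double v : src) {
            dst.push_back(static_cast<float>(v));
        }
        return dst;
    }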

@@ -518,7 +617,7 @@ class NNData : public Buffer {

         return {};
     }
-#endif
+//#endif
Collaborator

Intentional?

Author

;)

@@ -198,7 +198,7 @@ class NNData : public Buffer {
     */
    span<std::uint8_t> emplaceTensor(TensorInfo& tensor);

-#ifdef DEPTHAI_XTENSOR_SUPPORT
+//#ifdef DEPTHAI_XTENSOR_SUPPORT
Collaborator

I think this is still needed?

Comment on lines 211 to 252
NNData& addTensor(const std::string& name, const std::vector<_Ty>& data, dai::TensorInfo::DataType dataType) {
    return addTensor<_Ty>(name, xt::adapt(data, std::vector<size_t>{1, data.size()}), dataType);
};
// addTensor vector overloads
NNData& addTensor(const std::string& name, const std::vector<int>& tensor) {
    return addTensor<int>(name, tensor, dai::TensorInfo::DataType::INT);
};
NNData& addTensor(const std::string& name, const std::vector<uint16_t>& tensor) {
    return addTensor<uint16_t>(name, tensor, dai::TensorInfo::DataType::FP16);
};
NNData& addTensor(const std::string& name, const std::vector<float>& tensor) {
    return addTensor<float>(name, tensor, dai::TensorInfo::DataType::FP32);
};
NNData& addTensor(const std::string& name, const std::vector<double>& tensor) {
    return addTensor<double>(name, tensor, dai::TensorInfo::DataType::FP64);
};
NNData& addTensor(const std::string& name, const std::vector<std::int8_t>& tensor) {
    return addTensor<std::int8_t>(name, tensor, dai::TensorInfo::DataType::I8);
};
NNData& addTensor(const std::string& name, const std::vector<std::uint8_t>& tensor) {
    return addTensor<std::uint8_t>(name, tensor, dai::TensorInfo::DataType::U8F);
};

// addTensor overloads
NNData& addTensor(const std::string& name, const xt::xarray<int>& tensor) {
    return addTensor<int>(name, tensor, dai::TensorInfo::DataType::INT);
};
NNData& addTensor(const std::string& name, const xt::xarray<uint16_t>& tensor) {
    return addTensor<uint16_t>(name, tensor, dai::TensorInfo::DataType::FP16);
};
NNData& addTensor(const std::string& name, const xt::xarray<float>& tensor) {
    return addTensor<float>(name, tensor, dai::TensorInfo::DataType::FP32);
};
NNData& addTensor(const std::string& name, const xt::xarray<double>& tensor) {
    return addTensor<double>(name, tensor, dai::TensorInfo::DataType::FP64);
};
NNData& addTensor(const std::string& name, const xt::xarray<std::int8_t>& tensor) {
    return addTensor<std::int8_t>(name, tensor, dai::TensorInfo::DataType::I8);
};
NNData& addTensor(const std::string& name, const xt::xarray<std::uint8_t>& tensor) {
    return addTensor<std::uint8_t>(name, tensor, dai::TensorInfo::DataType::U8F);
};
Collaborator

Let's add tests for these too

Author
Added.
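
For reference, a hedged sketch of how these overloads might be exercised from C++ (tensor names and values here are made up, not taken from the actual tests):

    dai::NNData nn;
    nn.addTensor("scores", std::vector<float>{0.1f, 0.9f});       // stored as FP32
    nn.addTensor("labels", std::vector<std::uint8_t>{1, 2, 3});   // stored as U8F
    nn.addTensor("ids", std::vector<int>{100, 200});              // stored as INT
    nn.addTensor("half", std::vector<uint16_t>{0x3C00});          // stored as FP16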
