Python type mismatch #1082
base: v3_develop
Conversation
I think this will still behave a bit unexpectedly due to how the C++ API works; the int version, I believe, gets collapsed to uint8_t, for example. Let's add tests (see nndata_tensor_test.py) to catch all the various edge cases.
True, but that's due to the internals of how
Let's remove all of the specialized functions and instead have it as such:
- By default we try to infer the type from the tensor
- We allow overriding the type with a function argument
We can do so both in C++ and Python, I believe.
And please add tests as well.
size_t sConvertedData = tensor.size();
if(dataType == dai::TensorInfo::DataType::FP16) sConvertedData *= 2;
else if(dataType == dai::TensorInfo::DataType::FP32 || dataType == dai::TensorInfo::DataType::INT) sConvertedData *= 4;
Let's do a switch here and cover all cases, so we can visually see that nothing is missing.
amended
Reworked. However, there are limitations to type conversions from Python to C++ and vice versa: e.g. fp16 is not supported in pybind, so we have to cast it to a different type before we can pass it to C++ (I've chosen to upcast it to fp32). Similarly, fp64 is not supported in C++, so I downcast it to fp32.
3cf81f7 to 3eb3d25
@@ -518,7 +617,7 @@ class NNData : public Buffer {
         return {};
     }
-#endif
+//#endif
Intentional?
;)
@@ -198,7 +198,7 @@ class NNData : public Buffer {
  */
 span<std::uint8_t> emplaceTensor(TensorInfo& tensor);

-#ifdef DEPTHAI_XTENSOR_SUPPORT
+//#ifdef DEPTHAI_XTENSOR_SUPPORT
I think this is still needed?
NNData& addTensor(const std::string& name, const std::vector<_Ty>& data, dai::TensorInfo::DataType dataType) {
    return addTensor<_Ty>(name, xt::adapt(data, std::vector<size_t>{1, data.size()}), dataType);
};
// addTensor vector overloads
NNData& addTensor(const std::string& name, const std::vector<int>& tensor) {
    return addTensor<int>(name, tensor, dai::TensorInfo::DataType::INT);
};
NNData& addTensor(const std::string& name, const std::vector<uint16_t>& tensor) {
    return addTensor<uint16_t>(name, tensor, dai::TensorInfo::DataType::FP16);
};
NNData& addTensor(const std::string& name, const std::vector<float>& tensor) {
    return addTensor<float>(name, tensor, dai::TensorInfo::DataType::FP32);
};
NNData& addTensor(const std::string& name, const std::vector<double>& tensor) {
    return addTensor<double>(name, tensor, dai::TensorInfo::DataType::FP64);
};
NNData& addTensor(const std::string& name, const std::vector<std::int8_t>& tensor) {
    return addTensor<std::int8_t>(name, tensor, dai::TensorInfo::DataType::I8);
};
NNData& addTensor(const std::string& name, const std::vector<std::uint8_t>& tensor) {
    return addTensor<std::uint8_t>(name, tensor, dai::TensorInfo::DataType::U8F);
};
// addTensor overloads
NNData& addTensor(const std::string& name, const xt::xarray<int>& tensor) {
    return addTensor<int>(name, tensor, dai::TensorInfo::DataType::INT);
};
NNData& addTensor(const std::string& name, const xt::xarray<uint16_t>& tensor) {
    return addTensor<uint16_t>(name, tensor, dai::TensorInfo::DataType::FP16);
};
NNData& addTensor(const std::string& name, const xt::xarray<float>& tensor) {
    return addTensor<float>(name, tensor, dai::TensorInfo::DataType::FP32);
};
NNData& addTensor(const std::string& name, const xt::xarray<double>& tensor) {
    return addTensor<double>(name, tensor, dai::TensorInfo::DataType::FP64);
};
NNData& addTensor(const std::string& name, const xt::xarray<std::int8_t>& tensor) {
    return addTensor<std::int8_t>(name, tensor, dai::TensorInfo::DataType::I8);
};
NNData& addTensor(const std::string& name, const xt::xarray<std::uint8_t>& tensor) {
    return addTensor<std::uint8_t>(name, tensor, dai::TensorInfo::DataType::U8F);
};
Let's add tests for these too
Added.
Added an extra argument to addTensor to allow users to specify the underlying tensor data type.