How to transform a point from depth camera space to world space? #50
I don't have a comprehensive answer for you, but the short answer is that the sensor takes a while to open and close (say 1-2 seconds, though I never measured it), so if you call your uvToWorld in each frame there is obviously a problem. You may want to let the sensor keep running in the background and find a way to query the uv through another function, for example via a shared variable that tells the sensor thread which uv you want to back-project; the thread can then save the result somewhere the main thread can read. Your uv is basically the i and j in LongDepthSensorLoop. Hope that helps.
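The shared-variable pattern described above can be sketched as follows. This is a minimal, self-contained illustration, not the plugin's actual code: `UvRequest`, `sensorLoop`, and the placeholder back-projection are all hypothetical stand-ins for members of `HL2ResearchMode` and its `LongDepthSensorLoop`.

```cpp
#include <atomic>
#include <chrono>
#include <mutex>
#include <optional>
#include <thread>

// Hypothetical request posted by the main thread.
struct UvRequest { int u; int v; };

std::mutex g_mtx;
std::optional<UvRequest> g_request;   // set by the main thread
std::optional<float> g_result;       // written by the sensor thread
std::atomic_bool g_running{true};

// Stand-in for the long-running sensor loop: it keeps streaming and
// services back-projection requests as they arrive.
void sensorLoop() {
    while (g_running) {
        std::optional<UvRequest> req;
        {
            std::lock_guard<std::mutex> lk(g_mtx);
            req = g_request;
            g_request.reset();
        }
        if (req) {
            // A real implementation would back-project (u, v) using the
            // current depth frame; this placeholder just combines them.
            float fake = static_cast<float>(req->u + req->v);
            std::lock_guard<std::mutex> lk(g_mtx);
            g_result = fake;
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
}
```

The main thread writes a `UvRequest` under the mutex, then polls (or waits on a condition variable) for `g_result`; the sensor stays open the whole time, so there is no repeated OpenStream/CloseStream cost per query.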
@petergu684 Thanks for your answer!
@Tolerm how did you get this working? When I try it, the compiler gives me an error about overloading `std::invoke`. Was there something special you did with your header file?
@EliasMosco Are you talking about this?

```cpp
struct HL2ResearchMode : HL2ResearchModeT<HL2ResearchMode>
{
    ......
private:
    ...
    std::atomic_bool m_isDepthSensorStreamOpen = true; // this boolean
    static void MyFunc(HL2ResearchMode* pHL2ResearchMode);
    ...
};
```

and in the definition:

```cpp
void HL2ResearchMode::MyFunc(HL2ResearchMode* pHL2ResearchMode)
{
    if (pHL2ResearchMode->m_isDepthSensorStreamOpen)
    {
        pHL2ResearchMode->m_longDepthSensor->OpenStream();
        pHL2ResearchMode->m_isDepthSensorStreamOpen = false;
    }
    ...
}
```

It's very simple; you can change it according to your demands. If you are looking for a way to control multiple sensor streams, see the official Microsoft docs.
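The essence of the guard above is "open the stream at most once, even if the function is called repeatedly". A minimal sketch of that idea, with `openStream()` as a hypothetical stand-in for `m_longDepthSensor->OpenStream()`:

```cpp
#include <atomic>

// Atomic flag: true means the stream still needs to be opened.
std::atomic_bool g_isStreamOpenPending{true};
int g_openCalls = 0;  // counts openings, for demonstration only

// Stand-in for m_longDepthSensor->OpenStream().
void openStream() { ++g_openCalls; }

void loopBody() {
    // exchange() flips the flag and returns its previous value in one
    // atomic step, so openStream() runs at most once even if loopBody
    // is entered concurrently from more than one thread.
    if (g_isStreamOpenPending.exchange(false)) {
        openStream();
    }
    // ... process frames ...
}
```

The original code uses a plain read-then-write on the atomic, which is fine when only one thread runs the loop; `exchange` closes the check-then-set race if that assumption ever changes.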
Hello @Tolerm, did you manage to get it to work after all? If so, do you happen to have a repository? I'm trying to achieve the same thing! |
Well, I'm working on the transformation between the PV camera and the depth camera, but I'm not sure the method I'm using now is correct. In principle they are the same cameras in the same modes according to the Research Mode API docs, so the transformation on HoloLens 2 should be the same as on HoloLens 1, and work done for HoloLens 1 should still be applicable. I mainly referred to the following:
If your work is the same as mine, i.e. take a (u,v) point in the PV camera image and transform it to a 3D point in Unity's coordinate system using the depth at that image point, then my current method is:
If your work is to transform a (u,v) point in the depth camera's image coordinate system to an (X,Y,Z,W) point in a given world coordinate system, you may do it like this:
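The details of the comment above are not preserved here, but the underlying back-projection math can be sketched independently. The pinhole intrinsics (`fx`, `fy`, `cx`, `cy`) and the row-major camera-to-world matrix below are assumptions for illustration; on the device, the Research Mode depth sensor exposes `MapImagePointToCameraUnitPlane` for the unprojection step, and the rig-to-world pose comes from the spatial locator.

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<float, 3>;
using Mat4 = std::array<std::array<float, 4>, 4>;  // row-major

// Pinhole back-projection: image point (u, v) with range d (metres)
// -> point in the camera frame. The long-throw depth value is the
// distance along the viewing ray, so the unit-plane direction is
// normalized before scaling by d.
Vec3 uvToCamera(float u, float v, float d,
                float fx, float fy, float cx, float cy) {
    float x = (u - cx) / fx;
    float y = (v - cy) / fy;
    float norm = std::sqrt(x * x + y * y + 1.0f);
    return {d * x / norm, d * y / norm, d / norm};
}

// Apply a 4x4 homogeneous camera-to-world transform (row-major).
Vec3 cameraToWorld(const Vec3& p, const Mat4& T) {
    Vec3 out{};
    for (int i = 0; i < 3; ++i)
        out[i] = T[i][0] * p[0] + T[i][1] * p[1] + T[i][2] * p[2] + T[i][3];
    return out;
}
```

One caveat when handing the result to Unity: Unity uses a left-handed coordinate system while the HoloLens world frame is right-handed, so a sign flip on one axis (commonly z) is typically needed after the transform.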
@Tolerm Thanks a lot for your answer! Could you provide more details about how you computed the intrinsics of the PV camera?
I first tried the UndistortedProjectionTransform here, but the matrix I got looked like this:
Thanks for this project! It is great and I got it working on HoloLens 2 (through Unity 2021.3.8f1c1) successfully!
Now I want to implement a function that receives a (u,v) point in the image coordinate system and transforms it to an (X,Y,Z) point in the Unity/HoloLens coordinate system using the long-throw depth information; to put it simply, it's a coordinate transformation. I'm new to C++, so I tried to imitate petergu's code. The code I added to HL2ResearchMode.cpp is:
The StartuvToWorld function above worked for the first call, but on the second call it hit an error at
pHL2ResearchMode->m_longDepthSensor->OpenStream()
and the HoloLens 2 app crashed. The Visual Studio output was:
Exception thrown at 0x00007FFE74686888 (HL2UnityPlugin.dll) in My_Project.exe: 0xC0000005: Access violation reading location 0x0000000000000000
Can anyone help solve this? Thanks a lot!