How can I get RealWorldPoints? #15
realWorldPoint is a sampled point from the depth map given by the Kinect, for example a point sampled from the mesh of a person the Kinect has found. The example programs show different examples of this, and there are multiple ways to obtain such points, depending on what you are trying to do. This library is written for SimpleOpenNI in Java; doing it in Python is beyond the scope of this repository.
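For concreteness, here is a minimal sketch of what "sampling a point from the depth map" can look like, assuming SimpleOpenNI (the pixel coordinates and variable names here are illustrative, not part of the toolkit):

```java
import SimpleOpenNI.*;

SimpleOpenNI context;

void setup() {
  context = new SimpleOpenNI(this);
  context.enableDepth();
}

void draw() {
  context.update();
  // depthMapRealWorld() returns one PVector (x, y, z in millimeters)
  // per depth pixel, stored row by row.
  PVector[] realWorldMap = context.depthMapRealWorld();
  int x = 320, y = 240;  // an arbitrary pixel near the center of a 640x480 depth image
  PVector realWorldPoint = realWorldMap[x + y * context.depthWidth()];
  println("real-world point: " + realWorldPoint);
}
```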
`PVector realWorldPoint = kpt.getDepthMapAt(startX, startY);` where startX and startY are X and Y coordinates of the Kinect image. Please get back soon!
See the calibration example; there is a method there called `getDepthMapAt`.
then

```java
kpt.setDepthMapRealWorld(kinect.depthMapRealWorld());
PVector realWorldPoint = kpt.getDepthMapAt(startX, startY);
```

where startX and startY are X and Y coordinates from the Kinect image.

I've done everything exactly as you've mentioned and as it is in all the tutorials. The x and y that we're passing to the getDepthMapAt() function are coordinates from the Kinect image, right?
```java
import controlP5.*;

// For Kinect's RGB stream + the KPT:
PImage currKinectFrameRGB;

void setup() {
  // Setting up the Kinect:
  opencv = new OpenCV(this, context.depthWidth(), context.depthHeight()); // What's this for?
  // Setting up the KPT:
}

void draw() {
  PVector realWorldPoint = kpt.getDepthMapAt(207, 222);
  print("ProjPoint1: ");
}
```

@genekogan This is the very simple program that I'm trying to get to run. I know the calibration is working because I've tested it in the CALIBRATION.pde file. Please take out some time to have a look. Regards
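One thing worth noting about the snippet above: draw() calls getDepthMapAt() without first calling kpt.setDepthMapRealWorld(), which the earlier comment shows must happen each frame. As a rough sketch of the missing pieces, assuming the constructor and loadCalibration() call shown in the calibration example (the projector size and calibration file name here are placeholders):

```java
import SimpleOpenNI.*;
// plus the KinectProjectorToolkit import used in the library's examples

SimpleOpenNI context;
KinectProjectorToolkit kpt;

void setup() {
  context = new SimpleOpenNI(this);
  context.enableDepth();
  kpt = new KinectProjectorToolkit(this, 1280, 800);  // placeholder projector size
  kpt.loadCalibration("calibration.txt");             // placeholder file name
}

void draw() {
  context.update();
  // Hand the current frame's real-world depth map to the toolkit
  // before sampling points from it.
  kpt.setDepthMapRealWorld(context.depthMapRealWorld());
  PVector realWorldPoint = kpt.getDepthMapAt(207, 222);
  println("realWorldPoint: " + realWorldPoint);
}
```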
@genekogan Please spare a few minutes of your time and read my previous comment. Thanks |
startX and startY in your example should be coordinates in the Kinect depth image (usually between 0-640 and 0-480 in the x and y directions, corresponding to the size of the depth image). Those two lines will translate this to a projector coordinate whose components are between 0 and 1 (agnostic to screen size). You still need to multiply these by your projector width and projector height if you haven't done that already. Look at projectedPointUno... is it between 0 and 1 on both axes? If so, try multiplying it by the projector width and height.
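A short sketch of that last scaling step, assuming kpt and realWorldPoint from the snippets above (projWidth and projHeight are illustrative names for your projector resolution, not toolkit API):

```java
// convertKinectToProjector() returns normalized coordinates in [0, 1];
// scale them by the projector resolution to get pixel coordinates.
PVector toProjectorPixels(PVector normalized, int projWidth, int projHeight) {
  return new PVector(normalized.x * projWidth, normalized.y * projHeight);
}

// Usage inside draw(), e.g. with a 1280x800 projector:
// PVector projectedPoint =
//     toProjectorPixels(kpt.convertKinectToProjector(realWorldPoint), 1280, 800);
```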
"So given real world PVector realWorldPoint, the projected coordinate is accessible via:
PVector projectedPoint = kpt.convertKinectToProjector(realWorldPoint);"
Referring to this, what exactly do you mean by "realWorldPoint"? Is it simply a 3D point from the Kinect depth stream, or something else? How do I get such points?
Also, if possible, could you tell me how to get such a 3D point in Python?
@genekogan
@2075 @kulbhushan-chand @dattasaurabh82