The right way to deproject pixel to point with align to depth frame. #5403
Comments
UPD: @BadMachine , getting the final result oscillating in a range of ±1 m is unacceptable, so it is more likely that the data path is not configured correctly. But in any event these steps are secondary to the actual verification of the user's C++ code. I'll make certain assumptions, as the snippet covers neither the configuration part nor the RGB features/image handling. So the things to verify are:
@BadMachine , looking deeper at the posted results of the deprojected data, I can see the depth fluctuates in the range [1.368, 1.413] m, i.e. about ±2.5 cm, which is roughly 1.8% of the range. Can you provide an example with valid depth values deviating within a 1 meter range?
@ev-mp Thank you for your reply! Here is the JSON preset I load:
{
"aux-param-autoexposure-setpoint": "1536",
"aux-param-colorcorrection1": "0.298828",
"aux-param-colorcorrection10": "-0",
"aux-param-colorcorrection11": "-0",
"aux-param-colorcorrection12": "-0",
"aux-param-colorcorrection2": "0.293945",
"aux-param-colorcorrection3": "0.293945",
"aux-param-colorcorrection4": "0.114258",
"aux-param-colorcorrection5": "-0",
"aux-param-colorcorrection6": "-0",
"aux-param-colorcorrection7": "-0",
"aux-param-colorcorrection8": "-0",
"aux-param-colorcorrection9": "-0",
"aux-param-depthclampmax": "65536",
"aux-param-depthclampmin": "0",
"aux-param-disparityshift": "0",
"controls-autoexposure-auto": "True",
"controls-autoexposure-manual": "5040",
"controls-color-autoexposure-auto": "True",
"controls-color-autoexposure-manual": "156",
"controls-color-backlight-compensation": "0",
"controls-color-brightness": "0",
"controls-color-contrast": "50",
"controls-color-gain": "64",
"controls-color-gamma": "300",
"controls-color-hue": "0",
"controls-color-power-line-frequency": "3",
"controls-color-saturation": "64",
"controls-color-sharpness": "50",
"controls-color-white-balance-auto": "True",
"controls-color-white-balance-manual": "4600",
"controls-depth-gain": "16",
"controls-laserpower": "300",
"controls-laserstate": "on",
"ignoreSAD": "0",
"param-amplitude-factor": "0",
"param-autoexposure-setpoint": "1536",
"param-censusenablereg-udiameter": "8",
"param-censusenablereg-vdiameter": "9",
"param-censususize": "8",
"param-censusvsize": "9",
"param-depthclampmax": "65536",
"param-depthclampmin": "0",
"param-depthunits": "1000",
"param-disableraucolor": "0",
"param-disablesadcolor": "0",
"param-disablesadnormalize": "0",
"param-disablesloleftcolor": "0",
"param-disableslorightcolor": "1",
"param-disparitymode": "0",
"param-disparityshift": "0",
"param-lambdaad": "751",
"param-lambdacensus": "6",
"param-leftrightthreshold": "10",
"param-maxscorethreshb": "2893",
"param-medianthreshold": "796",
"param-minscorethresha": "4",
"param-neighborthresh": "108",
"param-raumine": "6",
"param-rauminn": "3",
"param-rauminnssum": "7",
"param-raumins": "2",
"param-rauminw": "2",
"param-rauminwesum": "12",
"param-regioncolorthresholdb": "0.784736",
"param-regioncolorthresholdg": "0.565558",
"param-regioncolorthresholdr": "0.985323",
"param-regionshrinku": "3",
"param-regionshrinkv": "0",
"param-robbinsmonrodecrement": "25",
"param-robbinsmonroincrement": "2",
"param-rsmdiffthreshold": "1.65625",
"param-rsmrauslodiffthreshold": "0.71875",
"param-rsmremovethreshold": "0.809524",
"param-scanlineedgetaub": "13",
"param-scanlineedgetaug": "15",
"param-scanlineedgetaur": "30",
"param-scanlinep1": "155",
"param-scanlinep1onediscon": "160",
"param-scanlinep1twodiscon": "59",
"param-scanlinep2": "190",
"param-scanlinep2onediscon": "507",
"param-scanlinep2twodiscon": "493",
"param-secondpeakdelta": "647",
"param-texturecountthresh": "0",
"param-texturedifferencethresh": "1722",
"param-usersm": "1",
"param-zunits": "1000",
"stream-depth-format": "Z16",
"stream-fps": "30",
"stream-height": "720",
"stream-width": "1280"
}

In depth-quality-tool I see the fill rate is ~96%, and in the viewport I see a well-filled depth frame. Now I aligned the depth frame to RGB, which makes the output image much […]

Init stream:

```cpp
rs2::context ctx;
auto device = ctx.query_devices();
auto dev = device[0];
pipeline p;
config cfg;
string serial = dev.get_info(RS2_CAMERA_INFO_SERIAL_NUMBER);
cfg.enable_stream(RS2_STREAM_DEPTH, 848, 480, RS2_FORMAT_Z16, 30);
cfg.enable_stream(RS2_STREAM_COLOR, 848, 480, RS2_FORMAT_RGB8);
auto advanced = dev.as<advanced_mode>();
ifstream t("C:/Users/Bumpy/source/repos/RealMotion/RealMotion/config/config.json");
string config((istreambuf_iterator<char>(t)), istreambuf_iterator<char>());
advanced.load_json(config);
cfg.enable_device(serial);
rs2::align align_to_depth(RS2_STREAM_DEPTH);
rs2::align align_to_color(RS2_STREAM_COLOR);
auto profile = p.start(cfg);
rs2::pointcloud pc;
rs2::points points;
while (!QThread::currentThread()->isInterruptionRequested())
{
    rs2::frameset frames = p.wait_for_frames();
    frames = align_to_color.process(frames);
    rs2::depth_frame depth = frames.get_depth_frame();
    filter->set_filter(depth);
}
```

Filter function:

```cpp
void set_filter(rs2::depth_frame depth) {
    dec_filter.set_option(RS2_OPTION_FILTER_MAGNITUDE, 3);
    spat_filter.set_option(RS2_OPTION_FILTER_SMOOTH_ALPHA, 0.50f);
    spat_filter.set_option(RS2_OPTION_FILTER_MAGNITUDE, 2);
    spat_filter.set_option(RS2_OPTION_FILTER_SMOOTH_DELTA, 15);
    temp_filter.set_option(RS2_OPTION_FILTER_SMOOTH_ALPHA, 0.4f);
    temp_filter.set_option(RS2_OPTION_FILTER_SMOOTH_DELTA, 20.0f);
    rs2::disparity_transform depth_to_disparity(true);
    rs2::disparity_transform disparity_to_depth(true);
    hole_filter.set_option(RS2_OPTION_HOLES_FILL, 1);
    depth = dec_filter.process(depth);
    depth_to_disparity.process(depth);
    disparity_to_depth.process(depth);
    depth = spat_filter.process(depth);
    depth = hole_filter.process(depth);
}
```
Validating data. Code to show right palm 3D coordinates:

```cpp
float planarPoint3d[3];
float pix[2] = { toSend.captured[i.key()].x, toSend.captured[i.key()].y };
float pixel_distance_in_meters = depth.get_distance(toSend.captured[i.key()].x, toSend.captured[i.key()].y);
rs2_deproject_pixel_to_point(planarPoint3d, &inrist, pix, pixel_distance_in_meters);
QJsonObject point3D;
point3D.insert("x", QJsonValue::fromVariant(0.01 * floor(100 * planarPoint3d[0])));
point3D.insert("y", QJsonValue::fromVariant(0.01 * floor(100 * planarPoint3d[1])));
point3D.insert("z", QJsonValue::fromVariant(0.01 * floor(100 * planarPoint3d[2])));
if (i.key() == "Right palm") {
    qDebug() << "x: " << i.value().x << "y: " << i.value().y << endl;
    qDebug() << "coords: " << point3D;
}
```

Output from GIF:

```
coords: QJsonObject({"x":0.6900000000000001,"y":0.09,"z":1.32}) coords: QJsonObject({"x":0.7000000000000001,"y":0.1,"z":1.3900000000000001}) coords: QJsonObject({"x":0.6900000000000001,"y":0.1,"z":1.42}) coords: QJsonObject({"x":0.68,"y":0.1,"z":1.48}) coords: QJsonObject({"x":0,"y":0,"z":0}) coords: QJsonObject({"x":0,"y":0,"z":0}) coords: QJsonObject({"x":0,"y":0,"z":0}) coords: QJsonObject({"x":0.27,"y":0.06,"z":1.74}) coords: QJsonObject({"x":0.18,"y":0.03,"z":1.73}) coords: QJsonObject({"x":0.09,"y":-0.02,"z":1.76}) coords: QJsonObject({"x":0,"y":-0.02,"z":1.77}) coords: QJsonObject({"x":-0.09,"y":-0.02,"z":1.77}) coords: QJsonObject({"x":-0.14,"y":-0.02,"z":1.74}) coords: QJsonObject({"x":-0.2,"y":-0.02,"z":1.74}) coords: QJsonObject({"x":-0.23,"y":-0.05,"z":1.72}) coords: QJsonObject({"x":-0.29,"y":-0.05,"z":1.71}) coords: QJsonObject({"x":-0.34,"y":-0.05,"z":1.69}) coords: QJsonObject({"x":0,"y":0,"z":0}) coords: QJsonObject({"x":0,"y":0,"z":0}) coords: QJsonObject({"x":-0.51,"y":-0.05,"z":1.68}) coords: QJsonObject({"x":0,"y":0,"z":0}) coords: QJsonObject({"x":0,"y":0,"z":0}) coords: QJsonObject({"x":0,"y":0,"z":0}) coords: QJsonObject({"x":0,"y":0,"z":0}) coords: QJsonObject({"x":-0.6900000000000001,"y":-0.06,"z":1.78}) coords: QJsonObject({"x":-0.6900000000000001,"y":-0.06,"z":1.8}) coords: QJsonObject({"x":-0.7000000000000001,"y":-0.06,"z":1.81}) coords: QJsonObject({"x":-0.71,"y":-0.09,"z":1.84}) coords: QJsonObject({"x":0,"y":0,"z":0}) coords: QJsonObject({"x":-0.67,"y":-0.09,"z":1.9100000000000001}) coords: QJsonObject({"x":-0.68,"y":-0.09,"z":1.94}) coords: QJsonObject({"x":-0.65,"y":-0.09,"z":1.94}) coords: QJsonObject({"x":-0.63,"y":-0.09,"z":1.97}) coords: QJsonObject({"x":-0.64,"y":-0.1,"z":1.99}) coords: QJsonObject({"x":0,"y":0,"z":0}) coords: QJsonObject({"x":0,"y":0,"z":0}) coords: QJsonObject({"x":-0.62,"y":-0.1,"z":2.05}) coords: QJsonObject({"x":-0.62,"y":-0.1,"z":2.05}) coords: QJsonObject({"x":-0.63,"y":-0.1,"z":2.07})
coords: QJsonObject({"x":-0.63,"y":-0.1,"z":2.07}) coords: QJsonObject({"x":-0.62,"y":-0.1,"z":2.06}) coords: QJsonObject({"x":-0.62,"y":-0.1,"z":2.06}) coords: QJsonObject({"x":-0.62,"y":-0.1,"z":2.06}) coords: QJsonObject({"x":-0.62,"y":-0.1,"z":2.06}) coords: QJsonObject({"x":0,"y":0,"z":0}) coords: QJsonObject({"x":0,"y":0,"z":0}) coords: QJsonObject({"x":0,"y":0,"z":0}) coords: QJsonObject({"x":0,"y":0,"z":0}) coords: QJsonObject({"x":0,"y":0,"z":0}) coords: QJsonObject({"x":0,"y":0,"z":0}) coords: QJsonObject({"x":0,"y":0,"z":0}) coords: QJsonObject({"x":-0.63,"y":0.04,"z":1.98}) coords: QJsonObject({"x":-0.62,"y":0.04,"z":1.94}) coords: QJsonObject({"x":-0.65,"y":0.04,"z":1.93}) coords: QJsonObject({"x":-0.68,"y":0.01,"z":1.93}) coords: QJsonObject({"x":-0.6900000000000001,"y":0,"z":1.86}) coords: QJsonObject({"x":-0.71,"y":-0.03,"z":1.85}) coords: QJsonObject({"x":-0.73,"y":-0.06,"z":1.8}) coords: QJsonObject({"x":0,"y":0,"z":0}) coords: QJsonObject({"x":0,"y":0,"z":0}) coords: QJsonObject({"x":-0.74,"y":-0.08,"z":1.7}) coords: QJsonObject({"x":0,"y":0,"z":0}) coords: QJsonObject({"x":0,"y":0,"z":0}) coords: QJsonObject({"x":-0.74,"y":-0.08,"z":1.62}) coords: QJsonObject({"x":-0.7000000000000001,"y":-0.08,"z":1.6}) coords: QJsonObject({"x":0,"y":0,"z":0}) coords: QJsonObject({"x":-0.61,"y":-0.05,"z":1.51}) coords: QJsonObject({"x":-0.5700000000000001,"y":-0.05,"z":1.48}) coords: QJsonObject({"x":-0.51,"y":-0.05,"z":1.45}) coords: QJsonObject({"x":0,"y":0,"z":0}) coords: QJsonObject({"x":0,"y":0,"z":0}) coords: QJsonObject({"x":0,"y":0,"z":0}) coords: QJsonObject({"x":0,"y":0,"z":0}) coords: QJsonObject({"x":0,"y":0,"z":0}) coords: QJsonObject({"x":0,"y":0,"z":0}) coords: QJsonObject({"x":-0.22,"y":0.32,"z":4.47})
```

PS: I think that for some reason my filter function doesn't work at all.
@BadMachine,
while in the second post the direction is reversed and the depth is aligned to color:
Thus the application flow is still not clear. Please try to present the case and the issue you have in a concise and unambiguous manner, as it is really not effective to try and guess the missing parts.
@ev-mp But the depth of the joint pixel-point is 0? Here is the code I wrote to catch the frame when the depth of "Right palm" is 0:

```cpp
float planarPoint3d[3];
float pix[2] = { toSend.captured[i.key()].x, toSend.captured[i.key()].y };
float pixel_distance_in_meters = depth.get_distance(toSend.captured[i.key()].x, toSend.captured[i.key()].y);
rs2_deproject_pixel_to_point(planarPoint3d, &inrist, pix, pixel_distance_in_meters);
QJsonObject point3D;
point3D.insert("x", QJsonValue::fromVariant(0.01 * floor(100 * planarPoint3d[0])));
point3D.insert("y", QJsonValue::fromVariant(0.01 * floor(100 * planarPoint3d[1])));
point3D.insert("z", QJsonValue::fromVariant(0.01 * floor(100 * planarPoint3d[2])));
if (i.key() == "Right palm") {
    if (planarPoint3d[2] == 0)
    {
        imshow("image", toSend.input.getMat(ACCESS_READ));
        waitKey(30);
        //cv::imwrite("wrong depth.jpg", toSend.input);
    }
}
```

If I understand you right, I should use some specific functions to get the nearest pixel's depth with a non-zero value, right? Also I'll try to show you the processes of my program, picture related:
@BadMachine , thanks for the explanation. For clarification:
There is still something that seems not right with the "Aligned Color": I would expect that image to be much "noisier" and to have many more "zero" pixels, as in the corresponding depth image. When using the aligned RGB image, make sure that the pose estimator skips the areas with no valid RGB data (zero/black pixels). Note also that, as opposed to depth pixels, where zero represents no data, you may actually get a valid RGB pixel with black color; you will then have to decide whether this color was obtained from the sensor, or was assigned during alignment because the RGB pixel could not find its correspondence in the depth frame.
> "The alignment takes place before sending the frame to the pose estimator"

> "The pose estimator extracts the tracking features from the RGB image only."

> "The 3d coordinates are extracted from the original depth image."

```cpp
float pixel_distance_in_meters = depth.get_distance(toSend.captured[i.key()].x, toSend.captured[i.key()].y);
rs2_deproject_pixel_to_point(planarPoint3d, &inrist, pix, pixel_distance_in_meters);
```
The skeleton tracking for the right hand seems to locate the palm outside of the textured (aligned) area, right in the black area. See the highlighted red dot below. At the very least it is on the border line, so it makes sense that there would be no corresponding depth data. Once you manage to configure the estimator to move the palm's location away from the invalid/boundary data towards the […]
@ev-mp |
I just realized that in order to enhance the depth<->color alignment required by tracking, you must also ensure that the temporal sync between the color and depth frame capture times is within very strict boundaries.
@ev-mp thanks for advice |
@ev-mp

```cpp
frames = frames.apply_filter(dec_filter).apply_filter(spat_filter).apply_filter(temp_filter).apply_filter(hole_filter);
auto frames_aligned = align_to_depth.process(frames);
```

Aligned frames: […] Also, your advice about desynced frames was god damn right, I completely forgot about it. Thank you again!
@BadMachine , that's correct: post-processing shall be performed before alignment, to avoid smearing of the generated artifacts. See a similar reply in another thread.
Issue Description
I'm trying to get accurate points in 3D space using the "rs2_deproject_pixel_to_point" function.
I have 2 problems with it:
Steps of the program
Grabbing frames and intrinsics
Getting the necessary pixels in the frame
Finding the 3D coordinates of those pixels
RESULT
Final questions
Am I using those functions right?
Did I make a mistake when aligning the color frame to the depth frame?
Did I get the wrong intrinsics?
What am I doing wrong? :)