
L515 Point Cloud projection on 2D image with lattice cell on points cloud #9658

Closed
Ziling169BENZHI opened this issue Aug 20, 2021 · 11 comments


@Ziling169BENZHI


Required Info
Camera Model: L515
Firmware Version: (not provided)
Operating System & Version: Windows 10
Kernel Version (Linux Only): N/A
Platform: PC
SDK Version: 2.<4>.<4>
Language: C++ / OpenCV
Segment: Robot

Issue Description

I tried to project 3D point cloud points onto 2D image pixel coordinates, and I double-checked rs2_project_point_to_pixel(...). However, the output seems to be two float values in meters rather than integer pixel coordinates.
Can I ask the reason for this output and how to fix it?

Later I tried the code from https://github.com/IntelRealSense/librealsense/blob/master/wrappers/pcl/pcl-color/rs-pcl-color.cpp,
specifically this method:
std::tuple<int, int, int> RGB_Texture(rs2::video_frame texture, rs2::texture_coordinate Texture_XY)

It works, but the output image has some lattice cells on the data. Do you have any idea how I can improve the result to a smooth face without the cells?

Best
@MartyG-RealSense

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Aug 20, 2021

Hi @Ziling169BENZHI I have only seen this grid effect once before, on a case about the SR300 camera model, and it was thought to be related to the infrared stream.

#5095

Could you try commenting out or removing the line cfg.enable_stream(RS2_STREAM_INFRARED); from the PCL script to see whether this resolves the problem, please? The script does not seem to need an infrared stream anyway if your camera model is equipped with an RGB sensor (which the L515 is).

https://github.com/IntelRealSense/librealsense/blob/master/wrappers/pcl/pcl-color/rs-pcl-color.cpp#L170

Regarding the issue with rs2_project_point_to_pixel: I have not seen this particular problem described on the L515 before. The L515 case #8655, which provides scripting that uses rs2_project_point_to_pixel, may be worth reading though.


@Ziling169BENZHI
Author

Hi Marty,

Thanks for the quick reply. However, I am not using cfg.enable_stream(RS2_STREAM_INFRARED); in my code.
I only used the

rs_config.enable_stream(RS2_STREAM_DEPTH, 1024, 768, RS2_FORMAT_Z16, 30);
rs_config.enable_stream(RS2_STREAM_COLOR, 1920, 1080, RS2_FORMAT_RGB8, 30);

and I used the method in rs-pcl-color.cpp to get the 2D image plane for the pc.
std::tuple<int, int, int> RGB_Texture(rs2::video_frame texture, rs2::texture_coordinate Texture_XY)
{
    // Get width and height of the texture in pixels
    int width  = texture.get_width();
    int height = texture.get_height();

    // Convert normalized texture coordinates (u, v) to pixel coordinates, clamped to the frame
    int x_value = min(max(int(Texture_XY.u * width  + .5f), 0), width - 1);
    int y_value = min(max(int(Texture_XY.v * height + .5f), 0), height - 1);

    int bytes   = x_value * texture.get_bytes_per_pixel();  // byte offset within the row
    int strides = y_value * texture.get_stride_in_bytes();  // byte offset of the row
    int Text_Index = bytes + strides;

    const auto New_Texture = reinterpret_cast<const uint8_t*>(texture.get_data());

    // RGB components to save in the tuple
    int NT1 = New_Texture[Text_Index];
    int NT2 = New_Texture[Text_Index + 1];
    int NT3 = New_Texture[Text_Index + 2];

    return std::tuple<int, int, int>(NT1, NT2, NT3);
}

I am not sure whether texture.get_stride_in_bytes() is causing the problem?

@MartyG-RealSense
Collaborator

Does the lattice disappear if you use 1280x720 for color instead of 1920x1080?

@MartyG-RealSense
Collaborator

Hi @Ziling169BENZHI Do you require further assistance with this case, please? Thanks!

@Ziling169BENZHI
Author

> Hi @Ziling169BENZHI Do you require further assistance with this case, please? Thanks!

Hi Marty,

Thank you for the answer! The problem was resolved after I changed the depth stream.
rs_config.enable_stream(RS2_STREAM_DEPTH, 1024, 768, RS2_FORMAT_Z16, 30);
rs_config.enable_stream(RS2_STREAM_COLOR, 960, 540, RS2_FORMAT_RGB8, 30);

However, I am experiencing a new problem: the normals for my XYZ points are not calculated. Would you be able to take a look? I am not quite sure where it goes wrong; I just copied the calculation procedure from the point cloud .cpp.

@MartyG-RealSense
Collaborator

In your script, did you set up int in the line before std::tuple?

https://github.com/IntelRealSense/librealsense/blob/master/wrappers/pcl/pcl-color/rs-pcl-color.cpp#L54

@MartyG-RealSense
Collaborator

Hi @Ziling169BENZHI Do you require further assistance with this case, please? Thanks!

@Ziling169BENZHI
Author

> Hi @Ziling169BENZHI Do you require further assistance with this case, please? Thanks!

Yeah, I still need assistance with calculating the normals of the point cloud. I simply used the code from rs_export.cpp:
// Compute normals
auto profile = points.get_profile().as<rs2::video_stream_profile>();
auto width = profile.width(), height = profile.height();
std::vector<std::array<size_t, 3>> faces;
for (size_t x = 0; x < width - 1; ++x) {
    for (size_t y = 0; y < height - 1; ++y) {
        auto a = y * width + x, b = y * width + x + 1,
             c = (y + 1) * width + x, d = (y + 1) * width + x + 1;
        if (vertices[a].z && vertices[b].z && vertices[c].z && vertices[d].z
            && fabs(vertices[a].z - vertices[b].z) < kThreshold && fabs(vertices[a].z - vertices[c].z) < kThreshold
            && fabs(vertices[b].z - vertices[d].z) < kThreshold && fabs(vertices[c].z - vertices[d].z) < kThreshold)
        {
            if (index_map.count(a) == 0 || index_map.count(b) == 0 ||
                index_map.count(c) == 0 || index_map.count(d) == 0)
                continue;
            faces.push_back({ index_map[a], index_map[d], index_map[b] });
            faces.push_back({ index_map[d], index_map[a], index_map[c] });

            rs2::vec3d point_a = { vertices[a].x, -1 * vertices[a].y, -1 * vertices[a].z };
            rs2::vec3d point_b = { vertices[b].x, -1 * vertices[b].y, -1 * vertices[b].z };
            rs2::vec3d point_c = { vertices[c].x, -1 * vertices[c].y, -1 * vertices[c].z };
            rs2::vec3d point_d = { vertices[d].x, -1 * vertices[d].y, -1 * vertices[d].z };

            auto n1 = cross(point_d - point_a, point_b - point_a);
            auto n2 = cross(point_c - point_a, point_d - point_a);

            index_to_normals[index_map[a]].push_back(n1);
            index_to_normals[index_map[a]].push_back(n2);
            index_to_normals[index_map[b]].push_back(n1);
            index_to_normals[index_map[c]].push_back(n2);
            index_to_normals[index_map[d]].push_back(n1);
            index_to_normals[index_map[d]].push_back(n2);
        }
    }
}

for (size_t i = 0; i != result.size(); ++i)
{
    auto normals_vec = index_to_normals[i];
    rs2::vec3d sum = { 0, 0, 0 };
    for (auto& n : normals_vec)
        sum = sum + n;
    if (normals_vec.size() > 0) {
        auto normal = sum.normalize();
        result[i].nx = normal.x;
        result[i].ny = normal.y;
        result[i].nz = normal.z;
    }
    else {
        std::cout << "not right\n";
        result[i].nx = 0.0f;
        result[i].ny = 0.0f;
        result[i].nz = 0.0f;
    }
}

However, it did not work, and I am not sure where exactly I got it wrong. I did not use a std::tuple to store the result.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Sep 7, 2021

Another RealSense L515 user wanted to calculate point cloud normals in #8982 in May 2021, but that particular question did not receive an answer.

In #5120 a RealSense team member recommended calculating normals outside of the SDK.


My research also found a PCL script for RealSense that computes normals:

https://www.twblogs.net/a/5b8e5b682b71771883449636

#include <iostream>
#include <pcl/visualization/cloud_viewer.h>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/conversions.h>
#include <pcl/features/normal_3d.h>
#include <boost/thread/thread.hpp>
#include <pcl/common/common_headers.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/visualization/pcl_visualizer.h>
#include <pcl/console/parse.h>
#include <pcl/filters/statistical_outlier_removal.h>

int main()
{

	pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_old(new pcl::PointCloud<pcl::PointXYZ>);
	pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_downsampled(new pcl::PointCloud<pcl::PointXYZ>);
	pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
	pcl::io::loadPCDFile("test_pcd.pcd", *cloud_old);

	//Use a voxelSampler to downsample
	pcl::VoxelGrid<pcl::PointXYZ> voxelSampler;
	voxelSampler.setInputCloud(cloud_old);
	voxelSampler.setLeafSize(0.01f, 0.01f, 0.01f);
	voxelSampler.filter(*cloud_downsampled);

	//Use a filter to reduce noise
	pcl::StatisticalOutlierRemoval<pcl::PointXYZ> statFilter;
	statFilter.setInputCloud(cloud_downsampled);
	statFilter.setMeanK(10);
	statFilter.setStddevMulThresh(0.2);
	statFilter.filter(*cloud);

	// Create the normal estimation class, and pass the input dataset to it
	pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
	ne.setInputCloud(cloud);

	// Create an empty kdtree representation, and pass it to the normal estimation object.
	// Its content will be filled inside the object, based on the given input dataset (as no other search surface is given).
	pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>());
	ne.setSearchMethod(tree);

	// Output datasets
	pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);

	// Use all neighbors in a sphere of radius 1cm

	ne.setRadiusSearch(0.01);
	// Compute the features
	ne.compute(*normals);

	boost::shared_ptr<pcl::visualization::PCLVisualizer> viewer(new pcl::visualization::PCLVisualizer("3D Viewer"));
	viewer->setBackgroundColor(0, 0, 0);

	viewer->addPointCloud<pcl::PointXYZ>(cloud, "sample cloud");
	viewer->setPointCloudRenderingProperties(pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 3, "sample cloud");
	viewer->addPointCloudNormals<pcl::PointXYZ, pcl::Normal>(cloud, normals, 10, 0.2, "normals");
	viewer->addCoordinateSystem(1.0);
	viewer->initCameraParameters();
	while (!viewer->wasStopped())
	{
		viewer->spinOnce(100);
		boost::this_thread::sleep(boost::posix_time::microseconds(100000));
	}
	return 0;
}

@MartyG-RealSense
Collaborator

Hi @Ziling169BENZHI Do you require further assistance with this case, please? Thanks!

@MartyG-RealSense
Collaborator

Case closed due to no further comments received.
