Lidar sensor support #169
Any hint on how to add the usage of a Lidar?
Hi @wolfgangschwab, sorry for the delay. I am not familiar with sensors and I wanted to provide some context (please tag me next time, I check those notifications with higher priority). You can find the implementation of the sensor here (reminder to myself: update the links to the GitHub repo as soon as it becomes available). An interesting point for us is the following:
You can create a sensor using a provided helper class: Manager.hh. Here is an example. Despite what's written here, it seems that the returned Lidar pointer doesn't need to be cast, since it should already contain all the methods you need. Extracting data from the Lidar should go through a callback that can be configured on the sensor. The alternative is going through the transport interface, but it doesn't blend very well with the current architecture of gym-ignition. Though, you could give it a try. Note that in this way you might miss a few lidar frames from time to time. I would suggest developing a simple C++ prototype that:
What puzzles me is the last point. The loop to gather data is this one, but it seems to me that it just adds noise: there is no data gathered from the scene. Lidars are listed as implemented sensors, so I suspect that I'm missing a piece here. Check the Camera sensor, which I'm sure already works (you can find a few demos around); it seems that indeed there are missing pieces in the Lidar class. In particular, both Camera and Lidar inherit from the same base class. Given this information, maybe you could start trying to make the Lidar work in a similar way [1].

[1] I would suggest opening an issue upstream to ask the developers how they suggest to proceed. Note that they're in the middle of a migration from Bitbucket to GitHub this month.
A quick update on my previous comment. I realized that they also have a GPU implementation of the Lidar: GpuLidarSensor.cc. It seems to be complete. All I wrote above about the simple example still holds; it seems that the connection with an existing scene happens here. If you manage to make progress on this, please drop a comment. When you have a working prototype without using the [...]
Hi @diegoferigo, btw I used gpu_lidar_sensor_plugin.cc as one of the sources, which I found helpful. So I think I missed a part here. Do I have to connect the sensor to the scene? I did not find a scene in gym-ignition. I also created a new Manager (SensorManager). Is this a good or a bad idea? I know for sure that the Lidar is working in Ignition. Maybe it is different compared to a camera because it is not using the rendering of visual objects but using physics on the GPU. I think I read something like this, but I might be wrong.
@wolfgangschwab thanks for the hints! I forgot to check the integration folder for additional tests, the one you linked is definitely helpful.
I think we have to use our own Manager, so having one (or even more than one) where needed is not a problem.
As you may have noticed, all the integration tests accept the name of the engine from [...]
```xml
<!-- GUI plugins -->
<plugin filename='GzScene3D' name='3D View'>
<ignition-gui>
<title>3D View</title>
<property type='bool' key='showTitleBar'>false</property>
<property type='string' key='state'>docked</property>
</ignition-gui>
<engine>ogre</engine>
<scene>scene</scene>
<ambient_light>0.4 0.4 0.4</ambient_light>
<background_color>0.8 0.8 0.8</background_color>
<camera_pose>-6 0 6 0 0.5 0</camera_pose>
</plugin>
```
[...] I'm not really sure whether we have to create a new scene associated with this sensor instance or whether we can somehow use the default scene. And what about a headless simulation, where there is no GUI? I'm a bit lost; I never had to deal with rendering so far, and these edges of the simulator are still a black box to me. My suggestion is to start by understanding where to get a scene with all the entities already present in the ECM. For sure you shouldn't have to be responsible for manually aligning the scene with the ECM; there should be something that keeps them aligned.
I think that the publishers are there, but they are not initialized / they do not stream anything. You need to call [...]
After 3 sensors (IMU, camera, and Lidar) have been created, the following output is generated: [...]
The relevant part of the code is: [...]
So the sensors are listed in the topic list. The callbacks are never called. And the error message [...]
Not really :/ Let's take a step back. What if you create a world file that contains your model, and your model also contains the lidar sensor? Are you able to read the published data using the [...]? This test is completely independent from gym-ignition, and it would provide a working SDF configuration.
Only a short interim status: I'm currently struggling with some link errors. I will come back when they are solved.
@diegoferigo, using this line in the code in GazeboWrapper [...] So I changed the code to [...]
Can you tell me how I can solve this error? |
What I meant here is whether you can provide an SDF file that works for you with plain Ignition Gazebo, launching it through [...]
Note that GazeboWrapper is most likely not the right place to put this code. Though, feel free to use it for now for these preliminary experiments.
I added the model to the world file. Then I noticed that I got two robots in the GUI, because I loaded it twice. :-o :-) And I noticed that I still had other issues in my code that I tried to solve prior to the test with the extended world file. Thanks for your feedback regarding the link error, I'll try this. PS: I was already afraid that GazeboWrapper is not the right place for this code. But, as you mentioned, I just wanted to try whether this approach works or not.
Meanwhile I'm able to get the output from a laser into a callback function. Now I'm unsure where to place the callback correctly. I would expect that I add the subscription to the topic and the callback function into IgnitionRobot.cpp. @diegoferigo, [...]
The callback function in IgnitionRobot.cpp is working. Now I need to add the world to the scene. @diegoferigo, [...]
@wolfgangschwab That's cool! Do you have any pushed branch where I can have a look? We don't have any sensor yet, so it will help me find the right location. Likely it will be the Robot class, but how it is exposed depends on the type of data you have extracted. I never worked with lidars and I have no idea what kind of data is typically needed, its format, and how it is consumed downstream. Then, I want to point out again the new bindings, based on the new ScenarI/O APIs. Yesterday I merged them into the devel branch (and, therefore, into the nightly channel); it might make sense to integrate sensors directly there. Feel free to have a look in the meantime.
Can you elaborate? What do you mean by that?
I can upload my working version of gym-ignition with the changes I made. But there are many changes that are meanwhile meaningless and should be deleted, and currently I do not have the time to do the cleanup. I've added a scene object to attach the sensors to a scene, but the scene doesn't contain the world. Currently I only have a cube in the scene that is recognized by the lidar, but no part of the world SDF file. So I need to bring these two parts together. I added some error messages to IgnitionRobot to see the output of the lidar data. The propagation of this data to the training script does not yet work. I noticed that you did much work on the ScenarI/O API, but I couldn't find the time to have a deeper look at it. What is the principal difference to the old programs? Btw, it seems to me that there is a dependency missing for generating gympp_bindings. When I add something to gympp, the generated gympp_bindings.py is changed, but the generation of gympp_bindings.so does not work because it is still using an old file (gympp_bindingsPYTHON_wrap.cxx.o or so). I could not find the missing part, so I deleted the build folder when I had problems.
(Weigh my words accordingly, because once again I have no experience with the rendering system.)
If I understood correctly, there is no sync between the rendered scene of your sensor and the simulated world. Is that right? Did you manage to check in the upstream code how the simpler camera system works?
The main problem with the previous architecture was that the task could only operate on a single Robot object. For instance, if you had another model (it could be a ball on top of a table) and you wanted to get its position, it was not possible. Now the task, instead of controlling only the Robot, has knowledge of the entire World. In short, before, our Gazebo bindings were just the [...]
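To give a rough idea of the new World-centric interface, here is a minimal sketch in Python (the world file and model names are hypothetical, and the method names are from memory of the ScenarI/O bindings, so they might differ slightly in your version):

```python
from scenario import gazebo as scenario_gazebo

# Start the simulator and insert a world (hypothetical SDF file)
gazebo = scenario_gazebo.GazeboSimulator(step_size=0.001, rtf=1.0, steps_per_run=1)
assert gazebo.insert_world_from_sdf("my_world.sdf")
gazebo.initialize()

# The task now receives the World, not a single Robot
world = gazebo.get_world()

# Both the robot and any other object (e.g. a ball on a table)
# are plain models that can be queried in the same way
robot = world.get_model("my_robot")
ball = world.get_model("ball")

print("Models in the world:", world.model_names())
print("Ball position:", ball.base_position())
```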
You are right, this is a longstanding problem that affects SWIG bindings. After the first generation, altering the headers does not create a build dependency that triggers a rebuild. We have this situation in many of our projects in robotology. The workaround is to [...]
Upstream CMake issue on this: https://gitlab.kitware.com/cmake/cmake/-/issues/4147. I think it is working fine only for [...]
The robot I'm using has a camera, but I haven't tried to get data from the camera sensor. The Lidar seems to produce less data to investigate, so it is easier to check.
I think that I've solved the problem in principle, but there are still some issues. The solution is surprisingly simple. I thought that I needed to add a sensor object and create a scene, but this is all done in the background by the server object you are already using. A short description of what is needed to get this running:

[...]

Now I see lidar data in the Python scripts. I still have some issues. One issue is that the process seems to eat memory. What is confusing to me is that I could not get it working when I added the sensor plugin to the model instead of to the world.
Oh ok... this is one of the reasons why I suggested making it work outside gym-ignition first, in #169 (comment) (always start from a working configuration). If you look at the example provided upstream, the sensors system is added to the world (so that it can be synced with all the simulated objects inserted into the world), and the lidar sensor is added to the model. In general terms, I don't much like reading from transport topics when there's the possibility to instantiate and control the sensor directly from C++, but let's handle one thing at a time.
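As a rough sketch of that layout (modeled on the upstream gpu_lidar example world; the plugin and element names are from memory and may differ between Ignition/SDF versions, so treat this as an assumption rather than a verified configuration):

```xml
<sdf version="1.7">
  <world name="lidar_world">
    <!-- The Sensors system lives in the world, so it stays in sync
         with every entity inserted into the world -->
    <plugin
      filename="ignition-gazebo-sensors-system"
      name="ignition::gazebo::systems::Sensors">
      <render_engine>ogre2</render_engine>
    </plugin>

    <model name="my_robot">
      <link name="lidar_link">
        <!-- The lidar sensor is attached to a link of the model -->
        <sensor name="gpu_lidar" type="gpu_lidar">
          <topic>lidar</topic>
          <update_rate>10</update_rate>
          <lidar> <!-- older SDF versions use <ray> instead -->
            <scan>
              <horizontal>
                <samples>640</samples>
                <min_angle>-1.396263</min_angle>
                <max_angle>1.396263</max_angle>
              </horizontal>
            </scan>
            <range>
              <min>0.08</min>
              <max>10.0</max>
            </range>
          </lidar>
        </sensor>
      </link>
    </model>
  </world>
</sdf>
```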
It seems that the sensor publishers also have to be removed during [...]. And another question: I found the parameter [...]. Btw, I should move to your new ScenarI/O API. I did not move over yet, as I first wanted to get a stable solution. Maybe I should do it now. :-(
Why don't we just reset the pose of the model, instead of removing it and creating it again as a new model? It might be easier to reset the position than to take care of removing every part of the model (including the sensors).
Unfortunately this is an upstream problem we're already aware of. I haven't faced this situation with sensors yet, but the problem is similar with the robot controllers. Controllers are inserted as model plugins and they do not get unloaded when the model is removed. This is quite a problem, because if you remove and insert models very often as we do, even if you program the plugins to properly handle their execution when their model is no longer there, they still consume memory. It is, under many aspects, a memory leak, and there's no solution I'm aware of :/ A workaround we're applying during our training experiments is destroying the simulator after a number of rollouts and creating it again. It's not optimal, but it works robustly. I did a quick benchmark with ScenarI/O and the overhead of doing it every rollout is not huge, about 1.3x (this is the worst-case scenario; you could reset it after 100 / 1000 rollouts and there would be no overhead).
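A minimal sketch of that workaround in Python (the `make_simulator` helper, the world file, and the rollout counts are hypothetical; adapt the teardown to the API of your version):

```python
from scenario import gazebo as scenario_gazebo

def make_simulator():
    # Hypothetical helper: build a fresh simulator with its world
    gazebo = scenario_gazebo.GazeboSimulator(step_size=0.001, rtf=1.0, steps_per_run=1)
    assert gazebo.insert_world_from_sdf("my_world.sdf")
    gazebo.initialize()
    return gazebo

ROLLOUTS_PER_SIMULATOR = 100  # hypothetical value, tune to your memory budget
gazebo = make_simulator()

for rollout in range(10_000):
    # ... run one rollout: insert the model, step the simulation, remove the model ...

    # Periodically destroy and recreate the simulator to reclaim the memory
    # held by plugins that never get unloaded
    if (rollout + 1) % ROLLOUTS_PER_SIMULATOR == 0:
        del gazebo  # let the destructor tear down the server
        gazebo = make_simulator()
```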
You can configure the number of iterations (i.e. the number of steps executed every run) in the [...]
This is the very first logic that was implemented in gym-ignition. Then, as soon as we started working with complex robots like the Panda or iCub, we switched to the current logic of removing and re-inserting the model. In theory, this operation is not necessary, and you can reset the base and joints to the original position. But this is not the entire story. Simulated models often have hidden states that should be reset as well; think of the integrator of a PID, for instance. Removing and inserting the model automatically takes care of these states, whereas by just resetting the configuration it becomes the user's responsibility to do that. Unfortunately, in most cases it requires deep knowledge of what's running under the hood, and it's error prone. If you're quite confident that you properly handle all these hidden states, then I don't see any problems. Note that you cannot change it using [...]
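For reference, a minimal sketch of the reset-in-place alternative (the model name and initial pose are hypothetical, and methods like `reset_base_pose` and `reset_joint_positions` are from memory of the ScenarI/O Gazebo bindings, so double-check them against your version):

```python
# `world` is a handle obtained from gazebo.get_world(), as in the earlier sketch.
# Reset the model in place instead of removing and re-inserting it.
model = world.get_model("my_robot").to_gazebo()

# Move the floating base back to its initial pose (position + wxyz quaternion)
model.reset_base_pose([0.0, 0.0, 0.5], [1.0, 0.0, 0.0, 0.0])

# Bring the joints back to the initial configuration with zero velocity
model.reset_joint_positions([0.0] * model.dofs())
model.reset_joint_velocities([0.0] * model.dofs())

# Caveat: hidden states (e.g. the integral term of a PID controller)
# are NOT reset by this and must be handled explicitly by the user.
```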
Hi @wolfgangschwab, have you managed to make the lidar work in the end?
Hi @TadielloM, [...] So currently there is nothing completed that I could provide.
While we wait for sensor support from the low level (i.e. integration into the C-level API), you could use ropy to get the sensor data into Python by doing the following:

```python
from scenario import gazebo as scenario_gazebo
import numpy as np
import ropy.ignition as ign
import matplotlib.pyplot as plt

gazebo = scenario_gazebo.GazeboSimulator(step_size=0.001, rtf=1.0, steps_per_run=1)

# this is the example from `ign gazebo gpu_lidar_sensor`
assert gazebo.insert_world_from_sdf("/usr/share/ignition/ignition-gazebo4/worlds/gpu_lidar_sensor.sdf")
gazebo.initialize()

# Fix: Topics are not available until after the first run
gazebo.run(paused=True)

with ign.Subscriber("/lidar") as lidar:
    gazebo.run()
    lidar_msg = lidar.recv()

# neat transition to numpy
lidar_data = np.array(lidar_msg.ranges).reshape(
    (lidar_msg.vertical_count, lidar_msg.count)
)
lidar_data[lidar_data == np.inf] = lidar_msg.range_max

# has all the bells and whistles of a lidar message
print(
    f"""
Message type: {type(lidar_msg)}
Some more examples:
\tRange: ({lidar_msg.range_min},{lidar_msg.range_max})
\tReference Frame: {lidar_msg.frame}
\tData shape: {lidar_data.shape}
"""
)

# visualize the lidar data
# (not too meaningful I fear, but with some fantasy
# you can recognize the [upside-down] playground)
fig, ax = plt.subplots()
ax.imshow(lidar_data / lidar_data.max(), cmap="gray")
plt.show()
```

Note: [...]

Note 2: If you want to get the messages in your simulation loop, you will need to do some math to match the main loop's frequency with the sensor's frequency (see #296 (comment)). Alternatively - if you don't mind sacrificing reproducibility - you can just call [...]

Docs: https://robotics-python.readthedocs.io/en/latest/api_reference.html#ropy.ignition.Subscriber

Edit: I just realized that you will need the current [...]
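Regarding Note 2 above, here is a minimal sketch of the frequency matching (assuming the 1 ms step size from the snippet and a hypothetical 10 Hz lidar update rate; the real rate comes from the sensor's SDF):

```python
# Hypothetical rates: adjust them to your world / sensor configuration
physics_step = 0.001   # seconds advanced by each gazebo.run() with steps_per_run=1
lidar_rate_hz = 10     # assumed <update_rate> of the lidar

# Number of simulator runs between two consecutive lidar messages
runs_per_lidar_msg = int(round(1.0 / (lidar_rate_hz * physics_step)))  # -> 100

with ign.Subscriber("/lidar") as lidar:
    for _ in range(runs_per_lidar_msg):
        gazebo.run()
    lidar_msg = lidar.recv()
```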
Awesome @FirefoxMetzger, thanks a lot for working on this. While waiting for #199, relying on topics is a great workaround! What's nice is that it can also be applied to other missing features of gym-ignition that are already implemented as plain Ignition Gazebo plugins.
Expect a new release soon 😉
I would like to add sensor data to the framework, especially lidar sensors.
As sensors are not part of the development plan, I would appreciate any hint on how to add this to the framework.