Implications of high Command - Feedback latency #219
Comments
If this were implemented, then we would also need to implement a version-detection message to ensure backward compatibility with the previous message format.
I think that the software should rely on the "in_motion" field in the Robot Status message (ROS_MSG_ROBOT_STATUS). This should be the indicator that shows when the robot has finished motion and the FB has converged with CMD. Having said that... the MotoROS code needs an update to fix the "in_motion" flag. Currently, that flag only indicates that all motion increments have been submitted to the robot's motion planner. It doesn't actually indicate whether the robot is still moving due to the internal averaging filter. So, Eric and I need to find a better detection mechanism for "in_motion".
@ted-miller: is the "better in_motion" implementation something you could look into?
Yes. Eric and I already have something coded up. I just need to test this week on a live controller before I submit a PR.
A "better in_motion" implementation was submitted in the pull request #227. The "in_motion" status will now stay on until the feedback position is within the START_MAX_PULSE_DEVIATION (10 pulses) range of the command position. This should insure that once the "in_motion" status is false, a new trajectory can be initiated without getting the ROS_RESULT_INVALID_DATA_START_POS (3011) error. There might be some latency for that to happen as pointed out in this issue from @andreaskoepf. If there are people wanting to start up trajectory sooner, this is my suggestion. You need to keep track of your command position and then monitor for the queue count (you can send the ROS_CMD_CHECK_QUEUE_CNT message #200102). When the queue count is at zero, you can start sending a new trajectory from your last command position (not the feedback position). This was the original intent of the "in_motion" signal that use to be link to the queue count = 0 (prior to PR#227), but I don't think it was ever used that way. People would see the In_motion signal turn off and then grab the current feedback position to start planning the next motion. In retrospect, it was confusing. So the new In_motion, should make it easier for most user in the future. |
We just ran into this error while using PR #215. We were using the action interface to the motoman driver on the ROS side to execute repeated moves with no issues, but during our desired task we needed to temporarily use the point streaming interface. When exiting point streaming and beginning a different move, we were greeted with the fairly common start-position error. I can confirm that the approach @EricMarcil suggested above resolved it for us. The change in MotoROS v1.8.1 was a real life-saver for us. Thanks for everyone's contributions.
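For reference, a minimal sketch of the "wait for in_motion to clear" approach with the post-PR #227 semantics; `get_robot_status` is a hypothetical accessor for the data carried by ROS_MSG_ROBOT_STATUS, not an actual driver call:

```python
import time

def wait_until_motion_done(get_robot_status, poll_period=0.05, timeout=30.0):
    """Wait until the robot status reports in_motion == False.

    With the MotoROS >= 1.8.1 behavior described above, in_motion only clears
    once feedback has converged to within START_MAX_PULSE_DEVIATION of the
    command position, so this is enough to avoid the start-position error.

    get_robot_status: hypothetical accessor returning an object with an
    `in_motion` boolean (populated from ROS_MSG_ROBOT_STATUS).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if not get_robot_status().in_motion:
            return True
        time.sleep(poll_period)
    return False
```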
For the YRC1000 and YRC1000micro, if the latency is causing difficulty in your application you can improve the response time by using the High Accuracy Path Control Function, which is a standard function. You can find details about it in the YRC1000 General Operator's Manual (Manual: RE-CSO-A051 (Japan) or 178645-1 (US)).

In the example job, the 2nd point is the same position as the first one; it is just there to force a motion after enabling the HTRAJON function, and the TIMER T=3.00 is to give time for the change to apply before starting ROS. (The change gradually takes effect over a small period of time to avoid a sudden change in the middle of a motion, so that time could probably be reduced or removed if it is not critical right at the beginning of your motion.)

Now, this will not completely remove the latency. There are still other components such as communication, inertia and filtering that are present, but it does show a significant improvement. I hope this is helpful.
@EricMarcil this seems like a great improvement. Would your example completely replace the INIT_ROS.JBI from the repo, or should we keep the extra instructions from the existing job?
@garyedwards: You are right. That job was for testing with a different application than ROS. The code needs to be merged with the ROS_INIT job. I suggest getting the current position, storing it in a local variable, and then moving to that position, so the robot won't actually move from its current position. The reason for this is to make sure that the HTRAJON command is activated before starting to move with ROS. The 3-second timer is there because the HTRAJON command is not applied instantly, since the command can be called during motion and we want the transition to be smooth. Though I have not tested it yet, I believe the timer could be reduced; the transition would then take place during the first few seconds of the ROS-generated motion. I've updated the ROS_INIT job accordingly.
I'm attaching a copy of the file here (I had to zip it because the .JBI extension was not allowed).
@ted-miller @gavanderhoorn @EricMarcil: We'd like some information on the control lag breakdown, relating to a YRC1000 or YRC1000micro and a GP8 or similar:
If the response is a positive one we will be purchasing the robot today, and we hope to contribute to the repository in the months to come.
Let me start by saying that the MotoROS driver was designed for flexibility and tolerance of communication lag; it was not designed for peak-performance streaming. However, some people have made a custom version for streaming. I haven't personally experimented with it, but you can find the latest version under PR #215. There is also a video showing an application of it.

Additionally, the code for the robot driver is open source and can be modified using the MotoPlus SDK (a purchased option). So, if the existing ROS interface is not adequate for your needs, you could access the motion API directly. As for the polling of position, the rate can be changed by editing the relevant defined constant in the driver source.
@EricMarcil in the graph above "GP8 Motion Latency", are the x-axis units time in milliseconds, or do they represent the number of commanded trajectory points? If it is time, it seems to indicate the robot begins to respond to your commands within 20-30 ms. I've been unable to get our GP12 to begin responding to motion commands in less than 120-150 ms, and have seen even worse minimum response times on a GP180 we've been working with. That is with the new INIT_ROS job you posted above, as well as the S2C-1584 parameter enabled. A couple of questions:
Thanks for the help! Attached is a recording of position vs. time showing the delayed reaction I mentioned (with the new INIT_ROS, plus S2C-1584 enabled).
@smurray47: in the graph above, the units are in ms.
@EricMarcil thanks for the quick response. I should clarify that when I say "we're not using the ring buffer", I don't mean that we've modified the MotoPlus code to remove it; I just mean we're not intentionally "pre-filling" it with any amount of the trajectory, and are instead streaming each point just in time. So the data is presumably still coming off the ring buffer, it just never holds much extra data at any point in time. The units on the x-axis in the graph I posted are seconds. We've implemented our own client side and are measuring latency by logging all the points we receive and send. We'll do some measurements directly with Wireshark and will report back.
Hi @EricMarcil. I helped @smurray47 gather some data with Wireshark to confirm the high latency that he was seeing in his comment above. I've attached a trajectory plot for one of the joints that we captured with Wireshark. The plot shows joint commands and feedback plotted against the timestamps from a Wireshark capture. This data shows about 120-150 ms of delay, which matches what @smurray47 showed earlier. Some notes about our setup:
We haven't been able to reduce the latency any further than this, and we're hoping you might have some more insight into what could be causing it.
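For anyone repeating this kind of measurement, one way to turn logged command/feedback samples into a single latency number is to resample both series onto a common time base and find the lag that maximizes their cross-correlation. A rough sketch only, assuming you have already extracted per-joint timestamps and positions from the capture:

```python
import numpy as np

def estimate_latency(cmd_t, cmd_q, fb_t, fb_q, dt=0.001):
    """Estimate the command->feedback delay in seconds for one joint.

    cmd_t/cmd_q and fb_t/fb_q are timestamps (s) and positions taken from the
    capture; a positive result means the feedback lags the command.
    """
    t0 = max(cmd_t[0], fb_t[0])
    t1 = min(cmd_t[-1], fb_t[-1])
    t = np.arange(t0, t1, dt)              # common, uniform time base
    cmd = np.interp(t, cmd_t, cmd_q)
    fb = np.interp(t, fb_t, fb_q)
    cmd -= cmd.mean()
    fb -= fb.mean()
    corr = np.correlate(fb, cmd, mode="full")
    lags = np.arange(-len(t) + 1, len(t)) * dt
    return lags[np.argmax(corr)]
```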
In order to evaluate the realtime control capabilities of an SDA10D (DX100) with the Motoman ROS driver, we conducted a couple of realtime control experiments at Xamla. We modified the MotoPlus part of the driver to report not only joint states but also additional values like the command position. We found that our DX100 with the mpMeiIncrementMove() command has a relatively high latency for executing commands of at least 150 ms, which cannot be explained by the physical limitations of the motors. We think a processing step like a moving-average smoothing filter in the Yaskawa motor controllers could explain the observed behavior; e.g. Eric Marcil of Yaskawa America writes in a forum post regarding CMD-FB latency: "That command goes through an averaging filter for motion smoothness and then PID control loop command the motion to reduce the error between the feedback position and command position."

We asked Yaskawa whether it would be possible to reduce the system latency, but we learned that it is not possible to reduce the latency significantly and that even the minor possible reductions would result in a complete loss of warranty. Although the CMD delay is a significant problem for ROS-based realtime control loops, like force-torque control, we managed to get this working to some extent with work-arounds like look-ahead. Some videos of the FT behaviour that we managed to get working are: Admittance Control and Impedance Control
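The exact look-ahead scheme is not spelled out here, but one simple way to tolerate a roughly constant CMD-FB delay in an outer control loop is to compare feedback against the command that was in effect about one latency period earlier rather than against the newest command. A hypothetical sketch of that idea (the 0.15 s figure is just the latency reported above; this is not Xamla's actual implementation):

```python
from collections import deque

class DelayedCommandBuffer:
    """Keep a short history of commanded positions so an outer (e.g. force)
    control loop can compare feedback against the command that was actually
    in effect roughly one latency period ago, instead of the newest command.

    The latency value (~0.15 s here) has to be identified experimentally.
    """

    def __init__(self, latency=0.15):
        self.latency = latency
        self.history = deque()              # (timestamp, commanded position)

    def push(self, t, q_cmd):
        self.history.append((t, q_cmd))
        # Forget entries much older than the delay window.
        while self.history and self.history[0][0] < t - 2.0 * self.latency:
            self.history.popleft()

    def delayed_cmd(self, t):
        """Return the command issued closest to (t - latency)."""
        if not self.history:
            raise ValueError("no commands recorded yet")
        target = t - self.latency
        return min(self.history, key=lambda e: abs(e[0] - target))[1]
```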
I would like to share some plots that we recorded about a year ago:
1. Single Joint Sine Trajectory

In this plot the yellow q_cmd line represents the position that has been commanded, meaning the sample was generated after the command had been processed by mpMeiIncrementMove(). The green q_fb line is the joint feedback reported by the Motoman driver (which is normally published on the /joint_states topic). You can see a clear phase shift of >100 ms between the yellow and green lines. The light blue forces_abs line is the previously zeroed norm reading of an ATI Gamma force-torque sensor. The temporal correlation between the measured absolute force (~abs acceleration) and the joint feedback indicates that the joint feedback is probably the actual position of the robot, reported with only minimal latency, e.g. read out directly from the joint encoders.

2. Force-Torque based Move Until Contact
The plot was generated while touching a wooden board with the SDA10D. In this plot you can see the operation of our TVP extension to the Motoman driver. After reaching the FT threshold we send a new setpoint to move the robot away from the collision. The reception of the new setpoint inside the DX100 controller is reflected by the red q_cmd line, which confirms the command from the MotoPlus side of the driver. The TVP controller (also realtime in MotoPlus) then generates new commands for the internal incremental moves while respecting the configured acceleration and velocity limits. The TVP controller output can be seen in the yellow q_inc line. The joint feedback signal is again plotted in green.

To classify the reaction time of the robot, the time between crossing the FT threshold and reaching maximum force can be used, which was about 95 ms in this experiment. The actual control delay can be seen in the distance in time and shape between the yellow (command) and green (feedback) plots.
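As a small aside, that reaction-time figure can be computed directly from the logged force signal. A sketch, assuming t and force are the recorded timestamps and force-norm samples (names are illustrative):

```python
import numpy as np

def reaction_time(t, force, threshold):
    """Time between the force norm first crossing `threshold` and reaching
    its maximum (about 95 ms in the experiment described above).
    Assumes the threshold is actually crossed within the recording."""
    i_cross = int(np.argmax(force >= threshold))  # first sample above threshold
    i_peak = int(np.argmax(force))                # sample with maximum force
    return t[i_peak] - t[i_cross]
```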
Why this might be relevant in general...
We with the DX100, and others with an FS100 (e.g. see #217), have observed that it happens quite frequently that the MotoPlus part of the driver rejects a trajectory planned by MoveIt! because it detects a deviation between the current command position and the start position of the trajectory. The code checking the first trajectory point is in MotionServer.c#L1152-L1161. In general I consider this check to be very important from a safety standpoint and it should definitely not be removed.

The problem: when the robot position used for planning is read 'too early' (e.g. while the robot is actually still moving), the planned trajectory will likely not pass the start-point deviation check.
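For context, the effect of that check can be illustrated in a few lines of Python (this is only an illustration of the logic, not the actual C code in MotionServer.c), using the 10-pulse START_MAX_PULSE_DEVIATION tolerance mentioned earlier:

```python
START_MAX_PULSE_DEVIATION = 10  # pulses, as referenced earlier in the thread

def start_point_ok(cmd_pulses, traj_start_pulses):
    """Accept a trajectory only if its first point is within the allowed
    pulse deviation of the current command position on every axis; otherwise
    the driver replies with ROS_RESULT_INVALID_DATA_START_POS (3011)."""
    return all(abs(c - s) <= START_MAX_PULSE_DEVIATION
               for c, s in zip(cmd_pulses, traj_start_pulses))
```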
In many cases, planning with MoveIt! is done from the current position to a specified goal position. The current position is read from the current robot state, which is filled with the values reported by the robot driver via /joint_states. As the plots above show, there is a temporal delay between the command position and the feedback position.

Now the question is when the trajectory execution 'really' ends and all controllers have converged. To test this, we at Xamla added 'convergence' detection code to the Motoman driver on the ROS side which waits for N consecutive joint-feedback reports with very small distance between them. With N > 50 we could perform longer test runs with thousands of moves, but unfortunately it always took relatively long to wait for this kind of convergence. The effect can be clearly seen in a speed-test video in the form of waits between consecutive motions. While the actual movement of the robot is clearly fast enough for normal industrial operation, the convergence wait time is too high.
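A minimal sketch of that kind of convergence detection (the tolerance, N, and the get_joint_feedback accessor are illustrative assumptions, not the actual Xamla implementation):

```python
import time

def wait_for_convergence(get_joint_feedback, n_required=50, tol=1e-4,
                         poll_period=0.01, timeout=10.0):
    """Declare the motion 'converged' after n_required consecutive feedback
    samples in which no joint moved more than tol (rad).

    get_joint_feedback: hypothetical callable returning the latest joint
    positions (e.g. the values published on /joint_states).
    """
    last = None
    stable = 0
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        q = get_joint_feedback()
        if last is not None and all(abs(a - b) < tol for a, b in zip(q, last)):
            stable += 1
            if stable >= n_required:
                return True
        else:
            stable = 0
        last = q
        time.sleep(poll_period)
    return False
```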
Ideas/Discussion
Todo for me: