Waypoint Generation and Loss Architecture #5
I think the waypoints shown in your video are not normal. Could you please try removing the noise added to the expert during data collection and see whether that is still the case?
But since the GPS also has noise on the online leaderboard, shouldn't it be added during collection to maintain the same setting? Did the TCP dataset remove all sensor noise, like GPS noise, camera distortion, etc.?
No, not the noise in the sensors. I mean the
Hi, in the meantime I've solved the waypoint generation issue: I am now able to fetch the waypoints from the intermediate route traces. Is there any code example I can refer to in order to understand the attention-guidance mechanism? It is still not entirely clear to me. My current understanding is the following:
Control Branch
Trajectory Branch
Hi again 😄
I have a few questions regarding the waypoint generation and loss calculation.
First of all, there are several feature-vector representations in the paper (i.e., $\mathbf{j_m}$, $\mathbf{j^{traj}}$, $\mathbf{j^{ctl}}$). However, I think there are some ambiguities related to them. What I assumed is the following:
Multi-Step Control Prediction Part:

```python
hidden = GRU([control_action, j], hidden)  # first hidden is zeros(256)
j = FeatureEncoder(hidden)
control_action = PolicyHead(j)
```
Waypoint Prediction Part:

```python
hidden = GRU([waypoint, j], hidden)  # first hidden is zeros(256), first waypoint is [0, 0]
j = FeatureEncoder(hidden)
delta_waypoint = WaypointHead(j)
waypoint = waypoint + delta_waypoint
```
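To make the autoregressive loop I'm describing concrete, here is a minimal numpy sketch of the waypoint branch (the control branch would be analogous). The GRU cell, the linear "FeatureEncoder" and "WaypointHead", and all dimensions are toy stand-ins of my own, not the actual TCP weights or sizes (the paper's hidden state is 256):

```python
import numpy as np

rng = np.random.default_rng(0)
HID, FEAT = 8, 8  # toy dimensions; TCP uses a 256-dim hidden state

def gru_cell(x, h, W, U, b):
    """Standard GRU update; W, U, b pack the update/reset/candidate gates."""
    z = 1 / (1 + np.exp(-(x @ W[0] + h @ U[0] + b[0])))  # update gate
    r = 1 / (1 + np.exp(-(x @ W[1] + h @ U[1] + b[1])))  # reset gate
    n = np.tanh(x @ W[2] + (r * h) @ U[2] + b[2])        # candidate state
    return (1 - z) * h + z * n

in_dim = 2 + FEAT                                # [waypoint, j] concatenated
W = rng.normal(size=(3, in_dim, HID)) * 0.1
U = rng.normal(size=(3, HID, HID)) * 0.1
b = np.zeros((3, HID))
W_enc = rng.normal(size=(HID, FEAT)) * 0.1       # toy linear "FeatureEncoder"
W_head = rng.normal(size=(FEAT, 2)) * 0.1        # toy linear "WaypointHead"

h = np.zeros(HID)                 # first hidden is zeros
waypoint = np.zeros(2)            # first waypoint is [0, 0]
j = rng.normal(size=FEAT) * 0.1   # stands in for the fused scene feature

waypoints = []
for _ in range(4):                # predict 4 future waypoints
    h = gru_cell(np.concatenate([waypoint, j]), h, W, U, b)
    j = h @ W_enc                         # j = FeatureEncoder(hidden)
    waypoint = waypoint + j @ W_head      # waypoint += WaypointHead(j)
    waypoints.append(waypoint)

waypoints = np.stack(waypoints)   # (4, 2) cumulative ego-frame waypoints
```

The key property is that each step feeds the previous step's waypoint and feature back in, so the predicted trajectory unrolls one offset at a time.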
I'd be very happy if you could give me an idea about the degree of correctness of my assumptions above regarding your model.
Waypoint Prediction:
The question is about the generation of ground-truth waypoints. One natural solution might be to use the GNSS data from future frames. However, when the data is collected with Roach, the waypoints become very noisy due to the fluctuations in control actions (a typical RL issue). I am attaching a video here. Note that in this video the waypoints are strided (every fourth future GNSS point). When I take the subsequent future data points directly, they nearly form a cluster of points rather than a spline-like curve that shows the future trajectory. An example of the non-strided case is here.
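For reference, the strided sampling I'm describing could be sketched like this (the recorded-position array, the stride of 4, and the waypoint count are my own illustration, not values from the TCP code):

```python
import numpy as np

def strided_waypoints(positions, current_idx, stride=4, num_wp=4):
    """Take every `stride`-th future position as a ground-truth waypoint.

    positions: (T, 2) array of recorded world-frame (x, y) positions.
    Returns a (num_wp, 2) array of future waypoints.
    """
    idx = current_idx + stride * np.arange(1, num_wp + 1)
    return positions[idx]

# Example: a car moving along +x at 0.5 m per frame
positions = np.stack([np.arange(20) * 0.5, np.zeros(20)], axis=1)
wps = strided_waypoints(positions, current_idx=0)  # frames 4, 8, 12, 16
```

Striding spreads consecutive samples apart, which is why the clustered, non-strided points turn back into a curve-like trace.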
Finally, I am assuming that the waypoints are predicted with respect to the car's reference frame, not the world's. Then, the average amplitude and the average rotation of the vectors formed by subsequent waypoints are fed to the longitudinal and lateral PID controllers, respectively.
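My understanding of the ego-frame transform and the PID inputs can be sketched as follows. The function names, the averaging scheme, and the 2D rotation are my own illustration of the idea, not TCP's actual controller code:

```python
import numpy as np

def to_ego_frame(world_wps, ego_pos, ego_yaw):
    """Rotate and translate world-frame waypoints into the car's frame."""
    c, s = np.cos(-ego_yaw), np.sin(-ego_yaw)
    R = np.array([[c, -s], [s, c]])
    return (world_wps - ego_pos) @ R.T

def pid_targets(ego_wps):
    """Average vector amplitude -> longitudinal PID target (speed);
    average vector rotation -> lateral PID target (heading)."""
    # Vectors between consecutive waypoints, starting from the origin
    vecs = np.diff(np.vstack([[0.0, 0.0], ego_wps]), axis=0)
    speeds = np.linalg.norm(vecs, axis=1)
    angles = np.arctan2(vecs[:, 1], vecs[:, 0])
    return speeds.mean(), angles.mean()

# Example: two waypoints along a 45-degree line in the ego frame
ego = to_ego_frame(np.array([[1.0, 1.0], [2.0, 2.0]]),
                   ego_pos=np.zeros(2), ego_yaw=0.0)
speed, angle = pid_targets(ego)  # sqrt(2) per step, pi/4 heading
```

The amplitudes act as a desired speed per step and the angles as a desired heading, which matches feeding them to the longitudinal and lateral PIDs respectively.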
I know I am bugging you a lot 🥲, but I think everything will be clearer for all readers through the answers to these questions.