rpmsg_virtio_send_offchannel_nocopy does not return error when RPMSG_ASSERT fails #342
This is something worth discussing. The current approach will kill the whole process if the assertion fails and a special debug symbol is defined. Otherwise, things will attempt to progress as normal, except that an operation has failed without any error indication. I believe the first case should be treated as a catastrophic error. The second case should return some sort of useful error code, since being out of buffers (for example) could well be a transitory condition, and a later retry could succeed.
@arnopo Do you have an opinion on this topic?
Fully agree with @edmooring's analysis.
I think the mindset to kill the process stems from not being able to return a buffer to the virtqueue. For example, if rpmsg_virtio_send_offchannel_nocopy fails, rpmsg_virtio_send_offchannel_raw should return the buffer it reserved, but there is no way to do this. So I think this ticket and #361 should be resolved together.
A new rpmsg_release_tx_buffer API has been introduced in #401 that should help fix the current issue.
This routine ultimately returns the number of bytes transmitted to the calling API.
The following fail cases do not propagate a failure back up the call stack and therefore the caller will think the data has been transmitted:
. . .
We could either propagate the error code back up the call stack, return a generic error indicating TX failure, or return 0 indicating no data was transmitted.