Describe the bug
When publishing an MQTT QoS 0 message using either the Sync or the Async API, the call always returns a SUCCESS status, because the operation is actually asynchronous internally and uses MQTT operation scheduling.
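For illustration, here is a minimal sketch of the publish path we use (assuming the MQTT v2 API in iot_mqtt.h at that commit; the exact function name may differ between releases, and the topic and payload are placeholders):

```c
#include <string.h>
#include "iot_mqtt.h"

/* Minimal sketch: publish a QoS 0 message on an already-established
 * connection. Even if the underlying socket has been closed abruptly,
 * this call still reports IOT_MQTT_SUCCESS, because the library only
 * schedules the operation here and the actual send fails later. */
static void prvPublishQos0( IotMqttConnection_t xMqttConnection )
{
    IotMqttPublishInfo_t xPublishInfo = IOT_MQTT_PUBLISH_INFO_INITIALIZER;
    IotMqttError_t xStatus;

    xPublishInfo.qos = IOT_MQTT_QOS_0;
    xPublishInfo.pTopicName = "device/telemetry";   /* placeholder topic */
    xPublishInfo.topicNameLength = ( uint16_t ) strlen( xPublishInfo.pTopicName );
    xPublishInfo.pPayload = "hello";
    xPublishInfo.payloadLength = 5;

    /* Synchronous variant with a 5 second timeout; the async variant
     * (IotMqtt_Publish) behaves the same way for QoS 0. */
    xStatus = IotMqtt_TimedPublish( xMqttConnection, &xPublishInfo, 0, 5000 );

    /* xStatus is IOT_MQTT_SUCCESS even when the socket is already dead. */
    ( void ) xStatus;
}
```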
System information
Which hardware board or part numbers?
We are using a custom NRF52-based system with a BG96 WAN module connected to it to communicate with AWS IoT.
IDE used
Operating System [Windows|Linux|MacOS]
MacOS
Version of FreeRTOS (run git describe --tags to find it)
We are using the master branch at commit 7878b52.
Project [Custom Application]
If your project is a Custom Application, please add the relevant code snippet in the section
We are using the Amazon FreeRTOS network interface implementation (iot_network_freertos.c) with the MQTT library. We implemented a custom socket API so that it can be used by iot_network_freertos.c. The issue should be reproducible whenever the underlying socket is closed abruptly.
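To give an idea of what we mean by a custom socket API, here is a rough sketch of the receive hook we provide underneath iot_network_freertos.c. The function name and error constant follow the Secure Sockets API (iot_secure_sockets.h) as we understand it; the BG96 helper prvBg96Read is hypothetical and stands in for our modem driver:

```c
#include "iot_secure_sockets.h"

/* Hypothetical BG96 modem read helper; returns the number of bytes read,
 * or a negative value when the modem reports the TCP socket has closed. */
extern int32_t prvBg96Read( Socket_t xSocket, void * pvBuffer, size_t xLength );

/* Custom receive implementation used by iot_network_freertos.c. When the
 * underlying socket is closed abruptly we return a negative status; the
 * library's _networkReceiveTask then simply exits without telling the
 * MQTT library that the connection is gone. */
int32_t SOCKETS_Recv( Socket_t xSocket,
                      void * pvBuffer,
                      size_t xBufferLength,
                      uint32_t ulFlags )
{
    ( void ) ulFlags;

    int32_t lBytes = prvBg96Read( xSocket, pvBuffer, xBufferLength );

    if( lBytes < 0 )
    {
        /* Abrupt close: surface a socket error to the caller. */
        return SOCKETS_SOCKET_ERROR;
    }

    return lBytes;
}
```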
Expected behavior
Ideally the MQTT library should be aware of the internal socket closure, but I did not find any mechanism in the AFR IoT Network implementation that can notify the MQTT library about it. The function dedicated to the network receive task (_networkReceiveTask) simply exits when the socketStatus value is negative, without notifying the MQTT library.
If we land in the situation described above and try to send a QoS 0 message, we always get a SUCCESS status from the MQTT API, even though the operation actually fails at the MQTT operation level.
This gets resolved if the keep-alive job kicks in and closes the connection after a network error occurs while sending the PING request.
However, we have set keep-alive to its maximum value, because keep-alive most of the time leads to disconnection even when there is active traffic on the MQTT connection; that is a separate issue.
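What we would expect, roughly, is for the receive path to tear down the MQTT connection when the socket dies instead of silently exiting. A sketch of the kind of workaround we have in mind follows (assuming the v2 API; as we understand it, IOT_MQTT_FLAG_CLEANUP_ONLY makes IotMqtt_Disconnect free the connection state without attempting to send a DISCONNECT packet over the dead socket):

```c
#include "iot_mqtt.h"

/* Sketch of the notification we are missing: when the receive path sees a
 * negative socket status, drop the MQTT connection state instead of just
 * exiting the receive task. */
static void prvHandleAbruptSocketClose( IotMqttConnection_t xMqttConnection,
                                        int32_t lSocketStatus )
{
    if( lSocketStatus < 0 )
    {
        /* Clean up the connection locally; no DISCONNECT is sent. */
        IotMqtt_Disconnect( xMqttConnection, IOT_MQTT_FLAG_CLEANUP_ONLY );

        /* The application can then reconnect instead of continuing to
         * publish QoS 0 messages that "succeed" on a dead connection. */
    }
}
```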
Screenshots or console output
Since we are using TLS as well, we get an mbedTLS internal error:
To reproduce
Steps to reproduce the behavior:
Code to reproduce the bug
The code should be wrapped in the cpp tag in order to be displayed clearly. For example:
If the code is longer than 30 lines, GIST is preferred.
Additional context
Add any other context about the problem here.
Thank you!