
Socket closed but not freed in NanostackInterface.cpp? #3168

Closed · EduardPon opened this issue Oct 31, 2016 · 6 comments

Comments

In \mbed-os\features\net\FEATURE_IPV6\nanostack-interface, in the source file NanostackInterface.cpp, it looks like NanostackSocket::close() closes the socket but does not free it. Every reopen (NanostackSocket::open(), which calls temp_socket = socket_open(proto, 0, socket_callback)) returns an incremented socket ID (temp_socket). So after a number of open/close cycles the returned socket ID hits the maximum of 15, and the next open fails the assert in bool NanostackSocket::open(void) in NanostackInterface.cpp:

    if (socket_tbl[temp_socket] != NULL) {
        MBED_ASSERT(false);
        return false;
    } 

The issue is triggered when the LWM2M client cannot find an LWM2M server. See the call stack:

Thread #1 1 (Thread mode) [core: 0] (Suspended : Step)
NanostackSocket::open() at NanostackInterface.cpp:205 0x8522c
NanostackInterface::socket_open() at NanostackInterface.cpp:718 0x85ce4
Socket::open() at Socket.cpp:38 0x8661e
Socket::open() at Socket.h:49 0x3d2ac
UDPSocket::UDPSocket() at UDPSocket.h:48 0x3d236
M2MConnectionHandlerPimpl::init_socket() at m2mconnectionhandlerpimpl.cpp:528 0x46b98
M2MConnectionHandlerPimpl::dns_handler() at m2mconnectionhandlerpimpl.cpp:183 0x462d6
connection_tasklet_event_handler() at m2mconnectionhandlerpimpl.cpp:63 0x45ff0
eventOS_scheduler_dispatch_event() at event.c:293 0x5f6ac
eventOS_scheduler_run() at event.c:318 0x5f6ca
event_loop_thread() at ns_event_loop.c:76 0x5d828
() at rt_CMSIS.c:827 0x8f764
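
For illustration, here is a minimal standalone model of the failure mode (this is not the actual Nanostack code; the model_* names and the cycling ID allocator are simplifications invented for the example):

    // Simplified model: IDs are handed out in order, but the table entry
    // is never cleared on close, so all 16 slots fill up and the assert
    // fires on the 17th open.
    #include <cassert>
    #include <cstdio>

    static const int NS_INTERFACE_SOCKETS_MAX = 16;
    static void *socket_tbl[NS_INTERFACE_SOCKETS_MAX]; // zero-initialised
    static int next_id = 0;

    // Stand-in for socket_open(): cycles through the available IDs.
    static int model_socket_open(void *owner)
    {
        int id = next_id;
        next_id = (next_id + 1) % NS_INTERFACE_SOCKETS_MAX;
        assert(socket_tbl[id] == NULL); // the MBED_ASSERT that trips
        socket_tbl[id] = owner;
        return id;
    }

    // Stand-in for the buggy close/destructor path: the entry is left set.
    static void model_socket_close_buggy(int id)
    {
        (void)id; // missing: socket_tbl[id] = NULL;
    }

    int main()
    {
        int dummy;
        for (int i = 0; i <= NS_INTERFACE_SOCKETS_MAX; i++) {
            int id = model_socket_open(&dummy); // asserts when i == 16
            printf("open %d -> id %d\n", i, id);
            model_socket_close_buggy(id);
        }
        return 0;
    }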

0xc0170 (Contributor) commented Oct 31, 2016

cc @SeppoTakalo @kjbracey-arm

EduardPon (Author) commented Nov 1, 2016

Some more details:

- An instantiated socket is freed in the destructor.
- Opening a new socket returns an incremented ID, even when the previous socket was freed.
- The issue we have is with the internal bookkeeping of object pointers stored in socket_tbl[socket_id]: the entry is not cleared in the destructor, and this causes the assert in the open function:

    if (socket_tbl[temp_socket] != NULL) {
        MBED_ASSERT(false);
        return false;
    } 

A possible fix, in the destructor:

NanostackSocket::~NanostackSocket()
{
    nanostack_assert_locked();

    if (mode != SOCKET_MODE_CLOSED) {
        close();
    }
    if (socket_id >= 0) {
        printf("**DESTRUCTOR + socket_free %d**\n\r", socket_id); // debug trace
        int ret = socket_free(socket_id); // release the Nanostack socket
        MBED_ASSERT(0 == ret);
        MBED_ASSERT(socket_tbl[socket_id] == this);

        socket_tbl[socket_id] = NULL;     // FIX: clear the stale table entry

        socket_id = -1;
        data_free_all();
    }
}

I suggest your test cases should perform more socket open/close cycles.
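
Something along these lines would have caught it (a hypothetical test sketch, assuming `stack` points at an initialised network stack such as NanostackInterface):

    // Stress test: cycle a UDP socket well past the 16-entry table size.
    #include "mbed.h"
    #include "UDPSocket.h"

    void open_close_stress(NetworkStack *stack)
    {
        for (int i = 0; i < 64; i++) {
            UDPSocket sock;
            int err = sock.open(stack);
            MBED_ASSERT(err == NSAPI_ERROR_OK); // trips on cycle 17 without the fix
            sock.close(); // destructor would also free the underlying socket
        }
    }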

kjbracey (Contributor) commented Nov 1, 2016

I've seen a similar report before, and I don't believe I got an answer to this question then.

Do you have a build of Nanostack where SOCKETS_MAX has been increased? The adaptation layer is vulnerable to that being increased: it has its own local assumption that the limit is 16, so you would need to make a corresponding change to NS_INTERFACE_SOCKETS_MAX there.

Nanostack generally cycles through its socket IDs, so seeing the ID increment in successive open calls doesn't by itself mean you have a leak.
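
If SOCKETS_MAX is visible to the adaptation layer at that point, a build-time guard along these lines (a hypothetical addition, not in the current sources) would catch the mismatch:

    // Fail the build if Nanostack's socket count and the adaptation
    // layer's local table size ever diverge.
    #if defined(SOCKETS_MAX) && (SOCKETS_MAX != NS_INTERFACE_SOCKETS_MAX)
    #error "NS_INTERFACE_SOCKETS_MAX must match Nanostack's SOCKETS_MAX"
    #endif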

EduardPon (Author) commented Nov 1, 2016

Hello Kevin, see my latest findings above. It seems we were both adding comments at the same time.

kjbracey (Contributor) commented Nov 1, 2016

Yes, that seems clear enough. The ID cycling has helpfully concealed that little bug.

However, it appears someone has already caught it on master: the fix has been applied in commit 51fd80b.

EduardPon (Author) commented

Ok, thanks.
