
Resolve segfault when a new hashtable node is allocated. #20

Merged

Conversation

sinisterchipmunk
Contributor

In `libmdnsd/xht.c`, at line 110, if a candidate node is not found, a new one is allocated.

Around line 116, if `n->flag` is true then `n->u.key` and `n->val` are freed. However, since the memory allocated at line 110 was not initialized, `n` may contain arbitrary data, which can unpredictably cause `n->flag` to hold a nonzero value. When it does, a segfault occurs during `free(n->u.key)` or `free(n->val)`.

Valgrind detects this issue as a conditional jump based on an uninitialized value. (Note that because the initial elements of the `xht_t.zen` member are zeroed during allocation via `calloc`, a hash collision must be created before this behavior can be observed. With a prime value of 11, the hash keys "server" and "version9" produce the necessary collision. While this is enough to detect the issue under valgrind, it is not enough to trigger the segfault every time, because that depends on the random bits in the allocated memory.)

This pull request resolves the issue by zeroing `n` after its memory is allocated.

Around line 117, `n->flag` is checked and if true `n->u.key` and
`n->val` are freed. If `n` is uninitialized data, `n->flag` may sometimes
be true, and under that circumstance a segfault occurs during
`free(n->u.key)` or `free(n->val)`.
@troglobit
Owner

Awesome, great catch, thank you! 😎👍

@troglobit troglobit merged commit d4fc074 into troglobit:master Sep 21, 2018