
doc: update for new_thread note #639

Merged (1 commit, Mar 30, 2023)
Conversation

@erw7 (Contributor) commented Mar 30, 2023

There seems to be a problem if the luv_thread_t userdata is released before the thread terminates. I have confirmed with a debugger that the thread entry function may not be executed because thd->code was freed by the GC, as shown below. I have not been successful in creating a test case. A segmentation fault in luv_thread_arg_clear, possibly caused by this issue, has been reported in a Neovim issue (neovim/neovim/issues/22694).

test.lua

local uv = require'luv'

-- The thread handle returned by new_thread is not stored anywhere,
-- so the following collectgarbage() may release it before the
-- thread entry function runs.
uv.new_thread(function()
  print('Hello')
end)
collectgarbage('collect')

-- wait for thread to finish
uv.sleep(3000)
> gdb luajit
…
>>> break luv_thread_gc
…
>>> commands
>p "call luv_thread_gc"
>cont
>end
>>> break luv_thread_cb
…
>>> commands
>call sleep(1)
>p "call luv_thread_cb"
>p ((luv_thread_t*)varg)->code
>p ((luv_thread_t*)varg)->len
>cont
>end
>>> run /home/erw7/test.lua
…
Thread 1 "luajit" hit Breakpoint 1, luv_thread_gc (L=0x7fffff3f0380) at ../src/thread.c:253
253     static int luv_thread_gc(lua_State* L) {
$1 = "call luv_thread_gc"
…
Thread 2 "luajit" hit Breakpoint 2, luv_thread_cb (varg=0x7fffff40d178) at ../src/thread.c:269
269     static void luv_thread_cb(void* varg) {
$2 = 0
$3 = "call luv_thread_cb"
$4 = 0x0
$5 = 0
…
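The failure mode above suggests the usual workaround: keep the luv_thread_t userdata reachable until the thread has finished. The sketch below is an illustration of that idea, assuming luv's documented `thread:join()` (i.e. `uv.thread_join`) is available; it is not part of this PR's diff.

```lua
local uv = require'luv'

-- Keep the returned luv_thread_t in a local so the GC cannot
-- collect it (and free thd->code) before the thread starts.
local thread = uv.new_thread(function()
  print('Hello')
end)

collectgarbage('collect')  -- 'thread' is still referenced, so it survives

-- Block until the thread entry function has completed.
thread:join()
```

Holding the reference is what prevents the luv_thread_gc / luv_thread_cb race shown in the gdb session, where `code` was already 0x0 when the thread callback ran.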

@zhaozg zhaozg merged commit e93d540 into luvit:master Mar 30, 2023
@erw7 erw7 deleted the improve-doc-thread branch April 1, 2023 12:18