
Question: is it possible to implement non-blocking FIFO queue with luv? #546

Open
snoopcatt opened this issue May 18, 2021 · 7 comments

@snoopcatt

Hello.
Looking for a solution (or even possibility) to implement non-blocking FIFO queue with libuv.

Just like that: https://gist.github.com/daurnimator/88e942022d4f3cabf57c but for luv instead of cqueues.

I'm ready to pay some bounty (like $100 in USDT or LTC) if someone suggests an answer.

@SinisterRectus
Member

I don't know the cqueues API, so I'm not quite sure what that FIFO implementation does to make it "non-blocking". How do you make a queue/stack non-blocking? Is it the access to it that you're controlling? Could you show an example usage?

@snoopcatt
Author

I will try to explain.
Let's look at an example: https://github.com/luvit/luv/blob/master/examples/uvbook/tcp-echo-server.lua

Imagine that we have a FIFO with some objects inside. Let's say there are 5 objects.
In the callback function for read_start, to build a response we :pop() an object from the FIFO, and after we send the response and close the connection, we :push() our object back into the FIFO.

If we have 6 simultaneous requests, the sixth request will find the FIFO empty.
It will invoke the fifo:empty() function (which throws an error by default).

Let's look at cqueues-fifo.lua.
It installs a modified fifo:empty() handler:

	fifo:setempty(function(self)
		cond:wait()
		return fifo:pop()
	end)

And also fifo:push():

	function methods:push(...)
		self.fifo:insert(...)
		self.pollfd:signal(1)
	end

So, when we try to :pop() an object from an empty FIFO, it waits for a signal, which arrives when someone does a :push() into the FIFO.


Let's return to our example.
When we receive 6 simultaneous requests, the sixth request will find the FIFO empty.

With cqueues, the sixth request invokes fifo:empty(), which waits for a signal.
When one of the first five requests finishes and does a :push() of its object back into the FIFO, the procedure waiting in the sixth request is woken up by the signal and completes normally.

How do I do the same with luv?
I tried playing around with timers and pipes, but it does not work.
When the sixth request invokes fifo:empty() (with a timer/pipe), it blocks all I/O, so none of the first five requests can complete and :push() their object back into the FIFO, and the program just hangs.


Hope that explanation makes sense.

@creationix
Member

@daurnimator do you understand what is being asked here? When I hear fifo, I think of the linux file objects created by mkfifo(3), but I don't think that's what @snoopcatt means.

@snoopcatt, do you just want a pure Lua library that creates a queue of arbitrary Lua values, or do you want something related to actual Linux fifos? I don't understand what "empty throws an error by default" means.

@daurnimator
Contributor

@daurnimator do you understand what is being asked here? When I hear fifo, I think of the linux file objects created by mkfifo(3), but I don't think that's what @snoopcatt means.

A first-in-first-out queue interface. https://en.wikipedia.org/wiki/FIFO_(computing_and_electronics)#Computer_science

i.e. you can push in one end; and pop out the other. It stores objects waiting to be popped in an internal buffer.


The integration between a fifo and luv would be: "when I pop and the internal buffer is empty; the current coroutine should be paused; and the next push would wake it up"

@creationix
Member

@daurnimator, thanks. I understand the general concept of FIFO queues, but the title "with luv" made me think it might be related to Linux syscalls.

For starters, here is a simple FIFO queue with no max buffer size:

local function fifo()
    local queue = {}   -- slots hold either buffered values or waiting coroutines
    local reader = 0   -- index of the next slot to pop
    local writer = 0   -- index of the next slot to push

    -- Pop a value; if the queue is empty, suspend the calling
    -- coroutine until a value is pushed.
    local function pop()
        if reader < writer then
            local value = queue[reader]
            queue[reader] = nil
            reader = reader + 1
            return value
        end
        local current = coroutine.running()
        queue[reader] = current
        reader = reader + 1
        return coroutine.yield()
    end

    -- Push a value; if a popper is waiting, resume it with the
    -- value directly instead of buffering.
    local function push(value)
        if reader > writer then
            local waiting = queue[writer]
            queue[writer] = nil
            writer = writer + 1
            return coroutine.resume(waiting, value)
        end
        queue[writer] = value
        writer = writer + 1
    end

    return {
        pop = pop,
        push = push,
    }
end

It will suspend the current coroutine in pop() if there is nothing waiting in the queue and resume it when data appears. But it will never block the writer, because there is no max queue size. I'm guessing you want some kind of max queue size that blocks the writer when that max size is reached? What if there are multiple concurrent writers and a full buffer: should we queue all the writers without a limit, or throw an error after the first blocked writer? Same for readers: is there a limit on the number of enqueued readers allowed, or is it just the one?
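To make the intended behavior concrete, here is a condensed, self-contained copy of that constructor together with a short usage sketch (the consumer coroutine is a hypothetical caller, not part of the original snippet):

```lua
-- Condensed copy of the fifo() constructor above.
local function fifo()
    local queue, reader, writer = {}, 0, 0
    local function pop()
        if reader < writer then
            local value = queue[reader]
            queue[reader] = nil
            reader = reader + 1
            return value
        end
        queue[reader] = coroutine.running()  -- park this coroutine
        reader = reader + 1
        return coroutine.yield()
    end
    local function push(value)
        if reader > writer then
            local waiting = queue[writer]    -- a parked popper
            queue[writer] = nil
            writer = writer + 1
            return coroutine.resume(waiting, value)
        end
        queue[writer] = value                -- no popper: buffer it
        writer = writer + 1
    end
    return { pop = pop, push = push }
end

local q = fifo()

-- A consumer that suspends on the empty queue.
local consumer = coroutine.create(function()
    print("got " .. q.pop())  -- queue is empty: yields here
end)
coroutine.resume(consumer)

q.push("hello")  -- wakes the consumer: prints "got hello"
q.push("world")  -- no popper waiting: value is buffered
print(q.pop())   -- pops the buffered value: prints "world"
```

This shows both hand-off paths: a push to a waiting popper resumes it directly, while a push with no popper buffers the value for a later pop.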

@daurnimator
Contributor

return coroutine.resume(waiting, value)

this of course wouldn't handle errors well; nor do you want random 'poppers' resumed from the context of the 'pusher': you want the popper to be resumed the next time the main loop is hit. I guess in luv you'd use a uv_async_t for that?
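A minimal sketch of that deferral with luv (the `defer` helper is a made-up name, not part of any API): instead of resuming the waiting coroutine directly inside push(), hand it to a zero-timeout timer so the resume happens on a fresh stack on the next loop iteration. uv_async_t serves a similar purpose but is primarily for cross-thread wakeups.

```lua
local uv = require('uv')  -- the luv binding

-- Resume `co` with `value` on the next iteration of the uv loop,
-- so the popper never runs on the pusher's stack and resume
-- errors can be handled in one place.
local function defer(co, value)
    local timer = uv.new_timer()
    timer:start(0, 0, function()
        timer:close()
        local ok, err = coroutine.resume(co, value)
        if not ok then
            print("popper error: " .. tostring(err))
        end
    end)
end
```

push() would then call defer(waiting, value) instead of coroutine.resume(waiting, value), at the cost of requiring a running uv loop.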

I'm guessing you want some kind of max queue size that blocks the writer if that max size is reached?

maybe; the example I wrote (back in 2015) that the OP linked doesn't have that feature though.

@creationix
Member

True, I've often been in a context where that was the only option, but I like the idea of using async or something to keep them on clean stacks.
