Queues: generalize PIFOs #1810

Closed · 2 tasks done
anshumanmohan opened this issue Dec 18, 2023 · 6 comments · Fixed by #2189
Labels: C: Queues (one of the queue-style frontends), good first issue (good issue to start contributing on Calyx), S: Available (can be worked upon)

Comments

anshumanmohan (Contributor) commented Dec 18, 2023

At present our PIFOs can only handle:

  • Two flows.
  • A policy that attempts to give those two flows equal shares.

These limitations of PIFOs also limit PIFO trees.

There is room for generalization on a few axes. I see these as orthogonal, though of course they'd be more powerful if combined. I think these are doable even just using our approach where a PIFO merely orchestrates some n FIFOs, where n can be determined in advance.

  • A PIFO that handles n flows and enforces the policy du jour on them.
  • A PIFO that allows strict prioritization among its children: strictly prefer flow A over flow B over flow C, and so on. This PIFO would still be work-conserving: if flow A were silent and flow B had items, we'd send B's items until A became chatty again, and we'd send C's items only if A and B were both silent. This generalizes easily to n flows.
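
Below is a minimal sketch of what the strict, work-conserving policy could look like over n FIFOs, in plain Python rather than the Calyx eDSL. The function name, the deque-based flows, and the example values are all made up for illustration; they are not part of any existing implementation.

```
from collections import deque
from typing import Optional


def strict_pop(flows: list) -> Optional[int]:
    """Pop from the highest-priority nonempty flow.

    flows[0] is the most-preferred flow, flows[-1] the least.
    Work-conserving: a silent (empty) high-priority flow is skipped,
    but it regains priority as soon as it has items again.
    """
    for flow in flows:       # scan in strict priority order
        if flow:             # the first nonempty flow wins
            return flow.popleft()
    return None              # every flow is silent


# A is silent, so B is served; once A has an item again, A wins.
a, b, c = deque(), deque([10, 11]), deque([20])
assert strict_pop([a, b, c]) == 10  # A silent, so serve B
a.append(1)                         # A becomes chatty again
assert strict_pop([a, b, c]) == 1   # A is preferred once more
```
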
anshumanmohan added the "S: Available" and "C: Queues" labels Dec 18, 2023
sampsyo (Contributor) commented Dec 18, 2023

This all seems quite sensible! Except that I don't quite understand one thing: what does "strict prioritization" mean? Is it an alternative to fairness?

anshumanmohan (Contributor, Author) commented

Ah, sorry! I'll edit the text above to explain, but yes, this is a different scheduling policy that I think is easily within our grasp using just our simple approach. The idea is to strictly prefer flow A over flow B over flow C, and so on.

anshumanmohan (Contributor, Author) commented Jun 24, 2024

Just documenting a wrinkle that has come up now that @csziklai has started working on this! Let's just focus on round-robin scheduling with 3 flows.

Review: n=2

As you know, we have a little mechanism to record which of the sub-queues of the PIFO is hot, i.e., ready to be popped/peeked next. When possible we pop/peek from that queue and flip the hot pointer; otherwise we (begrudgingly) pop/peek from the other queue and do not flip hot.
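
For concreteness, here is a rough sketch of that two-flow mechanism in plain Python. The class and field names are made up for illustration; this is not the actual queues-frontend code.

```
from collections import deque
from typing import Optional


class TwoFlowRR:
    """A round-robin PIFO over two FIFOs, tracked by a hot pointer."""

    def __init__(self):
        self.data = [deque(), deque()]
        self.hot = 0  # index of the sub-queue we'd like to pop next

    def pop(self) -> Optional[int]:
        if self.data[self.hot]:                       # the hot queue has items:
            val = self.data[self.hot].popleft()       # pop it...
            self.hot = 1 - self.hot                   # ...and flip hot.
            return val
        if self.data[1 - self.hot]:                   # begrudgingly pop the other
            return self.data[1 - self.hot].popleft()  # queue; do not flip hot.
        return None                                   # both queues are empty.
```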

A Glimpse of the Challenge: n=3

As documented in the glossary, round-robin behaves as expected when all three flows are sending packets. However, if one of the flows falls silent, the expectation is a little subtle: rr(A, B, C), where A falls silent, needs to behave like rr(B, C).

In implementation-land, say we have a PIFO with three FIFOs as sub-queues. Call them A, B, and C. Say that hot points to A but A has no packets, while B and C do have packets. Now pop is called repeatedly on the PIFO. Which sub-queue do we pop begrudgingly? We can't just always pop B... that would create either strict(B, C) or fifo(B), depending on how cleverly we do it. Anyway, either is wrong.

To get rr(B, C) behavior out of rr(A, B, C) with silent A, we actually need to again swap between B and C! This may be okay if n were hardcoded to 3 – we might have a secondary hot-pointer that toggles between the two non-hot choices – but we need to think a little more carefully about truly general n!

anshumanmohan (Contributor, Author) commented Jun 24, 2024

Here's a sketch proposal. Fair warning: it's a quick-and-dirty sketch mostly focussed on n = 3, but I'd like to surface it so we can pick holes in it later today.

If we have three flows A, B, and C, we need to consider the following eight cases in terms of their chattiness.

| Flow | Case 1 | Case 2 | Case 3 | Case 4 | Case 5 | Case 6 | Case 7 | Case 8 |
| ---- | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ |
| A    | chatty | silent | chatty | chatty | silent | silent | chatty | silent |
| B    | chatty | chatty | silent | chatty | silent | chatty | silent | silent |
| C    | chatty | chatty | chatty | silent | chatty | silent | silent | silent |

Whatever our policy, it needs to behave correctly in all eight cases. Cases 1 and 8 are sorta easy, and cases 5, 6, and 7 are not so bad because there is only one correct flow to send from. Cases 2, 3, and 4 are harder because, as noted above, we need to alternate between the two chatty flows.

I have sketched up the following, and I can walk through the thinking IRL. I have abstracted away "transmit ans", which is how we'd tell the world that we found an answer.

n = 3        # number of sub-queues.
hot = 0      # hot has been set for us arbitrarily; the value 0 does not matter.
explore = 0  # exploration probe.

ans = pop(hot)
if ans is not None:        # the target queue had something, great!
    hot = (hot + 1) % n    # move the target; the increment wraps around at n.
    explore = 0            # set/reset the exploration probe.
    transmit(ans)
else:
    for _ in range(n):     # we limit our exploration to n steps.
        ans = pop((hot + explore) % n)  # the addition wraps around at n;
        # so if hot = 2 and explore = 1, this tries to pop queue 0.
        explore += 1
        if ans is not None:
            transmit(ans)
            break          # break out of this exploration loop.
    # The n-step exploration has ended, in success or failure.
    if ans is None:
        raise RuntimeError("underflow: cannot pop from an empty PIFO")

csziklai (Contributor) commented

Here is the updated pop() code, as discussed in the meeting.

def pop(self) -> Optional[int]:
    """Pops the PIFO."""
    print("pop " + str(self.hot))
    if self.pifo_len == 0:
        if self.error_mode:
            raise QueueError("Cannot pop from empty PIFO.")
        return None
    self.pifo_len -= 1
    original_hot = self.hot

    while True:
        val = self.data[self.hot].pop()
        if val is not None:
            self.increment_hot()
            return val
        else:
            self.increment_hot()
            if self.hot == original_hot:
                return None
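
For context, here is a minimal, self-contained sketch of the state this pop() relies on. The field and helper names mirror the snippet above; everything else (the constructor, the sub-queue interface) is an assumption for illustration, not the actual calyx-py code.

```
class QueueError(Exception):
    """Raised when an operation cannot be completed."""


class RoundRobin:
    """An n-flow round-robin PIFO that orchestrates a list of sub-queues.

    Each sub-queue is assumed to support len() and a pop() that returns
    None when it is empty; the pop() above would be a method of this class.
    """

    def __init__(self, subqueues, error_mode=True):
        self.data = subqueues   # the sub-queues, one per flow
        self.n = len(subqueues)
        self.hot = 0            # index of the sub-queue we'd like to pop next
        self.pifo_len = sum(len(q) for q in subqueues)  # total occupancy
        self.error_mode = error_mode

    def increment_hot(self):
        """Wraparound increment of the hot pointer."""
        self.hot = (self.hot + 1) % self.n

    # push (not shown here) would append to a flow's sub-queue and bump pifo_len.
```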

anshumanmohan (Contributor, Author) commented Jun 24, 2024

Thanks Cassandra! So yes, to sum up: the problem I chalked out was reasonable, but my proposed code was actually still a little off. The code above takes a slightly different tack to achieve round-robin. It is simpler and will translate to Calyx more easily.

In English, the policy is simply:

  • Start with some hot pointer.
  • In an infinite loop:
    • Pop hot.
      • If it succeeds, great. Increment hot and return the result you got.
      • If it fails, increment hot. This is an attempt to keep looking for a value in some other queue. The queue that was just recently hot has missed its turn, and we'll just get to it later.

Two notes:

  • These are all wraparound increments, i.e., addition modulo n, where n is the number of flows managed by the PIFO.
  • We have a little original_hot variable that we check to ensure we haven't looped all the way around. This may in fact be unnecessary, since we know from line 4 of the snippet that the PIFO is nonempty. Anyway, let's keep it as a redundancy, watch everything succeed, and then maybe we can remove that variable and the last two lines of the code snippet.
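
To convince ourselves that this policy really does reduce to rr(B, C) when A is silent, here is a tiny self-contained simulation in plain Python. It is a sketch of the policy described above, not the calyx-py oracle; the names and data are made up.

```
from collections import deque

# Three sub-queues; flow A is silent.
data = [deque(), deque(["B0", "B1", "B2"]), deque(["C0", "C1", "C2"])]
n, hot = len(data), 0


def pop_rr():
    """Try the hot sub-queue, advancing hot on every attempt (success or not)."""
    global hot
    if sum(len(q) for q in data) == 0:
        return None
    while True:
        q = data[hot]
        hot = (hot + 1) % n  # hot moves on whether or not this attempt succeeds
        if q:
            return q.popleft()


print([pop_rr() for _ in range(6)])  # ['B0', 'C0', 'B1', 'C1', 'B2', 'C2']
```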

csziklai added a commit that referenced this issue Jul 2, 2024
This PR makes progress towards #1810. It implements the Python oracle
for PIFOs generalized to n flows, now known as Round Robin queues. Just
as with the PIFO, if a flow falls silent, the remaining flows take
their turns; that flow effectively skips its turn. To re-generate the
test files with 20000 commands and a max length of 16, navigate to the
directory calyx/calyx-py/calyx and run:

```
./gen_queue_data_expect.sh
```

This PR also implements the Calyx version of Round Robin queues in
rr_queue.py. This was originally supposed to be its own PR, but I
thought it might get complicated to branch off a branch. To run these
tests, use:
```
runt -i "rr_queue"
```

---------

Co-authored-by: Cassandra Nicole Sziklai <cns58@havarti.cs.cornell.edu>
Co-authored-by: Anshuman Mohan <anshumanmohan@live.com>
Co-authored-by: Anshuman Mohan <10830208+anshumanmohan@users.noreply.github.com>
anshumanmohan linked a pull request Jul 3, 2024 that will close this issue
csziklai added a commit that referenced this issue Jul 3, 2024
This PR ties off the last half of #1810. It implements the Python oracle
and Calyx eDSL for strict PIFOs, which are generalized to n flows. Flows
have a strict order of priority, which determines popping and peeking
order. If the highest-priority flow is silent when its turn comes, that
flow simply skips its turn and the next flow is offered service. If that
higher-priority flow gets pushed to in the interim, the next call to
pop/peek will return from that flow. To re-generate the test files with
20000 commands and a max length of 16, navigate to the directory
calyx/calyx-py/calyx and run:
```
./gen_queue_data_expect.sh
```
To run the runt tests on the eDSL implementation, type 
```
runt -i "strict"
```

---------

Co-authored-by: Cassandra Nicole Sziklai <cns58@havarti.cs.cornell.edu>
Co-authored-by: Anshuman Mohan <anshumanmohan@live.com>
Co-authored-by: Anshuman Mohan <10830208+anshumanmohan@users.noreply.github.com>