Schedule compaction #1722
Conversation
@paili0628 any updates on the state of this and what would it take to merge this? I think it would be good to summarize the discussion we had in the meeting here, describing the three steps it would take to get the complete schedule compaction we wanted.
I'm currently debugging this PR. It seems that for some test cases the while loops are violating the bound constraints even though
Is it possible that it's not keeping track of dependencies across iterations of the while loop? For example:
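Something like the following toy sketch is the pattern I have in mind (Python, with hypothetical group and cell names; this is not the pass's actual analysis): a group late in the loop body updates a cell that an earlier group and the loop guard read, so the dependence only appears once you model the next iteration.

```python
# Hypothetical sketch: dependences for a while-loop body must include "back
# edges" that model the next iteration, e.g. an induction-variable update
# late in the body feeding a read early in the body and the loop guard.

GROUPS = {
    "A": {"reads": {"i"}, "writes": {"x"}},  # uses the induction variable
    "B": {"reads": {"i"}, "writes": {"i"}},  # increments the induction variable
}
BODY_ORDER = ["A", "B"]   # original seq order inside the while body
GUARD_READS = {"i"}       # cells the loop guard inspects

def loop_dependences(groups, order, guard_reads):
    """Return (intra, cross): read-after-write edges within one iteration,
    and edges that only exist because the body repeats."""
    intra, cross = set(), set()
    for pos_w, w in enumerate(order):
        for pos_r, r in enumerate(order):
            if pos_w == pos_r or not (groups[w]["writes"] & groups[r]["reads"]):
                continue
            (intra if pos_w < pos_r else cross).add((w, r))
        if groups[w]["writes"] & guard_reads:
            cross.add((w, "guard"))  # write feeds the next guard evaluation
    return intra, cross

print(loop_dependences(GROUPS, BODY_ORDER, GUARD_READS))
# intra is empty, so a purely intra-iteration analysis would happily compact
# A and B; the cross set {('B', 'A'), ('B', 'guard')} is what it would miss.
```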
I haven't looked at the code carefully; maybe you have covered this.
@calebmkim The problem is that
For the program segment above, group B's write cells should include
I see. I think we should probably try to handle this case? (Although I'm not sure how difficult that would be.) I had a similar problem when I was trying to use graph coloring to reuse FSMs for static islands that we know wouldn't run at the same time. (In other words, when instantiating FSMs for static islands, I made it so that different static islands can use the same FSM as long as they're guaranteed not to be active at the same time.) The code is here. Looking back, the code is not necessarily the best, so I could envision factoring out some of this code into a separate analysis that this pass will use as well.
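For reference, here is a minimal sketch of the graph-coloring reuse I mean (Python, hypothetical island names; the actual code operates on the compiler's IR rather than this toy representation):

```python
# Hedged sketch: greedily color a conflict graph of static islands. Islands
# that may be active at the same time (e.g. siblings under a `par`) conflict
# and need distinct FSMs; everything else is free to share one.

def color_islands(islands, conflicts):
    """islands: list of names; conflicts: set of frozenset({a, b}) pairs.
    Returns {island: fsm_id}, reusing FSM ids whenever no conflict exists."""
    assignment = {}
    for island in islands:
        used = {assignment[other] for other in assignment
                if frozenset({island, other}) in conflicts}
        fsm = 0
        while fsm in used:  # smallest FSM id not taken by a conflicting island
            fsm += 1
        assignment[island] = fsm
    return assignment

# Hypothetical example: s0 and s1 run in parallel; s2 runs strictly after both.
print(color_islands(["s0", "s1", "s2"], {frozenset({"s0", "s1"})}))
# {'s0': 0, 's1': 1, 's2': 0} -- s2 reuses s0's FSM
```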
I see what you're saying. I think the code in
@paili0628 are there technical discussions we need to have to move this PR along? Let's try to wrap this up soon so we can start working on results.
There are no technical discussions. I only have to deal with fmt. Sorry about this.
No worries!
This is ready for performance evaluation. @calebmkim
Awesome, thanks!
* added schedule compaction pass
* debug
* fmt
* changed test cases and added schedule-compaction to default pipeline
* addressed comments
* fmt
* debug
* debug
* changed the group enable to static par
* allowed non-enable control and recalculated latency
* add schedule-compaction to default pass
* restored invoke-memory.futil
* fixed bug
* fmt
* restore pass ordering
* debug
* fmt and fix test cases
* fix test cases
* fix test cases
Implements #1717.

I have not yet put the schedule-compaction pass into the default pipeline, but I think it should be immediately after static-promotion, since I think it is only reasonable to compact static seqs that were promoted by the static-promotion pass. I want to know what you think @calebmkim.

Update: schedule-compaction has already been inserted into the default pipeline after static-promotion.

On Monday we discussed that while the current version of schedule-compaction is adequate, it is worthwhile to extend this optimization to nested static seqs with repeat, par, seq, and while blocks as children. To address the problem that we can only utilize the go signal of groups, @calebmkim suggested that instead of making a big group that schedules go signals, we use a static par cleverly to achieve the same effect.
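As a rough illustration of what the compaction computes, here is a conceptual Python sketch with made-up group latencies (not the pass's implementation): build a dependence graph over the groups of a promoted static seq, compute ASAP start times, and the overlapping intervals that fall out are what the generated static par encodes.

```python
# Conceptual sketch of schedule compaction: given each group's latency and
# its dependences, compute as-soon-as-possible start times; groups with no
# dependence between them end up overlapping instead of running back to back.

LATENCY = {"A": 2, "B": 3, "C": 1}     # hypothetical static groups
DEPS = {"A": [], "B": [], "C": ["A"]}  # C must wait for A; B is independent

def asap_schedule(latency, deps):
    start = {}
    def start_of(g):
        if g not in start:
            start[g] = max((start_of(d) + latency[d] for d in deps[g]), default=0)
        return start[g]
    for g in latency:
        start_of(g)
    return start

starts = asap_schedule(LATENCY, DEPS)
total = max(starts[g] + LATENCY[g] for g in LATENCY)
print(starts, total)  # {'A': 0, 'B': 0, 'C': 2} 3 -- vs. 6 cycles for the seq
```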