Channels, fibers, oh my! #241
Conversation
Okay, I think my todos are done. The only unpolished part is the tracer. It's not worse than before, but it's far from finished: for now, the tracer returned by the VM's … The main reason is that its whole architecture relied on being owned by a fiber (for example, it only stores pointers because it doesn't have its own heap).

I also noticed that the way we use the tracer for determining whose fault a panic is isn't quite correct all the time. In particular, we need to handle cases where …

Here, the … I have ideas on how to solve this, but I'd rather do that in a separate PR because it requires more work, and this PR is already quite large. In the next PR, I'll separate the tracer into two parts: …
Some comments are about how the …

Essentially, this means that the outside world needs to also have a way of sending and receiving data from internal channels. One idea would be to change the `Performer` enum:

```rust
enum Performer {
    Fiber(FiberId),
    Extern(ExternalOperationId),
    Nursery,
}
```

This way, the outside world could call … I'm not too fond of the name …
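To make the idea a bit more concrete, here is a minimal sketch of how such a `Performer` could be tracked by a channel so that external operations can block on it just like fibers do. The `Channel`, `Packet`, and id types here are hypothetical stand-ins, not the actual implementation:

```rust
use std::collections::VecDeque;

type FiberId = usize;
type ExternalOperationId = usize;
type Packet = Vec<u8>; // stand-in for a value sent over a channel

// Who performs a channel operation: a fiber inside the VM, the outside
// world (the Rust code embedding the VM), or a nursery.
enum Performer {
    Fiber(FiberId),
    Extern(ExternalOperationId),
    Nursery,
}

// A bounded channel that remembers who is blocked on it.
struct Channel {
    capacity: usize,
    buffer: VecDeque<Packet>,
    pending_sends: VecDeque<(Performer, Packet)>,
    pending_receives: VecDeque<Performer>,
}

impl Channel {
    fn send(&mut self, performer: Performer, packet: Packet) {
        if self.buffer.len() < self.capacity {
            self.buffer.push_back(packet);
            // A full implementation would now complete a pending receive,
            // waking a fiber or notifying an external operation.
        } else {
            // Channel is full: park the sender, regardless of whether it
            // is a fiber or an external operation.
            self.pending_sends.push_back((performer, packet));
        }
    }
}
```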
On third thought, maybe it would be even easier if we had no extra concept of external channels at all. Instead, the outside world would always use "internal" channels as well. If we want pull-based behavior, we can still do that using a channel with a capacity of 0.
Regarding one send and one receive channel for communication with the outside world: that would work if we add an extra layer inside Candy that takes care of multiplexing and demultiplexing the messages. Basically the job that would otherwise fall to the Rust code. Using the "internal" channels for communication with the outside world as well sounds good, since that should simplify the code while still allowing us to accomplish the same.
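Purely as an illustration of that multiplexing layer (the message format and all names here are assumptions, not part of this PR), the single external channel could carry envelopes that tag each payload with a logical channel id, with a demultiplexer on the other side routing them back into per-channel queues:

```rust
use std::collections::HashMap;

type LogicalChannelId = u64;
type Payload = String; // stand-in for an actual Candy value

// Everything sent over the single external channel is wrapped in an
// envelope that says which logical channel it belongs to.
struct Envelope {
    channel: LogicalChannelId,
    payload: Payload,
}

// The demultiplexing side sorts incoming envelopes back into
// per-channel queues.
#[derive(Default)]
struct Demultiplexer {
    queues: HashMap<LogicalChannelId, Vec<Payload>>,
}

impl Demultiplexer {
    fn receive(&mut self, envelope: Envelope) {
        self.queues
            .entry(envelope.channel)
            .or_default()
            .push(envelope.payload);
    }
}
```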
(Although I already requested a review before, now there's really nothing more I'd change.)
This PR adds channels, fibers, `try`, and environments. Here are all the new features:

- `core.try { ... }` catches panics of the closure and instead returns a result, either `[Ok, returnValue]` or `[Error, panicReason]`.
- `core.parallel { nursery -> ... }` spawns a new parallel scope, in which multiple fibers may run concurrently. Which brings us to the next feature:
- `core.async nursery { ... }` spawns a new fiber and returns a future (which is just a receive port).
- `core.await someFuture` waits for its result. In a way, `core.async` and `core.await` are the opposite of each other (`core.await (core.async nursery foo) == foo`).

Note that panics still propagate upwards – a panicking fiber that was spawned on a nursery also causes the surrounding `core.parallel` to panic. All other fibers spawned on that nursery get canceled (they are not further executed).

The `candy run` command now looks for an exported `main` function and calls it with an environment.

Here's everything in action:
Implementation
Although we already spoke about my initial idea in person, I completely changed my approach since then.
Initially, I planned on implementing a fiber tree using an actual tree of Rust values. However, I didn't manage to implement that in a reasonable amount of time (I tried for ~20 hours total). Instead, I ended up with a very simple approach, inspired by the implementation of the BeamVM. In this approach, there's not an actual tree of structs. Rather, a VM maintains a list of fibers as well as a list of channels.
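For intuition, a flat layout along those lines might look roughly like this in Rust. The field and type names are illustrative guesses, not the actual code in this PR:

```rust
use std::collections::HashMap;

type FiberId = usize;
type ChannelId = usize;

struct Fiber {/* call stack, heap, instruction pointer, ... */}
struct Channel {/* capacity, buffered packets, blocked fibers, ... */}

// No tree of nested structs: the VM owns two flat maps, and relationships
// between fibers (parent, nursery, ...) are plain ids rather than ownership.
struct Vm {
    fibers: HashMap<FiberId, Fiber>,
    channels: HashMap<ChannelId, Channel>,
    next_fiber_id: FiberId,
    next_channel_id: ChannelId,
}

impl Vm {
    fn create_channel(&mut self) -> ChannelId {
        let id = self.next_channel_id;
        self.next_channel_id += 1;
        self.channels.insert(id, Channel {});
        id
    }
}
```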
The main advantage of this new approach is the simplicity of the implementation. In particular, there's less accidental complexity arising from organizing channels and fibers in a tree. For example, no migrating of channels between different subtrees is necessary because all channels are global to the VM anyway. This especially reduces the complexity on the Rust side, where ownership can otherwise make things a lot more complicated.
Looking forward, this architecture also enables straightforward parallelization. We don't have to lock subtrees of fibers; instead, we can have a pool of worker threads that pick fibers to work on, for example by choosing a random fiber from the list and locking it (if no one else is working on it already).
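A minimal sketch of that scheduling idea, assuming the fibers live behind per-fiber locks. All names here are hypothetical, and a real scheduler would pick fibers more cleverly and know when to stop:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

struct Fiber {/* execution state: stack, heap, instruction pointer, ... */}

impl Fiber {
    // Run this fiber for a bounded number of instructions, then yield.
    fn run_some_instructions(&mut self) {}
}

// Each worker repeatedly scans the fiber list and works on any fiber it
// can lock; try_lock skips fibers another worker is already running.
fn spawn_workers(
    fibers: Arc<Vec<Mutex<Fiber>>>,
    num_workers: usize,
) -> Vec<thread::JoinHandle<()>> {
    (0..num_workers)
        .map(|_| {
            let fibers = Arc::clone(&fibers);
            thread::spawn(move || loop {
                for fiber in fibers.iter() {
                    if let Ok(mut fiber) = fiber.try_lock() {
                        fiber.run_some_instructions();
                    }
                }
                // A real scheduler would also park when nothing is runnable
                // and terminate once all fibers are done.
            })
        })
        .collect()
}
```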
A more advanced approach (also implemented by the BeamVM) seems to be to have fibers assigned to a particular worker thread (similar to the affinity of OS threads) because this improves the cache hit rate (L1 caches are local per core). This is often combined with work stealing. I'm sure we can implement something similar for our VM.
Performance
Entering parallel scopes and creating new fibers is often veeery slow. The reason is that if `core` is used inside a fiber, the whole `core` struct is copied to the other fiber. This is not an exponential blowup of runtime like the value issue we had previously, but fibers are definitely not cheap.

In the future, several optimizations can improve the runtime:
- `core.int.add` may be reduced to `✨.intAdd`, so that only `✨` needs to be copied to the fiber.
- Constant objects don't need to be created at runtime using `createInt` and `createStruct`. Rather, we could have a separate heap area that contains constant objects. Several other compilers also use this technique (e.g. Lua, Wren). The objects in that heap could have a special header that indicates they are not reference-counted, so `dup` and `drop` instructions on those objects are a no-op (see the sketch below).
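A rough sketch of how such a header check could make `dup` and `drop` no-ops for constant-heap objects. The header layout and names are made up for illustration, not taken from the actual VM:

```rust
// Every heap object starts with a header.
struct ObjectHeader {
    // Objects placed in the constant heap are never reference-counted.
    is_constant: bool,
    reference_count: usize,
}

struct Object {
    header: ObjectHeader,
    // ... payload (int, struct fields, ...)
}

impl Object {
    // `dup` instruction: increment the reference count, unless the object
    // lives in the constant heap.
    fn dup(&mut self) {
        if self.header.is_constant {
            return; // no-op for constants
        }
        self.header.reference_count += 1;
    }

    // `drop` instruction: decrement the count and free at zero, unless the
    // object lives in the constant heap.
    fn drop_ref(&mut self) {
        if self.header.is_constant {
            return; // no-op for constants
        }
        self.header.reference_count -= 1;
        if self.header.reference_count == 0 {
            // free the object and drop its children (omitted)
        }
    }
}
```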