Releases: lunatic-solutions/lunatic
Lunatic v0.13.2
Changes
- Wasmtime updated to 8.0
- Added support for atomic named process spawning
- Added support for process monitoring (@tqwewe)
- SQLite support added (@SquattingSocrates)
- Support for intermediate CA certificates added (@teskeras)
- Show name of registered processes when they fail (@tqwewe)
- Environment spawn limit (@kosticmarin)
- `peer_addr` host API added (@MarkintoshZ)
- Metrics API added (@Roger)
- `process::exists` API added (@jtenner)
- Cargo lunatic wrapper added (@Gentle)
- Improved CLI default arguments (@pinkforest)
- Improved CI workflow (@shamilsan)
- Additional tests added (@sid-707)
Lunatic v0.12.0
Changes
- Compiled modules can now be sent between processes (@tqwewe)
- TLS support added (@SquattingSocrates)
- Metrics added to the VM (@HurricanKai)
- Improvements to distributed lunatic (@kosticmarin)
- Distributed metadata added (@kosticmarin)
- Improved error reporting (@alecthomas)
- TCP read/write timeouts added back (@SquattingSocrates)
- Time API moved from async-std to tokio.rs (@MarkintoshZ)
- FIX: Sender can be dropped during execution (@HurricanKai)
- FIX: Dependency issues (@pinkforest)
Lunatic v0.10.1
Changes
Patch release that fixes Windows builds on Rust 1.64 stable and nightlies. Thank you @pinkforest!
Release v0.10.0
Changes
This release brings back distributed lunatic 🎉! But this time it's using QUIC as the protocol for node-to-node communication. Check out this example on how to spawn processes on remote nodes.
Other changes
- Switched from async_std to tokio (@kosticmarin)
- `kill` host function added (@zhamlin)
- `send_after` & `cancel_timer` host functions added (@zhamlin) (see the sketch after this list)
- Changed timeout type in host functions from `u32` to `u64`
- Timeout parameters removed from networking read/write calls
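As a rough illustration of the new timer functions: in lunatic-rs they surface as a `send_after` method on `Process` returning a timer handle with a `cancel` method. The sketch below assumes those wrapper names and signatures, which may differ between lunatic-rs versions.

```rust
use std::time::Duration;

use lunatic::{sleep, Mailbox, Process};

#[lunatic::main]
fn main(_: Mailbox<()>) {
    // A child process that prints the first message it receives.
    let child = Process::spawn((), |_, mailbox: Mailbox<String>| {
        println!("got: {}", mailbox.receive());
    });

    // Schedule two delayed sends (guest-side wrappers over the
    // `send_after` host function; method name assumed here).
    let _keep = child.send_after("tick".to_string(), Duration::from_millis(50));
    let cancel = child.send_after("never delivered".to_string(), Duration::from_millis(50));

    // Revoke the second send before it fires (wrapper over the
    // `cancel_timer` host function; method name assumed here).
    cancel.cancel();

    // Give the remaining timer a chance to fire before main exits.
    sleep(Duration::from_millis(100));
}
```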
Changes in lunatic-rs
- UDP support added (@pinkforest)
- `#[abstract_process]` macro added (@MarkintoshZ) (see the sketch after this list)
- Added support for distributed lunatic (@withtypes)
- FuncRef support added as a safe interface to send function pointers between processes (@zhamlin)
- `block_until_shutdown` method added to Supervisor (@MarkintoshZ)
- `OneForAll` and `RestForOne` supervisor strategies added (@MarkintoshZ)
- `Debug`, `Hash` and `Eq` traits added for a few types (@MarkintoshZ and @thehabbos007)
- All serializers (except the default bincode) are now behind a feature flag.
And a bunch of other smaller performance and bug fixes!
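For a sense of what `#[abstract_process]` buys you, here is a sketch modeled on the lunatic-rs documentation of the time; the attribute names (`#[init]`, `#[handle_message]`, `#[handle_request]`) and the generated `start` constructor may differ across versions.

```rust
use lunatic::{
    abstract_process,
    process::{ProcessRef, StartProcess},
};

struct Counter(u32);

#[abstract_process]
impl Counter {
    // Runs once in the newly spawned process to build its state.
    #[init]
    fn init(_: ProcessRef<Self>, start: u32) -> Self {
        Self(start)
    }

    // Fire-and-forget message handler.
    #[handle_message]
    fn increment(&mut self) {
        self.0 += 1;
    }

    // Request/response handler; the caller blocks on the reply.
    #[handle_request]
    fn count(&self) -> u32 {
        self.0
    }
}

fn main() {
    // Spawn the counter as its own process and talk to it
    // through the typed methods the macro generates.
    let counter = Counter::start(0, None);
    counter.increment();
    assert_eq!(counter.count(), 1);
}
```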
Release v0.9.0
Changes
- UDP support (contributed by @jtenner)
- Added support for `cargo test` when `lunatic` is used as the runner. Lunatic now mimics Rust's behaviour when running guest tests annotated with `#[lunatic::test]` (see the sketch below).
- Temporarily removed support for distributed lunatic while a better design is in development.
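Concretely, with the runner configured in `.cargo/config.toml` (e.g. `runner = "lunatic"` under `[target.wasm32-wasi]`, as described in the lunatic README), tests are plain Rust functions and each one executes inside the lunatic VM:

```rust
// Compiled to wasm32-wasi and executed by `cargo test` through
// the lunatic runner; each test runs as its own lunatic process.
#[lunatic::test]
fn arithmetic_still_works() {
    assert_eq!(2 + 2, 4);
}
```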
Release v0.7.5
Changes
- The CI now builds universal macOS binaries (M1 & Intel support).
- Host functions for TCP read/writes indicate a timeout with a return value now, instead of a generic error. (contributed by @teymour-aldridge)
Release v0.7.4
Changes
- Adds `local_addr` host function for TCP listeners (see the sketch below).
- Adds `version` host function. (contributed by @teymour-aldridge)
- Adds a check that processes are not spawned before the Wasm module is initialized. (contributed by @jtenner)
- Process traps are now logged to stdout by default.
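In guest code this surfaces through the std-like networking types; a minimal sketch, assuming lunatic-rs mirrors `std::net` with a `local_addr` method on `TcpListener`:

```rust
use lunatic::{net::TcpListener, Mailbox};

#[lunatic::main]
fn main(_: Mailbox<()>) {
    // Bind to port 0 so the host picks any free port.
    let listener = TcpListener::bind("127.0.0.1:0").unwrap();

    // `local_addr` calls the new host function to report the
    // socket address that was actually assigned.
    let addr = listener.local_addr().unwrap();
    println!("listening on {}", addr);
}
```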
Release v0.7.0
Changes
This is the first release that supports connecting multiple lunatic instances together 🎉. From the perspective of developers that are targeting lunatic there should be no difference between locally running processes or remote ones. Spawning and sending messages to them uses the same APIs.
To turn your local lunatic instance into a distributed node you will need to provide a unique name and a socket to bind to. Both can be set through the CLI.
CLI
To start a distributed node you can run:
```
lunatic --node 0.0.0.0:8333 --node-name foo --no-entry
```
This starts a lunatic node with the name `foo` listening on the specified port. The `--no-entry` flag means that this node doesn't have a start function; it will just block forever.
If you want to connect to a node you can pass in the `--peer` flag:
```
lunatic --node localhost:8334 --node-name bar --peer 0.0.0.0:8333 file.wasm
```
Once you connect to one node, all other known nodes will be discovered dynamically.
Usage from guest code (Rust)
A great property of lunatic is that much of the functionality provided by the runtime is directly exposed to the code running inside of it. This allows you to dynamically load WebAssembly code from already running WebAssembly code, or to create sandboxed environments to execute code on the fly.
The abstraction of an `Environment`, which we previously used to sandbox and limit process resources, fits perfectly into the world of distributed lunatic. Every time you create a new `Environment` you need to explicitly add Wasm `Module`s to it, because we may need to JIT-recompile the module with the new limitations that have been set. Spawning a process from the same function in different `Environment`s may use different machine-generated code to be more efficient with regard to the provided sandbox. Now that a `Module` may be sent over the network to a computer running a different operating system or even a different CPU architecture, no changes need to be made to this already existing pattern inside of lunatic.
Here is an example of using the new API from Rust guest code:
```rust
use lunatic::{Config, Environment, Mailbox};

#[lunatic::main]
fn main(_: Mailbox<()>) {
    // Give full access to the remote environment.
    let mut config = Config::new(0xA00000000, None);
    config.allow_namespace("");
    // Create a new environment on the remote node with the name "foo".
    let mut env = Environment::new_remote("foo", config).unwrap();
    // Add the currently running module to the environment. This allows us
    // to spawn a process from a closure, because the remote module will
    // have the same bytecode available.
    let module = env.add_this_module().unwrap();
    // Spawn a process on a remote machine as you would do it locally.
    let _ = module.spawn(|_: Mailbox<()>| println!("Hello world"));
}
```
This will print out `Hello world` on the node labeled `foo`. Adding this to the Rust library required only a few lines of code; the whole implementation complexity stays inside the VM. From the developer's perspective it's trivial to send a closure to be executed on a completely different machine that may use a different operating system or CPU architecture.
Known issues
- At the moment nodes send plain-text messages between each other, and every node connects to every other node over TCP.
- If a node disappears from the network, linked processes will not be notified that the links broke.
Release v0.6.2
Fixes a bug with `lunatic::net::resolve` (#61).
Release v0.6.1
This release fixes a deadlock bug when TCP streams are shared between multiple processes.