Why is Virgil so fast? #100
-
Nice!
Indeed. This goes right along with what we were discussing in #79. Also, the Virgil compiler parses, typechecks, and runs initializers for all code, but it only compiles reachable code from main(); it doesn't go past ASTs for anything not reachable from main or needed to run initializers. I'd be interested to see what results you get for x86-linux.
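For example, here is a minimal illustrative sketch (not code from this thread) of what reachability-based compilation means in practice:

```
// Everything in this file is parsed and typechecked, and initializers are
// evaluated, but only declarations reachable from main() are compiled.
def greet() {
    System.puts("only reachable code is compiled\n");
}
// Parsed and typechecked, but never called from main(): the compiler stops
// at the AST and emits no code for it into the executable.
def unusedHelper(n: int) -> int {
    return n * n + 1;
}
def main() {
    greet();
}
```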
-
> it only compiles *reachable* code from main(). It doesn't go past ASTs for anything not reachable from main or needed to run initializers.

Very clever.

> I'd be interested to see what results you get for x86-linux.

Here you go (I've also included the Virgil JVM results):

- x86 and x86-64 executables have roughly the same execution times, but the x86 executable is ~10% smaller.
- JVM execution times are also roughly the same as the x86 and x86-64 targets and, in terms of size, the JVM output is ~10% smaller than the x86 executables.

The Hello World + Fibonacci app doesn't really exercise the language and is probably not representative of "real world" code; if you have a better benchmark I could run it.

wasm

| | Compile time (secs) | Executable size (B) | Execution time (secs) |
| --- | --- | --- | --- |
| go | 4.16s | 428,547 | 0.61s |
| rust | 0.33s | 2,054,632 | 1.91s |
| virgil | 0.01s | 8,802 | 0.98s |

wasm-optimised

| | Compile time (secs) | Executable size (B) | Execution time (secs) |
| --- | --- | --- | --- |
| go | 3.80s | 191,265 | 0.62s |
| rust | 0.77s | 301,363 | 0.67s |
| virgil | 0.01s | 7,891 | 1.14s |

x86-64-linux

| | Compile time (secs) | Executable size (B) | Execution time (secs) |
| --- | --- | --- | --- |
| go | 1.91s | 503,640 | 0.39s |
| rust | 0.31s | 3,853,504 | 1.55s |
| virgil | 0.01s | 20,552 | 0.63s |

x86-64-linux-optimised

| | Compile time (secs) | Executable size (B) | Execution time (secs) |
| --- | --- | --- | --- |
| go | 1.94s | 140,056 | 0.41s |
| rust | 2.73s | 1,653,736 | 0.33s |
| virgil | 0.01s | 19,552 | 0.64s |

x86-linux

| | Compile time (secs) | Executable size (B) | Execution time (secs) |
| --- | --- | --- | --- |
| go | | | |
| rust | | | |
| virgil | 0.01s | 18,568 | 0.58s |

x86-linux-optimised

| | Compile time (secs) | Executable size (B) | Execution time (secs) |
| --- | --- | --- | --- |
| go | | | |
| rust | | | |
| virgil | 0.01s | 17,884 | 0.62s |

jvm

| | Compile time (secs) | Executable size (B) | Execution time (secs) |
| --- | --- | --- | --- |
| go | | | |
| rust | | | |
| virgil | 0.00s | 17,715 | 0.75s |

jvm-optimised

| | Compile time (secs) | Executable size (B) | Execution time (secs) |
| --- | --- | --- | --- |
| go | | | |
| rust | | | |
| virgil | 0.00s | 15,543 | 0.60s |

The raw data along with source code, compiler commands and platform information is attached:
[virgil-results.txt](https://github.com/titzer/virgil/files/9412833/virgil-results.txt)
[rust-results.txt](https://github.com/titzer/virgil/files/9412834/rust-results.txt)
[go-results.txt](https://github.com/titzer/virgil/files/9412835/go-results.txt)
-
(I'm curious to see the maximum memory allocated during the executions too, even for the compilations.)
-
That reminds me, I've been meaning to make memory profiling work with the native GC but haven't gotten around to it yet. I do know that a typical bootstrap of Aeneas does not cause a single GC. As Aeneas runs with a 512MB heap on x86-linux, of which a copying collector can use only half between collections, that means it allocates less than 256MB of memory total for a self-compile.
-
@diakopter I added compilation and execution memory consumption columns to the results (updated tables for wasm, wasm-optimised, x86-64-linux, x86-64-linux-optimised, x86-linux, x86-linux-optimised, jvm and jvm-optimised). For the native compilation targets, Virgil wins in terms of minimising hardware requirements (executable memory and storage). The bash commands that generated the results have been added to the attached raw results files.
-
Virgil sure is parsimonious.
-
I might like garbage collection but I don't like garbage :-)
-
(I've been intending to contribute a more sophisticated dual-nursery generational allocator/collector for collection-heavy programs such as the ones I use... now that it seems it can support a perfectly precise collector (registers included), maybe now's the right time.)
-
I think part of what makes Virgil great/much better than Rust, C/C++, and whatever in the metrics you cite (code size, execution time, etc.) is that it's totally divorced from the current computing landscape/ecosystem. And by that I mainly mean it doesn't depend on libc and doesn't care about supporting it. There is no
Though this does have its own drawbacks: porting to a new platform can be more annoying, certain things you have to do from scratch (such as networking), and debugging can be a pain.
-
Thanks for the link, I love this quote:
-
I ran a Hello World + Fibonacci benchmark comparing Virgil with Rust and TinyGo (the two most often cited Wasm compilers), and the results seem too good to be true!
Virgil outperforms both Rust and TinyGo by orders of magnitude in terms of both compiler speed and executable file sizes. Yes, the 0.00s compile time is correct: time(1) reports to the nearest 1/100s. (When I compiled my first Virgil program it was so fast I thought it hadn't run.)
The Numbers
wasm
wasm-optimised
x86-64-linux
x86-64-linux-optimised
WebAssembly Performance
x86-64 Performance
Notes
Virgil Wasm code generated with the compiler `-opt=all` option ran slower than without it, but the executable was ~10% smaller, so currently there's not a lot to be gained from using the `-opt=all` option.
Importing the `fmt` package increased the size of the TinyGo Wasm executable from 8KB to 191KB (an increase of 183KB), whereas importing the Virgil `Strings` component increased the size of the Virgil Wasm executable from 3.6KB to 7.9KB (an increase of only 4.3KB).
The compiled Wasm files were executed with `wasmtime-cli 0.39.1`.
Details
The raw data along with source code and platform information is attached.
go-results.txt
rust-results.txt
virgil-results.txt
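For context, here is a minimal sketch of the kind of program being measured. It is not the exact source behind the numbers above (that is in the attached results files), and the compile commands in the comments are illustrative placeholders rather than the ones recorded in the attachments.

```
// A stand-in for the Hello World + Fibonacci benchmark program.
// Illustrative compile commands (see the attached files for the real ones):
//   v3c -target=wasm HelloFib.v3
//   v3c -target=wasm -opt=all HelloFib.v3
//   v3c -target=x86-64-linux HelloFib.v3
def fib(n: int) -> int {
    if (n < 2) return n;
    return fib(n - 1) + fib(n - 2);
}
def main() {
    System.puts("Hello World!\n");
    System.puti(fib(35));
    System.ln();
}
```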