
Try to get it running on emscripten/binaryen and port smm-webui #6

Open
sorear opened this issue Jun 26, 2016 · 4 comments

Comments

@sorear
Owner

sorear commented Jun 26, 2016

Since we're now 7x faster than SMM2 in single-threaded mode and use roughly a quarter of the memory, we'll probably still beat the JS version even with the overhead of cross-compiling to JS (asm.js / webasm). Threads would help even more, but SharedArrayBuffer is probably going to take much longer to arrive than the Rust Emscripten port. OTOH, having everything in a single 128MiB array will allow us to move execution between the main thread and a worker with a zero-copy transferring postMessage.
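The zero-copy move described above can be sketched without spinning up a worker: `postMessage(state, [state])` uses the same structured-clone-with-transfer machinery that `structuredClone` exposes directly (Node 17+ or any modern browser). A minimal sketch, with the single 128MiB array shrunk to 16 bytes for the demo:

```javascript
// Moving an ArrayBuffer via a transfer list -- the same mechanism as
// worker.postMessage(state, [state]) -- instead of copying it.
const state = new ArrayBuffer(16); // stand-in for the single 128MiB array
new Uint8Array(state)[0] = 42;

// Listing the buffer in the transfer list moves ownership of the backing
// memory; no bytes are copied regardless of the buffer's size.
const moved = structuredClone(state, { transfer: [state] });

console.log(new Uint8Array(moved)[0]); // 42: the data arrived intact
console.log(state.byteLength);         // 0: the sender's buffer is detached
```

After the transfer the sender's `ArrayBuffer` is detached (`byteLength` drops to 0), which is what makes handing the whole engine state back and forth between the main thread and a worker cheap.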

@sorear
Owner Author

sorear commented Jul 3, 2016

It looks like neither Rust on Emscripten nor Rust on wasm is quite ready for prime time, although both are being actively worked on.

@digama0
Contributor

digama0 commented Jul 4, 2016

I think this is moving in the wrong direction. Rust is designed to run directly on hardware. To provide a web UI, it should instead act as a server: Apache can redirect requests to a specific command line which will produce HTML output. This can be run locally by running a local Apache server, and a sufficiently permissioned copy could run in a place like rust.metamath.org.

@sorear
Owner Author

sorear commented Jul 4, 2016

"X runs on hardware directly" is not a very meaningful statement.

WASM is not that dissimilar from LLVM IR, and there's no moral difference between "Rust -> MIR -> LLVM -> x86 asm -> [CLASSIFIED]" and "Rust -> MIR -> WASM -> x86 asm -> [CLASSIFIED]" (and remember, the buck does not stop at the ISA). I could do PNaCl instead; PNaCl literally is LLVM IR, though adoption has been weak. I'm not religious about compiler middle ends.

The relevant question is how tightly the engine can be integrated with the UI, and how much interactive responsiveness (and scalability!) you forfeit by requiring all queries to run on a central node. I see no advantage to requiring frequent roundtrips for serious work (casual browsing may be a different matter, especially if people tend to navigate away before downloading 29MB of HTML).

@digama0
Contributor

digama0 commented Jul 4, 2016

Okay, I was reading your proposal as transpilation to JS, which introduces a wildcard overhead factor. If you can get something similar with WASM or PNaCl I could see that working.

For casual browsers (and maybe sandbox work) the remote server is sufficient. In these cases the main benefit is as a replacement for the us2 HTML pages with zero build time. For serious authors, the web UI is trying to play proof assistant, with all the complexities that entails. In that case you would want to run it as a local server, at which point it isn't all that different from the other proposed IDE+smm3 proof assistant plans.
