Commit

docs: updated optimizations docs
arctic-hen7 committed Jul 2, 2022
1 parent 2b49591 commit 51ad962
Showing 1 changed file with 3 additions and 10 deletions.
13 changes: 3 additions & 10 deletions docs/next/en-US/reference/deploying.md
@@ -14,18 +14,11 @@ With JavaScript, you can 'chunk' your app into many different files that are loa

If you're getting into real strife with your bundle sizes though, you can, theoretically, split out your app into multiple components by literally building different parts of your website as different apps. This should be an absolute last resort though, and we have never come across an app that was big enough to need this. (Remember that Perseus will still give your users a page very quickly, it's just the interactivity that might take a little longer --- as in a few milliseconds longer.)

However, there are some easy things you can do to make your Wasm bundles much smaller. The first are applied in `Cargo.toml` to the release profile, which allows you to tweak compilation settings in release-mode (used in `perseus deploy`). (Note: if your app is inside a workspace, this has to go at the root of the workspace.)
Very usefully, the Perseus CLI automatically applies several optimizations when you build in release mode. Specifically, Cargo's optimization level is set to `z`, which aggressively optimizes for size at the expense of execution speed; on the web, this actually makes for a faster site, because the smaller Wasm bundle loads more quickly. Additionally, `codegen-units` is set to `1`, which slows down compilation with `perseus deploy`, but both speeds up, and reduces the size of, the final bundle.

```toml
[profile.release]
lto = true
opt-level = "z"
codegen-units = 1
```

The first of these lets LLVM inline and prune functions more aggressively, which makes `perseus deploy` take longer, but produces a faster and smaller app. The second is Cargo's optimization level, which is usually set to 3 for release builds, optimizing aggressively for speed. However, on the web, we get better 'speed' out of smaller sizes, as explained above, so we optimize aggressively for size instead (note that sometimes optimizing normally for size with `s` can actually be better, so you should try both). The third is another setting that makes compilation take (much) longer with `perseus deploy`, but that increases speed and reduces size by letting LLVM basically do more work.
Notably, these optimizations are applied through the `RUSTFLAGS` environment variable on the Wasm build, and only in release mode (e.g. `perseus deploy`). If you want to tweak them, you can directly override the value of that variable in this context (i.e. apply your own optimization settings) by setting the `PERSEUS_WASM_RELEASE_RUSTFLAGS` environment variable. This takes the same format as `RUSTFLAGS`, and its default value is `-C opt-level=z -C codegen-units=1`.
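For example, if you wanted to try optimizing normally for size with `opt-level=s` instead of the default `z` (sometimes this actually produces a smaller bundle), you might run something like this (a hypothetical invocation; the variable takes the same format as `RUSTFLAGS`):

```sh
PERSEUS_WASM_RELEASE_RUSTFLAGS="-C opt-level=s -C codegen-units=1" perseus deploy
```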

If you're only ever going to export your app, this is fine, but if you ever use a server, this will be a problem, as these size-focused optimizations will apply to your server too, slowing everything down again! Unfortunately, Cargo doesn't yet support [target-specific profiles](https://github.com/rust-lang/cargo/issues/4897), so we need to hack our way around this. TODO
*Note: the reason these optimizations are applied through `RUSTFLAGS` rather than `Cargo.toml` is that Cargo doesn't yet support target-specific release profiles, and we only want to optimize for size on the browser-side. Applying the same optimizations to the server would slow things down greatly!*

The next thing you can do is switch to `wee_alloc`, an alternative allocator designed for the web that trades a little allocation efficiency for smaller bundles. Again though, that lower efficiency is barely noticeable, while every kilobyte you can shave off the bundle's size leads to a noticeably faster load. Importantly, you still want to retain full efficiency on the server, so it's very important to only use `wee_alloc` on the browser-side, which you can do by adding the following to the very top of your `lib.rs`:
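A minimal sketch of that (assuming you've added `wee_alloc` as a dependency gated to the `wasm32` target, so the server keeps the default allocator) might look like this:

```rust
// Only use `wee_alloc` when compiling for the browser (the `wasm32` target);
// the server keeps Rust's default, faster allocator.
#[cfg(target_arch = "wasm32")]
#[global_allocator]
static ALLOC: wee_alloc::WeeAlloc = wee_alloc::WeeAlloc::INIT;
```

The `#[global_allocator]` attribute routes all heap allocations through `wee_alloc`, and the `cfg` gate ensures the swap only happens in the Wasm build.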

