
[WIP] Support router serialization. #90

Closed. nathanhammond wants to merge 48 commits from the serialize-state branch.

@nathanhammond (Contributor) commented May 3, 2016

Currently works for all scenarios except late additions via delegate. The approach may support those and I may just have broken tests, but I'm unsure.

Current status:

  • Make it work.
  • Make it right.
  • Make it fast.

@nathanhammond force-pushed the serialize-state branch 2 times, most recently from f9a6641 to 9c1c805 (May 3, 2016 23:12)
```js
router = new RouteRecognizer();
router.map = function() {
  var original = new RouteRecognizer();
  original.map(...arguments);
```
Collaborator (review comment on the diff above): This is probably the parse error.

@nathanhammond (Contributor, Author) commented May 3, 2016

Uh-oh, benchmarks aren't good:

```
testing
- End-to-end (runs the mapping and generates the router)
- Serialized (loads from JSON)
running first test, please wait...
  End-to-end ............ 178.20 op/s
  Serialized .............. 0.69 op/s
```

I'm not sure why it's slow yet.

```js
new RouteRecognizer(serialized);

// Look up time is constant
router.recognize('/posts/1');
```
@chadhietala (Collaborator) commented May 4, 2016: This is looking at the wrong RR instance.
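
The fix is presumably just to recognize against the rehydrated instance; a minimal sketch of the corrected snippet (the variable name `rehydrated` is an assumption):

```js
// Assign the deserialized recognizer and look up against it, so the benchmark
// measures the rehydrated instance rather than the original `router`.
var rehydrated = new RouteRecognizer(serialized);

// Look up time is constant
rehydrated.recognize('/posts/1');
```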

@nathanhammond (Contributor, Author): Good catch, rerunning benchmark now.

@nathanhammond (Contributor, Author): No faster. Thinking about it, it wouldn't be: that `new RouteRecognizer(serialized);` call is stupidly expensive for some reason.

@nathanhammond (Contributor, Author) commented May 4, 2016

@krisselden Can I steal some of your time to learn what I'm doing wrong on the performance side here? I'm going to delay aiming for 100% compatibility until I know it makes sense to do so.

Edit: see below; it's a "simple" problem. No need to involve you yet. 😛

@nathanhammond (Contributor, Author):
Found the problem:

```
$ ls -la out.json
-rw-r--r--  1 user  group  63414305 May  3 18:44 out.json
```

JSON.parse simply can't handle a ~60 MB JSON payload quickly. Also, can you say memory pressure? This needs to be optimized (and I think it can be, pretty dramatically).
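
For reference, a minimal Node sketch for timing the two halves of that cost might look like this (the require path is an assumption, and it presumes the branch's constructor accepts the parsed payload, as in the snippet earlier in this thread):

```js
// Rough timing sketch for the 60 MB dump above: how much of the cost is
// JSON.parse versus constructing the recognizer from the parsed payload.
var fs = require('fs');
var RouteRecognizer = require('./dist/route-recognizer'); // path is an assumption

var payload = fs.readFileSync('out.json', 'utf8');
console.log('payload bytes:', Buffer.byteLength(payload, 'utf8'));

console.time('JSON.parse');
var serialized = JSON.parse(payload);
console.timeEnd('JSON.parse');

console.time('new RouteRecognizer(serialized)');
new RouteRecognizer(serialized);
console.timeEnd('new RouteRecognizer(serialized)');
```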

@nathanhammond (Contributor, Author):
Can we start having the conversation about whether/how to land the dramatic refactoring currently living in trie.js once I finish making it 100% API compatible?

@nathanhammond (Contributor, Author):
166 of 178 assertions passed, 12 failed.

Closing in on correctness; taking a look at early benchmarks. Add and End-to-end seem to be doing something terribly wrong; I suspect that the compact method needs some work. The recognize benchmark is effectively a worst-case scenario, as the generated flat list doesn't lend itself to prefix trie optimizations.

We're likely going to trade runtime performance for serialized performance here (in add), but I want to keep runtime within striking distance of serialized. This means that Add and End-to-end need to reach hundreds of op/s.

I need to rebuild serialization for the trie method to see what we're looking at for rehydration speed as well as JSON file size.
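
The worst case mentioned above (the flat route list) looks roughly like the following illustrative map (not the benchmark's actual routes): when routes share no prefixes, the trie degenerates into a flat fan-out under the root, so there is nothing for prefix compaction to share.

```js
// Illustrative worst case for a prefix trie: none of these routes share a
// prefix, so the trie is just a flat list of children under the root and a
// lookup gains little over scanning a flat list.
var router = new RouteRecognizer();
router.map(function(match) {
  match('/posts/:id').to('post');
  match('/users/:id').to('user');
  match('/photos/:id').to('photo');
  match('/comments/:id').to('comment');
});
```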


| Benchmark    | Original           | Trie               |
| ------------ | ------------------ | ------------------ |
| Add          | 489.09 op/s        | Crash              |
| End-to-end   | 229.27 op/s        | 0.54 op/s          |
| Generate     | 4,032,441.12 op/s  | 1,467,045.61 op/s  |
| Handlers For | 29,003,809.27 op/s | 18,566,106.55 op/s |
| Recognize    | 352,315.79 op/s    | 25,253.27 op/s     |
| Serialize    |                    | 0.69 op/s          |

@nathanhammond (Contributor, Author) commented May 16, 2016

New benchmarks with JIT compacting. JIT compacting spreads out the cost in recognize and generate, making the first call to each super slow. I'm guessing, based upon the non-existent change in the benchmark numbers, that the harness is throwing out that run as an outlier. 😜 (Assuming that most differences here are just noise.)

| Benchmark    | Original           | Trie               | JIT Trie           |
| ------------ | ------------------ | ------------------ | ------------------ |
| Add          | 489.09 op/s        | Crash              | 442.14 op/s        |
| End-to-end   | 229.27 op/s        | 0.54 op/s          | 0.59 op/s          |
| Generate     | 4,032,441.12 op/s  | 1,467,045.61 op/s  | 2,790,745.38 op/s  |
| Handlers For | 29,003,809.27 op/s | 18,566,106.55 op/s | 17,657,366.47 op/s |
| Recognize    | 352,315.79 op/s    | 25,253.27 op/s     | 25,979.61 op/s     |
| Serialize    |                    | 0.69 op/s          |                    |
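
Roughly, the idea behind JIT compacting is to defer the compaction pass until the first lookup needs it, which is why that first call absorbs most of the cost. A minimal sketch of the pattern (not the trie.js implementation; `compact` here is a stand-in):

```js
// Just-in-time compaction sketch: pay the compact() cost on first use rather
// than at construction/deserialization time.
function compact(trie) {
  // Stand-in for the real compaction pass (e.g. merging single-child nodes).
  return trie;
}

function LazyRecognizer(serializedTrie) {
  this.trie = serializedTrie; // raw, uncompacted trie data
  this.compacted = false;
}

LazyRecognizer.prototype.recognize = function(path) {
  if (!this.compacted) {
    this.trie = compact(this.trie); // first call is slow; later calls are not
    this.compacted = true;
  }
  // ...walk this.trie for `path` (lookup logic omitted from this sketch)
};
```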

Updates:

  • Improved generate by using push instead of unshift (sketched after this list).
  • Improved add by using pure loops.
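
The push-vs-unshift change is the usual linear-versus-quadratic array-building trade: unshift re-indexes the whole array on every call, while push plus a single reverse stays linear. A sketch of the shape of the change (illustrative, not the actual generate code):

```js
// Walking from a leaf up to the root yields path segments in reverse order.

// Before: unshift shifts every existing element on each call, so building the
// path is O(n^2) in the number of segments.
function generateWithUnshift(leaf) {
  var segments = [];
  for (var node = leaf; node; node = node.parent) {
    segments.unshift(node.segment);
  }
  return segments.join('/');
}

// After: push is O(1) amortized and a single reverse() keeps the walk linear.
function generateWithPush(leaf) {
  var segments = [];
  for (var node = leaf; node; node = node.parent) {
    segments.push(node.segment);
  }
  return segments.reverse().join('/');
}
```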

@nathanhammond (Contributor, Author):
Current status:

  • Rebased on top of @bantic's work. 339 out of 355 tests passing. I know of additional correctness issues, with regard to specificity, that I need to revisit; we don't have tests for them.
  • Many tests now run after a serialization/deserialization pass to check for idempotence of that operation.
  • I've not re-reviewed performance recently.
  • I need to clean up some of the mess I've made in integrating @bantic's work.

@nathanhammond (Contributor, Author):
Superseded by ember-route-recognizer.
