
fix(hub): ensure that an update is dispatched if any of its topics is subscribed and allowed #688

Merged (2 commits) on Aug 12, 2022

Conversation

@dunglas (Owner) commented Aug 12, 2022

There is a flaw in the current logic. According to the spec, an update must be dispatched if:

  • The subscriber is subscribed to at least one of its topics
  • The subscriber is allowed for at least one of its topics (even if it's not the one used to subscribe)

https://mercure.rocks/spec#subscribers

This fixes #651 (thanks @mab05k).

@divine @kevburnsjr, this changes the logic introduced in #578 a bit.
Would you mind checking that it doesn't introduce any performance regression?

@kevburnsjr (Contributor)

Looks faster according to my tests.

> benchstat a.txt b.3.txt
name                                                    old time/op    new time/op    delta
Subscriber/topics_1_concurrency_100_matchpct_1-16         90.2ns ± 3%    86.9ns ± 6%     ~     (p=0.421 n=5+5)
Subscriber/topics_1_concurrency_100_matchpct_10-16        90.1ns ± 5%    93.2ns ± 6%     ~     (p=0.222 n=5+5)
Subscriber/topics_1_concurrency_100_matchpct_100-16       95.2ns ± 7%    95.1ns ± 7%     ~     (p=1.000 n=5+5)
Subscriber/topics_1_concurrency_1000_matchpct_1-16        88.5ns ± 4%    87.1ns ± 3%     ~     (p=0.421 n=5+5)
Subscriber/topics_1_concurrency_1000_matchpct_10-16       90.5ns ± 3%    90.6ns ± 4%     ~     (p=1.000 n=5+5)
Subscriber/topics_1_concurrency_1000_matchpct_100-16      88.2ns ± 4%    90.0ns ± 5%     ~     (p=0.310 n=5+5)
Subscriber/topics_1_concurrency_5000_matchpct_1-16        87.2ns ± 5%    83.8ns ± 3%     ~     (p=0.095 n=5+5)
Subscriber/topics_1_concurrency_5000_matchpct_10-16       87.3ns ± 5%    87.1ns ± 3%     ~     (p=0.690 n=5+5)
Subscriber/topics_1_concurrency_5000_matchpct_100-16      90.1ns ± 3%    92.9ns ± 3%     ~     (p=0.056 n=5+5)
Subscriber/topics_1_concurrency_20000_matchpct_1-16       85.8ns ± 5%    89.0ns ± 5%     ~     (p=0.222 n=5+5)
Subscriber/topics_1_concurrency_20000_matchpct_10-16      86.1ns ± 2%    87.9ns ± 4%     ~     (p=0.310 n=5+5)
Subscriber/topics_1_concurrency_20000_matchpct_100-16     89.0ns ± 3%    86.7ns ± 2%     ~     (p=0.222 n=5+5)
Subscriber/topics_5_concurrency_100_matchpct_1-16          320ns ± 6%     214ns ± 3%  -33.06%  (p=0.008 n=5+5)
Subscriber/topics_5_concurrency_100_matchpct_10-16         318ns ± 5%     218ns ± 8%  -31.41%  (p=0.008 n=5+5)
Subscriber/topics_5_concurrency_100_matchpct_100-16        319ns ± 4%     211ns ± 2%  -33.80%  (p=0.008 n=5+5)
Subscriber/topics_5_concurrency_1000_matchpct_1-16         325ns ± 4%     215ns ± 7%  -33.86%  (p=0.008 n=5+5)
Subscriber/topics_5_concurrency_1000_matchpct_10-16        326ns ± 5%     217ns ± 6%  -33.54%  (p=0.008 n=5+5)
Subscriber/topics_5_concurrency_1000_matchpct_100-16       315ns ± 7%     212ns ± 4%  -32.65%  (p=0.008 n=5+5)
Subscriber/topics_5_concurrency_5000_matchpct_1-16         324ns ± 7%     208ns ± 4%  -35.69%  (p=0.008 n=5+5)
Subscriber/topics_5_concurrency_5000_matchpct_10-16        328ns ± 7%     213ns ± 8%  -35.04%  (p=0.008 n=5+5)
Subscriber/topics_5_concurrency_5000_matchpct_100-16       340ns ± 7%     210ns ± 8%  -38.21%  (p=0.008 n=5+5)
Subscriber/topics_5_concurrency_20000_matchpct_1-16        321ns ± 7%     204ns ± 3%  -36.24%  (p=0.008 n=5+5)
Subscriber/topics_5_concurrency_20000_matchpct_10-16       334ns ±10%     214ns ± 6%  -35.82%  (p=0.008 n=5+5)
Subscriber/topics_5_concurrency_20000_matchpct_100-16      323ns ±12%     210ns ± 5%  -35.11%  (p=0.008 n=5+5)
Subscriber/topics_10_concurrency_100_matchpct_1-16        1.20µs ±11%    0.67µs ±12%  -44.02%  (p=0.008 n=5+5)
Subscriber/topics_10_concurrency_100_matchpct_10-16       1.14µs ±16%    0.67µs ±11%  -40.82%  (p=0.008 n=5+5)
Subscriber/topics_10_concurrency_100_matchpct_100-16      1.06µs ±12%    0.64µs ± 9%  -39.52%  (p=0.008 n=5+5)
Subscriber/topics_10_concurrency_1000_matchpct_1-16       1.20µs ± 7%    0.67µs ± 9%  -44.22%  (p=0.008 n=5+5)
Subscriber/topics_10_concurrency_1000_matchpct_10-16      1.13µs ±10%    0.67µs ± 8%  -40.87%  (p=0.008 n=5+5)
Subscriber/topics_10_concurrency_1000_matchpct_100-16     1.09µs ±16%    0.68µs ± 8%  -37.66%  (p=0.008 n=5+5)
Subscriber/topics_10_concurrency_5000_matchpct_1-16       1.18µs ± 6%    0.67µs ± 9%  -42.96%  (p=0.008 n=5+5)
Subscriber/topics_10_concurrency_5000_matchpct_10-16      1.08µs ±14%    0.68µs ± 8%  -37.13%  (p=0.008 n=5+5)
Subscriber/topics_10_concurrency_5000_matchpct_100-16     1.14µs ±11%    0.66µs ± 7%  -41.57%  (p=0.008 n=5+5)
Subscriber/topics_10_concurrency_20000_matchpct_1-16      1.17µs ±16%    0.65µs ± 3%  -44.60%  (p=0.008 n=5+5)
Subscriber/topics_10_concurrency_20000_matchpct_10-16     1.19µs ±14%    0.68µs ± 6%  -43.25%  (p=0.008 n=5+5)
Subscriber/topics_10_concurrency_20000_matchpct_100-16    1.28µs ± 6%    0.66µs ±11%  -48.19%  (p=0.008 n=5+5)

It might just be that the new logic is a better fit for the test data set.
In any event, I don't see this causing performance issues.

@dunglas (Owner, Author) commented Aug 13, 2022

I tweaked the logic again because this patch broke the cache: #689

@divine (Contributor) commented Aug 15, 2022

Hello,

I've deployed the latest beta version to production. Let's run it for a week.

We're again running up against server limits, going up to 16M notifications per day with up to ~4k subscribers. Probably time to upgrade the server 😄.

Thanks!

@divine (Contributor) commented Aug 22, 2022

Hello,

JFYI, all looks good. Performance is hard to measure (the important thing is that it doesn't go over server limits), as the number of notifications varies over time.

Currently vs. previously: (monitoring graph screenshots attached in the original comment)

Thanks!

@dunglas (Owner, Author) commented Aug 22, 2022

Thanks for testing @divine! I have an idea to improve the algorithm a bit more (using gob instead of ascii86 to encode the key), but this shouldn't change the performance significantly.

Development

Successfully merging this pull request may close these issues.

Unable to Subscribe to Private Messages