Add a benchmark to analyze performance on consecutive calls #20

Open · wants to merge 4 commits into base: master
Conversation

@lessless lessless commented Apr 21, 2020

Hi,

This PR is the result of trying to understand UAInspector performance. The first change is just a convenience - it points the data lookup path to the default download directory.
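For context, a minimal sketch of what such a config change might look like (the exact path is an assumption; ua_inspector reads its lookup directory from the `database_path` setting):

```elixir
# config/config.exs — sketch only: point ua_inspector at a default
# download directory instead of a custom one (path is hypothetical)
import Config

config :ua_inspector,
  database_path: Path.expand("priv/ua_inspector_data")
```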

The second one is a test that calls UAInspector.parse/1 on the elements of a list. I don't yet know why, but ua_inspector performance appears to degrade when it is called consecutively on a list with as few as 10 elements.
Also, performance on the data provided in the bench scripts doesn't correspond to what I observed the other day. Fortunately, it was possible to reproduce the issue with sample data from https://github.com/51Degrees/Device-Detection
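For reference, a sketch of what the list benchmark does (file name, fixture path, and input construction are assumptions; it presumes Benchee and ua_inspector are available with a downloaded database, so it is not standalone-runnable):

```elixir
# bench/parse_list.exs — hypothetical sketch of the consecutive-calls benchmark:
# each invocation parses every user agent in a list of size n
agents =
  "bench/data/user_agents.txt"
  |> File.read!()
  |> String.split("\n", trim: true)

inputs =
  for n <- [1, 10, 100, 1_000], into: %{} do
    {Integer.to_string(n), agents |> Stream.cycle() |> Enum.take(n)}
  end

Benchee.run(
  %{"UAInspector.parse/1" => fn list -> Enum.each(list, &UAInspector.parse/1) end},
  inputs: inputs,
  warmup: 2,
  time: 60
)
```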

mix bench.parse

Name                        ips        average  deviation         median         99th %
Parse: hbbtv           153.38 K        6.52 μs   ±443.95%           5 μs          23 μs
Parse: desktop          31.17 K       32.09 μs    ±75.55%          27 μs          88 μs
Parse: bot              25.24 K       39.62 μs    ±72.65%          33 μs         113 μs
Parse: smartphone       19.38 K       51.61 μs    ±55.51%          46 μs         125 μs
Parse: tablet           18.84 K       53.07 μs    ±58.75%          44 μs         144 μs
Parse: unknown          18.59 K       53.79 μs    ±97.79%          44 μs         157 μs
mix bench.parse_list
Operating System: macOS
CPU Information: Intel(R) Core(TM) i7-7660U CPU @ 2.50GHz
Number of Available Cores: 4
Available memory: 16 GB
Elixir 1.10.2
Erlang 22.3

Benchmark suite executing with the following configuration:
warmup: 2 s
time: 1 min
memory time: 0 ns
parallel: 1
inputs: 1, 10, 100, 1_000
Estimated total run time: 4.13 min

Benchmarking UAInspector.parse/1 with input 1...
Benchmarking UAInspector.parse/1 with input 10...
Benchmarking UAInspector.parse/1 with input 100...
Benchmarking UAInspector.parse/1 with input 1_000...

##### With input 1 #####
Name                          ips        average  deviation         median         99th %
UAInspector.parse/1         40.45       24.72 ms    ±75.23%       19.81 ms      108.81 ms

##### With input 10 #####
Name                          ips        average  deviation         median         99th %
UAInspector.parse/1          4.67      214.20 ms    ±46.67%      181.41 ms      668.52 ms

##### With input 100 #####
Name                          ips        average  deviation         median         99th %
UAInspector.parse/1          0.54         1.84 s     ±6.78%         1.87 s         1.94 s

##### With input 1_000 #####
Name                          ips        average  deviation         median         99th %
UAInspector.parse/1        0.0552        18.13 s     ±0.00%        18.13 s        18.13 s

@lessless lessless changed the title Add benchmark on a list of user agents Add a benchmark to analyze performance on consecutive calls Apr 21, 2020
@lessless (Author)
It seems that I misinterpreted the Benchee results - there is no performance degradation. Benchee's ips counts invocations of the whole benchmark function per second, so each invocation with input 1_000 parses 1,000 user agents; dividing the averages by the input size gives a roughly constant per-element time.
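The per-element cost can be checked by dividing each average from the run above by its input size:

```elixir
# Average time per parsed element, derived from the Benchee averages above.
# A roughly constant value means total time scales linearly with input size,
# i.e. there is no degradation on consecutive calls.
results = [{1, 24.72}, {10, 214.20}, {100, 1_840.0}, {1_000, 18_130.0}]

for {input, avg_ms} <- results do
  IO.puts("input #{input}: ~#{Float.round(avg_ms / input, 2)} ms per element")
end
```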
