Aggregating manual test results #171

Open
RhyG opened this issue Nov 6, 2023 · 2 comments
Labels
enhancement New feature or request

RhyG commented Nov 6, 2023

When running Flashlight tests via Maestro with flashlight test <maestro test file>, the results are nicely aggregated into a single result, which makes comparing different runs easy. Is it possible to do this when manually executing tests? I'm not sure if it doesn't exist or if I've just missed it in the docs.

We have found that, because of the time Maestro spends letting the app go idle, our tests aren't quite as accurate as we'd like them to be. So instead we've been running tests manually via flashlight measure, repeating each test ~5 times and grabbing the worst result. Is there a way to merge results the way automated measures do, but for manual tests? Happy to help with this if it doesn't exist, just point me in the right direction.

Thanks heaps, love the tool; it has massively changed our performance measuring game.

Almouro (Member) commented Nov 13, 2023

Hi @RhyG,

You're right, there's no way to do this out of the box with flashlight measure (except downloading each JSON of measures from the webapp individually and merging them via a script, but that'd be tedious).
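
If you do go down the manual-merge route in the meantime, something like the sketch below could work. It assumes each file downloaded from the webapp is a single result object with an iterations array; that shape is an assumption, so adjust the field names to whatever your JSON actually contains.

```ts
// merge-results.ts — a rough sketch, not the official tooling.
// Assumption: each downloaded file is one result object with an `iterations` array.
// Usage: npx ts-node merge-results.ts run1.json run2.json run3.json
import * as fs from "fs";

const files = process.argv.slice(2);

// Parse every downloaded result file.
const results = files.map((file) => JSON.parse(fs.readFileSync(file, "utf8")));

// Collapse all runs into a single result so the report treats them as
// iterations of the same test rather than separate results.
const merged = {
  ...results[0],
  name: "Merged manual runs",
  iterations: results.flatMap((result) => result.iterations),
};

fs.writeFileSync("merged.json", JSON.stringify(merged, null, 2));
```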

Changing flashlight measure

At the moment, when we click the "Start" button, the webpage sends the "start" event over the socket to the CLI, which we listen to here: https://github.com/bamlab/flashlight/blob/main/packages/commands/measure/src/server/ServerSocketConnectionApp.tsx#L31
This creates a new result (instead of a new iteration) here: https://github.com/bamlab/flashlight/blob/main/packages/commands/measure/src/server/socket/socketState.ts#L49

Might make sense to restructure the page so we have a start/stop button and:

  • a button to add a new result
  • a button to add a new iteration

If you're interested in doing this, happy to provide some more guidance!
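
To make the result/iteration distinction above concrete, here's a very rough TypeScript sketch; the types and function names are illustrative, not the actual socketState.ts API. Adding a result starts a fresh entry (what "start" does today), while adding an iteration appends to the last result, which is what aggregating manual runs would need.

```ts
// Illustrative only — not the real socketState.ts types.
// Assumption: the state holds a list of results, each with its recorded iterations.
interface Iteration {
  measures: unknown[];
}

interface Result {
  name: string;
  iterations: Iteration[];
}

// "Add a new result": start a fresh entry, as the current "start" handler does.
const addNewResult = (results: Result[], name: string): Result[] => [
  ...results,
  { name, iterations: [] },
];

// "Add a new iteration": append to the last result instead of creating a new one.
const addNewIteration = (results: Result[]): Result[] =>
  results.map((result, index) =>
    index === results.length - 1
      ? { ...result, iterations: [...result.iterations, { measures: [] }] }
      : result
  );
```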

Maestro being slow

We also noticed Maestro can be pretty slow at starting and stopping tests; is that also what you're experiencing?
We actually have a fork that can be used with npx @perf-profiler/maestro@rc, but it's not ideal; we should hopefully open a PR to Maestro soon to remedy this.

Almouro added the enhancement (New feature or request) label on Nov 13, 2023
RhyG (Author) commented Nov 13, 2023

Thanks for the response @Almouro; I'll hopefully find some time to poke around and see what I can do. Really appreciate the help.

Yeah, that's a big part of the slowness we're seeing; interesting to hear others are experiencing it too. We've also noticed that Maestro seems to "wait" for arbitrary amounts of time to let the app go idle, which means the JS thread and FPS have time to recover beyond what a regular user would experience. I'll check out your fork and see if it helps at all. Thanks again!
