options to incorporate Lighthouse into our Pipeline #5126
Comments
I agree that performance benchmarking is an important addition to the pipeline. I do have some questions about what seems like a proposal before we dig in further:
I'd like to see more of an investigation into what we feel we should be benchmarking and what would be a good way to test that, and then I think we can look at tools. As it is now, it seems like we're looking at tools without clear, defined, and accepted criteria to compare them against.
Thanks @Ibrahimmaga. With regards to #2, it seems like this is focused on testing website performance, is that correct? In that case, Lighthouse seems like a fine option to go with. With that said, I am curious whether the website is the right place to start here. Most of the time, I think it's highly likely that changes to the foundation and components packages won't be reflected on the website at all.

With the website being fairly steady over time, consider the following scenario and its potential implications:

- We add a few new components across the foundation and components packages. None of those components are used on the website, so there are no changes to website performance when it is run through Lighthouse as part of the pipeline.
- A month or so later, we decide to add a new section to the site and include one or two of these components. Performance at that point drops by 10% with significant memory issues.
- By then, it's quite late and potentially problematic to address the underlying performance problems: the CI fails due to a PR that merged over a month ago, and we now need to figure out what the underlying issue is, how to address it without breaking anything, etc.

Should we instead start by setting up benchmarking for the components and catch these issues when and where they are introduced? That doesn't mean we can't also test the website, but I don't think testing the website on every pipeline run is going to be a good method of preventing performance regressions.
Testing library perf and website perf are two different things entirely, with very different technologies and approaches involved. @nicholasrice already has some basic tools for testing performance of fast-element library changes. I would like to automate this eventually, but there's a lot more to do before we'd be ready for that, I think.

@chrisdholt The tests we do for fast-element and that infrastructure probably don't make sense for testing the components. I think you would need to come up with a plan for how you want to handle that; we could help implement the infrastructure once you know what you want.

Independently, we want our website to perform well. We've already run tests to determine areas we need to fix, but I think it makes sense to incorporate the testing into the build pipeline, especially if there is already a GH action that makes it easy. Then we'll be able to catch issues as we evolve the site, whether it's adding new features to the home page, publishing new explorer builds, or the re-design that's being discussed.

@awentzel @Ibrahimmaga Can we restrict this pipeline step to running only when changes are made under the website's source directory? It would be good to do this before the asset compression task so that we can also, via automation, track the improvements from integrating the compression.
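For reference, this kind of restriction can be expressed directly in the workflow trigger. The sketch below is only illustrative: the directory glob is a placeholder (the exact website path isn't specified in this thread), and the build/audit steps are elided.

```yaml
# Hypothetical workflow trigger: run the Lighthouse job only when files under
# the website's source directory change. The path below is a placeholder and
# would need to match the repo's actual layout.
name: Lighthouse audit
on:
  pull_request:
    paths:
      - "sites/website/**"   # placeholder path for the website source

jobs:
  lighthouse:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # ...build the site and run the Lighthouse audit here...
```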
I don't disagree with this sentiment, but I'm still not sure of the value at this point in the pipeline, given how stable the site tends to be with the exception of inheriting from actual controls - that's likely where the majority of changes tend to and will happen. How often do we expect the site, its assets, etc., to be changing? If we've done a cost/benefit analysis and we think that a pipeline task is the most efficient method here, that's fine; however, if the pipeline turns into something which hinders our build times and process more than it helps, I'd like us to be prepared to reconsider.
Unless there is a mandate to take on a compliance process for privacy, security, or accessibility that negatively impacts our productivity with additional pipeline requirements, there would never be a change we request that would harm our productivity. DevOps' primary goal is to eliminate waste and improve productivity for our contributors. Lighthouse allows us to analyze and track how source code changes impact performance and other attributes on released websites, and understanding this analysis helps us know when there could be a problem. Maintaining fast and reliable production sites for our community, incorporating modern best practices, and using our own tools are highly important for acquiring new customers and maintaining high customer satisfaction.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This seems like a non-issue, as the site appears to be performant enough, and the next iteration of the docs site will be even more performant, which will launch with
🙋 Feature Request
Investigate options to incorporate Lighthouse into our pipeline so that, during a PR, there is output that describes the Lighthouse results. The goal here is that for each pull request, if performance decreases by 5%, we block the PR.
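One possible shape for such a gate, assuming the Lighthouse CI CLI (`@lhci/cli`) were used, is its assertion config, which fails the run when category scores fall below absolute thresholds. This is a sketch rather than a literal "5% regression" check; a relative comparison would require keeping a baseline score and updating the thresholds from it. The target URL and numbers below are assumptions.

```yaml
# .lighthouserc.yml (sketch): fail the pipeline when Lighthouse category
# scores fall below fixed thresholds. Approximating a "no more than 5%
# regression" rule would mean refreshing these numbers from a known-good baseline.
ci:
  collect:
    url:
      - "https://www.fast.design/"   # assumed target URL
    numberOfRuns: 3
  assert:
    assertions:
      "categories:performance": ["error", { "minScore": 0.9 }]
      "categories:accessibility": ["error", { "minScore": 0.9 }]
  upload:
    target: temporary-public-storage
```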
🤔 Expected Behavior
none
😯 Current Behavior
none
💁 Possible Solution
check performance manually
🔦 Context
This action integrates Google's helpful Lighthouse audits for webpages — specifically testing for Performance, Accessibility, Best Practices, SEO, and Progressive Web Apps. Right now, the action will print the five scores (out of 100) to the output and upload HTML and JSON versions of the report as artifacts. In the next release, the action will let you specify thresholds for each test and optionally fail this step if they are not met.
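As a sketch of how such a step might sit in the workflow (not necessarily the exact action described above), the audit could be run with the Lighthouse CI CLI and the generated reports kept as build artifacts:

```yaml
# Sketch only: run Lighthouse via the @lhci/cli package on pull requests and
# upload the HTML/JSON reports it produces as build artifacts.
name: Lighthouse report
on: pull_request

jobs:
  lighthouse:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: "14"
      - name: Run Lighthouse CI
        run: npx @lhci/cli@0.8.x autorun   # reads .lighthouserc.yml if present
      - name: Upload Lighthouse reports
        uses: actions/upload-artifact@v2
        with:
          name: lighthouse-reports
          path: .lighthouseci   # default output directory for lhci runs
```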
💻 Examples