
Options to incorporate Lighthouse into our pipeline #5126

Closed
Ibrahimmaga opened this issue Aug 31, 2021 · 8 comments

@Ibrahimmaga
Contributor

🙋 Feature Request

Investigate options to incorporate Lighthouse into our pipeline so that each PR produces output describing the Lighthouse results. The goal: if a pull request decreases performance by 5%, we block the PR.

🤔 Expected Behavior

none

😯 Current Behavior

none

💁 Possible Solution

check performance manually

🔦 Context

This action integrates Google's helpful Lighthouse audits for webpages — specifically testing for Performance, Accessibility, Best Practices, SEO, and Progressive Web Apps. Right now, the action will print the five scores (out of 100) to the output and upload HTML and JSON versions of the report as artifacts. In the next release, the action will let you specify thresholds for each test and optionally fail this step if they are not met.


💻 Examples
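For illustration, a minimal sketch of the audit such an action performs, run programmatically with the `lighthouse` and `chrome-launcher` npm packages; the target URL is only an example:

```ts
// Sketch: run the same five Lighthouse audits the action runs and print
// each category score out of 100. Assumes the `lighthouse` and
// `chrome-launcher` npm packages are installed; the URL is illustrative.
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

async function audit(url: string): Promise<void> {
  // Lighthouse drives a real Chrome instance over its debugging port.
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  try {
    const result = await lighthouse(url, {
      port: chrome.port,
      output: 'json',
      onlyCategories: ['performance', 'accessibility', 'best-practices', 'seo', 'pwa'],
    });
    if (!result) throw new Error('Lighthouse produced no result');
    // Category scores come back as 0..1; print them as 0..100.
    for (const category of Object.values(result.lhr.categories)) {
      console.log(`${category.title}: ${Math.round((category.score ?? 0) * 100)}`);
    }
  } finally {
    await chrome.kill();
  }
}

audit('https://www.fast.design').catch((err) => {
  console.error(err);
  process.exit(1);
});
```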

@Ibrahimmaga Ibrahimmaga added the status:triage label Aug 31, 2021
@Ibrahimmaga Ibrahimmaga self-assigned this Aug 31, 2021
@chrisdholt
Member

I agree that performance benchmarking is an important addition to the pipeline. I do have some questions about what seems like a proposal before we dig in further:

  1. What makes Lighthouse the right choice for our project?
  2. What are we benchmarking, performance-wise? What is a good test of performance, and how do we measure any regressions given the nature of our various exports?
  3. Have we investigated other solutions? It seems like we haven't, as the only alternative above is to check manually...

I'd like to see more investigation into what we feel we should be benchmarking and what would be a good way to test that; then I think we can look at tools. As it is now, we're looking at tools without a clear, defined, and accepted set of criteria to compare them against.

@Ibrahimmaga
Contributor Author

> I agree that performance benchmarking is an important addition to the pipeline. I do have some questions about what seems like a proposal before we dig in further:
>
>   1. What makes Lighthouse the right choice for our project?
>   2. What are we benchmarking, performance-wise? What is a good test of performance, and how do we measure any regressions given the nature of our various exports?
>   3. Have we investigated other solutions? It seems like we haven't, as the only alternative above is to check manually...
>
> I'd like to see more investigation into what we feel we should be benchmarking and what would be a good way to test that; then I think we can look at tools. As it is now, we're looking at tools without a clear, defined, and accepted set of criteria to compare them against.

  1. We have been using GTmetrix to evaluate our performance every month for the last 9 months. GTmetrix uses the Lighthouse API, a Google Chrome developer tool for measuring performance, which is why it is a good choice for this project.
  2. A good performance score is 90%; currently https://www.fast.design has a score of 98%.
  3. Have we investigated other solutions? No, we don't have another solution at the moment other than manually checking performance. This isn't really a new tool: we have been checking performance manually, and we are now suggesting we check it automatically, so that if we add something to the site that decreases performance by 5%, we fail the CI (see the sketch below).
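A minimal sketch of what that 5% gate could look like; the `baseline.json` file, its shape, and the exact threshold are illustrative assumptions, not agreed details:

```ts
// Sketch of the proposed gate: audit the site, compare against a stored
// baseline score, and exit non-zero (failing the CI step) on a >5% drop.
// `baseline.json` and its shape are illustrative assumptions.
import { readFileSync } from 'node:fs';
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

const ALLOWED_DROP = 0.05; // block the PR if performance falls more than 5%

async function gate(url: string): Promise<void> {
  // e.g. baseline.json: { "performance": 98 }
  const baseline: { performance: number } = JSON.parse(
    readFileSync('baseline.json', 'utf8'),
  );

  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  let score = 0;
  try {
    const result = await lighthouse(url, {
      port: chrome.port,
      onlyCategories: ['performance'],
    });
    score = (result?.lhr.categories.performance.score ?? 0) * 100;
  } finally {
    await chrome.kill();
  }

  console.log(`performance: ${score.toFixed(0)} (baseline: ${baseline.performance})`);
  if (score < baseline.performance * (1 - ALLOWED_DROP)) {
    console.error('Performance regressed by more than 5%; failing the build.');
    process.exit(1);
  }
}

gate('https://www.fast.design').catch((err) => {
  console.error(err);
  process.exit(1);
});
```

A step like this would presumably run against a built preview of the site, with the baseline refreshed whenever a regression is accepted intentionally.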

@chrisdholt
Member

Thanks @Ibrahimmaga. With regard to #2, it seems like this is focused on testing website performance, is that correct? In that case, Lighthouse seems like a fine option to go with.

With that said, I am curious though if the website is the right place to start here. Most of the time, I think it's highly likely that changes to fast-element, fast-foundation, and especially the composed fast-components are going to be the places where performance may see the largest changes. If we only test the website, we're going to benchmark against a small subset of those changes. Would it make more sense to prioritize performance benchmarking of components first to catch possible issues at the build gate for new features, etc?

With the website being fairly steady over time, consider the following scenario and its potential implications:

We add a few new components across the foundation and components packages. None of those components are used on the website, so there are no changes to website performance when it is run through Lighthouse as part of the pipeline. We decide a month or so later to add a new section to the site and include one or two of these components. Performance at this point drops by 10% with significant memory issues. At this point, it's quite late and potentially problematic to address the underlying performance problems: the CI fails due to a PR that merged over a month ago, and we now need to figure out what the underlying issue is, how to address it without breaking anything, etc. It seems like a more proactive approach would serve us better.

Should we instead start by setting up benchmarking for the components and catching these issues when and where they are introduced? That doesn't mean we can't also test the website, but I don't actually think testing the website on every pipeline run is going to be a good method of preventing performance regressions.
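For concreteness, a minimal sketch of the kind of component benchmark this could mean (browser context; the `fast-button` tag and iteration count are illustrative, and a real harness would need repeated runs and statistics):

```ts
// Sketch (runs in a browser page): time creating and connecting N instances
// of a component. `fast-button` and N are illustrative; a real harness would
// repeat runs, discard warm-up iterations, and compare distributions across
// commits rather than trusting a single number.
const N = 1000;

function benchmarkRender(tagName: string): number {
  const container = document.createElement('div');
  document.body.appendChild(container);

  const start = performance.now();
  for (let i = 0; i < N; i++) {
    // connectedCallback fires as each element is attached.
    container.appendChild(document.createElement(tagName));
  }
  const elapsed = performance.now() - start;

  container.remove();
  return elapsed;
}

console.log(`${N} <fast-button> renders: ${benchmarkRender('fast-button').toFixed(1)}ms`);
```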

@EisenbergEffect EisenbergEffect added the area:dev-ops, area:site, chore, improvement, and status:under-consideration labels and removed the status:triage label Sep 7, 2021
@EisenbergEffect
Contributor

EisenbergEffect commented Sep 7, 2021

Testing library perf and website perf are two different things entirely, with very different technologies and approaches involved.

@nicholasrice already has some basic tools for testing the performance of fast-element library changes. I would like to automate this eventually, but there's a lot more to do before we'd be ready for that, I think.

@chrisdholt The tests we do for fast-element and that infrastructure probably don't make sense for testing the components. I think you would need to come up with a plan for how you want to handle that. We could help implement the infrastructure once you know what you want.

Independently, we want our website to perform well. We've already run tests to determine areas we need to fix, but I think it makes sense to incorporate the testing into the build pipeline, especially if there is already a GH action that makes it easy. Then we'll be able to catch issues as we evolve the site, whether it's adding new features to the home page, publishing new explorer builds, or the re-design that's being discussed.

@awentzel @Ibrahimmaga Can we restrict this pipeline step to running only when changes are made under sites?

It would be good to do this before the asset compression task so that we can also track, via automation, the improvements from integrating compression.

@chrisdholt
Member

> Independently, we want our website to perform well. We've already run tests to determine areas we need to fix, but I think it makes sense to incorporate the testing into the build pipeline, especially if there is already a GH action that makes it easy. Then we'll be able to catch issues as we evolve the site, whether it's adding new features to the home page, publishing new explorer builds, or the re-design that's being discussed.

I don't disagree with this sentiment, but I'm still not sure of the value at this point in the pipeline given how stable the site tends to be, with the exception of inheriting from actual controls; that's likely where the majority of changes happen now and will happen in the future. How often do we expect the site, its assets, etc., to change? If we've done a cost/benefit analysis and we think a pipeline task is the most efficient method here, that's fine; however, if the pipeline turns into something which hinders our build times and process more than it helps, I'd like us to be prepared to reconsider.

@awentzel
Collaborator

awentzel commented Sep 28, 2021

Unless there is a mandate to take on a compliance process for privacy, security, or accessibility that imposes additional pipeline requirements, we would never request a change that harms our productivity.

DevOps' primary goal is to eliminate waste and improve productivity for our contributors.

Lighthouse allows us to analyze and track how source code changes impact performance and other attributes of released websites. Understanding this analysis helps us know when there could be a problem. Maintaining fast and reliable production sites for our community, incorporating modern best practices, and using our own tools are highly important for acquiring new customers and maintaining high customer satisfaction.

@stale

stale bot commented Apr 16, 2022

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the warning:stale label Apr 16, 2022
@janechu
Collaborator

janechu commented May 29, 2024

This seems like a non-issue, as the site appears to be performant enough, and the next iteration of the docs site, which will launch with @microsoft/fast-element v2, will be even more performant.

@janechu janechu closed this as completed May 29, 2024
@janechu janechu added the closed:by-design label May 29, 2024
@github-project-automation github-project-automation bot moved this to Done in DevOps Aug 27, 2024
Labels
  area:dev-ops (Pertains to build, CI, and other dev-ops work)
  area:site (Pertains to work on the web site)
  chore (Maintenance or non-code work)
  closed:by-design (Closed by design)
  improvement (A non-feature-adding improvement)
  status:under-consideration (Issue is being reviewed by the team)
  warning:stale (No recent activity within a reasonable amount of time)
Projects
Status: Done
Development

No branches or pull requests

5 participants