
Don't fail localhost pages with INSECURE #6655

Closed · paulirish opened this issue Nov 26, 2018 · 26 comments · Fixed by #8865

@paulirish (Member)

Testing a local site using a self-signed certificate seems pretty reasonable.
Wdyt about us allowing localhost in our INSECURE checks?

@connorjclark (Collaborator) commented Nov 26, 2018

I believe this is already possible, provided you add your private keys to your local keychain. Without that step the local, self-signed certificate is deemed insecure.

@patrickhulce (Collaborator)

I believe this is already possible, provided you add your private keys to your local keychain.

It definitely is, though I second the point that finding a smoother way around this for localhost development would be nice. Even if that's just updating the docs with the steps and pointing folks to them when LH fails, it would be better than a fatal, cryptic error and lots of duplicate bugs :)

@connorjclark (Collaborator) commented Nov 26, 2018

+1 to updating docs. I'm not aware of any tools that treat localhost certs differently just for developer ease. As one example, for webpack-dev-server you must self-sign + add to your keychain. It should be a familiar process for anyone who has worked with certificates.

@paulirish what about 1) writing some docs about how to self-sign + add to your OS's trusted keys, and 2) if insecure & localhost, providing a link to said documentation (so, what @patrickhulce just said :) )
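
For reference, on macOS that trust step is roughly the following (a sketch; the cert filename is illustrative):

sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain localhost.pem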

@jimmydief

It would be awesome if it were possible to audit a localhost site served over HTTPS today using a self-signed certificate, without having to first trust that certificate at the system level. Puppeteer offers an ignoreHTTPSErrors option that enables functional testing for this use case.

We're using Lighthouse in our CI pipelines for performance testing and managing certificates on the various CI hosts adds some unfortunate extra complexity.
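
For comparison, a minimal sketch of the Puppeteer equivalent (the URL and port are illustrative):

const puppeteer = require('puppeteer');

(async () => {
  // Certificate errors (e.g. from a self-signed localhost cert) are ignored
  // for the entire browser session.
  const browser = await puppeteer.launch({ignoreHTTPSErrors: true});
  const page = await browser.newPage();
  await page.goto('https://localhost:8443');
  await browser.close();
})();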

@patrickhulce (Collaborator)

Looks like we just need a few lines from puppeteer to make this happen:

await client.send('Security.setOverrideCertificateErrors', {override: true});
client.on('Security.certificateError', event => this._onCertificateError(event));

// Continues past any certificate error the protocol reports.
_onCertificateError(event) {
  this._client.send('Security.handleCertificateError', {
    eventId: event.eventId,
    action: 'continue'
  }).catch(debugError); // debugError is puppeteer's internal logging helper
}

@brendankenny (Member)

I think we should require users to pass --chrome-flags="--ignore-certificate-errors" themselves, but we would have to update our check (since an insecure cert will still trigger it).

But why does localhost need https? Most things should treat it as secure. Is h2 not allowed on http localhost?

@patrickhulce (Collaborator)

Surely there's some difference in behavior between the two though, right? Why would puppeteer go with the protocol method instead of the flag 🤔

Is h2 not allowed on http localhost?

Last I checked it was not, and it was the last pain point I had with localhost dev. I think this also applies to dev servers in privately hosted CI, though. You might spin up a box for CI testing that doesn't have all the certs accepted but still want to use H2/service workers/etc. on a non-localhost origin.

@jimmydief

h2 is indeed our reason for running localhost via https.

I have tried launching Chrome independently via chrome-launcher and passing ignore-certificate-errors and/or allow-insecure-localhost, but Lighthouse itself still throws an INSECURE_DOCUMENT_REQUEST error.

Happy to spend some time on this if there's interest.
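
For reference, the setup being described is roughly this sketch (assuming the standard chrome-launcher and lighthouse Node APIs; the URL and port are illustrative):

const chromeLauncher = require('chrome-launcher');
const lighthouse = require('lighthouse');

(async () => {
  // Launch Chrome with the cert-related flags under discussion...
  const chrome = await chromeLauncher.launch({
    chromeFlags: ['--ignore-certificate-errors', '--allow-insecure-localhost'],
  });
  try {
    // ...then point Lighthouse at the already-running instance. At the time
    // of this comment, the run still aborts with INSECURE_DOCUMENT_REQUEST.
    const results = await lighthouse('https://localhost:8443', {port: chrome.port});
    console.log(results.lhr.requestedUrl);
  } finally {
    await chrome.kill();
  }
})();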

@brendankenny (Member)

I have tried launching Chrome independently via chrome-launcher and passing ignore-certificate-errors and/or allow-insecure-localhost, but Lighthouse itself still throws an INSECURE_DOCUMENT_REQUEST error.

Yeah, this check was added (IIRC) so that Lighthouse wouldn't just sit on a bad-certificate page for 45 seconds before timing out and moving on. Instead it throws early, as soon as it sees that Chrome isn't going to trust the connection.

Adding an option to disable that check (so it would hit the timeout instead) seems fine, but I'm loath to have some automated way of circumventing Chrome's security protections. People use LH to test sites with authentication, for instance, and I don't want to be responsible for folks XSSing themselves :)

(of course, having some docs that say "just copy and paste --chrome-flags="--ignore-certificate-errors" onto the command line!" isn't much different than automating it, but it seems like a distinction with a difference...)

@patrickhulce (Collaborator) commented Feb 20, 2019

@brendankenny this is not the situation we are in, though. IIUC, @jimmydief is saying that even though --ignore-certificate-errors is being used and the bad-certificate page is not being shown, Lighthouse still throws. This would also explain the explicit workaround behavior in puppeteer.

Example: lighthouse https://expired.badssl.com --chrome-flags="--ignore-certificate-errors" exits immediately with a fatal error.

@paulirish (Member, Author)

I got feedback from an MSC that our change here broke his flow. He's using a self-signed cert for localhost and hadn't added it properly.

I suspect that is more common than people trusting the cert on their machine. I think we should allow this behavior... but perhaps add an audit that flags this localhost/untrusted cert issue. That way, they'd at least get results.

@connorjclark (Collaborator) commented Feb 20, 2019

@brendankenny this is not the situation we are in, though. IIUC, @jimmydief is saying that even though --ignore-certificate-errors is being used and the bad-certificate page is not being shown, Lighthouse still throws. This would also explain the explicit workaround behavior in puppeteer.

Example: lighthouse https://expired.badssl.com --chrome-flags="--ignore-certificate-errors" exits immediately with a fatal error.

I confirmed that with and without that flag, the same security states are seen from the protocol.

diff --git a/lighthouse-core/gather/driver.js b/lighthouse-core/gather/driver.js
index 299a8c78..fe2653e9 100644
--- a/lighthouse-core/gather/driver.js
+++ b/lighthouse-core/gather/driver.js
@@ -780,6 +780,7 @@ class Driver {
        * @param {LH.Crdp.Security.SecurityStateChangedEvent} event
        */
       const securityStateChangedListener = ({securityState, explanations}) => {
+        console.log({securityState, explanations});
         if (securityState === 'insecure') {
           cancel();
           const insecureDescriptions = explanations

It seems --ignore-certificate-errors disables interstitials and removes feature gating for things like WebSockets, but does not change how the protocol reports the security state.

Adding an option to disable that check (so it would hit the timeout instead) seems fine, but I'm loath to have some automated way of circumventing Chrome's security protections.

This approach would require a flag for the DevTools client too, btw, which would probably lead to people checking it the first time it's needed and then never, ever unchecking it, which is scary.

People use LH to test sites with authentication, for instance, and I don't want to be responsible for folks XSSing themselves :)

I'm wondering if we really need to concern ourselves with that? If a machine is compromised such that a malicious website is being served at localhost, it would seem that the entire machine is compromised anyway. (EDIT: Actually, I was failing to realize the attack could also come from the network level, if the DNS had been spoofed or something? I have a sketchy understanding of security in this regard.)

--

I'm leaning towards totally disabling the security check for localhost. A special audit just for localhost seems fine too, although at first glance I'm unsure how that audit would be built with our current setup (how do we move the current code to a gatherer and still early-exit for non-local sites?).

There is another complication for any automated approach: not all local sites use the hostname localhost. What if the local site uses some other hostname? It seems an opt-in "this is a local site" flag would be necessary to handle this completely.

@jimmydief

IIUC, @jimmydief is saying that even though --ignore-certificate-errors is being used and the bad-certificate page is not being shown, Lighthouse still throws. This would also explain the explicit workaround behavior in puppeteer.

Yeah, that's what I'm referring to. Thanks!

I suspect that is more common than people trusting the cert on their machine. I think we should allow this behavior... but perhaps add an audit that flags this localhost/untrusted cert issue. That way, they'd at least get results.

An audit is an interesting idea; it would make Lighthouse useful for something like page ranking, where you want to down-rank sites with bad HTTPS or performance but still always be able to complete an audit.

I'm wondering if we really need to concern ourselves with that? If a machine is compromised such that a malicious website is being served at localhost, it would seem that the entire machine is compromised anyway. (EDIT: Actually, I was failing to realize the attack could also come from the network level, if the DNS had been spoofed or something? I have a sketchy understanding of security in this regard.)

Yeah, browser certificate validation helps guard against man-in-the-middle attacks where DNS is spoofed. Since the attacker generally wouldn't be able to provide a valid certificate for the domain (they don't have the private key used to generate it), browsers are able to warn about this when https is used. Puppeteer and Chrome seem to have taken the stance that it's okay to disable this via flags, though, so I think it makes sense for Lighthouse to be consistent.

There is another complication for any automated approach: not all local sites use the hostname localhost. What if the local site uses some other hostname? It seems an opt-in "this is a local site" flag would be necessary to handle this completely.

localhost is a bit special in that CAs will not provide certificates for it. Other hostnames should be able to get legit certificates for free from Let's Encrypt.

@abarre commented Feb 22, 2019

Yeah, browser certificate validation helps guard against man-in-the-middle attacks where DNS is spoofed. Since the attacker generally wouldn't be able to provide a valid certificate for the domain (they don't have the private key used to generate it), browsers are able to warn about this when https is used. Puppeteer and Chrome seem to have taken the stance that it's okay to disable this via flags, though, so I think it makes sense for Lighthouse to be consistent.

In our use case (see #7292), the DNS resolution is changed locally via a Chrome flag to evaluate the benefit of a CDN or front-end optimizer before actually plugging it in. I also believe an option to bypass this validation would be useful for this use case. I would be happy to patch WebPageTest to add this flag.
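
(The Chrome flag for local DNS remapping is presumably --host-resolver-rules; a hypothetical invocation, with the hostname and IP purely illustrative:)

chrome --host-resolver-rules="MAP www.example.com 203.0.113.10" --ignore-certificate-errors https://www.example.com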

@paulirish (Member, Author)

Reviewing the history here:


My take:

Looking back, we changed the behavior not because we wanted to throw on insecure sites, but because some of them triggered protocol-timeout situations (before we had implemented PROTOCOL_TIMEOUT).

It seems like the PROTOCOL_TIMEOUT failure would now actually catch the ERR_CERT_SYMANTEC_LEGACY cases, though admittedly the error message would be less useful.

In the last PR, @Janpot advocated for a different approach than what he implemented:

IMO lighthouse should treat security errors in a similar way as the http-status-code audit. Basically, accept insecure urls and use the Security.securityStateChanged to craft an audit that fails on those.

I agree that we shouldn't throw if we can technically finish loading the page (Seems like our new NO_FCP stance aligns :).

I propose we remove our fatal INSECURE error and rely on PROTOCOL_TIMEOUT for the few cases where it'd apply. (The security-state listeners should log.error in case of a problem.) And we should add an audit for a valid & happy certificate; that'll be a great place to put the documentation we've been discussing.

How bout dat?
+@patrickhulce, @hoten, folks.

@connorjclark (Collaborator) commented Apr 19, 2019

wfm. I suppose the current security-state listeners (in addition to logging an issue when they find one) can create a base artifact, to be consumed by a valid-ssl audit? Or can the entire logic be moved to a gatherer? Rough sketch below.
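
A sketch of what that artifact-plus-audit option might look like (hypothetical names throughout; the SecurityState artifact and the require path are assumptions, and the shape loosely follows Lighthouse's existing Audit base class):

const Audit = require('./audit'); // assumed path within lighthouse-core/audits

class ValidSsl extends Audit {
  static get meta() {
    return {
      id: 'valid-ssl',
      title: 'Page is served with a valid, trusted certificate',
      failureTitle: 'Page is not served with a valid, trusted certificate',
      description: 'A link to the self-signing docs discussed above would go here.',
      requiredArtifacts: ['SecurityState'], // hypothetical base artifact
    };
  }

  static audit(artifacts) {
    const {securityState, explanations} = artifacts.SecurityState;
    // Mirror the messages the driver currently builds before throwing.
    const insecureDescriptions = explanations
      .filter(explanation => explanation.securityState === 'insecure')
      .map(explanation => explanation.description);
    return {
      score: securityState === 'insecure' ? 0 : 1,
      explanation: insecureDescriptions.join(' '),
    };
  }
}

module.exports = ValidSsl;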

It seems like the PROTOCOL_TIMEOUT failure would now actually catch the ERR_CERT_SYMANTEC_LEGACY cases, though admittedly the error message would be less useful.

Why is it that we can continue after an expired ssl cert (given we make it unfatal) but not after ERR_CERT_SYMANTEC_LEGACY? This seems like a compromise we shouldn't need to make. I searched https://cs.chromium.org a bit but found nothing.

@brendankenny (Member)

Does that mean deferring to Chrome for the --ignore-certificate-errors part? If so, I'm on board.

Are there times we can finish loading the page, but would be testing an error page, though? Because that kicks the can back down to pageLoadError, where it was before, and we'd still have to decide what to do with the results.

@paulirish (Member, Author)

Does that mean deferring to Chrome for the --ignore-certificate-errors part?

Yes.
Though I will point out that the Headless team says they plan to remove that flag at some point, but it'll be a while, based on its use just within Chrome's testing infrastructure. :)

Why is it that we can continue after an expired ssl cert (given we make it unfatal) but not after ERR_CERT_SYMANTEC_LEGACY?

No idea, tbh.

Are there times we can finish loading the page, but would be testing an error page, though?

Hmmm. This sounds familiar but I don't entirely recollect our behavior on bad https and interstitial pages.

@Janpot (Contributor) commented Apr 19, 2019

In the last PR, @Janpot advocated for a different approach than what he implemented:

Yeah, I kind of saw making it opt-in as a first step in that direction. But what you propose works just as well for me.

@patrickhulce (Collaborator) commented Apr 19, 2019

I propose we remove our fatal INSECURE error and rely on PROTOCOL_TIMEOUT for the few cases where it'd apply

SGTM!

Seems like our new NO_FCP stance aligns

Yep, love it! I propose doing the same thing #8340 does for NO_FCP to PAGE_HUNG and INSECURE_DOCUMENT_REQUEST 👍

Are there times we can finish loading the page, but would be testing an error page, though?
Hmmm. This sounds familiar but I don't entirely recollect our behavior on bad https and interstitial pages.

If by "finish loading" you just mean not PROTOCOL_TIMEOUTing, then that should be covered by getPageLoadError failing every single gatherer preemptively. If you mean all of our load checks passing (including security), then we're already stuck in that situation with no changes, and the new direction shouldn't impact our ability to fix that hole either :)

@holmesadrianh

I believe this is already possible, provided you add your private keys to your local keychain. Without that step the local, self-signed certificate is deemed insecure.

As a front-end web developer with almost no experience with certificates, where would I start to resolve this issue? It is important for me to be able to audit my local site, but without a valid SSL certificate, Lighthouse does not work. Is this something out of my ability, and would I need to push back on my team? I don't think they would particularly know either, though. Are there any hacks for Chrome, or extensions to create a certificate? Any help is appreciated.

@connorjclark (Collaborator)

As a front-end web developer with almost no experience with certificates, where would I start to resolve this issue?

It'll depend on what OS you run (for generating the keys) and what server stack you use (for configuring your server to use them).

If you're using Mac: https://medium.com/@jonsamp/how-to-set-up-https-on-localhost-for-macos-b597bcf935ee

With the key and cert files, you should be able to pass them to whatever server stack you use; look up its documentation.
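
As a generic sketch for any stack (key/cert filenames and port are illustrative), generate a self-signed cert with OpenSSL and serve it from Node:

// Generate the key/cert pair first, e.g.:
//   openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
//     -keyout localhost-key.pem -out localhost-cert.pem -subj "/CN=localhost"
const https = require('https');
const fs = require('fs');

https.createServer({
  key: fs.readFileSync('localhost-key.pem'),
  cert: fs.readFileSync('localhost-cert.pem'),
}, (req, res) => {
  res.end('hello over https');
}).listen(8443); // https://localhost:8443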

@URSTYLE commented May 2, 2019

@hoten does adding a self-signed cert to the local keychain resolve that issue? I was trying to do that on my machine (and as you can see in the screenshot, the certificate is "marked as trusted"), but I still get INSECURE_DOCUMENT_REQUEST :) https://gyazo.com/f68d6139d23a62318c9e387a753c1942

@brendankenny (Member)

@URSTYLE Lighthouse just instruments Chrome; it doesn't handle certificates in any special way. If Chrome can open the page without a warning, then Lighthouse can too, so you may have more luck looking for tutorials elsewhere (Stack Overflow or wherever).

@MichaelJCole

This doesn't work. Chrome complains "The Private Key for this Client Certificate is missing or invalid" while nginx happily uses the cert. Lighthouse still complains, even with the CLI flag.

This is way overkill on "security". I'm evaluating Lighthouse and this is where I quit. I just wanted to turn this feature off.

@patrickhulce (Collaborator)

@MichaelJCole the version of Lighthouse in which this has been fixed has not yet been released.
