Feature Request: Disregard or otherwise mark tentative tests. #36
Comments
From @jgraham on August 25, 2017 10:37 FWIW I think this should be a non-goal. The point of the dashboard is to give knowledgeable participants (i.e. browser vendors) insights into where interop is lacking, and improvements they should make. It's not intended to be used for marketing comparisons, so it doesn't matter if implementations look "behind" on a spec that is in the early stages but still has test cases.
From @foolip on October 9, 2017 13:19 I think we should treat tentative tests as different in the kinds of metrics we're discussing in web-platform-tests/results-collection#83 and in any per-browser reports (for prioritization purposes) that are produced. Tentative tests are ones where the best shape of the spec is still unknown, and someone is trying to figure it out by doing more research, shipping things, etc. I think that at least to begin with, we want to apply roughly zero pressure on odd-one-out scenarios involving tentative tests. Rather, we need a regular per-directory triage of tentative tests to figure out which can be made non-tentative.
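To make the distinction concrete, here is a minimal Python sketch of how metrics could bucket tentative and non-tentative results separately. It assumes the wpt naming convention that tentative tests have `.tentative.` in the filename or live in a `tentative` directory; the function names and result format are illustrative, not wpt.fyi's actual code.

```python
# Illustrative sketch: split pass-rate metrics into tentative and
# non-tentative buckets, so tentative tests get their own number
# instead of dragging down the headline metric.

def is_tentative(test_path: str) -> bool:
    """True if a path component is 'tentative' or the filename has a
    '.tentative.' infix (the wpt naming convention)."""
    parts = test_path.split("/")
    return "tentative" in parts or ".tentative." in parts[-1]

def pass_rates(results: dict) -> dict:
    """results maps test path -> passed; returns a pass rate per bucket."""
    buckets = {"tentative": [], "stable": []}
    for path, passed in results.items():
        bucket = "tentative" if is_tentative(path) else "stable"
        buckets[bucket].append(passed)
    return {name: sum(vals) / len(vals) for name, vals in buckets.items() if vals}

# Example: the tentative SRI-signature tests no longer count against
# browsers that haven't implemented the proposal.
results = {
    "/subresource-integrity/signatures/basic.tentative.html": False,
    "/subresource-integrity/subresource-integrity.html": True,
}
print(pass_rates(results))  # {'tentative': 0.0, 'stable': 1.0}
```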
From @foolip on October 9, 2017 13:24
Yep, I agree. Looks like we don't have an existing issue for this, so: web-platform-tests/results-collection#146
With the recent discussion on searching/filtering (@mdittmer), what comes to mind is an …
Having implemented this in #1611, it occurred to me while testing that it largely replicates existing behavior.
Do we think it provides enough ergonomic value to include anyway? I'm leaning towards 'yes', but only very slightly. CC @Hexcles and @foolip for their thoughts.
I think it is worth it, not least because it's more discoverable and can be included in autocomplete suggestions.
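A hypothetical sketch of the replication point: an explicit search atom (spelled `is:tentative` here purely for illustration, not necessarily the dashboard's real syntax) reduces to the substring match on test paths that free-text search already performs.

```python
# Hypothetical sketch: the explicit atom and the existing free-text
# path search select the same tests; the atom only adds ergonomics.

def matches(token: str, test_path: str) -> bool:
    if token == "is:tentative":
        # The discoverable, autocomplete-friendly spelling...
        return "tentative" in test_path
    # ...versus the existing behavior: plain substring match on the path.
    return token in test_path

path = "/subresource-integrity/signatures/basic.tentative.html"
# Both spellings pick out the same tests:
assert matches("is:tentative", path) == matches("tentative", path)
```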
From @otherdaniel on August 25, 2017 9:14
Thanks for the dashboard!
Please add a feature to disregard tentative tests (or maybe account for them separately), so that implementing a proposed feature won't count "against" browsers.
Example: I am currently doing a first draft implementation of extending SRI (subresource integrity) with digital signatures, as proposed to (but not ratified by) the W3C webappsec group. To support that work, I have added web-platform-tests for this feature in the subresource-integrity dir. The dashboard now makes it look like browsers are behind on implementing this, despite it merely being a proposed feature.
References:
https://github.com/w3c/webappsec-subresource-integrity/blob/master/signature-based-restrictions-explainer.markdown
Also, maybe related: Chrome/Chromium seems to run wpt tests with experimental features enabled, while your dashboard runs the release config. That's probably the better choice, but making both release and experimental result sets available might be useful.
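A minimal sketch of what that comparison could look like, assuming both result sets are available as `{test path: passed}` mappings; the data shape and sample values are assumptions, not the dashboard's actual report format.

```python
# Sketch: given release and experimental result sets for one browser,
# list the tests that only pass with experimental features enabled.

def behind_flags_only(release: dict, experimental: dict) -> list:
    """Tests that pass in the experimental configuration but fail
    (or are absent) in the release configuration."""
    return sorted(
        path for path, passed in experimental.items()
        if passed and not release.get(path, False)
    )

release = {"/subresource-integrity/signatures/basic.tentative.html": False}
experimental = {"/subresource-integrity/signatures/basic.tentative.html": True}
print(behind_flags_only(release, experimental))
# ['/subresource-integrity/signatures/basic.tentative.html']
```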
Copied from original issue: web-platform-tests/results-collection#99