report(a11y): Don't count non-applicable a11y audits toward score #4052

Merged · 9 commits · Jan 5, 2018

Conversation

robdodson (Contributor)

This updates the accessibility report so it no longer counts non-applicable tests toward the user's score. Currently Lighthouse gives folks an artificial "boost" to their a11y score because aXe was not filtering out tests that matched 0 nodes. That should be fixed in the next release of aXe (dequelabs/axe-core#637).

This PR also updates how the number of passed/failed/notApplicable audits is reported: any grouped audits now count toward the total. Previously it was just counting the number of groups, so even though we have 35 non-manual a11y audits, it looked like we had far fewer. Here's the new look (note the 12 passed, 15 not applicable):

[screenshot: updated accessibility report showing 12 passed and 15 not applicable audits]
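
For illustration, here is a minimal sketch of the counting change described above. It is not the PR's actual code; the data shape (`group.audits`, `audit.result.notApplicable`) is assumed purely for the example.

```js
// Hypothetical sketch of the new tallying: count every audit inside each group,
// and keep not-applicable audits out of the passed/failed totals.
function tallyAudits(auditGroups) {
  const counts = {passed: 0, failed: 0, notApplicable: 0};
  for (const group of auditGroups) {
    for (const audit of group.audits) {            // assumed data shape
      if (audit.result && audit.result.notApplicable) {
        counts.notApplicable++;                    // excluded from the score
      } else if (audit.score === 100) {
        counts.passed++;
      } else {
        counts.failed++;
      }
    }
  }
  return counts;
}

// One group, three audits -> { passed: 1, failed: 1, notApplicable: 1 }
console.log(tallyAudits([{audits: [
  {score: 100, result: {}},
  {score: 0, result: {}},
  {score: 100, result: {notApplicable: true}},
]}]));
```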

robdodson changed the title from "report(a11y): Don't count non-applicable a11y audits toward score." to "report(a11y): Don't count non-applicable a11y audits toward score" on Dec 14, 2017
robdodson (Contributor, Author) commented Dec 14, 2017

I'm not sure if that Travis failure is related to my changes.


(edit: paulirish) I restarted the Travis build. I doubt it's related, just some flakiness.

patrickhulce (Collaborator) left a comment

nice this is a big win!! can't wait to start our conversion too :D

mind adding a quick test to make sure the not applicable section gets created?
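
A rough sketch of what such a test might look like; the selector (`.lh-audit-group--notapplicable`) and the render/setup helpers are assumptions, not the project's actual test API.

```js
// Hypothetical test sketch — assumes the category renderer test file's existing
// `renderer`, `sampleResults`, and `assert` fixtures; the class name is illustrative.
it('renders a not applicable audits section', () => {
  const category = sampleResults.reportCategories.find(c => c.id === 'accessibility');
  const categoryDOM = renderer.render(category, sampleResults.reportGroups);
  const section = categoryDOM.querySelector('.lh-audit-group--notapplicable');
  assert.ok(section, 'expected a not applicable section to be rendered');
});
```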

```js
elements.forEach(function(element) {
  scratch.appendChild(element);
});
const subAudits = scratch.querySelectorAll('.lh-audit-group .lh-audit');
```
Collaborator:

can we just query for .lh-audit and always return that number?

robdodson (Contributor, Author):

Sure! Would you like me to keep the failure check that returns `elements.length` if `subAudits` is 0?

Collaborator:

Querying for `.lh-audit` should just work if I'm remembering correctly; if they return different results, though, I'm probably misremembering and you can ignore me :)
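
To illustrate the simplification being discussed, a sketch only (names are illustrative, assuming the `scratch` container and `elements` from the hunk above):

```js
// Illustrative sketch: query `.lh-audit` directly instead of keeping a fallback
// that returns `elements.length` when no sub-audits are found.
function countAudits(scratch, elements) {
  elements.forEach(element => scratch.appendChild(element));
  return scratch.querySelectorAll('.lh-audit').length;
}
```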

```js
} else {
  groups.failed.push(audit);
  if (audit.score === 100) {
```
Collaborator:

nit: let's collapse this with an `else if`
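
A sketch of the collapsed branching; the surrounding code isn't fully visible in the hunk, so the `notApplicable` check and the `groups.passed`/`groups.notApplicable` buckets are assumptions.

```js
// Sketch only — bucket names and the notApplicable guard are assumed from context.
if (audit.result.notApplicable) {
  groups.notApplicable.push(audit);
} else if (audit.score === 100) {
  groups.passed.push(audit);
} else {
  groups.failed.push(audit);
}
```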

```
@@ -484,6 +529,12 @@ class CategoryRenderer {
    const passedElem = this._renderPassedAuditsSection(passedElements);
    element.appendChild(passedElem);

    // don't create a not applicable section if there are no not applicables
    if (!notApplicableElements.length) return element;
```
Collaborator:

Seems like we need to move these `if (!array.length)` checks inside braces instead of early returns.

There might be a current bug where the manual audits don't render if everything fails :)
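
Roughly what that restructuring might look like (a sketch only; the `_renderNotApplicableAuditsSection` name is assumed from context rather than taken from the diff):

```js
// Sketch: wrap each optional section in braces so the sections that follow
// (e.g. manual audits) still render even when an earlier list is empty.
if (notApplicableElements.length) {
  const notApplicableElem = this._renderNotApplicableAuditsSection(notApplicableElements);
  element.appendChild(notApplicableElem);
}
// ...any remaining sections would be appended here...
return element;
```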

```
@@ -50,6 +50,7 @@ class ReportGeneratorV2 {
   * @return {number}
   */
  static arithmeticMean(items) {
    // debugger;
```
Collaborator:

leftover :)

// "Not Applicable".
if (result.notApplicable) {
audit.weight = 0;
result.informative = true;
Collaborator:

super nit: can we move this line to be the last of the three? just something about me expects to see score, weight, other...in that order 😆

robdodson (Contributor, Author):

oof, getting those tests in was rough...

Maybe I was doing something silly, but it seemed like I needed to manually edit the `sample_v2.json`. This file is a bit stale (doesn't include manual a11y audits), and even though my PR adds a new `notApplicable` property to all of the audits, I didn't want to go through and manually add it to each entry. Is it possible to generate this file from a test page and cache it?

For instance, this test, as previously written, should actually fail once the manual audits are added because it just looks at the number of audit groups and assumes it will be a certain size. It might be nicer to add a class to failed audit groups so they can specifically be selected in tests. As it stands, the only way I could differentiate failing audit groups from the others is by selecting for `div` instead of `details`. Pretty brittle :\
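
For example, if failed groups carried a dedicated class (the `lh-audit-group--failed` name below is hypothetical), a test could select them directly instead of relying on the `div` vs `details` distinction:

```js
// Hypothetical: assumes a `categoryDOM` fixture from the existing tests and an
// illustrative `.lh-audit-group--failed` class that does not exist today.
const failedGroups = categoryDOM.querySelectorAll('.lh-audit-group--failed');
assert.equal(failedGroups.length, expectedFailedGroupCount);
```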

robdodson (Contributor, Author):

I tried regenerating the sample data but it broke an unrelated performance test and I'm not entirely sure why. Maybe we can look at it when folks are back in the office.

patrickhulce (Collaborator) left a comment

thanks for working on this @robdodson!! awesome stuff!

robdodson (Contributor, Author):

Looks like aXe 2.6.1 is out with the not-applicable fix. Let me rev the dependency and add it to this PR.

robdodson (Contributor, Author):

@patrickhulce Just rebased against master because the scoring module was changed a bit. I think this is ready to be squashed and merged once Travis comes back.

robdodson (Contributor, Author):

Blargh, why are these smokehouse tests failing all of a sudden?
