[css-color-6] color-contrast() should allow specifying multiple contrast algorithms that need to be satisfied #7357
To take one example, in The Science of Color & Design, James O'Leary discusses their hybrid color model HCT, which uses the L* axis from CIE Lab (Tone) but the Hue and Chroma axes from CAM16 (Hue, Colorfulness). Contrast is then the difference in Tone:
Note that the minimum contrasts for small and large text (50 and 40) are different from the thresholds for WCAG 2.1 and for APCA; thresholds are algorithm-specific.
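For concreteness, here is a minimal JavaScript sketch of the tone-contrast scheme described above: tone is CIE L* computed from relative luminance Y, and contrast is simply the tone difference. The function names and example Y values are mine, not Material's actual API.

// CIE Y (0..1) to L* (0..100), standard CIE formula
function tone(Y) {
  return Y > 216 / 24389 ? 116 * Math.cbrt(Y) - 16 : Y * (24389 / 27);
}

// HCT-style contrast: the absolute difference in tone
function toneContrast(Y1, Y2) {
  return Math.abs(tone(Y1) - tone(Y2));
}

// Against the thresholds quoted above (50 for small text, 40 for large):
console.log(toneContrast(1.0, 0.18).toFixed(1)); // white vs 18% grey: 50.5, just passes small text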
@supports (color: color-contrast(#F00 vs #00F, #0F0 to wcag3(AA))) {
:root {
--target-ratio: wcag3(AA);
}
}
.custom-label {
background: var(--some-bg);
color: color-contrast(var(--some-bg) vs #111, #eee to var(--target-ratio, wcag2(AA)));
}
I think that is overblown; the article says (several times) "if the 'APCA Lightness Contrast' is more accurate...". If it isn't, the conclusions are not applicable. When I've run usability testing with people with low vision, there has been good correlation between the colours they struggled with and WCAG 2 failures. I've been following the work (as a non-colour expert) and I think APCA is probably a better formula, and we should continue working to incorporate a better formula into WCAG 3. I'm just requesting that people in W3C don't use language like "severely broken" when the overall impact of the current guideline is still a net positive.
Also from that article:
47% false positives and 63% false negatives with a sample size of 8000 tested color pairs do not fill one with confidence. "Broken" seems applicable.
In the cited article, on a calibrated screen and with my normal color vision (slight age-related macular yellowing), the APCA was more accurate in all cases. I agree that more testing is needed.
Hi Lea @LeaVerou
"Legally mandated" is a strong term for the narrow areas where there is actual codification into law. In the USA, the ADA does not mandate WCAG 2 contrast (indeed, the native ADA signage regulations were gutted of any specific contrast guidance regarding architectural signage some time ago). For government sites and government procurement, the Section 508 rules do specify WCAG 2 contrast, but with two big exception clauses: 1. Commercially available: if something is needed but no commercially available solution is WCAG 2 compliant, it does not have to comply.

As for case law, the 11th Circuit vacated Winn-Dixie in February, so that is moot. The above is at the federal level. At the state level there is mainly New York and California. I don't have LexisNexis access at the moment, but from what I've seen, no cases relating to contrast have gone to trial and won on merits. Most are out-of-court settlements, and I'm going to guess many of these were relying on Winn-Dixie.

For other nations it's a grab bag, but in nearly all cases the specification of WCAG 2 is limited to governmentally controlled entities or sites. An exception is Finland. In Australia it extends to non-governmental sites, but last I checked it was level A only, so contrast is not included. In Canada, there is some case law, but a number of exclusions.
I'd like to introduce you to Bridge-PCA. It is fully backwards compatible with WCAG 2, but, using APCA technology, it fixes the problem of false passes. What is lost is the greater design flexibility of the full APCA guidelines. Bridge-PCA was created specifically to answer the question of "meeting legal obligations by the absolute letter of WCAG 2, regardless of actual veracity". I do not suggest Bridge-PCA as a permanent solution—it is specifically a stop-gap, a stepping-stone to address various concerns. It does a much better job calculating for dark mode, for instance, and also has enhanced conformance levels. The npm package is:
And the demo tool is https://www.myndex.com/BPCA/
I've been watching James' developments with interest. I'm a little surprised at the use of L* instead of J. ∆L* has an interesting attribute in that it sort of lines up with WCAG 2 — the implication is that contrast using a plain L* suffers the same issues as WCAG 2. Digging up some of the early comparison tables, here's one with ∆L*: a simple L* difference, when the lightest color is white, lines up at a little less than 40 ≈ WCAG 3:1, 50 ≈ 4.5:1, and 62 ≈ 7:1 ... and still, as colors get darker, contrast is over-reported. I've also tried this with multiple offsets, but that still is not polarity sensitive.

But wait, isn't L* perceptually uniform? L* is based on Munsell value, so it is perceptually based.... on diffuse reflected light using large color patches, when the observer is in the defined ambient/adaptation conditions. Viewing text on a self-illuminated monitor is a different matter, and a different perception. "Middle contrast" for text and other high spatial frequency stimuli isn't at 18%, it's up between 34 and 42 (ish). As such, ∆L* without some offsets and massaging is not much different than WCAG 2, as this chart shows:
Me too. And I realize I need to start publishing sooner rather than later.
Yes. Going to all the trouble of taking viewing conditions into account, to calculate the hue and chroma from CIECAM16, while ignoring them to calculate tone (L*), is odd, and not well explained in the article.
What is SmrsModWbr? I assume it is a modified Weber, got a reference?
"Somers Modified Weber" is a further offset and scaling following an idea from Peli/Hwang's Modifed Weber, and I indicated that was from a series of evaluations in 2019 of various contrast maths. I abandoned it as it does not track the full range very well, and reverse polarity is also sketchy. The thing with Weber and Michelson contrasts is that they track at very low, threshold contrasts, but they do not predict what happens at supra-threshold readability levels, and the difference is significant. Maureen Stone (PARC, NIST) and Larry Arend (NASA) had written about using ∆L* for luminance contrast, and in some experiments adding in scaling and offsets, that avenue began to indicate the shape of the perception. This led to greater consideration of CAMs and perceptual models, notably CAM02, R-Lab, Barten's model and Hunt's model. When you consider that viewing a self-illuminated display and reducing to luminance (as that is the parameter essential for readability) this allows a subset of CAM input conditions, permitting a reasonable simplification to determine luminance & stimulus size-based readability contrast via perceptual lightness/darkness difference. |
Hi @svgeesus And to add: I covered the Peli/Hwang mod-Weber in thread #695 back in 2019. It is based on the essential idea behind the WCAG 2 contrast math, but makes the "flare" component asymmetrical. My iteration goes with the asymmetry, but changes the amplitude and also incorporates additional scalings. But as I indicated, a dead end—linear scalings or offsets don't accurately model supra-threshold perception. Stevens and others pointed out the inaccurate nature of the 180-ish-year-old Weber, and Stevens indicated that the perception curve shapes varied based on spatial-frequency-related issues. Michelson is sensitive to spatial frequency, but is not uniform in terms of position relative to adaptation.
TL;DR: The source for the delta L* measure is contrast ratio. They are equivalent, and APCA can be measured that way too.

The conversations around contrast tend to assume too much and go too far too quickly; something very simple and tremendously helpful has gotten missed: #1) all* the contrast measures are based on relative luminance, Y in XYZ (*all? pedantically: contrast ratio / WCAG through 2.x, and APCA as documented for the WCAG 3 draft). The intellectual hole there is the rule of thumb we give designers: contrast ratio 3.0 is an L* delta of 40... but the actual maximum is 38.3, and it's as low as ~31. This is the effect Andrew mentions in the message that starts with "I've been watching James' developments with interest."

I'd prefer to use J or something more advanced, but CAM16 J is dependent on more than luminance, and I haven't seen anything remotely convincing that says we can count on non-luminance contrast. (I know Andrew is thinking about / working towards a measure that includes hue/chroma in the contrast measure, but in a pure a11y context, I'm not sure it's relevant: you'd need to know whether a user has a CVD, which one, and how severe it is, to make it relevant for a11y. And that's even before we start talking about how perceptually weak chroma is compared to luminance.)
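As a quick sanity check of those numbers, here is a small JavaScript sketch (helper names are mine) that sweeps the darker luminance and reports the range of ∆L* for pairs pinned at a WCAG 2.x ratio of exactly 3.0:

// CIE Y (0..1) to L* (0..100)
function YtoLstar(Y) {
  return Y > 216 / 24389 ? 116 * Math.cbrt(Y) - 16 : Y * (24389 / 27);
}

// At ratio R, the lighter luminance is YL = R * (YD + 0.05) - 0.05
const R = 3.0;
let min = Infinity, max = -Infinity;
for (let i = 0; i <= 1000; i++) {
  const YD = i / 1000;
  const YL = R * (YD + 0.05) - 0.05;
  if (YL > 1) break; // lighter color would be out of gamut
  const dL = YtoLstar(YL) - YtoLstar(YD);
  min = Math.min(min, dL);
  max = Math.max(max, dL);
}
console.log(min.toFixed(1), max.toFixed(1)); // ≈ 30.3 and ≈ 38.3

So a single WCAG ratio corresponds to a band of L* deltas, not one value, which is exactly the gap in the designer rule of thumb.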
Here's a Google Sheet for visualizing this. You should be able to edit the contrast ratio/SAPC values and see the graphs update.
In the discussion re: charged language about WCAG 2.1 being broken, there's a category error occurring. It's far, far from being severely broken. Contrast measures are for a11y: guaranteeing legibility for the population. Both the article and Chris mention which is more legible for them, while noting they have normal color vision. In that case, the exercise is "which has more contrast for viewers with full vision". This is very different from the goal of an a11y standard and contrast measurement: covering the population. To do that, you need a story for how you're handling the ~10% of users with skews in perception of hue and chroma (some can't see it at all!) who aren't going to see as much difference as you are. Additionally, this falls far short of the standard scientific approach I've seen to measuring this: reading speed. This, in addition to the last leg of "severely broken" being "one has a larger gamut of dark colors that pass, the other has a larger gamut of light colors passing" [1], is worrisome to me. There seems to be a significant gap in things that are simpler to understand, but harder to talk about.

[1] This is a weak argument because it is a relative judgement, and frankly, the fact that APCA neither explicitly models flare, nor shows in its delta L behavior that white can't get lighter but black can get lighter in the presence of flare, makes APCA the one that has worrisome behavior if I had to pick one. Even though I love it and can't wait for it to be a standard!
Hi @jpohhhh This material is so much easier to discuss live with examples, instead of text, but here's a bunch ahead of the call. First, I do want to mention, as kindly as I possibly can, that your take on APCA does not reflect the reality or the underlying theory. So I want to ask if you've read any of the documentation or white papers? My concern is that I must not have explained things correctly, which is apparently my Achilles heel!! The canonical documentation is at the main repo, and I'll list links in order of preferred reading later in this post. But very quickly: James, I am wondering where you read or came to the following opinion:
I ask because these statements are categorically false....! 😳
I am very concerned that such a line of thought is out there—did you read this somewhere? What prompted this?

Further Addressing Your Posts Above:
Which draft? I looked at the spreadsheet—but the functions are hidden so I can't examine the math or method....?? It does not look right. See the readme at the main repo, or at the apca-w3 npm page. Do not use anything labeled SAPC. One or two little bits:
Contrast is important for 100% of sighted users. Human vision is a spectrum as wide as the human experience, and our vision changes over our lives: we're essentially blind at birth, and it takes 20 years to develop peak contrast sensitivity... and then we hit our 40s, and presbyopia sets in. At 60, it's all downhill from there!
The short answer is that the APCA guidelines directly follow the long-established, peer-reviewed scientific consensus of modern readability research, especially for low vision, particularly Dr. Lovie-Kitchin, Bailey, Whittaker, et alia, and also Legge, Chung, etc..... Okay, we need to define a couple of things:

1) Readability Contrast—the prime focus of APCA
2) Discernibility Contrast—a different beast, processed differently in the brain.
This last point is the most critical to understand, because not only are perceptual lightness estimation power curves curved, but the perception of contrast resulting from the distance between two different lightnesses is ALSO curved; by which I mean the contrast change from threshold to supra-threshold is also very non-linear relative to the distance (difference). And the shape of THAT curve is ALSO dependent on spatial frequency.

Take a look at this image: the two yellow dots are EXACTLY the same. As far as XYZ or LAB or the sRGB values being sent to the monitor are concerned, both yellow dots are identical. And yet they both look distinctly different. The photometric difference between two light or dark "things" does not define contrast perception. It is not contrast. And neither the WCAG 2 ratio nor ∆L* is uniform to perception. Historically this has not been a problem BECAUSE, in physical PRINT, it is almost always black ink on white paper. And even if not, there was a designer there to lock it in place.

Why This Is So Important Today
The WEB is dynamic content, not locked into place as printed words on paper are, and today there is the desire to have automatic properties in CSS and automation for things like auto dark mode or auto high contrast. AUTOMATION of colors is only realistically possible if you have perceptually uniform methods. While L* may be more or less uniform to perception of low spatial frequency diffuse surfaces under ideal illumination conditions, I can tell you that L* does not predict lightness perception of high spatial frequency elements (text) on a self-illuminated monitor. I'll fill you in on this this afternoon, but in short: human perception is very literally curves upon curves.

Now a few things to set the record straight, particularly for anyone reading at home....

RE: APCA
COLOR VISION INSENSITIVITY
There is this misunderstanding out on the web, and I am not sure of the source, but I do see it a lot in the context of accessibility. I believe it may be in conjunction with the WCAG 2.0 understanding documents. Here are the facts.
APCA DOCUMENTATION
I've been working hard to clear up these misconceptions, so I've been trying to organize the documentation in a logical manner. The links listed in this section are placed in an order that starts with the plain-language overview before getting into the minutiae. Easy Start ↓
Visual Comparisons and Related Articles ↓
The Official Demo Tool ↓
Deeper Technical & Theory ↓
These are all a bit rough or in draft form, and are much deeper dives into the underlying theories.
That's it for this post, thank you for reading. —Andy

Academic Peer Reviewed Published Science as referenced above:
1 • Spatial visual function in anomalous trichromats: Is less more?

Non-Academic General Audience:
8 • What's Red & Black & Also Not Read?
Any observations I have are off this code; I believe we confirmed this is the latest and greatest: https://github.com/Myndex/apca-w3/blob/master/src/apca-w3.js
However, once other people, in a more formal setting, justify that perspective via a couple of people with standard color vision noticing that the magnitude of luminance difference is different from the magnitude of color difference... that's... not good. At all.
My claim:
I'm very surprised this wouldn't be embraced, given how beneficial it is for designers who must work with these algorithms.
Hi @jpohhhh
Just to point out, this is not new (I started this project with post #695 circa April 2019), and I am not the only one; this issue has been widely criticized, including back in 2007 when objections from IBM were ignored, for instance. I lay out the basis of the problems in the 44,000-word thread #695 from 2019.
First, this is not true on the face of it. I do not have standard vision. I WAS legally blind due to severe early-onset cataracts, and now, six surgeries later, I have low vision. Yay. But the conflation of visual function and color insensitivity is a spurious one. For readability, only achromatic luminance contrast is critical. Color is useful for discrimination of objects, but not for reading. These are two completely separate visual functions.
It is only mentioned in the "understanding document" but is NOT considered in the algorithm. In APCA, the algorithm specifically derates red and fails it as part of the math. And the protan compensator does so even more strongly.
Not exactly. WCAG 2.1 takes two luminances (CIE Y) as input. APCA uses a non-standard transfer function for linearizing sRGB, and thus the value it derives is not a standard CIE 1931 Y.
No.
Yes.
Yes. We convert L* to Y, compute the other Y, and can then, if we want, convert that back to L*.
For that color, yes, if we want to replicate the WCAG 2.1 algorithm. Doing so seems pointless, because the problem with WCAG 2.1 is that it uses a non-perceptually-uniform measure (CIE Luminance) instead of a perceptually uniform measure (CIE Lightness).
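To make the round trip concrete, here is a small JavaScript sketch (function names are mine) that takes two colors expressed as CIE Lightness, converts each L* back to Y, and applies the WCAG 2.1 ratio:

// CIE L* (0..100) to Y (0..1), inverse of the standard CIE formula
function LstarToY(L) {
  return L > 8 ? ((L + 16) / 116) ** 3 : L / (24389 / 27);
}

// WCAG 2.1 ratio from two L* inputs
function wcag21Ratio(Lstar1, Lstar2) {
  const [Yl, Yd] = [LstarToY(Lstar1), LstarToY(Lstar2)].sort((a, b) => b - a);
  return (Yl + 0.05) / (Yd + 0.05);
}

console.log(wcag21Ratio(100, 49.5).toFixed(2)); // white vs ~18% grey: ≈ 4.56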
And just to be clear: I am going to change the input section so that the linearization transform used matches "standard" as defined in CSS 4/5. However, the lightness predictions will still not be a "standard CIE 1931 Y", as discussed below.

Detail
Each color input to APCA will be the separate, normalized, linear R, G, and B values, and a flag indicating the colorspace. A key reason for this is the need for compensation for protan (red-insensitive vision), and also possibly for the (potential, but still under review) halation/glare compensation, as these features work by adjusting/offsetting the RGB-to-Y coefficients in a minimally invasive way. It is also needed for certain automated color-contrast functions, which need to identify hue and chroma. (See the illustrative sketch below.)

Additional Considerations: Reverse APCA & Auto Dark Mode
The reason there was no perceptually uniform contrast metric prior to APCA is simply that it was less important when a designer was ultimately making the color choices. BUT: the overwhelming reason that perceptually uniform contrast is needed today is the need for AUTOMATED color and contrast adjustments. It is not possible to make "good" color or contrast adjustments without the eye of a designer, unless there is a perceptually uniform model with which to do so.
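As an illustration of why the separate channels matter (this is my sketch, not APCA's actual code; the protan-adjusted coefficients below are invented for demonstration, while the sRGB set is the standard one):

// Standard sRGB coefficients for linear RGB -> Y (these are real)
const SRGB_COEFFS = { r: 0.2126, g: 0.7152, b: 0.0722 };

// Hypothetical protan-derated set, purely illustrative, NOT APCA's values:
// weight is shifted away from red, which a protanope perceives as darker.
const PROTAN_DEMO = { r: 0.10, g: 0.83, b: 0.07 };

function luminance(rLin, gLin, bLin, c = SRGB_COEFFS) {
  return c.r * rLin + c.g * gLin + c.b * bLin;
}

console.log(luminance(1, 0, 0));              // pure red: 0.2126
console.log(luminance(1, 0, 0, PROTAN_DEMO)); // derated:  0.10 (illustrative)

With only a single pre-computed Y as input, this kind of coefficient re-weighting would be impossible.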
In both cases, these can be linearized RGB values, and in all cases the colorspace must be known.

Additional Considerations: CIE 1931
I have been looking at Judd/Vos as a potentially more appropriate path to emulating display response Y (something Sony has been working with as a means to correct metameric issues with narrow-band primaries). However, recent research in relation to CIE Technical Committee 1-98, entitled "A Roadmap Toward Basing CIE Colorimetry on Cone Fundamentals", indicates a potential shift in colorimetry, with likely important implications for display technology. For more background on this, see R.W. Pridmore, "A new transformation of cone responses to opponent color responses". So, I've been slow to make changes till I have a chance to review some of this further.

RE: Lost in Color Spaces
At present, APCA is being demonstrated with SDR sRGB, as this is the web default. In the coming multi-colorspace web, the above issues will only increase in importance. Some questions we need to be asking are:
These questions lead to the following key question:
RE: Alpha Blended Colors
All alpha blending or compositing must be done prior to sending to the APCA inputs, as in most cases the alpha blend is happening in a gamma-encoded space.

The TL;DR
To summarize: there are at least four contrast-related factors that involve the independent RGB channels, and/or the coefficients used to transform RGB into a photometric luminous intensity, as applied to the purpose of improving readability of text on self-illuminated monitors.
And
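On the alpha-blending point above, a minimal JavaScript sketch of the order of operations (helper names are mine, and apcaContrast is a placeholder, not a real API):

// Source-over blend of gamma-encoded sRGB channels (0-255), done BEFORE
// any contrast math, since browsers composite in the gamma-encoded space.
function blendOver(fg, fgAlpha, bg) {
  return fg.map((c, i) => Math.round(c * fgAlpha + bg[i] * (1 - fgAlpha)));
}

const bg = [255, 255, 255];                       // white page
const composited = blendOver([0, 0, 0], 0.6, bg); // 60%-alpha black -> [102, 102, 102]
// Only now hand `composited` and `bg` to the contrast calculation,
// e.g. apcaContrast(composited, bg) (placeholder name).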
In preparation for discussing the various contrast algorithms, the color.js documentation on contrast may be helpful. Note: Weber and Michelson are broken in color.js currently; investigating.
We resolved to add
I fixed that bug, which was a stupid typo of = for === /facepalm
They still seem broken here, unless they really are that bad.
By broken I mean that they were returning

Yes, they are both not very good, especially for light text on dark backgrounds.
I'm curious to know: are there going to be specific formulas that are supported by
In the context of the black/white page, both the unmodified Weber and Michelson have a polarity-sensitivity issue, as neither is perceptually uniform. While both are useful in research relating to the JND, neither is useful for practical design guidance at supra-threshold contrasts. I evaluated these and all other available contrast models in 2019, along with many variants (some of which I mention later in this post).

As for the black and white flip page:

Weber: (lighter - darker) / darker
Michelson: (lighter - darker) / (lighter + darker)

So if white is 1 and black is 0, we can see why both of these fail to define a useful "flip point".

For White:
Weber: (1.0 - color) / color
Michelson: (1.0 - color) / (1.0 + color)

For Black:
Weber: (color - 0) / 0
Michelson: (color - 0) / (color + 0)

So, as we can see for black, Weber produces infinity and Michelson = 1, and in both cases white vs any color will never be higher than black vs the same color.

Regarding the infinity fix: in Weber.js, there is:

return Y2 === 0 ? 0 : (Y1 - Y2) / Y2;

to fix the divide by zero, which would be infinity. But in returning 0, it hides that the actual result should be a maximum. As a result, in the black/white page, Weber shows white text, when in reality it should show black text in all cases, similar to Michelson, due to the nature of these algorithms. If I may suggest, consider instead:

return Y2 === 0 ? 50000 : (Y1 - Y2) / Y2;

The reason: the darkest sRGB color above black is

Bonus Round!
I don't know if you want to play with these, but there are other variants, some of them interesting, and we evaluated all of them in 2019. Among the variants are a couple of modified Webers where a brute-forced offset is added to the denominator. Sometimes this is claimed to be a "flare" component, but in reality it is in effect a "push" to a supra-threshold level.

Three Mod Webers
These assume Y is 0.0 to 1.0:

hwangPeli = (Y1 - Y2) / (Y2 + 0.05);
somersB = ((Y1 - Y2) / (Y2 + 0.1)) * 0.9;
somersE = (Y1 - Y2) / (Y2 + 0.35);

However, these do not track polarity changes particularly well, and have a mid-range "bump".

𝜟𝜱✴︎ (delta phi star)
A better and more interesting modification is this delta L* variant we created on the path toward SACAM (and APCA). Here, create L* from the piecewise sRGB -> Y and L* per the standard CIE math, then:

deltaPhiStar = Math.abs(bgLstar ** 1.618 - txLstar ** 1.618) ** 0.618;

This mainly works for "Light Mode" but does not track dark mode quite as well. Also, while this is close to parity with light-mode APCA at Lc +90, lower contrasts are over-reported, and it does not match in dark mode. Some of this can be addressed with scales and offsets. Nevertheless, I thought you might find these variants interesting. APCA builds on these early experiments, but has added value in terms of polarity sensitivity and a wider range for better guideline thresholds.

Double Bonus Round
Regarding the simple concept of a black/white flip, I have this interactive demo page: FlipForColor, which includes a brief discussion. For a deeper dive, there is a CodePen, and a Repo, and a Gist that discusses this and related issues, including font size and weight as they relate to flipping. Thank you for reading
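To make the delta phi star variant above concrete, here is a self-contained JavaScript sketch, from sRGB hex to a contrast value, using the standard piecewise sRGB linearization and CIE L* (helper names are mine):

// sRGB hex to relative luminance Y, standard piecewise linearization
function srgbToY(hex) {
  const lin = c => (c /= 255) <= 0.04045 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4;
  const n = parseInt(hex.replace("#", ""), 16);
  return 0.2126 * lin(n >> 16 & 255) + 0.7152 * lin(n >> 8 & 255) + 0.0722 * lin(n & 255);
}

// CIE Y (0..1) to L* (0..100)
const YtoLstar = Y => Y > 216 / 24389 ? 116 * Math.cbrt(Y) - 16 : Y * (24389 / 27);

function deltaPhiStar(bgHex, txHex) {
  const bgL = YtoLstar(srgbToY(bgHex));
  const txL = YtoLstar(srgbToY(txHex));
  return Math.abs(bgL ** 1.618 - txL ** 1.618) ** 0.618;
}

console.log(deltaPhiStar("#ffffff", "#000000")); // ≈ 100 (maximum light-mode contrast)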
Pull request?
Oh, that would have been a good idea, but it's already done
Closing this, as @fantasai and I made the prose edits; however, we currently only have one algorithm.
Assuming we can specify the contrast algorithm at all (see #7356), we should be able to specify multiple of them as a safeguard for algorithm bugs, as well as to satisfy legal constraints.
E.g. we know that WCAG 2.1 contrast is severely broken, yet it is legally mandated that websites pass it. Once we have a better contrast algorithm, we may want color-contrast() to find us a color pair that satisfies both WCAG 2.1 and the new, improved algorithm.

Syntax could just be a space-separated list of contrast algorithms.
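As a minimal JavaScript sketch of the conjunction semantics being proposed (the names, stub scorers, and thresholds below are mine for illustration; a real implementation would live in the browser):

// A pair passes only if EVERY listed algorithm's score meets its own
// threshold, mirroring the proposed space-separated list of algorithms.
const requirements = [
  { name: "wcag2", score: (bg, fg) => 4.6, threshold: 4.5 }, // stub scorer
  { name: "apca",  score: (bg, fg) => 62,  threshold: 60 },  // stub scorer
];

function passesAll(bg, fg, reqs) {
  return reqs.every(r => r.score(bg, fg) >= r.threshold);
}

console.log(passesAll("#111", "#eee", requirements)); // true only if both pass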
(Issue filed following breakout discussions between @svgeesus, @fantasai, @argyleink and myself)