Higher precision for srgb to linear-srgb #596
Or, if we want to go accurate while honoring only the original input constant: …
As far as IEC 61966-2-1:1999 (International Electrotechnical Commission, 1999) is concerned, those are the values for the transfer function: https://cdn.standards.iteh.ai/samples/10795/ae461684569b40bbbb2d9a22b1047f05/IEC-61966-2-1-1999-AMD1-2003.pdf. As far as most people are concerned, and when comparing against expectations of sRGB everywhere else, that would be the expectation. While using the described values might be "more accurate", I'm not sure it would translate to any noticeable difference in actual use. But maybe I'm wrong 🤷🏻.
Agreed that it's probably not enough for a human to notice. However, it is significant enough for a computer to notice. For example, in Sass we use an 11-digit precision for comparing whether colors' values are the same or not. The transformation matrix in dart-sass has 17 digits, and the one in color.js has 16 digits, so after space conversion the result is still accurate enough after rounding to 11 digits in Sass. However, if the conversion only has 4 digits of precision to begin with and we're trying to compare at 11-digit precision, we get lots of inaccuracy.
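As a rough illustration of that comparison rule (a hypothetical sketch of my own, not dart-sass's actual implementation), an 11-digit fuzzy comparison looks something like this. The two sample values in the second call are illustrative branch outputs near the sRGB cutoff, not values from the library:

```javascript
// Hypothetical sketch of an 11-digit fuzzy comparison (not real dart-sass code).
const EPSILON = 1e-11;
const fuzzyEquals = (a, b) => Math.abs(a - b) < EPSILON;

// Inputs differing by ~1e-16 are "the same color" under this rule...
console.log(fuzzyEquals(0.0404500000000001, 0.0404500000000000)); // true
// ...but a conversion discrepancy above 1e-11 breaks that equality.
console.log(fuzzyEquals(0.0031308046, 0.0031308070)); // false
```

So any conversion step that introduces error above roughly 1e-11 can flip two "equal" colors to "unequal".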
Can you provide a minimal example? I'm not sure I understand the scenario.
See the test code and results below. When the input is around the cutoff point and the input difference is around 1e-16, you can see the current implementation (v1) generate outputs with a difference greater than 1e-10, whereas the improved v2 (using the intersection points as cutoffs) does not.
```js
// v1: the constants as currently published/used (4-digit cutoffs)
const v1 = {
  from: val => {
    let sign = val < 0 ? -1 : 1;
    let abs = val * sign;
    if (abs > 0.0031308) {
      return sign * (1.055 * (abs ** (1 / 2.4)) - 0.055);
    }
    return 12.92 * val;
  },
  to: val => {
    let sign = val < 0 ? -1 : 1;
    let abs = val * sign;
    if (abs <= 0.04045) {
      return val / 12.92;
    }
    return sign * (((abs + 0.055) / 1.055) ** 2.4);
  }
}
console.log('---')
console.log('v1.to difference:', v1.to(0.0404500000000001) - v1.to(0.0404500000000000))
console.log('v1.from difference:', v1.from(0.0031308000000000) - v1.from(0.0031308000000001))
console.log('v1.to difference:', v1.to(0.04044823627710784) - v1.to(0.04044823627710783))
console.log('v1.from difference:', v1.from(0.00313066844250060) - v1.from(0.00313066844250061))
console.log('v1.to difference:', v1.to(0.0392857142857143) - v1.to(0.0392857142857142))
console.log('v1.from difference:', v1.from(0.00303993463977844) - v1.from(0.00303993463977845))

// v2: cutoffs moved to the exact intersection of the two branches
const v2 = {
  from: val => {
    let sign = val < 0 ? -1 : 1;
    let abs = val * sign;
    if (abs > 0.003130668442500607) {
      return sign * (1.055 * (abs ** (1 / 2.4)) - 0.055);
    }
    return 12.92 * val;
  },
  to: val => {
    let sign = val < 0 ? -1 : 1;
    let abs = val * sign;
    if (abs <= 0.04044823627710784) {
      return val / 12.92;
    }
    return sign * (((abs + 0.055) / 1.055) ** 2.4);
  }
}
console.log('---')
console.log('v2.to difference:', v2.to(0.0404500000000001) - v2.to(0.0404500000000000))
console.log('v2.from difference:', v2.from(0.0031308000000000) - v2.from(0.0031308000000001))
console.log('v2.to difference:', v2.to(0.04044823627710784) - v2.to(0.04044823627710783))
console.log('v2.from difference:', v2.from(0.00313066844250060) - v2.from(0.00313066844250061))
console.log('v2.to difference:', v2.to(0.0392857142857143) - v2.to(0.0392857142857142))
console.log('v2.from difference:', v2.from(0.00303993463977844) - v2.from(0.00303993463977845))

// v3: fully continuous variant (linear-segment slope recomputed as well)
const v3 = {
  from: val => {
    let sign = val < 0 ? -1 : 1;
    let abs = val * sign;
    if (abs > 0.003039934639778432) {
      return sign * (1.055 * (abs ** (1 / 2.4)) - 0.055);
    }
    return 12.923210180787853 * val;
  },
  to: val => {
    let sign = val < 0 ? -1 : 1;
    let abs = val * sign;
    if (abs <= 0.03928571428571429) {
      return val / 12.923210180787853;
    }
    return sign * (((abs + 0.055) / 1.055) ** 2.4);
  }
}
console.log('---')
console.log('v3.to difference:', v3.to(0.0404500000000001) - v3.to(0.0404500000000000))
console.log('v3.from difference:', v3.from(0.0031308000000000) - v3.from(0.0031308000000001))
console.log('v3.to difference:', v3.to(0.04044823627710784) - v3.to(0.04044823627710783))
console.log('v3.from difference:', v3.from(0.00313066844250060) - v3.from(0.00313066844250061))
console.log('v3.to difference:', v3.to(0.0392857142857143) - v3.to(0.0392857142857142))
console.log('v3.from difference:', v3.from(0.00303993463977844) - v3.from(0.00303993463977845))
```
The sass-embedded npm package uses color.js internally. Sass considers two colors "fuzzy equal" if their channel value differences are less than 1e-11 after rounding, as in the examples in the test code where the input difference is 1e-16. However, it is currently possible that, after space conversion, Sass suddenly considers the converted colors different because the channel value difference becomes more than 1e-10.
You mention that dart-sass uses 11 decimal places, but why? With the current EOTF, in this one very specific area, we are at worst still at 32-bit float precision (7-8 decimal places), probably plenty of precision in all practical cases. In most cases, the precision will be much higher. I think the CSS spec requires only a minimum of 10- or 12-bit precision. From a practical standpoint, I don't think the average user would ever notice a problem. With that said, I don't think the change would cause significant differences in results either, so it wouldn't technically break anything people would notice. The edge case for this discontinuity is pretty narrow. If it were me, I'd probably prefer to keep to the spec, accepting this as a quirk of the space, but I can see the appeal of trying to "fix" the space. I'll let others comment with their opinions.
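To put rough numbers on the "nobody would notice" point, here is a sketch of my own (not from the spec or the library, and quantizing the linear-light value directly is a simplification): both branch outputs at the spec cutoff collapse to the same code value once quantized to 10 bits.

```javascript
// Decode sRGB 0.04045 through each branch; the outputs disagree by ~2e-9.
const linearBranch = 0.04045 / 12.92;                   // linear-segment branch
const curveBranch = ((0.04045 + 0.055) / 1.055) ** 2.4; // power-curve branch

// Quantize to a 10-bit code value (the minimum precision CSS suggests).
const q = x => Math.round(x * 1023);
console.log(q(linearBranch) === q(curveBranch)); // true: same 10-bit code
```

At 10 bits the branch discrepancy is invisible; it only matters when comparing at much higher precision, as Sass does.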
Sass serializes to CSS at 10 digits of precision, but internally fuzzy-compares at 11 digits of precision. It is indeed a little bit strange: sass/sass#3953
I noticed it too when porting Sass Color 4 support from Dart/JS to Ruby. Sass has recently released support for Color Level 4. The sass npm package uses a Dart-based color implementation, while the sass-embedded package uses color.js. Some of our color conversion tests only pass at a very low 4 decimal places when comparing the output of the two different color implementations - we don't even get 7-8 digits of precision, and I'm reviewing places where things might be off.
Indeed. I don’t think it will have any real user impact as long as we keep …
I'm not entirely sure what you are testing though. Are your tests testing what you think they are testing? Are you pushing results past reasonable floating-point errors? Does this actually affect round-tripping? I am skeptical that the current EOTF actually affects round-trip conversions or shows any meaningful difference when taken into context of normal conversions with normal floating point error. Can you demonstrate a real-world practical case that would benefit from this change? |
A more compelling case is probably the round-trip of … . While the unaltered values preserve precision to 5 significant digits, assuming that …, I guess the argument would be: if you have to recalculate the inverse for a better round-trip, why not recalculate to better align the curve at the same time.
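To make the round-trip argument concrete, here is a sketch of my own (reusing the branch formulas from the snippets above) that round-trips an sRGB value sitting right at the spec cutoff through linear-light and back, under the spec cutoffs (v1) versus the intersection cutoffs (v2):

```javascript
// Branch formulas shared by every variant in this thread.
const encode = x => 1.055 * x ** (1 / 2.4) - 0.055; // linear -> sRGB (curve branch)
const decode = x => ((x + 0.055) / 1.055) ** 2.4;   // sRGB -> linear (curve branch)

// Decode s to linear-light and re-encode it, using the given pair of cutoffs.
const roundTrip = (cutSrgb, cutLinear, s) => {
  const lin = s <= cutSrgb ? s / 12.92 : decode(s);
  return lin <= cutLinear ? 12.92 * lin : encode(lin);
};

const s = 0.04045; // just above the true intersection 0.040448236...
const errV1 = Math.abs(roundTrip(0.04045, 0.0031308, s) - s);
const errV2 = Math.abs(roundTrip(0.04044823627710784, 0.003130668442500607, s) - s);
console.log(errV1, errV2); // errV1 is many orders of magnitude larger than errV2
```

With the spec cutoffs, 0.04045 decodes via the linear segment but re-encodes via the curve, so it doesn't come back to itself; with the intersection cutoffs, both directions pick matching branches and the round-trip error is down at float-noise level.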
I'm actually not sure if 0.0404482362771082 and 0.00313066844250063 are any more "accurate" than doing something like 0.04045 and 0.0031308049535603713. I think we are guessing about the exact intent of the original numbers and what is most correct. One solution just calculates the inverse assuming the original value is right, and the other adjusts both values so that they intersect, but that doesn't mean either is the true intent - just that both solve the precision problem by using the same precision in the forward and reverse directions.
Never mind - I ran through the experiment myself, and I see now why 0.0404482362771082 and 0.00313066844250063 were selected. They essentially graphed the logic above the cutoff and the logic below the cutoff and found the intersection of the two, one a power curve, the other a line. That is why those two points were chosen. It makes more sense now that I've run through it: those points are more accurate, and maintaining the same precision between them improves the round trip.
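That intersection can be reproduced numerically with plain bisection; this is a standalone sketch of my own (not library code) that bisects the difference between the two branch formulas:

```javascript
// Difference between the power-curve branch and the linear branch.
const gap = x => ((x + 0.055) / 1.055) ** 2.4 - x / 12.92;

// The upper intersection lies in [0.0404, 0.0405]: gap < 0 below it, > 0 above.
let lo = 0.0404, hi = 0.0405;
for (let i = 0; i < 60; i++) {
  const mid = (lo + hi) / 2;
  if (gap(mid) < 0) lo = mid; else hi = mid;
}
console.log(lo); // ≈ 0.04044823627710784
```

Sixty halvings of a 1e-4 interval land well below one float ulp, so this converges to the same point quoted in the thread.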
Correct - they are basically better approximations of the roots (the intersections of the two curves) of the following formula: `((x + 0.055) / 1.055) ** 2.4 = x / 12.92`
Note that with the unrounded value of …
Note: Wolfram Alpha may still show two roots due to float precision issues, but mathematically there is only one root.

First, I think using better approximations for the roots (the intersections) is almost a no-brainer. On the other hand, we have the question of using … . That would be the most precise option, as it follows the original mathematical intent with no rounding at all (other than a tiny loss from float precision). The only concern is that it will create a drift around … .

My personal take is that the IEC spec is old enough that people probably did not care about precision that much at the time - as you can see, the drift after rounding is not noticeable by the standard of the human eye. However, when converting colors, sometimes round-tripping multiple times between different color spaces, I think we do want to be as accurate as possible.
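For reference, the fully continuous constants used in the v3 snippet above don't have to be looked up: they follow from the standard C1-continuity derivation (matching both value and slope of the two branches at the cutoff). This is a sketch of that derivation, not code from any implementation:

```javascript
// The decode curve ((x + a)/(1 + a))^g and the linear segment x/phi meet with
// equal value AND equal slope exactly when the sRGB-side cutoff is a/(g - 1).
const a = 0.055, g = 2.4;
const cutSrgb = a / (g - 1);                      // 0.0392857142857... (sRGB-side cutoff)
const cutLinear = ((cutSrgb + a) / (1 + a)) ** g; // 0.0030399346397... (linear-side cutoff)
const slope = cutSrgb / cutLinear;                // 12.9232101807...   (replaces 12.92)
console.log(cutSrgb, cutLinear, slope);
```

Setting curve value = X/phi and curve slope = 1/phi at the cutoff X and dividing the two equations gives (X + a)/g = X, hence X = a/(g - 1); the other two constants then fall out.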
Yep, I'm convinced the results are better, in the sense of more accurately approaching what I think the true intention was for how the cutoff was supposed to work - especially after analyzing the low-light round trip and physically plotting it out, as I wasn't sure the points were doing what I intuitively thought they should do, but now I see they are indeed aligned. I think keeping 12.92 as-is would be the best route if a change were to be made, as it keeps the … .

The question is whether Color.js wants to be more accurate, or more accurate to the spec as it is defined. I think @svgeesus would be the one to comment on the direction to take here. I know this library is meant to align with the CSS spec. I don't think results would be meaningfully different from what the CSS spec produces as defined now, and it would certainly yield cleaner results, but it would also stray from the official sRGB spec, if that matters.
For anyone interested, here are the results reproduced as a sanity check of the claim:

```python
from scipy.optimize import fsolve

def equations(p):
    x, y = p
    return (y - ((x + 0.055) / 1.055) ** 2.4, y - x / 12.92)

# Find first root
print(fsolve(equations, (0.0, 5.0)).tolist())
# Find second root
print(fsolve(equations, (5.0, 0.0)).tolist())
```

Output:

```
[0.03815479871331798, 0.00295315779514845]
[0.04044823627710784, 0.003130668442500607]
```
I think your values of 0.04044823627710784 and 0.003130668442500607 are slightly better than mine. I will update the previous comments.
sRGB has been: …
It doesn't seem worthwhile to make a new, …
I do think that matters, yes. |
Yep, that is certainly my main argument. Currently, we follow sRGB, but any change would be sRGB-adjacent. CSS does not ask anyone to support 32-bit+ accuracy, only 10-bit accuracy for sRGB. Any other desired constraints are self-imposed by the library.
Quote from https://en.wikipedia.org/wiki/SRGB:
https://entropymine.com/imageworsener/srgbformula/ - As this linked article says, we might not want to change 12.92, given that it is somewhat fundamental to the published standard (even if it is already inaccurate). However, I do think we could use the points `0.04044823627710784` and `0.003130668442500607` instead of `0.04045` and `0.0031308` to make the conversion slightly more accurate going back and forth.