canonicalize unicode identifiers #5434
Comments
+100 for this. Any strategy for ensuring that homoglyphs are merged seems like a big improvement to me. |
+1 for canonicalizing everything. |
(We should probably also normalize the Unicode identifiers, in addition to canonicalizing homoglyphs.) |
One possible software package that we could adapt for this might be utf8proc, which is MIT-licensed and fairly compact (600 lines of code plus a 1 MB data file). |
+1 for canonicalization and normalization. We certainly don't want the same disambiguation issues with combining diacritics and nonprinting control characters (like the right to left specifier). The Unicode list contains quite a few characters with combining diacritics already; not sure if it's exhaustive though. |
Actually, it looks like the utf8proc library completely solves this problem, because it implements (among other things) the standard "KC" Unicode normalization, which canonicalizes homoglyphs. I just compiled the utf8proc library and called it from Julia via `ccall`, and it works (the second argument is a set of canonicalization flags copied from the utf8proc header). Moreover, the utf8proc canonicalization functions (including Unicode-aware case-folding and diacritical-stripping) would be useful to have in Julia anyway. I vote that we just pull the whole utf8proc library into Julia. |
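A minimal sketch of what such a call could look like (assumptions: libutf8proc is installed and visible to the dynamic loader; `utf8proc_NFKC` is the library's convenience wrapper around `utf8proc_map` with the NFKC option flags; the Julia syntax below is current rather than 0.2-era):

```julia
# Sketch only: NFKC-normalize a string by calling libutf8proc directly.
# Assumes libutf8proc is installed and on the dynamic loader's search path.
# utf8proc_NFKC takes a NUL-terminated UTF-8 string and returns a freshly
# malloc'd, NUL-terminated normalized copy.
function nfkc(s::AbstractString)
    p = ccall((:utf8proc_NFKC, "libutf8proc"), Ptr{UInt8}, (Cstring,), s)
    p == C_NULL && error("utf8proc_NFKC failed")
    out = unsafe_string(p)   # copy the result into a Julia String
    Libc.free(p)             # release utf8proc's buffer
    return out
end

nfkc("µ") == "μ"   # U+00B5 MICRO SIGN folds to U+03BC GREEK SMALL LETTER MU
```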
Awesome, thanks for doing the legwork on this. |
That sounds like a really good idea to me. |
KC has one case that we probably don't care about but seems worth mentioning: superscript numerals will be normalized to normal numerals. (We probably don't care because why would you have superscript numerals in a numeric literal, but this seems like the sort of thing to be abused in a future International Obfuscated Julia Coding Contest.) |
That's not totally ideal; |
I've actually used |
We also have to avoid normalizing out different styled letters that represent different symbols in mathematics. |
The problem with |
@JeffBezanson may be referring to what UAX #15 calls font variants (see Fig. 2 there for an example). So it seems that we are leaning toward canonical equivalence, as opposed to full compatibility equivalence, in which case NFD may be sufficient rather than NFKC. |
For variable names, I don't see the superscript/subscript distinction being as much of a problem. |
Our use case is very different from something like a text formatter, which wants to know that superscript 2 is a 2. In a programming language any characters that look different should be considered different. We can perhaps be flexible about superscripts, but font variants of letters have to be supported. |
The initial issue raised involved confusion over U+00B5 MICRO SIGN and U+03BC GREEK SMALL LETTER MU. Normalization type NFD would not fix this problem since U+00B5 has only a compatibility decomposition to U+03BC and not a canonical decomposition. NFKC will fix that issue. The utility at http://unicode.org/cldr/utility/transform.jsp?a=Any-NFKC%0D%0A&b=µ is useful for this. |
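For reference, the behaviour described here can be reproduced with the `Unicode` standard library of current Julia (which wraps utf8proc); this is an illustrative sketch, not part of the original comment:

```julia
using Unicode

micro = "\u00b5"   # MICRO SIGN
mu    = "\u03bc"   # GREEK SMALL LETTER MU

Unicode.normalize(micro, :NFD)  == mu   # false: U+00B5 has no canonical decomposition
Unicode.normalize(micro, :NFKC) == mu   # true: the compatibility decomposition applies

# The same canonical/compatibility split shows up for the superscripts discussed above:
Unicode.normalize("χ²", :NFC)    # "χ²" (unchanged)
Unicode.normalize("χ²", :NFKC)   # "χ2" (SUPERSCRIPT TWO folded to DIGIT TWO)
```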
@JeffBezanson, I'm not convinced that "characters that look different should be considered different." One problem is that, unlike LaTeX, we cannot rely on a particular font/glyph being used to render particular codepoints. U+00B5 and U+03BC look distinct in some fonts (one is rendered italic) and not in others, for example. Moreover, even when codepoints are rendered distinctly, the difference will often be subtle (χ² versus χ2) and hence an invitation for bugs and confusion. (That's why these variants work for phishing scams, after all.) I would prefer to simply state that identifiers are canonicalized to NFKC, so that only characters that look entirely distinct (as opposed to potentially slight font variations) are treated as distinct identifiers. |
There are several different levels of distinction being discussed, ranging from codepoint sequences that render identically ("indistinguishables"), through characters related by compatibility mappings ("compatibles"), to characters that merely look alike ("confusables"). These call for different approaches. To deal with "indistinguishables" it's pretty clear that we should just normalize them. At the other end of the spectrum, normalization is a pretty lousy way to deal with "weak confusables" – imagine using both spellings of what looks like the same name in one program. I've intentionally avoided Unicode terms here to keep the problem statement separate from the solution. I suspect that we should first normalize source to NFD, which takes care of collapsing "indistinguishables". Then we should warn if two identifiers are the same modulo "compatibles" and "confusables". That means that using composed and uncomposed versions of the same character would just work, while merely confusable spellings would trigger a warning. |
@StefanKarpinski Good summary! But I think you have the wrong conclusion. I was once challenged to find out why a piece of code did not work, and the culprit was exactly this kind of lookalike character. My preference would definitely be to make Julia consider all possible ambiguous characters equal, and give a warning/error if someone uses identifiers that are considered equal because of rules 2 and 3. I do not read Unicode codepoints, and I do not have a different word for µ and μ. |
Well, that's why it should warn. Whether it considers them the same or different is somewhat irrelevant when it causes a warning. I guess one benefit of considering such things the same rather than keeping them different is ease of implementation: if the analysis is done at the file level, you can canonicalize an entire source file and warn if two "confusable" identifiers are used in the same source file and then hand the canonicalized program off to the rest of the parsing process without worrying any further. Then again, you can do the same without considering them the same by doing the confusion warning at the same step but leaving confusable identifiers different. |
As a practical matter, it is far easier to implement and explain canonicalization to NFKC, taking advantage of the existing standard and utf8proc, than it would be to implement and document our own nonstandard normalization. (There are a lot of codepoints we'd have to argue over.) We can also certainly issue a warning whenever a file contains identifiers that are distinct from their canonicalized versions. (But I think it would be an unfriendly practice to issue a warning instead of canonicalizing.) |
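A sketch of the kind of file-level check being described, using a hypothetical `check_identifiers` helper and the current `Unicode` stdlib rather than an actual parser hook:

```julia
using Unicode

# Hypothetical helper: given the identifiers appearing in a source file, warn
# about any identifier that differs from its NFKC form and about groups of
# identifiers that collide once NFKC-canonicalized.
function check_identifiers(idents)
    by_canon = Dict{String,Vector{String}}()
    for id in idents
        push!(get!(by_canon, Unicode.normalize(id, :NFKC), String[]), id)
    end
    for (canon, originals) in by_canon
        spellings = unique(originals)
        if length(spellings) > 1
            @warn "identifiers $(spellings) are NFKC-equivalent (canonical form `$canon`)"
        elseif spellings[1] != canon
            @warn "identifier `$(spellings[1])` differs from its NFKC form `$canon`"
        end
    end
end

check_identifiers(["µ", "μ", "x²"])   # warns about the two mus, and about x² vs. its NFKC form x2
```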
It seems unfortunate to me to canonicalize distinct characters that unicode provides specifically for their use in mathematics. Should we use a different normalization, maybe NFD, for string literals? |
I don't think string literals should be normalized at all by default, although we should provide functions to do normalization if that is desired. The user should be able to enter any Unicode string they want. |
+1 for what @stevengj said. There's something to be said for preserving user input as much as possible. (What if the user wants to implement a custom normalization, for example...) |
Just to be perverse, let's say we normalize to NFKC, and Quaternions.jl gets renamed ℍ.jl. Then the identifier ℍ would normalize to a plain H, and the module name would no longer match the package name. |
I've actually rampantly made the assumption that package names are ASCII largely because I think it's opening a whole can of worms to use non-ASCII characters in package names. |
I'm much more concerned about identifier names. I don't think merging |
@stevengj – what about the χ² vs. χ2 issue? Your proposal silently treats them as the same, which strikes me as almost as bad as the (thus far hypothetical) problems we're trying to avoid here. |
Actually, no, it's worse – at least you can look at the contents of your source file and discover that two similar looking identifiers are actually the same. If χ² and χ2 are treated as the same identifier, there's no way to figure it out short of finding the obscure appendix of the Julia manual that explains this behavior. I find that unacceptable. |
Of course. But I think, for sanity, that it needs to be a universal property of symbols: since symbols can also be constructed programmatically (not just by the parser), all symbols should be normalized the same way regardless of how they are entered. While it makes sense to me to treat different ways of writing the same unicode character as equivalent, it doesn't make sense to me to treat different unicode characters as sometimes equivalent.
I disagree. The beauty of having unicode identifiers is in being able to use them freely, even if it means you need to upgrade your tools. normalizing symbols won't fix #5903, since by the time the parser has decided it is a symbol, it is too late to redefine it as a separate operator. instead, I think it is more akin to the question of whether arbitrary expressions can be used as infix operators. Since they can't, it is a limited subset of operators that would be affected by allowing full-width alternatives to the half-width punctuation. Therefore, I believe that it is reasonable to make that modification without resorting to full NFKC for all symbols. Somewhat unrelated, but I would require that all code be normalized to the standard ascii half-width operators for pull requests to any of my repositories. Even if they are defined to work identically, and differ slightly visually, it poses a maintenance hazard if find/replace doesn't see them that way. |
As far as I can tell, no one is opposed to NFC normalization, so we should probably do that. For anything beyond that, perhaps we should wait until we have more input from users in languages that utilise non-latin character sets, since these are the parties most affected. As a monoglot, I have no real opinion as to what would be best, but I suspect the answer could be different for different languages. I think @vtjnash may have hit upon a good solution, at least in the interim: provide recommended guidelines, along with a script for testing whether code satisfies those guidelines, which could be used as an appropriate git hook or incorporated into travis tests. This could be enforced for Base and other JuliaLang repos, but if people really want to use two different mus in their own code, then they can. Moreover these guidelines could be later amended based on feedback without breaking existing code. |
I concur with @simonbyrne that we should keep NFC and be cautious about going beyond it. A guideline about choosing names, to me, is better than forcing a controversial behavior (i.e. quietly equating identifiers that look noticeably different). In terms of Asian full-width characters, I think it might be better to raise an error (or at least a warning) when people are using them. |
I don't claim that NFKC is perfect; it is certainly possible to write obfuscated code even in ASCII. I just claim that it will cause far fewer problems than the alternative of NFC. The fact that there is no perfect solution is not an argument that we should do nothing. NFKC is a widely accepted, standardized, and continually updated way of normalizing strings so that different input methods generally (if not always) produce the same codepoints and so that many (even if not all) codepoints with slightly different renderings but similar meanings are identified with one another. @vtjnash, the question of whether full-width punctuation should be accepted for operators seems like a separate issue. |
Ok, "do nothing" might not be the best solution, but it does have the nice property of being very transparent. You can see what's going on just by looking at code points and seeing that they are different. Similarly, erring on the side of treating identifiers as different will tend to produce not-defined errors, while silently equating identifiers will tend to produce subtle bugs. Probably almost nobody has a good intuitive grasp of what NFKC does. If it were really true that it specifically targeted differences due to input method, that might be valuable, but instead it strikes me as a giant random list of equated code sequences. |
Moving, as too contentious to block 0.3. |
This is why I've been arguing for an error. Our general philosophy is that if there's no obvious one right interpretation of something, raise an error. NFC is fine-grained enough that we can be sure that NFC-equivalent identifiers are meant to be the same. NFKC is coarse-grained enough that we can be sure that NFKC-distinct identifiers are clearly meant to be different. Everything between is no man's land. So we should throw an error. Otherwise, we are implicitly guessing what the user really meant. Not canonicalizing to NFKC is guessing that distinct identifiers are actually meant to be different. Canonicalizing to NFKC is guessing that distinct but NFKC-equivalent identifiers are meant to be the same. Either strategy will inevitably be wrong some of the time. |
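A sketch of that three-way rule as a hypothetical `compare_identifiers` helper (current `Unicode` stdlib; the function name and the `:same`/`:different` return values are illustrative, not a proposal for Base):

```julia
using Unicode

# Hypothetical classification of a pair of identifiers under the proposed rule:
#   NFC-equivalent                    -> the same identifier
#   NFKC-distinct                     -> different identifiers
#   NFKC-equivalent but NFC-distinct  -> error: the code is ambiguous
function compare_identifiers(a::AbstractString, b::AbstractString)
    if Unicode.normalize(a, :NFC) == Unicode.normalize(b, :NFC)
        return :same
    elseif Unicode.normalize(a, :NFKC) != Unicode.normalize(b, :NFKC)
        return :different
    else
        error("ambiguous identifiers: `$a` and `$b` are NFKC-equivalent but not NFC-equivalent")
    end
end

compare_identifiers("\u00e9", "e\u0301")   # :same (precomposed vs. decomposed é)
compare_identifiers("x", "y")              # :different
compare_identifiers("µ", "μ")              # throws: micro sign vs. Greek mu
```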
Only if you exclude Chinese (Unihan) characters; I've already provided counterexamples. |
I'm willing to say that if Unihan has decided to ignore the standards on this matter, that is not our problem. |
That sentence is illogical; Unihan is part of the Unicode standard. You can say that the standard is inconsistent. All I'm saying is that none of the arguments I have heard in favor of NFKC are actually sufficient to cover the corner cases in Unihan. |
Unless there's some even coarser equivalence standard that works for Unihan as well, NFKC is the best we've got and we're not going to get into the business of deciding what Unicode characters should or shouldn't be considered equivalent. If there isn't such a standard, then the mismatch between Unihan and NFKC is the Unicode consortium's problem as I said, not ours. |
I agree with @StefanKarpinski: there's not much to win by silently normalizing identifiers using NFKC. If we report an error/warning, people will notice the problem early and avoid much trouble. Julia IDEs will be made smart enough to detect cases where two identifiers are equal after NFKC normalization, and will suggest fixes automatically as you type them. OTOH, if the parser does the normalization, you will never be able to trust that the identifier you see in the source is the one that is actually being used. |
I'm just concerned that the ambiguity detection might be silent while you are writing the code. |
Seems like this can be closed. |
Perhaps Lint is the right place to catch this. |
This is a somewhat controversial thing, but I have made a decision: identifiers should be normalized to fold visual ambiguity, and the normalization form should be NFKC. Rationale:

1. Compatibility decomposition is favored over the canonical one because it provides useful folding for letter ligatures, fullwidth forms, certain CJK ideographs, etc.
2. Compatibility decomposition is favored over the canonical one because it provides more protection from visual spoofing.
3. A standard Unicode transformation should be favored over anything ad hoc because it's predictable and more mature.
4. Normalization is a compromise between freedom of expression and ease of implementation. Source code is not prose; there are rules.

Here are some references to other languages:

* SRFI 52: http://srfi-email.schemers.org/srfi-52/
* Julia: JuliaLang/julia#5434
* Python: http://bugs.python.org/issue10952
* Rust: rust-lang/rust#2253

Unfortunately, there aren't very many precedents and open discussions about Unicode usage in programming languages, especially in languages with very permissive identifier syntax (like Scheme). Aside from identifiers there are more places where Unicode can be used:

* Characters are not normalized, not even to NFC. This may have been useful, for example, to recompose combining marks, but unfortunately NFC may do more transformations than that, so it is a no-go. We preserve the exact Unicode character.
* Character names, on the other hand, are case-sensitive identifiers, so they are normalized as such.
* Strings and escaped identifiers are left untouched in order to preserve the exact spelling as in the source code.
* Directives are case-insensitive identifiers and are normalized as such.
* Numbers should be composed from ASCII only, so they are not normalized. Sometimes this produces weird parses because characters that look like signs are not treated as such. However, these characters are invalid in numbers, so it's somewhat justified.
* Peculiar identifiers are shit. I'm sorry. Because of NFKC it is possible to write a plain, unescaped identifier that will parse as a number after going through NFKC. It may even look exactly like a number without being one. There is not much we can do about this, so we produce a warning just in case.
* Datum labels are mostly numbers, so they are not normalized either. Note that sometimes they can be treated as numbers with an invalid prefix.
* Comments are ignored.
* Delimiters should be ASCII-only. No discussion on this. Unicode has various fancy whitespaces and line separators, but this is source code, not a rich text document in a word processor.

Also, currently case-folding is performed only for the ASCII range. Identifiers should use the NFKC_casefold transformation. It will be implemented later.
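For reference, the NFKC_casefold transform mentioned at the end can be approximated in current Julia via the `Unicode` stdlib's keyword options (a sketch; `nfkc_cf` is a hypothetical helper name, and the real NFKC_Casefold additionally strips default-ignorable codepoints):

```julia
using Unicode

# Rough NFKC_casefold: compatibility decomposition + composition + case folding.
nfkc_cf(s) = Unicode.normalize(s; compat=true, casefold=true)

nfkc_cf("ＦＵＬＬＷＩＤＴＨ") == "fullwidth"   # fullwidth forms fold to lowercase ASCII
nfkc_cf("µ") == nfkc_cf("Μ") == "μ"            # micro sign and capital mu both fold to μ
```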
As discussed on the mailing list, it is very confusing that a variable defined with one of the two mu codepoints and then referenced with the other throws a `µ not defined` exception (because unicode codepoints 0x00b5 and 0x03bc are rendered almost identically). This could easily be encountered in real usage because option-m on a Mac produces 0x00b5 ("micro sign"), which is different from 0x03bc ("Greek small letter mu").

It would be good if Julia internally stored a table of easily confused Unicode codepoints, i.e. homoglyphs, and used them to help prevent these sorts of confusions. Three possibilities are:

1. …
2. …
3. `foo not defined` exceptions could check whether a homograph of `foo` is defined and let the user know if so.

My preference would be for the third option. I don't see any useful purpose being served by treating `μ` and `µ` as distinct identifiers.