Function Syntax #6
I don't think we want something too weird and new, so we should follow the conventions of existing language families. Do we want to follow the functional style which makes variable and function assignment similar (and maybe support partial application):
or do we want something from the 'C' family:
or something like Rust (without the types for inference):
Or perhaps python like:
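(The original code samples for these four options did not survive the page capture; the following is a hedged reconstruction of what a simple increment function typically looks like in each family, for illustration only, not the thread's actual snippets.)

```
inc x = x + 1                      -- functional style: definition mirrors assignment
int inc(int x) { return x + 1; }   // 'C' family: type precedes the name
fn inc(x) { x + 1 }                // Rust-like, types left out for inference
def inc(x): return x + 1           #  Python-like, with a keyword and colon
```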
|
@keean wrote:
I have argued here, here, here, and here not to adopt that syntax. Also that syntax is more amenable to global type inference, because it otherwise requires a duplicate line to declare the type of the function. An insurmountable negative factor is that the vast majority of the world's programmers will be unfamiliar with it (unnatural, not second nature, fights their ingrained habits), because it is too different from the most popular languages: JavaScript, Java, C, and C++. I argued it raises the cognitive load too much both at the declaration and at the call site, because it requires computing too many factors (inference, precedence, declaration-site types, and arity). From the author of the acclaimed Learn You a Haskell:
You seem to have agreed? Btw, partial application can be done with the other syntax styles. I will explain later. |
@shelby3 I agree; I just want to capture everything relevant here. I also remember we wanted something like Python-style indenting rather than curly brackets. Do we want to have a keyword like 'def' or 'fn' for function definitions? |
I actually quite like Rust's syntax for definitions, rather than Python or TypeScript. One problem is with combining TypeScript style signatures with Python style indenting:
The problem here is we might want to leave out types:
It makes it impossible to tell whether the return type is missing. Whereas with Rust style:
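(The snippets contrasting the two styles were lost from the capture; the following reconstructs the ambiguity being described, as my illustration rather than the original code.)

```
inc(x: Int):            # TypeScript-style signature with Python-style blocks:
    return x + 1        # is the trailing `:` a missing return type or the block opener?

fn inc(x: Int) -> Int   # Rust style: the arrow appears only when a return
    return x + 1        # type is given, so omitting the type is unambiguous
```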
|
@keean wrote:
I was just about to post a similar point. We can't use the C/Java style where the type precedes the instance name:
Also I wouldn't want some verbose gaudy type as the prefix any way.
Haskell puts the optional types on a separate line instead, which I think is unfamiliar, consumes extra lines and violates DNRY, which I don't like (and then afaik all the types have to be provided, without the option of just some of them). |
@keean wrote:
Since yesterday, I've been trying to think if there is a way we don't have to prefix the function declaration (aka definition) with a keyword. I've been thinking about all these issues too. 😉 Inside the body of typeclass declarations we certainly don't need the prefix on methods. But for functions declared inside the body of another function, we would have a grammar conflict with function calls. One way to get around that is to force a space between the function name and the argument list. The grammar is really a holistic design and we can't design it piecemeal and be sure to not have CFG conflicts. Feedback? |
@keean wrote:
I don't like the alternative syntax with an arrow. In the LL(k) grammar I was working on in May, I devised a solution which eliminates the need for the trailing delimiter. I presumed we will enforce a standard indent, as agreed by @SimonMeskens. The rule is that if the following line is indented by the standard amount, then it is a new code block. If it is indented more than the standard amount, then it is a continuation of the prior line. I realize that enforcing indent spacing is something programmers will disagree on because they all have their preferred amount (and some want tabs and others want spaces), but this is open source and consistency (for readability) is more important than allowing people the freedom to make code unreadable for others (such as when the tabs don't line up!). Btw, I am hoping we decide on a 2-space indent to conserve horizontal space for deeply indented blocks? I notice from your examples that you like more spaces when you indent (4 or 5?), but the really skilled programmers I've seen use 2 spaces, because 2 spaces is enough and it also helps align at the tail of the
Three spaces uses 50% more horizontal space and it aligns with the
Any block indenting greater than 3 spaces is going to piss off some programmers. Any programmers who think they need 4 spaces for block indenting are amateurish. Our eyes can't easily detect it at 3 spaces, and I find 2 is enough (and I am blind in one eye and blurred in the other eye). What is your opinion? I could tolerate 3 but I don't think it is ideal. But 4 makes me not happy:
And 5 is ridiculous:
Most of my coding life I used 3 spaces, but when I saw that expert coders were using 2 spaces, I realized I didn't need 3. |
What about the wildly popular arrow syntax? It's familiar, due to being included in JavaScript, TypeScript, Java and C#, and it's quite concise (unlike using a keyword). A block lambda (daft sample to pad it out a bit): inc(x: Int): Int ->
let y = x + 1
return y
Notice the absence of a type annotation there. Now, the problem arises for me when you want to type curried functions or just simple lambdas. What do we use for indicating a type goes from one to the other? TypeScript, for example, uses the fat arrow for both, which is syntactically very confusing. I think for this reason we could also replace the arrow with a double colon. I personally think the double colon is the nicest separator, but it's slightly less popular. I'll introduce something else I like too:
filter(list: List of Item, pred: Item -> bool): List of Item ::
let out := List(Item)
for(item of list)
if(pred(item))
out.add(item)
return out
filter(myList, x => x = 5)
Notice that we might as well just drop the separator:
inc(x: Int): Int
return x + 1
I kept the code quite imperative for now, to not assume too much yet. If we do get rid of brackets, I might prefer something like this, I think? Still keeping brackets for calls, maybe? Starting to get alien. Still less confusing than Haskell though.
filter
list: List of Item
pred: Item -> bool
return List of Item
=>
let out := List(Item)
for item of list
if pred(item) and not item.empty
out.add(item)
return out
filter(myList, x => x = 5) |
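Since TypeScript is mentioned later in this thread as the transpile target, here is a runnable TypeScript analogue of the `filter` example above; a sketch only, with names following the pseudocode:

```typescript
// Runnable TypeScript analogue of the proposed filter:
// takes a list and a predicate, returns the matching items.
function filter<Item>(list: Item[], pred: (item: Item) => boolean): Item[] {
  const out: Item[] = [];
  for (const item of list) {
    if (pred(item)) {
      out.push(item);
    }
  }
  return out;
}

// Usage mirroring `filter(myList, x => x = 5)` from the proposal
// (note TypeScript spells the equality test `===`, not `=`).
const myList = [3, 5, 7, 5];
const fives = filter(myList, x => x === 5);
```

This also illustrates the point made upthread that `=` as an equality test in the lambda would conflict with assignment in C-family targets.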
@SimonMeskens wrote:
That solves the first issue, but your suggestion won't work for eliminating the prefix keyword. I potentially like your idea of consistency between the syntax of anonymous lambda functions and regular function implementations (but maybe they shouldn't be equated, since one is unnamed and not a singleton, and the other is a named singleton?), but you've rendered the new block syntax inconsistent with
Also this would be inconsistent with
|
That seems fair enough, was just offering some ideas. Personally, I really like homoiconic languages, but I'm not sure how feasible that is for this one. Here's a good article on it: |
I am very sleepy, so I hope the following is coherent. On further thought, I must overrule my prior suggestion for an enforced number of spaces for indenting. As much as I would like to have this consistency, I don't think it will be popular, because any choice will offend most people who don't like that choice. The number of spaces for indenting is a personal preference. I looked at the grammar I wrote in May and now I remember how I solved this problem and maximized consistency. First we must consider that these and So I suggest that
So if we choose
Notice above instead of Some people suggest to reverse the direction of the arrow? |
That seems pretty clear to me 👍 |
Some interesting ideas, I quite like the uniform assignment syntax. Haskell and other functional languages define functions as:
It just means you need an LR parser. So I don't see a problem with:
I would suggest using '=' consistently. However I also like the anonymous function syntax and using normal assignment:
That looks odd for something like the identity function. I would suggest the block begins if there is nothing on the same line after the '=>'. I think the block indent depth should be set by the indent of the first line. Line continuations would be anything indented more than this. However there would be problems where there is both a block and a line continuation, like:
What about optionally naming the anonymous function to help debugging. JS allows this:
I think these forms will require an LR parser, but I am okay with that. |
Should we allow no brackets for a single argument function?
|
@shelby3 wrote:
But could we at least agree to disallow tabs for indentation (i.e. only spaces allowed)? My attitude is, if it pisses off a few people then I am unsympathetic to their preference, because if I as the reader don't know what tab-width settings the programmer was using, then the blocks of code don't align on columns when I view them. This is an era of open source and code must be transferable without ambiguity. Most programming text editors these days have an option to automatically convert all tabs to spaces. Could I get a vote on the above suggestion? (the thumbs up or thumbs down on this comment) |
Another requested vote is should we disallow lines which contain only spaces? This would remind the programmer (and his IDE) to delete such lines from the code. Such lines make for clutter in version control differencing when they are later removed (cleaned up). Not putting them there in the first place would be better. |
@keean wrote:
No. I am going to argue now, at this early stage, that we need to have some guiding principles. And I think one of those principles should be to not unnecessarily create multiple ways to do the same thing when we can easily avoid doing so (which is one of the complaints against Scala). Especially not creating another variant of syntax just to save two characters. This is inconsistency for the reader. And remember there can typically be 10 to 100 times more readers of a source code base than there were coders of it. We want to minimize the number of variant cases the new programmer has to learn. |
Okay, that makes sense to me, so it has to be:
Presumably we are happy with all sorts of variations on type information like:
|
I am not yet 100% firm on my prior suggestion to unify function definition around the @keean wrote:
The multiple ways to optionally annotate type declarations on arguments and result value, seems acceptable to me, because it is a consistent rule that also will apply to reference declarations as well. |
On this tangent of variant syntax to define an anonymous (unnamed) function, I have liked a shorthand from Scala which enabled very concise expressions: @keean wrote:
set_callback(_ + 1)
That is saving us much more than two characters, and it can really remove a lot of verbosity if we further enable it to handle more than one argument. I disliked that Scala used multiple underscores, where each occurrence denotes the next argument in order:
set_callback2(_ + _) I suggest instead with a limit of 3 arguments: set_callback(_ + 1)
set_callback2(_ + __) Note these should not cross function call lexical parenthetical scopes, i.e. the following are not equivalent: setCallback((x) => f(() => x))
setCallback(f(_)) Normally I would argue against creating a new variant to do the same thing. But the above is so geek-cool, that I think it would be a beloved 😍 feature. My marketing senses/intuition prioritize love/passion ❤️ over rigid design guidelines. Also I think I can make a case that the significant increase in brevity makes it more than just another way to do the same thing. We can't get that same brevity with the other syntax. Thus I argue it is necessary. Am I mistaken on the love for this alternative syntax? P.S. afair, Scala even allows the following, which I argue against, as it violates the guideline for not unnecessarily creating variant syntax: set_callback(_:Int + 1)
set_callback2(_:Int + __) |
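To make the proposal concrete, here is a hypothetical TypeScript desugaring of the slot shorthand; `set_callback` and `set_callback2` are stand-ins invented for illustration, and the desugaring is my reading of the proposal rather than anything specified in the thread:

```typescript
// Hypothetical desugaring of the proposed slot shorthand:
//   set_callback(_ + 1)    would mean  set_callback(x => x + 1)
//   set_callback2(_ + __)  would mean  set_callback2((x, y) => x + y)
function set_callback(f: (x: number) => number): number {
  return f(41); // invoke the callback with a sample value
}
function set_callback2(f: (x: number, y: number) => number): number {
  return f(40, 2);
}

const a = set_callback(x => x + 1);       // desugared form of `_ + 1`
const b = set_callback2((x, y) => x + y); // desugared form of `_ + __`
```

Note how the slot never crosses the call's parentheses, matching the rule above that `setCallback(f(_))` is not `setCallback((x) => f(() => x))`.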
I quite like the
|
@keean wrote:
I had that idea also, but it makes it more verbose for the single-argument case. If we prefer that, I'd perhaps suggest instead: set_callback(_ + _2)
But I didn't suggest that, because it is inconsistent, violating one of the other guidelines I have about not proliferating special-case rules, in order to keep the cognitive load for the new programmer reasonable. Also, in Scala that conflicts with the tuple member names. We might want to not reserve numbered names when they can be useful in general for names. Also there is another reason I favored this:
And that is because it visually, literally looks like a slot (i.e. "fill in the blank"); whereas, I do understand the counter-argument. Another possible design choice is to only support it for one argument. Anyone else want to share an opinion? P.S. I know the choice is imperfect (I admit this |
I find it hard to distinguish
Although that makes me think how do we differentiate between a single like |
In the interest of brain dump disclosure... I wish we could use Unicode symbols, then we could use (although it is a slippery slope towards "symbol soup" dingbats source code):
Or:
Unfortunately Unicode doesn't define the character set for numbers above horizontal brackets, so the above are the two best available choices. I realize we can't use Unicode for something a programmer needs to be able to type in quickly, but the world is changing and there are now configurable customized hardware keyboards and otherwise it is possible on Android, Windows, and Linux to software remap the keys on any hardware keyboard. A programmer can remember to type But I loathe special requirements on system configuration and/or specialized hardware. So this would need to be super compelling and it isn't. It would also for the time being be a discrimination against the developing world, where they may not have access nor funds (or be forced to use a shared computer in a netcafe) to purchase the custom hardware keyboard. Edit: Dec. 3, 2017 it’s possible to render with HTML+CSS the Unicode “bottom brackets” and “open box” with the superscripted number shifted in the desired position: So I’m thinking the programmer could type Special typing assistance isn’t required, i.e. the text can contain either Ditto for example I will propose to allow |
@keean wrote:
I don't (and I'm blind in one eye), but
Agreed. That is why I wrote we couldn't go beyond 3 arguments, maybe not even more than 2.
But that isn't orthogonal to ordering, which sometimes can't be re-ordered to fit. And doesn't allow using the slot more than once (which you acknowledged). And I find that confusing semantically (it is so inconsistent with the way variable names work). It adds cognitive load. I remember that being a source of confusion for me when I was learning Scala and still might be a gotcha when I am not paying careful attention.
That violates the generalized guiding principle that I thought we agreed on. It adds specialized syntax to fix the shorthand when the programmer can otherwise revert back to the normal syntax for unnamed function. And afaics that proposal gains nothing significantly justifiable in brevity:
|
@shelby3 wrote:
I thought having an assignment that you can use in an expression would be in line with the 'everything is an expression' principle. It would be usable in any expression, not just in the special short function syntax.
If you want to use each variable more than once or in a different order, use the original longer syntax :-) I am not a huge fan of the |
@shelby3 wrote:
The proposed shorthand isn't suitable for more than 2 or 3 arguments, regardless of the syntax we choose. It isn't justifiable to have Thus I logically conclude (including the logic in my prior comments) that the shorthand should only support at most 2 or 3 arguments and the syntax should be Otherwise choose not to have a shorthand syntax. @shelby3 wrote:
|
@keean wrote:
I failed to communicate my point. I am not making a decision here about whether an "assignment-as-expression" is desirable or not (since afaics that is mostly an orthogonal design issue, and since this wasn't a compelling use-case for it). Rather I am stating it doesn't make the use-case of the shorthand any less verbose.
That logic is incongruent with my other point:
Point is that if we add the shorthand, then it should respect normal semantics of variable names. This is one of those insights into language design that Scala's designers apparently didn't do well in some (many or most?) cases.
Didn't you write otherwise upthread? Or perhaps you meant all design uses for
Any way, holistic design requires a lot of thought, so it is easy to end up flipflopping. I generally agree of course in the stated principle that "readability beats conciseness".
It isn't just the issue of being concise in string length, but also the crowdedness (symbol soupy-ness) of the juxtaposed double
I have one more alternative syntax idea for the shorthand, assuming that we will set a rule that all variable names are lowercase (or at least that all variable and function names begin with a lowercase letter or underscore):
That would conflict with the convention of uppercase letters for type parameters (such as inside of typeclass method bodies), but we could require type parameters to be a minimum of two characters (or just disallow But I still think I prefer the underscore as it looks like a slot. Otherwise drop the shorthand proposal. |
Wrong terminology @keean. A function of arity 3 has type
where D is not a function type. |
This is my current view as well. It has to be a literal tuple (syntactic restriction), but the type given to the function can be correct. I type recursive types internally as a graph structure, and convert to mu notation when printing, which is normally written as |
I disagree, arity is the number of arguments a function takes. Here an arity one function returns an arity one function, returns an arity one function that returns a value. Side effects could happen in any of those arrows. To ensure an arity three function where side effects only happen after all three arguments are passed requires |
@SimonMeskens wrote:
Could someone please clearly explain what that 'mess' is? I don't see it. As you pointed out, a non-curried function that inputs 3 arguments is equivalent to a curried function that only accepts one argument which is a triplet tuple. @skaller wrote:
ftfy. A non-curried function of arity 3 has a type:
@keean wrote:
Right, it is a syntactic restriction, but we could support destructuring an instance. OTOH, @skaller's preference has "absurd" flaws too, e.g. if I want to distribute a triplet tuple to an arity-3 curried function, then I must write some verbose tuple destructuring and application. So it doesn't matter which preference we choose; afaics they are both "absurd" according to @skaller's logic. If someone can explain some fundamental reason why only one of the preferences is less messy, I would appreciate it. Rather, I prefer the local reasoning and mainstream familiarity of explicitness of arity with forced grouping, and I have explained above that we can eliminate common cases of extra parens. @skaller, math doesn't come in only one conceptual abstraction. There is more than one way to skin a cat. Type theory is not the same as category theory. We create type systems and syntactic conventions that optimize the programmer experience, not an arbitrarily chosen mathematical abstraction. That is not to claim that there aren't more degrees of freedom in a category-theory model with only functions as nodes on a graph and types as graph edges. @keean's typing model has types as nodes on the graph and edges between them for inference. Edit: I would like to make the analogy of building houses with grains of sand versus with windows, bricks, doors, etc. The most elemental model, which @skaller prefers, is also very amorphous. |
The big idea is that if every function is defined as taking one value and returning one value, there are a number of operations you can do with functions, without having to know arity. Every function in the language is exactly the same, so you can write in a point-free style easier, composition becomes easier and partial application becomes trivial. A language which has higher arity functions or one that forces a single tuple as input looks the same to the end user, but the latter one just has the advantages I listed. In practice, this is exactly the same, it's just a way of reasoning about it. It's the same argument as @keean saying he wants to treat every function as an IO Monad. To the end user, this is invisible, but in the context of purity and assumptions you can make, it's a huge difference. |
@SimonMeskens wrote:
But if we can curry functions (on demand), then we can have that too. I don't see why we need to commit to curried by default? Edit: I am not implying you are proposing that. Just trying to understand if @skaller has any point other than just a preference of priorities of the default. |
I never said that we should curry by default? That would make the language harder to use for a lot of people. I'm saying we should have functions with multiple arguments, but treat those like they are a function from tuple to value, instead of higher arity. For the user, it looks exactly the same. |
@SimonMeskens I agree and just wondering if there is anything fundamentally broken in making that choice. Note the syntax I am proposing for currying a 3 arity non-curried function is:
Which is equivalent to:
For partial currying (i.e. partial application of non-curried):
|
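The code for those three snippets was lost from the page capture, but the equivalences being described can be sketched in runnable TypeScript; this is my illustration of on-demand currying and partial application of a non-curried arity-3 function, not the thread's proposed surface syntax:

```typescript
// A 3-arity non-curried function.
function add3(a: number, b: number, c: number): number {
  return a + b + c;
}

// Currying it on demand: each application takes one argument,
// i.e. the curried form has type number -> number -> number -> number.
const curried = (a: number) => (b: number) => (c: number) => add3(a, b, c);

// Partial application of the non-curried form: fix the first
// argument, leaving a 2-arity function over the rest.
const partial = (b: number, c: number) => add3(1, b, c);

const full = curried(1)(2)(3); // same result as add3(1, 2, 3)
const part = partial(2, 3);
```

This is the sense in which a curried and a non-curried function "look the same to the end user" while differing in where side-effects could fire, as debated above.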
@keean the question is what is the difference in type between:
And:
I was thinking So the type of former is So that would settle the preference over whether to use You had disagreed with me about certain side-effects being inapplicable, but I (and @skaller) argued I think convincingly that the only side-effects that matter are those that our type system guarantees. If we pass around the IO monad as external state, then every |
@shelby3 I went down the whole different-arrows thing in the past and I am not convinced it will keep things DRY. Personally I think functions should be functions, and we should infer monadic effects (so you don't have to put different arrows). You then use constraints at certain points; for example if you want a particular callback to have no side-effects, then you can give it the appropriate constraint. Now it would be a compiler error to try to pass a side-effectful function as the argument for that callback. For optimisation purposes, purity inference is better than annotation, because the compiler will always get it right, and you won't forget to change the arrows if you edit a function, which would lead to the compiler making false assumptions. This also links to the idea of zero-arity functions behaving as values, which I think is a mistake and we should not do. The reason is it will confuse imperative programmers. If something is a 'const' expression we can evaluate it at compile time if performance is a concern. Otherwise, if the value depends on runtime values, it needs to be executed at runtime, so treating it as a value is wrong anyway. |
@keean what you are really saying is that you refuse to have purity annotations, and it has nothing to do with anything you've written prior about arrow notation, since this is the first time we've discussed using two arrows to signify a purity annotation. What you wrote in the past is that you didn't prefer different syntax for inline and non-inline function definitions. And I concurred. As for DNRY (aka DRY), it is a fact of programming that we provide different implementations that accomplish similar goals, e.g. iterators versus enumerables and mutables versus immutables, because they provide different characteristics. If you don't like that, then perhaps programming is not the right field for you. Perhaps you'd also prefer we don't offer monadic effects composition, since it will also have the same trait: if you don't want to use that form, then we have to duplicate all of it in the alternative form that others may prefer to use in their coding. The purpose of a purity annotation is to signify the programmer's intent so the compiler can check his intent. The purpose of purity in types is, again, so intent is expressed. You are trying so hard to find any excuse possible to avoid taking seriously the need for pure and impure functions. I don't see how you are arguing logically. Only pure zero-argument functions are uni-valued (one constant value). Non-pure zero-argument functions do not behave as uni-values. |
Writing your intent for purity seems the wrong idea. You do not want to say when something is pure; you want to say when you need something to be pure (especially in an imperative language where everything is impure by default, unlike Haskell). Consider:
What do I gain from marking this pure? |
@keean wrote:
Why didn't you just come right out and say that what you want is for functions to be able to behave as both pure and impure simultaneously via inference, because you think this will help DNRY? Can't you be more explicit, so that readers don't have to piece together your indirect, implied meaning? Because you think that if a function is not marked pure, then it won't require all its invariants to be pure when the caller of said function does not require purity. I see at least three problems with your idea:
I argue it is better to be explicit and provide a pure and impure version of a function, such as if the function inputs any callbacks or mutates any data structures in place. Otherwise there is no need for two variants of the function. Afaics, you are putting too many unnecessary expectations on inference and this is going to make inference very complex and brittle especially when we start adding higher-order features for solving the Expression Problem. |
I don't think we need purity anyway, just side-effect control. What do we gain by marking a function pure? |
I am preferring the finer-grained invariants of read-only annotations on arguments over annotations on whether functions are pure. The only non-local side-effects (within our static type system!) that can occur within a function, other than by mutating arguments, are mutations of external state accessed not via arguments. Am I correct that the only valid (in correct design) use case for accessing state external to a function, which is not a function argument/parameter, is lexical closures that facilitate modular programming?
Should we somehow restrict these such as only allowing lexical closures within the same file in order to approximate the local reasoning that pure functions and referential transparency aim to accomplish, i.e. no closures on imported scopes (aka global variables, I guess those are not lexical scopes correct)? So then there are no impure functions (even in partial application) which are not lexically local and visible. (Pure functions in theory also enable some optimizations and compiler reasoning, yet sufficiently local, contained lexical closures may also)
The point is that pure functions and Monads are not a panacea. Thus afaics, the goal is localizing invariants and reasoning where feasible, but this is never an absolute. |
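As a concrete point of reference, TypeScript's `readonly` modifier already approximates the per-argument read-only annotations suggested above; a minimal sketch, with the function invented for illustration:

```typescript
// A `readonly` array parameter rejects in-place mutation at
// compile time, approximating a per-argument read-only invariant.
function sum(xs: readonly number[]): number {
  // xs.push(0); // compile-time error: `push` does not exist on
  //             // `readonly number[]`
  return xs.reduce((acc, x) => acc + x, 0);
}

const data = [1, 2, 3];
const total = sum(data); // the caller's array is statically protected
```

This gives some of the local reasoning of purity without demanding the whole function be pure.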
Added some analysis of default function argument parameters. Note scroll to the top of the linked Wikipedia page for the difference of my revision to the page. What is interesting is the two general ways to model default arguments:
Note, if we supported both variants, then the TypeScript code generated by our transpiler would output variant #2 function signatures as not accepting default arguments, because every call site in the generated code would explicitly supply the default argument values (thus the aforementioned code bloat). Named function argument parameters are a separate issue, except that named arguments are only useful if they follow a default argument parameter, unless one argues they are also useful as a form of type-checked comment for descriptive local reasoning and/or so call sites do not need to be refactored when a default argument parameter is inserted at the function definition site. I prefer the C# syntax at the call site. This has another advantage: names specified in type signatures look very familiar to other languages, e.g. It makes no sense to specify names on the types of callbacks nested within callbacks, since the only purpose of names on the types of callbacks is so the body of the function that inputs the callback can employ names at call sites. Generated TypeScript code can simulate named parameters with destructuring parameters, which apparently is efficiently handled by engines as of ES6. But there are some pitfalls. However, even though it may be efficiently handled, since we may want to allow mixing named and unnamed in the same call site, our transpiler-generated TypeScript code should explicitly pass all arguments (including
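For comparison, TypeScript already covers both mechanisms discussed above: callee-side default parameters, and named parameters simulated with a destructured options object. A sketch with invented function names:

```typescript
// Callee-side defaults (TypeScript's built-in behaviour): the
// definition supplies the default, so call sites may omit it.
function greet(name: string, greeting: string = "hello"): string {
  return `${greeting}, ${name}`;
}

// Named parameters simulated with a destructured options object,
// the ES6 technique the comment above refers to.
function connect({ host, port = 443 }: { host: string; port?: number }): string {
  return `${host}:${port}`;
}

const g = greet("world");                     // greeting defaulted
const url = connect({ host: "example.com" }); // port named and defaulted
```

Under variant #2, the transpiler would instead inline the default values at every call site, which is exactly the code bloat the comment warns about.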
Making anonymous (aka lambda) function definitions as concise as possible (see also) and removing fugly nested parentheses, without proliferating limited special cases of abbreviated lambdas and single-argument functions, has been facilitated by the proposal to push the optional type annotation off to the RHS, because parentheses are no longer needed when there is a result type annotation. Thus parentheses are not necessary to delimit the tuple of function arguments, if the argument parameters are space-delimited instead of comma-delimited, to avoid the grammatical ambiguity with comma-delimited lists (i.e. within tuples) in general. Space-delimited function arguments are not ambiguous, because juxtaposed expressions can otherwise never occur, since line breaks to the block-indent column position are enforced between expressions (instead of JavaScript's optional semicolons1). Although I would like to have only one way to define functions, there are conflicting priorities, in that for functions defined outside any other function body (i.e. at the top-level block at column position 0 and within typeclass implementations) there is no grammatical ambiguity between function definitions and calls, because function calls are not allowed outside of a function body; thus these can be most concisely written in the syntax of a function call. The names and variant #2 defaults for arguments in a function definition with the type annotation off on the RHS do not need to be duplicated in said type annotation.
But for function signature definitions in typeclasses, and for callbacks (aka HOFs) in function signatures where the body of the function is not specified, the names and said defaults can be optionally (or even only partially) merged into the type signature (instead of separated off to the RHS): 1 Offering optional semicolons in order to place more than one expression on the same line (as in Python), so that for example inline (non-block-indented) anonymous functions can have more than one expression, would maybe be desirable if it is feasible that our grammar does not require they be enclosed in matching curly braces.
|
I was using square brackets for type-parameters, not function arguments. As in
Could be written with a separate type signature:
Note: If the type system has universal quantification we don't need to declare |
|
There are still type variables with implicit universal quantification, but you don't need to introduce them with type parameters. Here are two equivalent function types, first the parametric type:
Now the type with implicit universal quantification
A key difference is the first example with explicit type parameters is not actually a type, it is a type constructor and you must provide both type parameters for it to be an actual type (unless you have HKT) and it is monomorphic. The second with universal quantification is a type, and it is polymorphic. This shows up in Rust which has parametric types, you cannot pass a 'generic' function with unspecified type parameters as an argument to a function. In Haskell with universal quantification you can. |
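TypeScript can in fact express the distinction @keean describes: a parameter may demand a universally quantified (rank-2) function argument. A minimal sketch, not from the thread:

```typescript
// `applyToBoth` has a rank-2 type: its argument `f` must itself be
// polymorphic (universally quantified), because it is applied at
// two different types inside the body.
function applyToBoth(f: <T>(x: T) => T): [number, string] {
  return [f(1), f("a")]; // f must work at *both* number and string
}

// A polymorphic identity function can be passed in unspecialized,
// which is the Haskell-style behaviour described above.
const id = <T>(x: T): T => x;
const result = applyToBoth(id);
```

This bears on the later question of whether transpiled HRT needs FCP: at least for annotations of this shape, TypeScript accepts the polymorphic argument directly.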
I interject that an intersection of functions is a type. But that reminds me of our point that intersections and typeclasses seem incongruent. So typeclasses for overloaded functions might be problematic. I do not have the knowledge or experience as to why we would ever use typeclasses on function instances.
The main complaints against function name overloading I see are:
My motivation for function name overloading is that replacing succinct typing of a common semantic with a Cartesian-product explosion of names seems regressive with respect to the point of typing. And union (disjunction) input argument parameters are not the same thing as intersections of functions, and are less efficient. Note TypeScript offers overloads as an intersection of functions, but unlike Java, the selection of the function is not statically determined and is instead done by runtime type-case logic. There appear to be some mitigations for the aforementioned complaints:
EDIT: default and named parameters fulfill some of the use cases where function overloading would be needed. Typeclasses fulfill other cases where only the data types change because typeclasses provide data type independence. Since function overloading seems to be incompatible with typeclasses (i.e. our aim is to employ typeclasses instead of concrete data types on function arguments, except perhaps for low-level highly optimized code) and sound type inference, we should probably not provide that feature. |
@keean wrote:
Apparently Rust cannot even pass a function with specified type parameters into a function, because Rust does not offer HRT, which must be simulated with what is analogous to FCP (i.e. passing a named data type which wraps a type-parametrised function and then unwrapping it to call it generically). My proposal for inference of HRT looks syntactically like universal quantification, but it is just inferring the rank of the HRT. But to transpile this HRT to TypeScript (which I presumed also does not support HRT, although this comment claims otherwise), I suppose it will require employing FCP and/or eliding/erasing the typing if TypeScript does not support HRT. |
One of the first things we need to do is decide on a function syntax for application and definition.