Modularity #39
@shelby3 @keean

signature EQ = sig
  type t
  val eq : t * t -> bool
end

structure EqInt = struct
  type t = int
  val eq = Int.eq
end

fun f (X : EQ) = struct
  type t = X.t
  val eq = X.eq
end

Function f should have a signature of the form EQ where type t -> EQ where type t. The problematic case is when the type is computed from a runtime value, e.g.:

string <- readline...
X.t = eval_to_type(string)

Because of this, most statically typed languages disallow such things. |
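For concreteness, a minimal OCaml sketch of the same example; OCaml's with type constraint plays the role of "EQ where type t", and the functor F below has precisely the propagating signature described:

module type EQ = sig
  type t
  val eq : t * t -> bool
end

module EqInt : EQ with type t = int = struct
  type t = int
  let eq ((a : int), b) = a = b
end

(* F takes any X : EQ and returns an EQ whose t is X.t *)
module F (X : EQ) : EQ with type t = X.t = struct
  type t = X.t
  let eq = X.eq
end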
@sighoya it would help if you referenced the problem you are questioning; I am not sure what you are trying to show or disprove. |
I want to prove that dependent types are not needed for modules. Edit: |
Your example has no runtime polymorphism. You need to introduce an existential type, so that you want to run Eq over a list of existential values. |
You mean something like this:

signature EQ = forall x sig
  type t = [x]
  val eq : t * t -> bool
end

or

signature EQ = sig
  type t = [forall x]
  val eq : t * t -> bool
end

I don't know if the latter is valid syntax, but I mean a heterogeneous list, i.e. one containing values of different types.

fun f (X : EQ) = struct
  type t = X.t
  val eq = X.eq
end

should have the signature EQ where t = [forall x] -> EQ where t = [forall x], meaning that t is a list whose elements are of a variant type (runtime union type), for which no type erasure and no dependent types are needed. |
That is different, because with a union type you are limited in which types you can put in. Given a "String|Int" you can only put a string or an integer into the heterogeneous collection. With an existential type we can instead specify an interface, say EQ, and any type implementing that interface can go in. I can actually demonstrate the unsoundness without using an existential type: just think about polymorphic recursion. You cannot monomorphise a recursive polymorphic function. |
This type is impossible because the type escapes the existential. |
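To make the escape concrete, a hedged OCaml sketch reusing EQ and EqInt from the sketch above (EqString is an assumed analogous instance): values can be packed existentially with their dictionaries into a heterogeneous list, but eq across two different packages cannot typecheck, because nothing proves the two abstract types equal.

module EqString : EQ with type t = string = struct
  type t = string
  let eq (a, b) = String.equal a b
end

(* an existential package: a value together with its EQ dictionary *)
type eq_pack = Pack : (module EQ with type t = 'a) * 'a -> eq_pack

let hetero : eq_pack list =
  [ Pack ((module EqInt), 3); Pack ((module EqString), "hi") ]

(* comparing a value with itself inside one package is fine... *)
let self_eq = function Pack ((module M), x) -> M.eq (x, x)

(* ...but applying M.eq to values from two different packages is
   ill-typed: the abstract type would escape the existential *)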
@keean wrote
That's right; therefore Variant Type != Union Type.
Aha, here an existential takes the role of a trait/typeclass object.
Corner cases/undecidable issues: @keean wrote
Yes, it breaks the rules of existentials. Further, I don't see the need for dependent types for module purposes, but if you mean runtime polymorphism is inevitable, then yes. |
But if the instance of the module’s type is statically known at the time when the value is applied at the call-site, then it’s no different than applying typeclasses at the call-site. So are typeclass bounds dependent typing? I guess it’s because we erase the instance’s type from the static knowledge when we assign it to a module signature (but have to reify it with runtime dispatch), thus we don’t track it statically? Isn’t that analogous to assigning an object to an existential type, thus erasing it from static tracking? The key difference is when the type of the value can’t be known statically; then we need to track the type of a value at runtime with some form of runtime dispatch. Which is analogous to existential types with typeclasses. What am I missing? I’m trying to understand why you see a problem with this form of dependent typing that you don’t see with typeclasses.
Ditto, we cannot monomorphise typeclass bounds with polymorphic recursion. We have to use runtime dispatch. We had a discussion about that in the past on these issue threads. @sighoya wrote:
Well, I think he may be defining dependent type correctly, yet I agree that seems like it’s just runtime polymorphism. And I am also wondering why he dislikes it, given it seems to occur because of the inevitable need for runtime polymorphism. I thought dependent typing was when we can check the values of a variable statically? With runtime polymorphism we’re not enforcing the type of the value statically. There’s no monomorphisation going on. |
No, typeclasses only depend on the type of the thing, not the value of the thing. You would have the same problem if you had type-classes with associated types, but in Haskell they get around it by saying the associated types must be globally coherent with the type parameters of the type-class instances. This makes the type-class type-parameters and the associated types of that type-class form a type-family. |
@sighoya a variant type is also limited in the types you can put into it (at least it is in OCaml). Without existentials (runtime polymorphism), polymorphic recursion can still cause problems with modules that require dependent types to support. Now you can have modules without dependent types if you limit the language to having everything monomorphisable. This means no runtime polymorphism, no polymorphic recursion, etc. |
@keean wrote
My thoughts were on D's Variant type. Edit:
I thought dependent typing was when we cannot check the type of a variable at compile time (statically), because the variable's type depends on the variable's value and maybe on other variables' values. Dependent types are indeed very useful. Imagine a type called DeterminantNotZeroMatrix, defined as follows (see the sketch below for one way to approximate it):

DeterminantNotZeroMatrix<Type eltype, Int n, Int m> = {Matrix<eltype, m, n> mat | det(mat) != 0}

Edit: ambiguous use of m, changed it to mat. Once we have constructed this kind of dependent type, we can invert such a matrix without checking every time whether it is invertible. But @keean, I think you need a runtime polymorphic type rather than dependent typing |
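A hedged sketch of how such a refinement can be approximated without full dependent types: an abstract type with a checked ("smart") constructor. The module name and the placeholder determinant are illustrative assumptions, not a real implementation.

module InvertibleMatrix : sig
  type t  (* abstract: values can only be built through make *)
  val make : float array array -> t option
  val unwrap : t -> float array array
end = struct
  type t = float array array
  (* placeholder determinant; a real implementation would compute it *)
  let det _m = 1.0
  let make m = if det m <> 0.0 then Some m else None
  let unwrap m = m
end

The invertibility check happens exactly once, at construction; thereafter every t is known invertible. The guarantee is weaker than a true dependent type, since the invariant lives in the constructor rather than in the type checker, which is the complaint raised later in this thread.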
There is a proposal to add type classes to C#. The funny thing is that the proposed implementation of typeclasses looks a lot like a light form of Modular Type Classes |
This kind of variant is not modular, as there is no way to represent a universal type tag. This kind of variant requires Run Time Type Information, which is a big problem for me, as you lose type erasure (and theorems for free) as well. I can give more details on why this is bad (or perhaps why I don't like it, if I am being generous).

Runtime type information is worse (see above); with dependent typing we can still resolve issues at compile time. For example, if we try to print a value of an associated type, we can add the "Print" constraint to that module. This can be propagated up to the interface specification for the current module, such that we say the argument for this function must be an iterator where the ValueType of the iterator is printable. These are dependent types (because the type passed to print depends on the value of the module passed to the function), but we solve them by placing constraints on the types which we propagate at compile time. The latest we would need to check these constraints is when linking a dynamically loaded module; we still retain type erasure and don't have to find a way to store types in memory at runtime.

I think it's bad to store types in memory at runtime due to the Russell paradox. If every memory location has a type, and a memory location contains a type, then the type of a type is a type and we have created the conditions for a paradox. Everything in memory at runtime is a value. Now we can create a mapping from values to types, but we cannot do this globally, because with separate compilation we don't know what values other compilers assign to which types. So we have to locally map values to types, which is what an OCaml variant does (a disjoint sum type). |
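For concreteness, the "local mapping from values to types" described above is just an ordinary OCaml variant; the constructor tag is a runtime value that locally identifies the payload's type, with no global RTTI (names illustrative):

type local_value =
  | VInt of int
  | VString of string

(* matching on the tag recovers the payload's static type locally *)
let describe = function
  | VInt n -> "int: " ^ string_of_int n
  | VString s -> "string: " ^ s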
@keean wrote
It looks to me like you need a trait/typeclass object. @keean wrote
The type Type is a meta-type, i.e. a non-well-founded set that includes itself. |
@sighoya it seems you agree with my reasoning, but trait objects are Rust's name for existential types, and you are back to needing dependent types if you use modules. It all comes down to the difference between a type class and a module. In a type class the implementation is chosen only using the type information of the type class parameters. In the lambda cube this makes an associated type a type that depends on another type. Because a module is first-class, the associated type depends on the actual value of the module passed, not just its type; hence an associated type is a type that depends on a value (otherwise known as a dependent type). |
@keean In my eyes, it is one solution. But you could also force the functor to return a Maybe structure whose contained/wrapped structure depends on the value condition. |
@sighoya not quite. With first class modules (and Functor) you can pass modules/functors as arguments to functions. Because associated types with modules depend on the value of a module and not just its type, you need dependent types even without variants, existentials etc. You just need first-class modules and associated types to have the problem. |
@sighoya Let me have another go, as I don't think I have done a very good job of explaining, and may have got a bit side-tracked by other issues. The type system only tracks the type of modules or functors. It is possible for two modules which conform to the same signature (type) to have different associated types. The associated types are not part of the signature.
Because EqInt and EqFloat are values of the type EQ, the type system does not track the difference between them. All the type system sees is the type EQ; in order to see what value is passed to f we have to use partial evaluation or abstract interpretation. However, neither of these methods can solve general computational problems, and you very quickly end up running the whole program. This is why compile-time evaluation is restricted to constant expressions in C++ and other languages. In other words, we could resolve this for limited cases, but most compilers don't even try. |
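A hedged OCaml sketch of exactly this point, reusing EQ and EqInt from the earlier sketch (EqFloat is an assumed analogous instance): once packed as first-class modules, EqInt and EqFloat both have the same type (module EQ), so the type system no longer distinguishes them.

module EqFloat : EQ with type t = float = struct
  type t = float
  let eq ((a : float), b) = a = b
end

(* the branch is decided by a runtime value, so the result's abstract
   type t genuinely depends on a value: a dependent type in disguise *)
let pick (b : bool) : (module EQ) =
  if b then (module EqInt) else (module EqFloat)

let check (m : (module EQ)) : unit =
  let module M = (val m) in
  (* M.t is abstract here; e.g. M.eq (1, 2) would be rejected:
     "This expression has type int but ... M.t" *)
  ignore M.eq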
@keean wrote:
Right. This is the intention of associated types: to not include them in the type signature; otherwise, they are parameters. @keean wrote:
Ah, I see your problem. The problem is that the signature of eq changes depending on the input module. And this is unmodular, right? I don't think the problem is that modules are values. You have the problem even if the modules are defined statically but their definitions are hidden from the user. For example, you specialize a function f with some unknown static module EqA and call f with values depending on EqA: f<EqA>(1,2) or f<EqA>("1","2")? I think the problem is not that modules are first-class values; the problem is that associated types are unmodular. |
Ah... I must correct myself, you are right:

a = io.read().asInt()
module m :: EQ
if a=2 m = EqInt else m = EqFloat
f(m)(1,2)  # allowable? |
I could imagine the compiler disallowing:

a = io.read().asInt()
module m :: EQ
if a=2 m = EqInt else m = EqFloat
f(m)(1,2)  # allowable?

and forcing a pattern match over all possible outcomes, but the user cannot see why the above code is not valid (because it is not inferable from the type signature), damnit! |
@sighoya We can only reason about code using the types, otherwise we are executing the code. If we execute the code we have an interpreter, not a compiler. The whole point of a compiler is to translate code from one language (the source language) to another (the target language). This gets fuzzy when we can partially evaluate code where the inputs are static, but as soon as the input depends on IO we cannot even do that. So when you write short code examples it's easy to say "well the compiler knows this value", but in real code it is exceptionally rare, except for constant values like PI.

Even with constant values you have the catch-22 of not wanting to execute code until it has been type checked (to make sure it does not crash the compiler), but your type checking depends on the values, which requires you to run the code. So in reality partial evaluation does not work for this kind of problem. That leaves abstract interpretation as the only method, and this is not a well known technique even amongst compiler writers. Dependent types are by far the best solution, or avoid the problem by forcing global coherence on module instances by making the associated types a type-family over the module type parameters (what Haskell does). |
Some questions:

1.) How does OCaml handle these things? Does it crash?

2.) What if we force the user to pattern match over the associated type in a module of type EQ:

module m :: EQ
m = EqInt if a=2 else EqFloat  # now the compiler knows that m.t could be different
f(m).eq(2,2)  # here the compiler knows eq must be of type Int->Int->Bool, so the user has to check whether m.t is Int

The compiler will throw an error and force the user to rewrite the code above to:

module m :: EQ
m = EqInt if a=2 else EqFloat
match m.t =
  Int -> f(m).eq(2,2)
  _ -> Error/Nothing

What am I missing?

3.) What is your plan for this problem? |
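On question 1, a hedged note: OCaml does not crash; after unpacking a first-class module the abstract t simply cannot be unified with int, so misuse is rejected at compile time (see the check sketch above). Something close to question 2's pattern match can be encoded today by tagging the packed module with a constructor that pins down its type (names illustrative, reusing EQ/EqInt/EqFloat from the sketches above):

type eq_choice =
  | IntEq of (module EQ with type t = int)
  | FloatEq of (module EQ with type t = float)

let choose a =
  if a = 2 then IntEq (module EqInt) else FloatEq (module EqFloat)

(* the match plays the role of "match m.t": each branch recovers a
   concrete t, so eq can be applied at a known type *)
let _ = match choose (read_int ()) with
  | IntEq (module M) -> M.eq (2, 2)
  | FloatEq (module M) -> M.eq (2.0, 2.0)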
We already discussed this in our discussion of rank-N HRT. When we store callbacks that have a typeclass bound as values, we lose the definition of the function body. Thus there’s no way to monomorphise the body, because we don’t know the body of the function that is the callback: more than one function definition can match the same set of typeclass bounds, or alternatively the caller of the callback may be supplying existential types instead of statically known instances. We have to resort to runtime dispatch. So indeed typeclasses do depend on the value of the thing when we have to throw away static knowledge of which values correspond to which types in our imperative soup (also for example existential types aka trait objects). If we can eliminate the need for imperative soup…
…if our modules are applicative/pure and all imperativity occurs top-down outside any modules, then in theory we can have polymorphic recursion without requiring runtime polymorphism in the modules, although at the junction between the imperative layer and the modular layer there must be runtime dispatch to monomorphised branches of the modular code. Btw, the Pokemon example employing a type family fails to solve the Expression Problem. New Pokemon types can’t be added independently. The code could be reorganized to not use associated types if runtime polymorphism is allowed, and then it would also be more modular and extensible. So again I ask @keean what the objection to dependent typing is, if all it means is to enable runtime polymorphism? Runtime polymorphism is necessary for modularity, for solving the Expression Problem, and because static typing is limited in expression. @sighoya wrote:
Well even when simulated with modular abstract types, I don’t see why they couldn’t be monomorphized in the cases where there’s no runtime polymorphism?
So we agree that dependent typing is the problem where we need to know the type of values. I’m saying that I thought dependent typing was the ability to type those values statically with total-order knowledge about all possible states of the program. You’re saying that actually it’s the ability to declare some invariants independent of the values, but isn’t that what a type is?
So what you’re saying is the user can create new types and enforce semantics on those types. But isn’t that what typing is for? It seems to me that runtime polymorphism is somehow being conflated with dependent typing in our discussion? @keean wrote:
RTTI is bad because the programmer can access it and use it as a hack for polymorphism (c.f. the mess that is the Go libraries). Whereas what we really need is static polymorphism, with dynamic polymorphism at runtime only where it must be. But are you conflating runtime polymorphism with RTTI? Existential types (aka Rust’s trait objects) don’t erase the runtime information about types, because it’s required for runtime dispatch. Yet the programmer has no access to the RTTI.
That presumes that modules aren’t first-class values:
If they are values then you can’t know until runtime what the types of the abstract types are. So then you must use runtime dispatch.
We discussed this in the past: use a cryptographic hash to have decentralized creation of a universal tag. Again I think you’re conflating access to RTTI (evil) with type erasure (good). There’s a middle ground: RTTI used only for runtime dispatch is not evil, when the programmer can’t access that RTTI. As for Russell’s paradox, it’s omnipresent in many different facets of programming and there’s absolutely nothing you can do to solve it. If you could solve it, then our universe would be provably bounded. Life is necessarily imperfect, else we couldn’t exist. @sighoya wrote:
This is related to why we can’t have unbounded/open ADTs and use them to select implementation of typeclass instances, because then we can’t prove anything is ever disjoint. Monomorphism and the Expression Problem are in tension with each other, abstractly because of Russell’s paradox. @keean wrote:
ftfy. Associated types are a typeclass concept. Abstract types are a module concept. Afaik, the paper Modular Type Classes employs abstract types to emulate the associated types of typeclasses.
It seems you argued above that the type system can track the differences and monomorphise (in some cases) and I agreed except I pointed out that when the modules are first-class values that get passed around, then the instances of the signature can’t be known statically.
What does that sentence mean? Do you mean that not everything can be monomorphised?
Typeclass bounds at call sites also throw away the information about the types, but the difference is that the static information is thrown away at the call site and not at the construction of a module instance with a functor. In the case of typeclass bounds of callback functions, the body of the function is statically unknowable. Since we know we can emulate typeclasses with modules per Modular Type Classes, we know that not all uses of modules can be monomorphised. The key is really referential transparency. When we/the compiler know the code isn’t overwriting (generic references to) values with different types (which is what causes the requirement for the value restriction!), the compiler can statically trace the type of the instance of a module to its use in a function and thus monomorphise. That is why C++ and other non-pure languages can’t do it. All of us discussing here know that PL design has different axes that are highly interrelated. We must analyse holistically over all axes simultaneously in order to explore the design space.
The key point of your comment is that when the code is no longer pure, then the compiler (nor the human) can reason about it.
I wish we’d write abstract types instead of dependent types; dependent types seems to be a broader term. Yeah, modules need abstract types as opposed to type parameters (because per prior discussion they otherwise lose encapsulation and can’t be composed in partial orders without explicit boilerplate, i.e. must always know all the instance types when composing), and if you don’t have referential transparency, then you probably cannot monomorphise them. But referential transparency really is about top-down dependency injection. So in some cases we can top-down inject the types and monomorphise, but in other cases the top-down imperative layer will lose static tracking of the types and thus we must punt to runtime polymorphism. Per the link above, what I’m not clear on now is whether we really need both modules/typeclasses and algebraic effects? Seems like maybe all modular genericity can be modeled as… I would appreciate you guys checking my logic on this. @sighoya wrote:
No that is modularity. Multiple implementations of the same signature. (Also multiple signatures for the same implementation). |
@shelby3 wrote
@shelby3 wrote
Valid point, and I think you are right. What I presented was a refinement type, which correlates with dependent types but is not the same. After further investigation, languages which support Variant Types already partially implement dependent types. The question is: why not simply use the Variant Type/Union Type or a Sum Type (Maybe, Either) to represent varieties?

concat :: Vec a n -> Vec a m -> Vec a (m+n)
fun :: Vec a (x**2 - y**4 + z**4 - z) -> (Vec a x, Vec a y, Vec a z)

Because the length of a Vector is integral, you can construct Diophantine problems, which are undecidable (see the length-indexed vector sketch at the end of this comment). @keean Again, I do not see clearly why you need dependent types instead of simple runtime polymorphism (Sum Type, Variant Type). @shelby3 wrote
The problem with the pokemon example is that Haskell is missing a Union Type with the mechanism to typecheck the value and to monomorphize over it. They do this manually in this example, which is bad practice, especially when the associated type Winner is of kind *, meaning I could instantiate it not only with the pokemon or foe type, but also with any other type. @shelby3 wrote
Totally Agree.
@shelby3 wrote
Hmm... Yes. I formulated it badly. What I meant is that the user cannot see how the signature changes at design time and compile time (at least not always). Therefore I presented a possible solution in my previous post which should overcome this problem, at least in my mind. |
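A hedged OCaml sketch of the Vec signature quoted above, to make the type-level arithmetic concrete; naturals are phantom types, and addition must be encoded as an inductive witness, which is exactly where richer arithmetic (like the polynomial in fun) runs into Diophantine undecidability:

type z
type 'n s

(* length-indexed vectors *)
type ('a, 'n) vec =
  | Nil : ('a, z) vec
  | Cons : 'a * ('a, 'n) vec -> ('a, 'n s) vec

(* ('m, 'n, 'p) plus is a witness that m + n = p *)
type ('m, 'n, 'p) plus =
  | PZ : (z, 'n, 'n) plus
  | PS : ('m, 'n, 'p) plus -> ('m s, 'n, 'p s) plus

let rec concat : type m n p. (m, n, p) plus -> ('a, m) vec -> ('a, n) vec -> ('a, p) vec =
  fun w xs ys ->
    match w, xs with
    | PZ, Nil -> ys
    | PS w', Cons (x, xs') -> Cons (x, concat w' xs' ys)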
Dependent typing breaks some desirable type system properties like decidability. If the type system becomes a fully usable Turing-complete programming language, then why have two different Turing-complete programming languages? However, it might be less problematic than the alternatives. |
This is a complex question to answer, but it is along the lines of "I cannot clearly see why everyone cannot program in Lisp, as it is Turing complete and you can express any program in it." |
@sighoya The barber paradox was used by Russell as a concrete example of his paradox. If we now take these to be sets, the set of people who do not shave themselves is paradoxical because of the barber. If types are names for sets of values, then we can see how this applies. Further, we can have kinds whose values are types. In type theory we would call these "universes", and universe zero would be types that contain values. The universe of a type is the highest universe of any member of the set, plus one. So kinds have a universe number of one. The ordering of universes prevents the Russell paradox, because no type can be defined in terms of a type in the same universe. |
@keean wrote:
But is it not the same case as before? Why not simply abort the construction of a to-be-shaved type set with an exception? This is the advantage of decision procedures in a constructive world, where we don't have to state that it is or is not, and can instead favor a third outcome.
I don't like constructors defining types, as a type should be defined by itself, not by a helper: explicitism vs. implicitism. I see similar examples very often:

class A
{
    int i;
    A(int i)
    {
        if (i < 0)
            throw new Exception();
        this.i = i;
    }
}

What is the type of A as a set? What happens here is that the constructor mutates the record type A to include the constraint on the constructor input as well. If I implement f(Int,Int): D, does D = x | g(Int) need to be changed? |
I have a slightly different way of visualizing and explaining it that may help some others get a different perspective on the same issue. AFAICT, @keean and I are in agreement. @keean wrote:
Note I also mentioned negation:
Unboundedness by itself isn’t ever paradoxical, although it could be uncountable if the domain is not finite.
Right. I still don’t think it is a paradox. Negation (as Socrates pointed out) assumes a total order (or a finite domain). Unboundedness presumes a total order doesn’t exist1. Thus Russell’s paradox is really just an inconsistent set of requirements. It’s asking for something and its antithesis simultaneously. I like this explanation of Russell’s paradox:
An example of a set which contains itself is the set of all things which include themselves in any way. For example, a recursive function (i.e. it includes a call to itself). So the set of things which include themselves in any way also includes itself. Another example is the set of those which are or contain those that shave others. The set would then contain itself. We can’t argue that the set itself doesn’t shave others, because we have changed the inclusion rule from @keean’s “those which shave others” to “which are or contain those that shave others”. If we then add that it can only include sets that are not members of themselves, we have the first contradiction (as quoted above), because the set contains itself by the first rule but is not allowed to by the second rule. So we have specified conflicting rules. But it seems obvious that if we say “can contain” and “can’t contain” in the same criteria then we have a conflict and not a paradox.

The paradox arises if we write the first rule as non-self-containing, such as @keean’s “those which shave others”, so we can reason that the set itself doesn’t shave others and thus it is not a set which contains itself. But by the second rule (and as quoted above, a contradiction) it does contain itself because it doesn’t contain itself. IOW, the second rule is always a paradox, because those sets which are not members of themselves are thus members of the rule and members of themselves. The second rule is contradictory with itself.

So what this demonstrates is that universal negation is a total order and can’t exist. Total orders do not exist. A related discussion about Gödel, total orders not existing, and the limitations of math is worth injecting here. Make sure you click the “Load more…” seen on the thread to display the entire discussion. The outcome of this is that we can’t conclude anything about the totality of the universe, because it is asymptotic. If we could have total omniscience then we wouldn’t exist, because all uncertainty would be lost. And thus entropy couldn’t progress inexorably to maximize uncertainty. Time would be reversible and indistinguishable. Everything would be static. So the perception of a dynamic universe requires the lack of omniscience. Does that mean we have free will? Well, yes: in a partially ordered perception we have free will, but universally our free will is just noise and not ordered.
Of course I mean spacetime. Unboundedness is an informational dimension of our universe. And again the point is that negation presumes a total order. How do you prove there will never be a neon colored swan? So the problem with the conflicting rules is that sets that include themselves are open to unbounded inclusion (they never terminate unless we only consider a finite subset of our existence as the domain), but negation presumes bounded exclusion. Russell’s “paradox” is resolved, without restricting to a finite domain, in a way analogous to how FCP solves the unsoundness of HRT: by creating a hierarchy of types, wrapping the unboundedness in an explicitly enumerated domain of nominal constructors, which makes it impossible for inclusion to recurse unbounded over the same domain as itself.
Do you mean to convey that negation of a kind that can’t contain itself (by definition, since if it does contain itself then it is its kind + 1, i.e. we add a nominal wrapper when we recurse) can never conflict, because this is analogous to the resolution of Russell’s “paradox” mentioned above? 1 This is why, for example, due to the FLP theorem, blockchains and Byzantine fault tolerant consensus don’t work (i.e. don’t have a total ordering for their domain) without bounded network synchrony. That said, total ordering is partial ordering in the unbounded network synchrony context, which is why BFT is not unassailable. Analogously, as we already discussed, this is why type systems aren’t unassailable. @sighoya wrote:
As I wrote in my prior post, I don’t like FCP-like nominal constructors as a means to avoid unsound recursion, because they cause a lot of additional boilerplate. But sometimes that’s the only way to get a sound type system. But you’re pointing out something different about constructors. You don’t like it when the types of the program don’t accurately represent the dependent types (i.e. the allowed values that populate the declared types). IOW, you don’t like runtime values that escape the invariants that the type checker enforces. Unfortunately, the only way to get what you want is fully dependent typing with something like Coq or Epigram. And those are not suitable for general purpose programming: very tedious and inflexible. Type systems really can never be complete. Gödel’s incompleteness theorem tells us that a set of computable axioms can never be both consistent and total. And it’s conceptually the same concept I explained above in this post. |
@keean Instead, I would usually prefer more compile-time reflection (CR), which D offers in limited form via Traits, and which can be used, for instance, to read out additional record fields of a record-polymorphic type. I also don't like adding or removing types at evaluation time (runtime or compile time); this should be prohibited in any case. |
@sighoya seems we are in agreement about this :-) I am currently thinking we can have a special type-constructor for boxing. Not sure what a good name for it is though. |
I wrote:
So strong modules are modules parametrised on interface signatures. This means we can write code that operates on specific data types, functions, and typeclasses contained in a signature without needing to know the implementations of those signatures. Then some other code can import that signature-parametrised module and select a specific implementation for that signature, thus creating a concrete instance of the module. This has some overlap with the functionality of typeclasses (Named Instances for Haskell Type Classes, pg. 4). Typeclasses enable signature-parametrised functions which operate on the typeclasses in the signature. Typeclasses are parametrised by data types and contain functions. Essentially both can accomplish signature-parametrised modularity if we remove the global canonicity of typeclasses, but typeclasses are more granular, at the level of a function rather than a module of functions, so each function definition’s implicit transitive interface bounds don’t all have to be explicitly lifted up to module parameters (thus more degrees-of-freedom but noisier function declarations), which is claimed, at least in Haskell, to sometimes make type inference more problematic. http://blog.ezyang.com/2014/08/whats-a-module-system-good-for-anyway/ In an up-thread post, I had also linked to a Rust forum discussion of claimed advantages for parametrised modules:
That Rust OP admits the parametrised modularity can be achieved with typeclasses:
My opinion is we should move forward first with only typeclasses and my variant of the proposal to forsake global canonicity, and see if we really encounter use cases that require signature-parametrised mixin modules. If we can avoid adding an overlapping paradigm for modularity, that might be best. Either way, we’ll still need non-parametrised (aka “weak”) modules for encapsulation and reuse. @jdegoes argued against fine-grained modularity. EDIT: I have reverted my typeclass modularity proposal back to the “subtypeclasses” concept but with a module name prefix on instance selection. |
@shelby3 wrote:
As you don't like multiple ways to solve a problem, why do you need signatures on top of typeclasses? Typeclasses should also be able to constrain module parameters, so why not use them? |
@sighoya Type-classes provide specialisation (as distinct from generalisation, which is provided by parametric types). Modules generally provide separate compilation, data-hiding and specialisation. For this language it would seem best to limit modules to separate compilation and data-hiding. I guess the issue here is that Haskell has these kinds of modules, and there are calls for it to implement something more full-featured like ML modules. The question is whether these calls are justified, or whether they are from people who don't understand how to implement what they want with typeclasses. |
This is a good question, but it was not my intention to question whether a module system is worthwhile for you. The point is that @shelby3 wrote that module parametrization should be constrained by signatures. |
One thing I don't understand with typeclasses is the context of highly interactive applications, where the function to be executed on an object depends on global state exterior to the object. For example, in Photoshop the effect of a click depends on which tool and option is selected. With the OO paradigm it's easy to have a global class and instantiate the right class/module when the global state changes, and runtime polymorphism can achieve this. With typeclasses as they are done in Haskell, I have a harder time seeing how to achieve the same thing. I can see how typeclasses can be useful for programs designed like linear algebra, always having the same function applied to an input, and designed as a rather monolithic thing. Otherwise typeclasses would need to be sort of 'first class', with a way to select the right set of functions to use depending on runtime state, which I don't see how would be possible if typeclass functions are monomorphized at compile time. With the dictionary approach it would seem more possible to load and use specific typeclass functions depending on runtime state (see the sketch below). |
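A hedged sketch of the shape described above, using OCaml first-class modules as runtime-selected dictionaries (module and function names are illustrative); this is dictionary passing rather than compile-time monomorphised typeclasses, which is exactly the trade-off in question:

module type TOOL = sig
  val click : int * int -> unit
end

module Brush : TOOL = struct
  let click (x, y) = Printf.printf "brush at (%d,%d)\n" x y
end

module Eraser : TOOL = struct
  let click (x, y) = Printf.printf "erase at (%d,%d)\n" x y
end

(* the global state: which tool is currently selected *)
let current_tool : (module TOOL) ref = ref (module Brush : TOOL)

(* every click dispatches through whatever module is selected right now *)
let on_click pos =
  let module T = (val !current_tool) in
  T.click pos

let () =
  on_click (1, 2);
  current_tool := (module Eraser : TOOL);
  on_click (3, 4)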
@sighoya wrote:
@keean he means typeclass bounds instead of signature bounds on modules. We would still retain typeclass bounds on functions also. That’s a reasonable question, because my conjecture is that parametrised modules provide less fine-grained grouping of parametric generalization, which provides some benefits such as less noisy repetition of typeclass bounds on function declarations. With monomorphisation, typeclass bounds on modules are equivalent to signature instantiation of modules. I think you are correct. Application of typeclass bounds to instances doesn’t require function application. Unless @keean objects, I will incorporate this into the proposed Zer0 grammar I’m close to completing. Note @keean had mentioned using typeclasses as modules before. Why didn’t Backpack think of this? Ah, probably because of Haskell’s canonicity, which I have proposed to forsake. Canonicity is why I think they were forced to separate the module system into a different concept of signatures. Remember again that I recently identified “strong modularity” combined with existential quantification as the source of anti-modularity unsoundness, so my proposal to forsake canonicity has to avoid that combination. Haskell doesn’t have the non-disjoint, structural unions which I have proposed; unions may suffice and be superior for most of the use cases where we would otherwise use existential quantification (and unions are bound to typeclass bounds at application to a function call site, not in the union object itself as is the case for existential quantification, so it side-steps the unsoundness with strong modularity). |
@NodixBlockchain typeclasses don’t force immutability. If you want to change the pointer to the callback function, AFAICT there’s nothing in typeclasses preventing that. Once again you are raising an issue in the wrong thread. This is the Modularity thread. You should have asked that question in, for example, the Why Typeclasses issues thread. Please try not to thread-jack and take threads off topic. I want to be able to find relevant discussion. I would be more than happy to discuss this topic with you if you raise it in a thread that is more related to the topic you want to discuss. |
I have reverted my typeclass modularity proposal back to the “subtypeclasses” concept but with a module name prefix on instance selection. That linked post explains that our modularity ideas for typeclasses have an issue with existential quantification, i.e. runtime polymorphism other than bounded-union dynamic polymorphism. |
The reasons why associated (aka abstract) types of typeclasses can’t be replaced by HKT and multi-parameter typeclasses:
|
Important post about default typeclass instances, overlapping problems, and alternative solutions that apply to modules. |
On further thought, that clumsiness is due to the incorrect design of Scala’s (arguably Martin Odersky’s) corner-case-laden “solution”, which was affectionately referred to as The Longest Suicide Note Ever Written. A better solution seems to be having … So then we really don’t necessarily require associated types for handling this case. We could define strings to be … |
On further thought (and I realize Microsoft broke my ability to edit my prior comment post, or @keean removed my privilege to do so?), we don’t have to incur the inefficiency of forcing … |
@shelby3 no changes in permissions from me, but maybe the defaults have changed? |
I will try accessing from the laptop with newer versions of all browsers including IE, and report if I have the same problem. |
@shelby3 IE is not a good browser, even Microsoft realised this and tried to fix in Edge. Edge is better if you have windows 10. I used to really like Chrome, but these days I tend to use Firefox. Why not install Firefox? |
I meant Edge (not IE) on my 2017 model ASUS laptop running Windows 10. :shudder: I hate Microsoft so much I do not even bother to remember the names of any of their new sh8t. |
Late Binding of Interface vs. Modularity?

I wrote:
I wrote:
I’ll re-summarize the issues in one place to make this easier to digest and come back to.
Note AFAICT Scala 3 is ostensibly not enabling said partial orders correctly.

Type Inference Issues

Yet another blog, also written by the co-developer of the Haskell modularity project Backpack, What’s a module system good for anyway?, discusses the issues with abstracting over, for example, the numerous string types in Haskell (e.g. …
[…]
[…]
I discussed this with @keean and he seemed to concur with my point that Haskell’s type classes don’t have to be monolithic. Employ class inheritance to extend only those finely grained type classes required by each separate use case and automatically inherit the extant instance implementations, e.g.:

class Concat a b c where
  concat :: a -> b -> c

class (Concat a a a, Ord a) => ConcatOrd a
instance (Concat a a a, Ord a) => ConcatOrd a

Thus it appears that the claimed advantage of ML-like signatures and functors distills to some perceived advantage w.r.t. type inference. Type inference ambiguity is the result of underconstrained type parametrisation, which is why in my Haskell example code above I set the … So afaics (which was @keean’s point), what this distills to is that if a programmer wants to (or unwittingly does) write underconstrained polymorphism, then the capability to do so exists in these PLs. I really don’t understand how those I quoted above can claim that an ML-like module system won’t also suffer type inference ambiguities if the programmer attempts to write underconstrained polymorphic code. I understand that ML functors enable selective choice of interface(s) from signatures, but afaics, as I demonstrated by example code above, so does type class inheritance. Those ML signatures and resultant ML interfaces can still be underconstrained polymorphic if the programmer so chooses. I understand (as I explained in the previous section of this comment post) that ML binds those interfaces to the object at time of construction, unlike type classes which delay binding interfaces to objects until the call site of a function which requires such interfaces. Thus I conclude that the cited blog is a red herring. Note that experience often trumps theory in the PL realm, so perhaps I am missing something which hasn’t been clarified sufficiently for me (and @keean) in the cited blog, which is why I discussed it with @keean, given he has much more experience with Haskell and ML-like languages than my nearly nil experience with those PLs. |
Prior discussion:
#14 (comment)
#33 (comment)
#8 (comment)
Please kindly move modularity discussion here and not pollute issue threads which are specialized on their thread title (speaking to myself also, not to any specific person), so that we can have modularity discussion organized in a thread of its own.