Think about type class/implicits encoding #5
Thanks for putting this issue on the radar! One question: in your post, would it be acceptable to force the user to explicitly declare combinations of type classes that share an ancestor? E.g.
It is somewhat tedious to write all these forwarder methods, but it's a one-off cost. Moreover, there are some ideas for language features to make this less tedious, e.g. using a hypothetical
The question is whether, in principle, the discipline of closing all inheritance triangles that way is feasible. Given Cats, how many combiners would we need? What about foreseeable extensions to Cats? |
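To make the combiner idea concrete, here is a minimal sketch (the trait names and the simplified `toList` signature are illustrative, not from the thread): two type classes share a `Functor` ancestor, and the user explicitly closes the triangle with one forwarding instance, so a request for the shared parent has a single candidate.

```scala
trait Functor[F[_]] { def map[A, B](fa: F[A])(f: A => B): F[B] }
trait Applicative[F[_]] extends Functor[F] { def pure[A](a: A): F[A] }
trait Traverse[F[_]] extends Functor[F] {
  def toList[A](fa: F[A]): List[A] // stand-in for the real Traverse signature
}

// The explicit "combiner" closing the inheritance triangle. With only this
// instance in scope, summoning Functor[Option], Applicative[Option], or
// Traverse[Option] each resolves unambiguously.
trait ApplicativeTraverse[F[_]] extends Applicative[F] with Traverse[F]

implicit val optionInstance: ApplicativeTraverse[Option] =
  new ApplicativeTraverse[Option] {
    def map[A, B](fa: Option[A])(f: A => B): Option[B] = fa.map(f)
    def pure[A](a: A): Option[A] = Some(a)
    def toList[A](fa: Option[A]): List[A] = fa.toList
  }
```

The tedium Odersky mentions is visible even here: every method of every parent must be forwarded (or implemented) in the combiner.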
@odersky In the Cats scenario, I don't think that many combiners are needed, at least naively. But more significantly, I'm not certain that your proposal entirely resolves the problem. Consider the classic issue with MTL-style monad transformer typeclasses (which are, as I mentioned at Scala Days, basically the current state of the art in pleasant stackable effects):

```scala
def foo[F[_]](implicit MS: MonadState[F, String], MO: MonadOption[F]): F[Int] =
  Monad[F].pure(42)
```

So the problem here is figuring out where we get the `Monad[F]` instance. My proposal (which @adelbertc linked) resolves this by grouping instances with "compatible indirect derivations" into "coherence domains". Its primary downside is that it doesn't provide any actual checking for this behavior, so it's possible to create scenarios where the runtime behavior is actually undefined, simply by falsely declaring that two incompatible instances are compatible. I'm curious as to your thoughts on this proposal.

To be clear on my opinion here… I don't much care what approach we take, so long as it is scalable and addresses the issue in some way. My primary concern is that we don't go the "full Haskell" route and simply forbid local instances, since I'm really not a fan of the consequences of that design decision. But given the rest of Scala's design space, that doesn't seem to be a particularly plausible design route anyway. :-) |
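The ambiguity being described can be sketched in current Scala (simplified, hypothetical signatures; the real MTL classes have more methods). The last line is intentionally a compile error, showing why the example above is problematic:

```scala
// Simplified sketch of the MTL-style diamond.
trait Monad[F[_]] { def pure[A](a: A): F[A] }
trait MonadState[F[_], S] extends Monad[F] { def get: F[S] }
trait MonadOption[F[_]] extends Monad[F] { def none[A]: F[A] }

object Monad { def apply[F[_]](implicit F: Monad[F]): Monad[F] = F }

// Both MS and MO are subtypes of Monad[F], so the summon below has two
// candidates, and the compiler has no way to know whether they describe
// the same underlying monad.
def foo[F[_]](implicit MS: MonadState[F, String], MO: MonadOption[F]): F[Int] =
  Monad[F].pure(42) // error: ambiguous implicit values: both MS and MO match
```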
According to my count, it's just I am skeptical about the monad transformer typeclass approach for stackable effects, at least as far as dotty is concerned. I believe we will have much better ways to model most effects as implicit capabilities. There are a few effects that influence control flow which cannot be modelled as capabilities. But these should be few, and therefore manual combiners may again be a feasible option. |
In @adelbertc's example article, you're correct that only
I'm skeptical about monad transformers as well, and really so is the entire Haskell community. This is why MTL exists in the first place. As I mentioned at Scala Days, your description of capability-based effects in Dotty sounds almost exactly like what exists in MTL today. The type signature I gave is an example of this. Rather than reading that type signature in terms of a stack of effects, just look at

Regardless of what approach Dotty uses, I think we're going to have to solve this problem. Think about my example again in highly abstract terms. Given two capabilities A and B, where A = A' ∪ C and B = B' ∪ C, and some expression which requires capability C, how do you get C unambiguously?

MTL provides a great example of this, both because it's a practical implementation of a capability-based stackable effect system via typeclasses, and because it demonstrates that the control-flow-affecting effects (yay, English…) are actually quite numerous. |
Some scattered thoughts I have:
I wonder if we can have a subset(type) relationship across these domains, if that makes sense or is even a thing.
While those are probably the most common ones, my concern is the general principle that the naive encoding of type classes via implicits/traits + subtyping does not work for type classes as we know them. Given that encoding, any given type class hierarchy must be linear: it cannot branch and cannot share a common superclass.
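The "cannot branch" restriction can be seen with Cats-like names (simplified, hypothetical signatures). The moment a parent class has two subclasses demanded in the same context, the parent itself becomes unsummonable; the marked line is intentionally a compile error:

```scala
trait Eq[A] { def eqv(x: A, y: A): Boolean }
trait Order[A] extends Eq[A] { def compare(x: A, y: A): Int }
trait Hash[A] extends Eq[A] { def hash(a: A): Int }

// Eq branches into Order and Hash. Any context requiring both branches
// can no longer recover the shared parent:
def distinctSorted[A](xs: List[A])(implicit O: Order[A], H: Hash[A]): List[A] = {
  val E = implicitly[Eq[A]] // error: ambiguous, both O and H qualify
  ???
}
```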
Yep, |
These are good points.
We agree it should not matter how you get C. So, yes, it seems promising to convey that fact somehow to the type checker. If I take your example literally, it would seem more natural to annotate the type or type constructor (C in this case), which should express something like: "for each type instance of C, the behavior of all implicit values of that type instance is the same". We could get individual coherence domains by functorizing, i.e. making C into an inner class, so that type instances are path-dependent types that can each be implemented individually.

By contrast, the coherence domains you propose seem to attach uniqueness to the implicit instances instead of to the type they implement. That could also work, although I am not keen on some of the syntactic details. Also, I am dubious about using types as tags like that. But that's just a gut reaction, which might change with exposure. |
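A sketch of the functorizing idea just described, with hypothetical names: the type class lives inside a `Domain` class, so instances from different domains have unrelated path-dependent types and can coexist without colliding, while coherence could plausibly be assumed within a single domain.

```scala
// Each Domain carries its own Monoid type.
class Domain {
  trait Monoid[A] {
    def empty: A
    def combine(x: A, y: A): A
  }
}

object Additive extends Domain
object Multiplicative extends Domain

// Additive.Monoid[Int] and Multiplicative.Monoid[Int] are unrelated
// types, so both instances can be implicit simultaneously.
implicit val addInt: Additive.Monoid[Int] = new Additive.Monoid[Int] {
  def empty = 0
  def combine(x: Int, y: Int) = x + y
}
implicit val mulInt: Multiplicative.Monoid[Int] = new Multiplicative.Monoid[Int] {
  def empty = 1
  def combine(x: Int, y: Int) = x * y
}

// A caller commits to a domain in its signature.
def fold[A](xs: List[A])(implicit M: Additive.Monoid[A]): A =
  xs.foldLeft(M.empty)(M.combine)
```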
At first glance, this sounds very similar to an idea that @milessabin has been exploring to constructively encode coherence via type intersection on inner types.
Phantom type tags are used a lot in type-level programming in Scala (and in Haskell), simply because Scala's type system is ultimately nominal rather than structural. Polymorphism obviously complicates this, but it's a pretty well-used idiom, which is why I reached for it. I didn't want to introduce a secondary but non-orthogonal mechanism for uniqueness tagging when named types already provide that mechanism in a relatively robust fashion.

The other advantage of unifying coherence tags with types is that it allows the computation of tags as a consequence of type resolution. The big problem is that this obviously opens up a very interesting can of worms and a secondary cycle in type resolution (where type computation influences coherence domains, which in turn influence implicit resolution, which in turn influences type computation). It wouldn't be the first such cycle, but adding another is still enough to give me pause.

I would vastly prefer a mechanism for constructing coherent definitions rather than declaring coherent instances. It would be less flexible, but more checkable, and would likely introduce less baggage. The conceptual problem I see with working at this from the definition site is that it feels fundamentally incompatible with local instances, unless we can encode it in the dependent type.

Also, in case we go with annotating type definitions, what would be a good way to do it? The most direct one would be a modifier or annotation, but what should its name be? |
Just to argue the point :-), in fact DOT has shown that Scala's type system is ultimately structural, where nominal types can be modelled as abstract types.
Can you give an example? |
Well at that point the tag in the nominal type comes out of the uniqueness of the path-dependent instantiation, if I understand correctly. Either way, "nominative with heavy structural features" or "structural with nominal encoding", you still end up with tags somewhere. :-)
If you're bringing in a generic |
It would have to be that
(Not sure about the term |
Oh I see what you mean. Still an assertion rather than a construction. Doesn't this effectively selectively disable local instances though? Because what you're saying is that in any case where there are multiple |
Also, just to play devil's advocate, there are numerous instances where this wouldn't be as nice as it could be. Consider the classic example of |
Can you flesh that out a bit for me? I don't see yet what you are getting at. |
But it isn't difficult to imagine |
Thanks for explaining. How would you handle the |
Under normal circumstances, you would just allow the incoherence. If you define two instances for

```scala
// presumably MonoidFoo[a] <: Monoid[a] and MonoidBar[a] <: Monoid[a]
def example[A](implicit[_] AF: MonoidFoo[A], AB: MonoidBar[A]) = Monoid[A].zero
```

(facetious examples here with

Since we've declared (via the

```scala
// just some arbitrary type tag
class U

implicit[U] val mfb: MonoidFoo[Boolean] = ...
implicit[U] val mbb: MonoidBar[Boolean] = ...

example[Boolean] // compiles and runs
```

This is similar to your idea of declaring

To be clear, in the above example, since I elided the implementations, |
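For contrast with the hypothetical `implicit[U]` tagging syntax above, here is what current Scala forces instead: the two incompatible instances must be disambiguated with distinct wrapper types rather than tags on the instances (names like `Sum`/`Prod` are illustrative).

```scala
// Distinct wrapper types stand in for coherence tags.
final case class Sum(value: Int) extends AnyVal
final case class Prod(value: Int) extends AnyVal

trait Monoid[A] { def zero: A; def append(x: A, y: A): A }

implicit val sumMonoid: Monoid[Sum] = new Monoid[Sum] {
  def zero = Sum(0)
  def append(x: Sum, y: Sum) = Sum(x.value + y.value)
}
implicit val prodMonoid: Monoid[Prod] = new Monoid[Prod] {
  def zero = Prod(1)
  def append(x: Prod, y: Prod) = Prod(x.value * y.value)
}

def fold[A](xs: List[A])(implicit M: Monoid[A]): A =
  xs.foldLeft(M.zero)(M.append)

// fold(List(Sum(1), Sum(2), Sum(3)))  == Sum(6)
// fold(List(Prod(2), Prod(3)))        == Prod(6)
```

The cost of this encoding is the wrapping and unwrapping at every use site, which is part of what the tagged-instance proposal tries to avoid.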
Here's a more worked out example how one could express
would force all instances of
But I can declare a coherent subtrait:
Here are some instances:
And here's the use site:
We could have several coherency domains by using a class instead of an object for

One nice aspect of attaching coherence to types is that it works well with the phantom types proposal (see scala/scala3#1408). We plan to model effect capabilities as phantom types, and there will be several other usages as well (e.g. the |
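The elided code above isn't recoverable, but the "class instead of an object" variant can be sketched in current Scala (hypothetical names): each value of the class is its own coherence domain, and path-dependent types keep instances from different domains apart.

```scala
// Each `new Coherent` value is a separate coherence domain.
class Coherent {
  trait Ord[T] { def compare(x: T, y: T): Int }
}

val domainA = new Coherent
val domainB = new Coherent

// domainA.Ord[Int] and domainB.Ord[Int] are unrelated types, so an
// instance in one domain can never be confused with one in another.
implicit val intOrdA: domainA.Ord[Int] =
  new domainA.Ord[Int] { def compare(x: Int, y: Int) = x - y }

def max[T](x: T, y: T)(implicit O: domainA.Ord[T]): T =
  if (O.compare(x, y) >= 0) x else y
```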
I have added #2040, which reflects the current state of discussion of phantom types. |
Just my 2 cents on this topic… Then I could use that to find a behavior/typeclass/law associated with a structure, but also scope it (to choose the Monoid of Numeric, for example) and imagine other things like documentation generation, comparisons of structure laws/typeclasses, and anything else you can imagine… This was really interesting and didn't have the classic limitations of typeclass hierarchies by inheritance, except for the runtime cost of manipulating those "lists of laws"… |
Well, it's not another use of
That looks pretty interesting, actually. What you're doing there is conceptually newtyping the more general, potentially-incoherent

Encoding variable coherence domains by using a class rather than an object, and relying on the path-dependence to disambiguate, is pretty cool, and subjectively feels like it fits well with Scala's core calculus. This really is effectively the same thing as my

One concern I have is that if we push coherence to the declaration site in the form of an assertion rather than a construction, library authors like Cats and Scalaz are just going to mark everything as |
I think the language could also benefit from a mechanism to (selectively) disable any form of implicit ambiguity in user code. Non-coherent implicits are rather fragile in their current state because of the many ways of silently breaking things: by adding/deleting implicit arguments, adding/deleting imports, or simply moving things around, one can change implicit resolution in a way that affects run-time semantics. The solution I have in mind is a `NonCoherent` annotation/trait that would tell the compiler to emit an error if any of the implicit precedence rules affects the resolution of instances for annotated types. |
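The fragility being described is easy to reproduce (illustrative names; `Show` here is a hypothetical type class, not a library one). Which instance wins depends entirely on what is imported and where, so a scope change silently changes run-time output:

```scala
trait Show[A] { def show(a: A): String }

object Decimal {
  implicit val intShow: Show[Int] =
    new Show[Int] { def show(a: Int) = a.toString }
}
object Hex {
  implicit val intShow: Show[Int] =
    new Show[Int] { def show(a: Int) = "0x" + a.toHexString }
}

def render[A](a: A)(implicit S: Show[A]): String = S.show(a)

object Demo {
  import Decimal._
  val a = render(255) // "255"

  def inner: String = {
    import Hex._ // nearer scope wins: resolution silently switches
    render(255)  // "0xff"
  }
}
```

Importing both instances at the same nesting level instead produces an ambiguity error; the proposed `NonCoherent` marker would turn the silent switch into an error as well.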
Maybe one way to improve the situation is if we put certain fundamental type classes which naturally admit several different implementations in the standard library. |
I think that would be a natural solution, as well as have other benefits. It would certainly be nice to have a single type which represents |
In my humble opinion, it would be very nice to have all the fundamental type classes in the standard library, not only those which admit several different implementations. Another "dream" of mine is a collection library expressed in terms of instances of those type classes, instead of as an OO hierarchy as it is now. However, I would still be fine with using an external lib for it. |
@lambdista cats-core is about 3.4 MB and cats-kernel is about 4.6 MB, and this is a library in active development that can't yet guarantee backwards source compatibility. One can argue that not all of those type classes are needed, but many people disagree, as most of them are fundamental. And that's a lot to add to the standard library, which is then set in stone unless another round of deprecations happens in a couple of years, after 2 or 3 major Scala versions. And no standard library that I know of has aged well, unless it broke people's code and expectations. |
Naively asked, after a long evening of reading, to me it is striking that
Could the whole scoped-coherence approach not be spared if a Monoid simply had to be defined more explicitly, including the combine operation and the identity element?
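The question above amounts to passing the dictionary as a plain value, a sketch of which (with illustrative names) looks like this. There is then nothing for the compiler to resolve, and so nothing to be incoherent about; the trade-off is that every call site must thread the instance by hand.

```scala
// A monoid as an explicit first-class value, not an implicit.
final case class Monoid[A](identity: A, combine: (A, A) => A)

val sumInt  = Monoid[Int](0, _ + _)
val prodInt = Monoid[Int](1, _ * _)

def fold[A](xs: List[A], M: Monoid[A]): A =
  xs.foldLeft(M.identity)(M.combine)

// fold(List(1, 2, 3), sumInt)  == 6
// fold(List(1, 2, 3), prodInt) == 6
```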
Type classes are a widely used tool in Scala, both in the standard library (`scala.math.Ordering`) and in popular libraries like Cats, Scalaz, Spire, FS2, etc. The obvious encoding of type classes via implicits has shortcomings, which I wrote about at length here. There are related issue tickets in Cats and Scalaz.
@djspiewak has written up a proposal for a type class encoding that requires changes to the current Scala language.
I don't have any concrete ideas atm besides what Daniel has written about, nor am I necessarily advocating for global coherence like Haskell does, but I would like to see investigations into a better encoding of type classes in a future Scala that would accommodate the use cases above.