Disallow implicit conversions between unrelated types #2060
Disallow implicit conversions, unless one of the following holds:

- language:Scala2 is set
- the conversion is local to a block, or private, or protected
- the conversion is defined in the same compilation unit as its source type or its target type

Quite a few things in the compiler and tests had to be changed to conform to the new rule.
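The proposed rule can be illustrated with a small sketch (hypothetical code, not from this PR; all names are invented):

```scala
package mylib

class Meters(val value: Double)

object Meters {
  // Allowed under the proposed rule: the conversion is defined in the
  // same compilation unit as its target type, Meters.
  implicit def doubleToMeters(d: Double): Meters = new Meters(d)
}

object Elsewhere {
  // Would be rejected under the rule: neither String nor Int is defined
  // in this compilation unit, and the conversion is neither local nor
  // private/protected.
  // implicit def stringToInt(s: String): Int = s.length

  // Still allowed: a conversion local to a block (or private, or
  // protected) escapes the restriction.
  def describe(d: Double): String = {
    implicit def toMeters(d: Double): Meters = new Meters(d)
    val m: Meters = d // uses the local conversion
    s"${m.value} m"
  }
}
```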
I am currently not sure under what conditions we should still allow such conversions. For the moment it's under Scala2 mode, which means: "in native dotty, never". But that might be too harsh. On the other hand, implicit conversions are way over-used, in our code bases as well as in others. The problem is that they are a temptingly easy and powerful solution. So without strict prohibitions the short-term gain might always win over the long-term complications. And that would be a pity because from what I could see so far, refactoring to avoid such problematic conversions increased code quality in most cases. |
This means that |
That was already the case before: `implicit def`s needed a language import, implicit classes did not. |
Er ... Why!? This will break most of the implicit conversions defined in the Scala.js standard library, notably those defined in This breaking change seems completely unjustified to me. |
@sjrd Can't you package up the conversions with the JS function types as well as with js.Array, js.Dictionary and js.Iterable? |
No, for a number of reasons:
|
We might need an escape hatch then to still allow these conversions in some cases. The escape hatch could be
|
I still don't understand why this change is desirable to begin with. There's no motivation in this PR. The last comment suggests that it's "dangerous", but I fail to see why it's more dangerous than implicit conversions in general. |
@sjrd It's more a trial balloon to see whether we can do it. It's inspired by https://blog.rust-lang.org/2017/03/02/lang-ergonomics.html According to their criteria (which look reasonable to me) unrestricted implicit conversions are way too dangerous. They are completely invisible, can apply anywhere, and can change anything. Would be good to get people's comments whether we should go ahead with this. |
Would adding syntax for typeclass instances like in scalaz / cats be impossible? If so I think it's indeed a bit harsh :) |
How does this affect imported conversions from libraries like https://github.com/shawjef3/Harmony, which provides what I think are conversions between unrelated types? My interpretation is that a user of the library would have to set language:Scala2. |
Many of my libraries (Scalatags and FastParse for example) use the following pattern:

```scala
package generic
class Foo[T](t: T)
```

```scala
object Cake1{
  type Foo = generic.Foo[String]
  implicit def makeFoo(s: String) = new generic.Foo(s)
}
object Cake2{
  type Foo = generic.Foo[Int]
  implicit def makeFoo(s: Int) = new generic.Foo(s)
}
```

```scala
package generic
class Foo[T](t: T)
```

```scala
object Cake{
  class Foo(s: String) extends generic.Foo[String](s)
}
```

Would I be allowed to define `implicit def makeFoo(s: String): Cake.Foo = new Cake.Foo(s)` inside it, or `implicit def makeFoo(s: String): generic.Foo[String] = new Cake.Foo(s)`? In the first case,

```scala
class Foo[T](t: T)
trait FooImplicits[T]{
  implicit def makeFoo(s: T): Foo[T] = realMakeFoo(s)
  def realMakeFoo(s: T): Foo[T]
}
object Cake1 extends FooImplicits[String]{
  type Foo = Foo[String]
  def realMakeFoo(s: String) = new Foo(s)
}
object Cake2 extends FooImplicits[Int]{
  type Foo = Foo[Int]
  def realMakeFoo(s: Int) = new Foo(s)
}
```

If not, why not? If so, is it a good idea to force people to do this? |
re: @jto's question, the pattern we use for typeclass syntax in Cats is encoded by Simulacrum, which is very clearly documented. It would be good to verify that this pattern of usage will still work, in particular the mixin traits that you can use to bundle up a bunch of syntax conversions into a single import. Scalaz's encoding is done by hand but follows this same pattern more or less. |
@tpolecat As far as I can see, Simulacrum only uses |
@smarter how do you export the conversion associated with the implicit class? Or do you have to export the class itself? I believe we don't use |
@tpolecat It's not very elegant but you could always wrap the implicit class in an object Foo and do |
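A minimal sketch of the wrapping workaround being suggested (names hypothetical): the implicit class lives in an object, and call sites bring the conversion into scope with a wildcard import.

```scala
object CowSyntax {
  // Wrapping the implicit class in an object makes it importable
  // wherever it is wanted.
  implicit class Cow(private val n: Int) extends AnyVal {
    def moo: String = "oink"
  }
}

object Downstream {
  import CowSyntax._ // brings the implicit conversion into scope

  def demo: String = 1.moo // extension method now resolves
}
```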
@smarter this might be adequate if it were also possible to alias an implicit class without losing implicitness; i.e.

```scala
scala> object A { implicit class Cow(n:Int) { def moo = "oink" } }
defined object A

scala> object B { type Cow = A.Cow }
defined object B

scala> import B._
import B._

scala> 1.moo
<console>:17: error: value moo is not a member of Int
       1.moo
         ^

scala> new B.Cow(1) // it's there but the implicit conversion is lost
res1: A.Cow = A$Cow@6025aa80

scala> import A._
import A._

scala> 1.moo
res2: String = oink
```

The work done in cats/scalaz to layer these conversions in a way that makes them available en masse or a-la-carte is the product of years of refinement and it's very useful and important to the way we do things. So something equivalent really needs to be expressible I think. |
@lihaoyi Yes, these are good examples. Indeed you would be forced to jump through the hoops you quickly discovered to retain the functionality. Why make you do it? If you were the only Scala user I would see no need to bother 😄. On the other hand, we have been bombarded repeatedly by experienced users who claim that code bases they inherit are train wrecks because of overzealous use of implicits. And I don't want to dismiss these comments; my suspicion is that they describe something all too real. I think we all agree that the worst misuses of implicits have to do with conversions, not parameters. In the majority of cases I have seen (and that includes my own code) implicit conversions in places unrelated to source and target types were a premature and suboptimal (and sometimes disastrous!) solution. So we should avoid them as much as possible. But it's so tempting to use them because they are both super powerful and super easy to set up! You pay the price later in maintenance, sure, but then it's too late. So is there nothing we should do as a community to prevent people from ruining code bases with implicit conversions? We have tried language imports but it seems they are toothless; people just ignore them. But if the community thinks we should keep things as they are I won't insist on changing them. The purpose of this PR was to explore the possibilities, that's all. |
Simulacrum generates something like this:

```scala
trait Semigroup[A] {
  def append(x: A, y: A): A
}
object Semigroup {
  def apply[A](implicit instance: Semigroup[A]): Semigroup[A] = instance
  trait Ops[A] {
    def typeClassInstance: Semigroup[A]
    def self: A
    def |+|(y: A): A = typeClassInstance.append(self, y)
  }
  trait ToSemigroupOps {
    implicit def toSemigroupOps[A](target: A)(implicit tc: Semigroup[A]): Ops[A] = new Ops[A] {
      val self = target
      val typeClassInstance = tc
    }
  }
  object nonInheritedOps extends ToSemigroupOps
  trait AllOps[A] extends Ops[A] {
    def typeClassInstance: Semigroup[A]
  }
  object ops {
    implicit def toAllSemigroupOps[A](target: A)(implicit tc: Semigroup[A]): AllOps[A] = new AllOps[A] {
      val self = target
      val typeClassInstance = tc
    }
  }
}
```

Aren't the target types defined in the same compilation unit as the conversions here? |
They are. |
Yes, I didn't look closely enough, it would be allowed, ignore my previous messages ;). |
What I always liked about Scala is how powerful and expressive the language is. I agree this particular issue with implicit conversions might be quite problematic, but is the permanent removal of such "problematic" features from the language really the way we want to go? |
@tpolecat wouldn't this work? Put your implicit class in a trait:

```scala
trait A { implicit class Cow(n:Int) { def moo = "oink" } }
object B extends A
import B._
```
|
See also #2065, which complements this PR. |
I'm for restricting implicit conversions, but "same compilation unit" is a little restrictive. Implicit conversions are very useful for providing interoperability, but interoperability packages are often optional. Could it be the same project instead, which can have many sub-projects? Not sure how this could work for the compiler, but I like my stuff to be modular and optional. |
I am worried that the people who are complaining about "implicits" are not actually complaining about implicit conversions. Do we have good examples rather than just "GARRRH implicits, hate them!"? If you use Akka and care deeply about execution context, you will find that because it is an implicit parameter to many methods, it's a little more slippery to keep track of than you might wish. (If you don't care deeply about execution context it "just works".) If you use any sort of conversions, even JavaConverters, you have at least one extension method added ("asJava"). Are people fine with extension methods, or do they get into trouble there, too? Basically, I wonder whether this isn't overly restrictive, trying to solve a problem by further diminishing the use of a feature that isn't the problem any more for people who complain about "implicits". I agree that implicit conversions can be overused, but I don't see the evidence that "I don't like implicits" means "I love typeclasses and implicit evidence and extension methods but hate implicit conversions". |
My two cents: something like this should be a warning at best, not an error. You can warn and say
Also, if you want to limit accidental overuse of implicits, maybe add an
Or even |
I thought about this as well; I had access-modifier-like syntax in mind. My motivation was to quickly see which imports import implicits. The question is whether all or the majority of imports wouldn't be
It also creates the chance of another problem: why doesn't it work? Ahhh, forgotten
And I would have to know that I need an implicit modifier on the import. How would I know? Looking into the source code? Docs? |
Here, unrelated types are defined as types where I don't control the source file of at least one of them. Is that the right definition of unrelated types? How does the restriction help? What is the evidence that dropping 1 of 4 options (no control, control over source, control over target, control over both) will solve the problem? Don't developers create problematic implicit conversions between their own types and types they don't have control over? And between their own types only? If they don't, what prevents them? What if Scala implicit conversions are risky not because I can define a relation between types I have no control over, but because they allow calling any polymorphic method on the source, performing side effects, and doing heavy conversion processing? Together with the capability to make types substitutable, they allow putting arbitrary logic inside a conversion. As soon as there is more logic than a simple projection between types (basically just simple reference manipulation), it is very likely it shouldn't be implicit, because it probably hides something which is better kept explicit in general programming. |
@vladap There is evidence that the restriction helps, but it is admittedly circumstantial: The restriction is similar to the one enforced by C#. I have never heard people complain about bad surprises using implicit conversions in C#, but I constantly hear such complaints in Scala. Of course, implicit conversions with side effects are another code smell. But I would argue that they are much less common, because the ingrained culture of Scala is to minimize side effects anyway. Once we have an effect system we could potentially outlaw conversions with side effects as well. |
The impression I got is that most C# developers don't even know about implicit conversions (not based on much data though). |
Yes, and that's a good thing, no? That's the idea: reduce the temptation to use them to a level where people will forget about them :-) |
Then we should ask why we have implicit conversions in the first place when the goal is to forget about them. What is their intended role (use cases) in Scala once people have forgotten about them? Is the final goal to slowly deprecate them? I'm not a C# dev, but if I'm not mistaken, C# implicit conversion operators are much more limited: they are not inherited (vs. searching the implicit scope), they should support symmetric equality, meaning they should be defined in pairs (to and back), and a conversion shouldn't lose information. These are stricter requirements than for subtyping. It is limited to the point that I can imagine they are rarely used, which in turn results in them being used even less, regardless of whether there is a good case for them, because whenever a forgotten feature is pulled from a hat it confuses even more. That is my guess why nobody has a problem with the C# variant: they are rarely used because they have little utility. That can hardly be said about Scala implicit conversions, even with the suggested restriction applied. However, I don't know; I can only guess. If it can be tracked down that a high percentage of bad uses (btw, what does that mean?) can be linked to conversions between "3rd party" types, then it can be a low-hanging fruit. I just have doubts that it is the elephant in the room. It might be just one leg. |
Can we take a few steps back? Is there any way the community could have access to data about the harm caused by conversions, or maybe other possible analyses? |
Unfortunately, if the hoops stay they will be used, and the net result might be worse; it just adds additional indirection layers. How long will it take users to discover them? How can we know that one additional hoop beats a user's motivation to use an implicit conversion? It would have to disallow them as well. Now it starts to become more complex. It could lead to a situation where I couldn't figure out why I can't define an implicit conversion between my own types because there is some 3rd-party type involved deep in a hierarchy (though the compiler could probably suggest it). |
Another perspective:

```scala
package thirdparty1
class Foo
```

```scala
package thirdparty2
class Bar
```

```scala
package mycode

implicit def fooToBar(foo: Foo): Bar = new Bar

def baz(foo: Foo)(implicit convert: Foo => Bar) = {
  val bar: Bar = convert(foo) // this is explicit conversion
}
```

In Java I would define a SAM interface (just a function) and create class implementations. Then I would use the Spring framework and use `@Autowired` to inject them. Should we be more restrictive than Java? Should I have to use runtime DI in Scala for this when it would be disallowed for implicit values? I could do side effects in Java in these conversion implementations and nobody would care. I would care, to the point that there is a spectrum between an implicit conversion, an explicit conversion, and an ordinary function/method. Even using explicit conversions I can make code bad. Explicit conversions shouldn't do arbitrarily complex operations either, otherwise we could call almost any function/method a conversion. Simply said, while I can do more in an explicit conversion than in an implicit one, I still don't expect that much of it.

The conceptual problem:

```scala
implicit def fooToBar(foo: Foo): Bar = new Bar

def baz(foo: Foo) = {
  val bar: Bar = foo // implicit conversion
}

// or this one
def baz(bar: Bar) = ???
baz(new Foo) // implicit conversion

// I put extension methods aside for now
```

What are they? They are another form of subtyping which allows type substitution. The closest concept seems to be coercive subtyping (Wikipedia). Unlike nominal and structural subtyping they create new instances, but that is not that important here. To use them in a manageable way, their usage should be closely related to an implicit cast (upcasting), or to the mechanism which enables structural subtyping (it is implicit as well): I don't have to care how it works, it never fails, I don't have to debug it, no surprises, well-defined behaviour. Golang embedding and its subsequent type substitution has the same properties. Break any of these properties and it can manifest as a problem. Do you ever need to step into the body of an implicit cast?

The right implicit conversion has to be actually implicit. It means it has to be invisible and opaque. It has to be just simple boilerplate wiring which can be forgotten, just like an implicit cast and the other variants. The problem is that both my first example and actual Scala implicit conversions can use the same source code.

Yet another perspective: I would say that even if implicit conversions were limited to simple projections like these, I don't think we can get much further than a dynamic language. Dynamic languages are very popular, so higher flexibility in type substitution doesn't look like a problem in itself. Hence I believe it is possible to be more flexible than strict Java, but any step further has to be justified. I can imagine 2 use cases which could violate it: a very low-level platform kind, like Scala.js, and a very high-level DSL kind, like a DSL to write Spark notebooks directly in a web app. These could do some more complex and arbitrary implicit conversions to achieve a given goal, but it shouldn't be the norm. And some more advanced, well-known design patterns could be acceptable, like implicit typeclass derivations.

Sometimes the argument appears that implicit conversions break type safety in Scala. It is typically argued that they are fully type safe. That is true, but it is not what the argument is about. If I had enough time I could sit down and write implicit conversions between all types. I would make most of them just throw. This way I would convert Scala into a kind of strictly typed dynamic language :) Whatever I wrote would compile, but it would fail at runtime. |
As an example, I have implemented an @Autowired (i.e. implicitly injected) function between String and String in Java Spring, just to try it out after a while. If you are interested you can take a look at spring-playground. It seems Java developers have little problem with hidden @Autowired private values, @ComponentScan, or @Import of multiple @Configuration classes. They can deal with the SpringContext failing on startup with ambiguity, SpringContext inheritance, etc. While every Java developer understands @ComponentScan, I often find that Scala developers on our team have no knowledge of the 2nd stage of implicit search, even after 1 year, if I didn't care and left them alone. Imagine that one of them somehow triggers it. He has little chance of understanding what is going on. Implicits blamed. We could find a close analogy to Scala implicit values in Spring, but one can only get so far with the Spring analogy. What I can't do with Spring is write a highly generic function and have Spring infer the correct instances for me based on passed-in values, something which can be done in Scala with type inference and implicits in tandem. It is at the same time what makes Scala powerful. Take a step back, take the Simulacrum-generated code, and ask an enterprise commodity developer who has been working with Scala for 1 year to explain it. I guess there will be a very high failure rate. And I'm afraid that the main response will be blaming implicits. Imagine that an advanced Scala developer leaves and a less skilled one takes over. If there is this kind of code, implicits can be blamed just as easily. Without further understanding we can just add additional obstacles and limit functionality, but gain very little. EDIT: just for a bit of fun and horror I have added a Scala version to spring-playground. |
@odersky I wasn't there at that time and I never used JavaConversions. Was the import the only reason? After all, JavaConverters are based on an import as well. Could it be summarized what the practical problems with it were and how they manifested? It wouldn't pass my criteria for an implicit conversion, because converting a collection to another collection is potentially expensive, which in itself can leak the implicit conversion when debugging a performance regression. It possibly could be fine in my book if the Java collection were embedded in the Scala source type (which is not how Scala collections are designed). For the record, I can't name a single dev on my team, including me, who likes Scala interop with Java collections. It is an important feature and it feels too bolted-on, like an afterthought. It seems that nothing is going to change with the 2.13 collection overhaul. You wouldn't like to hear what a Java developer says on seeing the resulting Scala code. Devs are routinely producing code like this.
I don't know if it can be done any better. But when working with a Java API I would like to work with Java collections but with Scala syntax, and explicitly convert to an immutable Scala collection only when crossing the boundary to Scala code. Meaning there would be no conversion if a call takes a result from Java and returns it back to Java, and only one when it goes from Java to Scala. I wouldn't mind if it were limited compared to native Scala collections, as long as it covered the most common use cases for manipulating them. |
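For reference, the explicit-at-the-boundary style described above is roughly what `scala.collection.JavaConverters` already offers (a sketch; in Scala 2.13 the same decorators live in `scala.jdk.CollectionConverters`):

```scala
import scala.collection.JavaConverters._

object Interop {
  def main(args: Array[String]): Unit = {
    val javaList = new java.util.ArrayList[Int]()
    javaList.add(1)
    javaList.add(2)

    // Explicit, visible conversion when crossing from Java to Scala:
    val scalaList: List[Int] = javaList.asScala.toList

    // And back again when handing data to a Java API:
    val backToJava: java.util.List[Int] = scalaList.asJava

    println(scalaList.sum)
    println(backToJava.size())
  }
}
```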
@vladap - IIRC |
I don't mind if it is auto-magical. I probably understand about 1% of how all the things I use actually work internally in complete detail. Magic, abstraction, whatever we call it, helps us beat complexity. The thing I care about is whether the magic is leaky or not. If it forces me at some point to dig into complex internal details I have to understand, then it is bad. In what ways were JavaConversions leaky? E.g. how could they force me to debug implicits? How could they lead me to hard-to-debug bugs? How could they easily confuse me? A forgotten import is one thing. What else? Code samples of how it went bad? I use Scala in the enterprise and I just want to know my enemy (if there is actually any) before it hunts me down. I don't buy hate posts against implicit conversions that easily, because they typically boil down to the argument that implicit conversions are bad because they convert implicitly, or that they are auto-magical, without actually explaining details and concrete failing scenarios. And to avoid confusion, I don't deny they have issues. I prefer explicitness over implicitness, and I agree with the linked Rust ergonomics post. And yes, I think we have probably zero implicit conversions in our own code base. But we use 3rd-party ones routinely, like Spark's. |
@vladap I'll take your points under consideration. But this issue tracker is the wrong format for extended discussions and arguments. Please use scala-contributors for that. |
I'm sorry to mess it up here. I will use the other option next time. |
I have posted an example of "non-deterministic" implicit conversion on scala-contributors. This suggestion doesn't help to solve it. |
So is the logic that language imports aren't a big enough hoop, if we make the hoop bigger then it will have teeth? At what point do we say implicits don't kill [codebases], people kill? |
See also: |
Maybe if we changed the implicit conversion warning / language import to only warn / be needed if the conversion was between unrelated types, and made the warning language stronger, people wouldn't start mentally filtering them out. Incidentally there were people that predicted, at the time of the language imports proposal, that it might have such an effect. |
I wonder what people involved in the Marathon issue (the committer is @timcharper) have to say |
That commit doesn't even seem to remove |
The commit message seems to be wrong. The offending implicits were located in the package object `mesosphere.marathon.stream` and moved to the `mesosphere.marathon.stream.Implicits` object in another commit.
|
I propose we close this for now, WDYT @odersky? |
Superseded by #4229 |