proto: make the Message interface behaviorally complete #364
Comments
Interesting proposal. I faced a similar challenge when designing the public API for a dynamic message implementation. If this goes anywhere, I'll be curious to see how similar they are in terms of shape/surface area. And I'll be excited to see how it can simplify that dynamic message's implementation (particularly around extensions, which I think is the weakest part of the current generated code).
About adding methods to the proto Message interface: we have done exactly that a few times in both C++ and Java. For example, we added a ByteSizeLong method to the C++ MessageLite interface not very long ago; it's a pure virtual method, so anyone who implements their own MessageLite subclasses will be broken. The protobuf team's stance on this is that nobody except protobuf itself should subclass/implement these message interfaces. It's called out explicitly in our compatibility notice in Java: …
Go doesn't have the concept of a "default method", so this is unfortunately going to be a breaking change. The transition to get the world onto the new API will be a little tricky and will probably have to occur in several phases.
There's an open proposal for default methods (golang/go#23185), but it's not looking promising, as the concept doesn't fit well into Go, where interface satisfaction is implicit rather than explicit.
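For readers who haven't run into this before, here is a minimal, self-contained illustration of why the lack of default methods makes adding a method a breaking change. The interface and type names are invented stand-ins for this sketch, not the real generated code:

```go
package main

// Message is a stand-in for today's proto.Message method set.
type Message interface {
	Reset()
	String() string
	ProtoMessage()
}

// MessageV2 adds one method. Go has no way to give it a default body,
// so every existing implementation instantly stops satisfying it.
type MessageV2 interface {
	Message
	ProtoReflect() // simplified; a real method would return a reflection value
}

// oldMsg stands in for generated code (or a third-party implementation)
// that predates the new method.
type oldMsg struct{}

func (oldMsg) Reset()         {}
func (oldMsg) String() string { return "" }
func (oldMsg) ProtoMessage()  {}

var _ Message = oldMsg{} // still fine

// var _ MessageV2 = oldMsg{} // would not compile: oldMsg lacks ProtoReflect

func main() {}
```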
Here are some documents for the proposed plans to improve Go protobufs. I'd appreciate any feedback from the community regarding the design and migration plan: …
Couldn't the same approach as the database/sql package be taken, where various functions run specific assertions on smaller interfaces, rather than having a giant V1/V2 interface where both have all the functions defined in one big bundle? Edit: this already seems to be the case in the new proposal; apologies for skimming over it too quickly.
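For context, the database/sql pattern being referred to looks roughly like the sketch below: functions accept the broad interface everyone already implements and type-assert to small optional interfaces, falling back when the assertion fails. The interface names and signatures here are invented for illustration and are not part of any proposed API.

```go
package sketch

import "errors"

// Message is the broad interface everyone already implements
// (the method set of today's proto.Message).
type Message interface {
	Reset()
	String() string
	ProtoMessage()
}

// marshalAppender and encodedSizer are small optional interfaces, analogous
// to the optional interfaces that database/sql/driver type-asserts for.
type marshalAppender interface {
	MarshalAppend(b []byte) ([]byte, error)
}

type encodedSizer interface {
	EncodedSize() int
}

// Marshal prefers the richer behavior when the concrete type provides it
// and fails otherwise, instead of requiring one giant interface up front.
func Marshal(m Message) ([]byte, error) {
	if sz, ok := m.(encodedSizer); ok {
		_ = sz.EncodedSize() // could pre-size an output buffer
	}
	if ma, ok := m.(marshalAppender); ok {
		return ma.MarshalAppend(nil)
	}
	return nil, errors.New("marshal: message provides no supported fast path")
}
```

The trade-off is the same as in database/sql: callers never break when a new optional interface appears, but every generic function needs an explicit fallback path.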
Both documents are really ambitious, but in principle they sound like a great idea. Good work. I am a bit concerned about the amount of work required to make this move on the part of gogoprotobuf. I will need some help. The last large move (currently on the dev branch) was a big job, taking a lot of my personal time, which I am trying to focus on my studies. I am not working for a company that uses protobufs. Comments on protoreflect: it looks good, but I am a bit concerned that the following cases might not have been taken into account. So maybe here are some tests for the protoreflect API: …
I think getting this protoreflect library right is essential to having only a single …
Thank you @awalterschulze for your feedback.
I admit that the designs are indeed ambitious. Ideally, it would be nice if something like it occurred 8 years ago when Go protobufs were first implemented. However, that was not the case and we are struggling with the consequences today. As much work as this transition is, it will lay the foundation for a brighter future. Another 8 years from now, I hope we look back and see this moment as the point when Go protobuf support became much better.
I'll help 👋. Regarding your questions:

How do we handle slices of message values (not pointers)? This case is actually the reason why … Thus, for …

How do we handle message values (not pointers)? This situation is different from the one above, since proto semantics do not distinguish between a null and an empty message within a repeated list of messages. However, for a standalone message, proto semantics do preserve whether a message is null or empty (even in proto3). Message values occur when the …
An implementation of the reflection API will need to make a best effort at providing the illusion that the implementation is proto-compliant. However, this abstraction can leak: …
How do we handle custom types for bytes? Similar to the documented restriction on …

How do we handle …? An implementation of the reflection API will need to create internal wrappers over … One possible implementation is here: https://play.golang.org/p/IXvjCK_Y_Hc. However, these wrappers are leaky abstractions: …
In practice, I don't expect the abstraction leakages mentioned above to be much of a problem. If anything, the overflow of …
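To make the "internal wrappers" idea a little more concrete, here is a small sketch in the spirit of the playground link above; the MyBytes type, its Marshal/Unmarshal methods, and the wrapper are all invented for illustration and are not the actual implementation:

```go
package main

import "fmt"

// MyBytes stands in for a gogo-style custom type backing a bytes field.
type MyBytes struct{ data []byte }

func (b MyBytes) Marshal() ([]byte, error) { return b.data, nil }

func (b *MyBytes) Unmarshal(raw []byte) error {
	b.data = append([]byte(nil), raw...)
	return nil
}

// bytesWrapper is the kind of internal wrapper a reflection implementation
// might build so callers only ever see the canonical protobuf representation
// ([]byte). It leaks if Marshal/Unmarshal are not symmetric or can fail.
type bytesWrapper struct{ v *MyBytes }

func (w bytesWrapper) Get() []byte {
	raw, _ := w.v.Marshal() // error swallowed here: one of the leaks
	return raw
}

func (w bytesWrapper) Set(raw []byte) {
	_ = w.v.Unmarshal(raw)
}

func main() {
	var b MyBytes
	w := bytesWrapper{v: &b}
	w.Set([]byte("hello"))
	fmt.Printf("%s\n", w.Get())
}
```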
Option 2: One package that provides APIs for both the old and new interfaces. …
@neild, @dsnet, if option 1 wins (two separate packages/APIs), does the existing …?
We've been migrating a number of targets inside Google to use protobuf reflection and through the experience, it has helped us refine what the API should be. CL/175458 is an example of API changes informed by real usage. Within Google, we have the ability to also migrate all users since we use a monorepo. Unfortunately, we can't fix external users.
Unfortunately, likely. Fortunately, most of the changes are fairly mechanical. If it helps, we can track all breaking changes in one place, with instructions on what to change.
Representing an external company with heavy protobuf usage, I'd prefer to pay the one-time conversion cost to gain a superior interface for the long haul. As we (mostly) also use a monorepo, I don't foresee the required conversion work being very hard.
I am mostly exposed to gRPC and open-source use of protobuf, and in my experience the use of the …
I prefer option 2, fairly strongly. Lack of a gradual upgrade path is too painful. That's the sort of thing that slows teams way down and prevents any upgrade from occurring at all. Gradual upgrades are especially important for the github.com/golang/protobuf project to care about because …

Statements like "when use of the old package dies out, we have a cleaner and healthier ecosystem" concern me. Usage of the old package will never completely die out. We'll always have references to the old package.
@spenczar, I think there may be a misunderstanding of what options 1 and 2 entail. The lack of a gradual upgrade path is for APIs that use … The real "lack of gradual upgrade path" means that packages that currently use …
But old code can still link against and work just fine with the new protobuf runtime, and old generated code will still work. @neild, did I describe that accurately?
While one should always have a way to smoothly transition from one way to another, I think such a transition has to have a definite limit in terms of support lifetime; otherwise, you end up supporting the old way forever, even when it makes your life miserable because everything has to work both ways now, and that becomes Just The Way Things Are. Some people will never change until it breaks, no matter how trivial the change might be. And breaking things is not Always The Wrong Choice. 🤷‍♀️

I'll point here to grpc/grpc-go#711, which was a somewhat similar situation, where a choice in code generation would break people. It went from "cannot change or we break people", to "we have a migration strategy", to "once go1.8 is end-of-life". It took two and a half years to make what was on the surface a relatively simple change.

But this is a problem that is never going to be seen inside of Google, because protobufs are compiled fresh every build. The entire notion of checking in generated code is to me still kind of crazy. It's pretty much just like checking in a binary blob.
@jhump Hm, I'm not following. Here's how I understood things: under option 1, the … Is that correct? I might have this wrong, particularly if this is an interface in a new … If old generated code no longer implements the … If this is the case, I think we hit the lockstep problem.
@puellanivis, if that was in reply to my last comment, I wasn't suggesting the old APIs be supported forever. But they must be supported during some sort of transition period. For the second option I mentioned (having a package that provides APIs for both the old and new interfaces), the idea is that the functions for the old interface would eventually be deprecated/removed.

As far as checking in generated code: not everyone has blaze :) It is idiomatic Go that one be able to …
Yes, it is a new interface. I think the suggestion is that the import path for the v2 package will be …
@jhump Thanks. I agree, then, that my worry about lockstep upgrades does not apply; option 1 looks better to me too.
Yes, that's correct. Let's say today you have a package with an exported API like this:

```go
package prototwiddle

import "github.com/golang/protobuf/proto"

// Twiddle fiddles with a message.
func Twiddle(m proto.Message) {}
```

If we redefine proto.Message, …
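To spell out the general concern being discussed, here is a hedged sketch of where a downstream package like prototwiddle could land if the old and new Message interfaces end up distinct; the MessageV2 shape and the TwiddleV2 name are invented placeholders, not anything proposed in this thread:

```go
package prototwiddle

import protoV1 "github.com/golang/protobuf/proto"

// MessageV2 stands in for whatever the new package's Message interface
// ends up being; the single method here is an invented placeholder.
type MessageV2 interface {
	ProtoReflect() interface{}
}

// Twiddle keeps the existing signature so current callers keep compiling.
func Twiddle(m protoV1.Message) {}

// TwiddleV2 would have to be added (or Twiddle changed in lockstep with
// every caller) once generated messages only satisfy the new interface.
func TwiddleV2(m MessageV2) {}
```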
You could make …
Yes, the question is what the interface in the new package should be. (The final package name is going to be ….)
For any adventurous people who are actually using v2, I've mentioned above that the API is not fully stable yet. If you want to be notified of any breaking changes, subscribe to #867.
Congratulations on an excellent piece of work.
Filing on behalf of @alandonovan.

We can't add methods to proto.Message without breaking backwards compatibility. One approach we can take is to define a proto.MessageV2 that is a much more semantically complete interface and provides a form of "protobuf reflection". In Marshal, Unmarshal, Merge, Clone, and so on, we can type-assert whether a message implements proto.MessageV2 and use that interface to implement generic versions of those functions. If a proto.Message doesn't satisfy proto.MessageV2, then Merge can just fail (it already does on most third-party implementations of proto.Message).
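As a rough illustration of the type-assertion approach described above, here is a minimal sketch; the MessageV2 shape shown is an invented placeholder rather than the real proposed interface, which would expose full protobuf reflection:

```go
package sketch

import "errors"

// Message mirrors the existing, behaviorally incomplete interface.
type Message interface {
	Reset()
	String() string
	ProtoMessage()
}

// MessageV2 stands in for the proposed, semantically complete interface.
// The single Reflect method is a placeholder for protobuf reflection.
type MessageV2 interface {
	Message
	Reflect() interface{}
}

// Merge copies src into dst via reflection when both messages support it,
// and simply fails otherwise, as the issue description suggests.
func Merge(dst, src Message) error {
	d, okDst := dst.(MessageV2)
	s, okSrc := src.(MessageV2)
	if !okDst || !okSrc {
		return errors.New("merge: message does not implement MessageV2")
	}
	_, _ = d.Reflect(), s.Reflect()
	// A real implementation would walk src's fields through the
	// reflection API and set them on dst here.
	return nil
}
```

Returning an error (rather than silently doing nothing) mirrors the "Merge can just fail" behavior described in the issue; an implementation could equally choose to panic or to fall back to a slower path.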