Switching to Protobuf v2 breaks generation of K8S protos #86
This isn't something Skycfg can fix directly, but when the Protobuf module finishes migrating to the go-protobuf v2 API we should have some better options. Summary of the issue:

- Kubernetes has checked in Go structs that resemble go-protobuf generated code, but which do not conform to the go-protobuf API.
- Previous versions of go-protobuf happened to work with the Kubernetes structs because the particular API violations weren't relevant to that code path.
- Newer versions of go-protobuf change how they do some reflection-based operations, which breaks on the Kubernetes structs.

Potential workarounds:

- Avoid upgrading go-protobuf to v2 in your project -- this includes avoiding upgrades of Skycfg to a version that depends on go-protobuf v2.
- Use a build system like Bazel to generate correct Go code for the Kubernetes protobufs, rather than rely on upstream's broken Go code.

In the future, Skycfg will be switching from using generated Go structs to the dynamicpb package. This will remove any dependency on the Kubernetes structs, and it will be possible to operate on any Protobuf that a descriptor can be obtained for.
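To make the dynamicpb direction concrete, here is a minimal Go sketch of descriptor-driven message construction under go-protobuf v2. This is not Skycfg's implementation; the google.protobuf.Duration message and the global-registry lookup are illustrative choices only:

```go
package main

import (
	"fmt"

	"google.golang.org/protobuf/encoding/prototext"
	"google.golang.org/protobuf/reflect/protoreflect"
	"google.golang.org/protobuf/reflect/protoregistry"
	"google.golang.org/protobuf/types/dynamicpb"

	// Blank import only so that google.protobuf.Duration is registered in the
	// global descriptor registry for this self-contained example.
	_ "google.golang.org/protobuf/types/known/durationpb"
)

func main() {
	// Obtain a message descriptor by full name. Any descriptor source works:
	// the global registry, a protoregistry.Files built from a descriptor set, etc.
	d, err := protoregistry.GlobalFiles.FindDescriptorByName("google.protobuf.Duration")
	if err != nil {
		panic(err)
	}
	md := d.(protoreflect.MessageDescriptor)

	// Build and populate the message dynamically -- no generated Go struct involved.
	msg := dynamicpb.NewMessage(md)
	msg.Set(md.Fields().ByName("seconds"), protoreflect.ValueOfInt64(30))

	out, err := prototext.Marshal(msg)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // seconds:30
}
```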
Maybe proto.Clone from the gogo package would still work here?
Thanks for the quick reply! Makes sense!
I actually maintain a set of Bazel rules (that I'm hoping to have released as OSS one day) called rules_skycfg :) The summary is that we generate all our K8S YAML using skycfg, and those YAML-generating rules are part of the Bazel dependency graph, so we can reason about them the same way we do about source code.

One challenge I've had with generating protos for K8S is that gazelle is apparently not smart enough to connect proto imports across different repos, so it generates a bunch of BUILD files that depend on targets that don't exist. Is there any trick you can recommend to get proto generation working for K8S projects? The best idea I've found so far involves smothering my BUILD files with gazelle:resolve directives.
FYI, Jay as recently as October recommended simply disabling proto generation, which makes me wonder whether this is particularly feasible … bazel-contrib/bazel-gazelle#924 (comment).
Take a look at the gazelle:resolve and gazelle:proto_import_prefix directives. If you're using … I currently expect Skycfg to finish migrating to the dynamicpb package …
Awesome, thanks so much. If I can get it working I'll update this issue for posterity.
I think what I really want is that …

I know this isn't strictly skycfg related, but for the sake of people who stumble across this ... I kind of have proto generation for the K8S APIs working. I spent some time fiddling with gazelle, and it doesn't seem like those directives above solve the immediate problem. If I check out github.com/k8s/api and run gazelle, I get BUILD files like this:

```
proto_library(
    name = "v1_proto",
    srcs = ["generated.proto"],
    visibility = ["//visibility:public"],
    deps = [
        "//k8s.io/api/core/v1:v1_proto",
        "//k8s.io/apimachinery/pkg/api/resource:resource_proto",
        "//k8s.io/apimachinery/pkg/apis/meta/v1:v1_proto",
        "//k8s.io/apimachinery/pkg/runtime:runtime_proto",
        "//k8s.io/apimachinery/pkg/runtime/schema:schema_proto",
    ],
)
```

There is no such target … So ...

```
# For every proto_library dep that gazelle emitted, print a pair of
# gazelle:resolve directives mapping the proto import path to the
# corresponding target in the external repository.
for i in $(buildozer 'print deps' //...:%proto_library | tr -d '[]' | tr ' ' '\n' | sort | uniq); do
  proto=$(sed -e 's%//%%' -e 's/:.*$/\/generated.proto/' <<<$i)
  printf "gazelle:resolve proto %s %s\n" $proto $(sed -e 's%//k8s.io/api%@io_k8s_api%' -e 's%/%//%' <<<$i)
  printf "gazelle:resolve proto go %s %s\n" $proto $(sed -e 's%//k8s.io/api%@io_k8s_api%' -e 's%/%//%' -e "s/:\(.*\)_proto$/:\1_go_proto/" <<<$i)
done
```

The output of which I tossed into my K8S_DIRECTIVES list:

```
K8S_DIRECTIVES = [
"gazelle:resolve proto k8s.io/api/authentication/v1/generated.proto @io_k8s_api//authentication/v1:v1_proto",
"gazelle:resolve proto go k8s.io/api/authentication/v1/generated.proto @io_k8s_api//authentication/v1:v1_go_proto",
"gazelle:resolve proto k8s.io/api/batch/v1/generated.proto @io_k8s_api//batch/v1:v1_proto",
"gazelle:resolve proto go k8s.io/api/batch/v1/generated.proto @io_k8s_api//batch/v1:v1_go_proto",
"gazelle:resolve proto k8s.io/api/core/v1/generated.proto @io_k8s_api//core/v1:v1_proto",
"gazelle:resolve proto go k8s.io/api/core/v1/generated.proto @io_k8s_api//core/v1:v1_go_proto",
"gazelle:resolve proto k8s.io/apimachinery/pkg/api/resource/generated.proto @io_k8s_apimachinery//pkg/api/resource:resource_proto",
"gazelle:resolve proto go k8s.io/apimachinery/pkg/api/resource/generated.proto @io_k8s_apimachinery//pkg/api/resource:resource_go_proto",
"gazelle:resolve proto k8s.io/apimachinery/pkg/apis/meta/v1/generated.proto @io_k8s_apimachinery//pkg/apis/meta/v1:v1_proto",
"gazelle:resolve proto go k8s.io/apimachinery/pkg/apis/meta/v1/generated.proto @io_k8s_apimachinery//pkg/apis/meta/v1:v1_go_proto",
"gazelle:resolve proto k8s.io/apimachinery/pkg/runtime/schema/generated.proto @io_k8s_apimachinery//pkg/runtime/schema:schema_proto",
"gazelle:resolve proto go k8s.io/apimachinery/pkg/runtime/schema/generated.proto @io_k8s_apimachinery//pkg/runtime/schema:schema_go_proto",
"gazelle:resolve proto k8s.io/apimachinery/pkg/runtime/generated.proto @io_k8s_apimachinery//pkg/runtime:runtime_proto",
"gazelle:resolve proto go k8s.io/apimachinery/pkg/runtime/generated.proto @io_k8s_apimachinery//pkg/runtime:runtime_go_proto",
"gazelle:resolve proto k8s.io/apimachinery/pkg/util/intstr/generated.proto @io_k8s_apimachinery//pkg/util/intstr:intstr_proto",
"gazelle:resolve proto go k8s.io/apimachinery/pkg/util/intstr/generated.proto @io_k8s_apimachinery//pkg/util/intstr:intstr_go_proto",
]
```

At which point I did require the gazelle:proto_import_prefix directive on the go_repository rules:

```
go_repository(
name = "io_k8s_api",
build_directives = K8S_DIRECTIVES + ["gazelle:proto_import_prefix k8s.io/api"], # keep
importpath = "k8s.io/api",
sum = "h1:ojgIGmjzUSwsug8H2yVCoueRAGy0IshvrtowuLycEQo=",
version = "v0.0.0-20181027024800-9fcf73cc980b",
)
go_repository(
name = "io_k8s_apimachinery",
build_directives = K8S_DIRECTIVES + ["gazelle:proto_import_prefix k8s.io/apimachinery"], # keep
importpath = "k8s.io/apimachinery",
sum = "h1:QV0MJn7lF87qcyC3Y+On2zKM62Erf99uoORoQbu7lag=",
version = "v0.0.0-20181026144617-b7f9f1fa80ae",
)
```

And where this has finally brought me is this error: …

Which seems to be the result of somehow getting … I'll keep poking at this, but hopefully this is useful to someone out there.
Two more resolve overrides fix things up: …

Then, you have to be careful that your BUILD.bazel file has directives to make sure you link against the Bazel-generated Go libraries, not the checked-in ones: …

So that gets you something. But it turns out the checked-in Go source does more for you than just exist. This skycfg program will now fail with …

```
def main(ctx):
    d = appsv1.Deployment()
    # With the Bazel-generated protos, assigning through d.metadata directly no longer works.
    d.metadata.name = "hey"
    return [d]
```

Basically it looks like you need to construct every single object?

```
def main(ctx):
    d = appsv1.Deployment()
    # Construct the nested ObjectMeta explicitly, then attach it.
    m = metav1.ObjectMeta()
    m.name = "hey"
    d.metadata = m
    return [d]
```

With several thousand lines of skycfg I'm not super keen on having to rewrite most of it with explicit constructors ... will this also be an issue after switching to reflection? Another problem is I can no longer import …

Well, that was a fun exercise. I think for the time being I'll stick with the earlier skycfg versions without the v2 proto dependency. I'm totally at the limit of my proto knowledge here, so I look forward to learning what I'm getting wrong ;)
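On the question of whether explicit constructors will still be needed after the switch to reflection: go-protobuf v2's protoreflect API can allocate nested messages on demand via Mutable, so a reflection-based implementation does not inherently require them -- though whether Skycfg ends up exposing that behavior is a separate question. A minimal Go sketch of the underlying mechanism, using google.protobuf.Api only because it happens to have a message-typed field:

```go
package main

import (
	"fmt"

	"google.golang.org/protobuf/reflect/protoreflect"
	apipb "google.golang.org/protobuf/types/known/apipb"
)

func main() {
	// google.protobuf.Api stands in for Deployment/ObjectMeta here purely
	// because it has a nested message field (source_context).
	api := &apipb.Api{}
	m := api.ProtoReflect()

	// Mutable() allocates the nested message on demand, so this deep
	// assignment needs no explicit constructor for the intermediate object.
	scField := m.Descriptor().Fields().ByName("source_context")
	sc := m.Mutable(scField).Message()

	nameField := sc.Descriptor().Fields().ByName("file_name")
	sc.Set(nameField, protoreflect.ValueOfString("example.proto"))

	fmt.Println(api.GetSourceContext().GetFileName()) // example.proto
}
```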
Due to some unrelated circumstances, I do not think I'll be able to finish this work by the end of the year -- and there is now no ETA.

I can review PRs that finish migrating the proto module to go-protobuf v2, or alternatively, I can roll back the migration and return current HEAD to go-protobuf v1.

Do the followers of this issue have any preference as to the above choice?
Hey John, hope everything is OK. I should be able to burn some cycles on this come Q1, since we are stuck on older skycfg and proto v1.3.2 due to this as well. Could you post a rough outline of what remains to be done here? (I didn't see any PRs fly by, so it seems like "everything"? =))
I have a version of the … Messages themselves are the missing big piece, plus figuring out a solution to: …
The finished parts of the go-protobuf v2 implementation for message reflection have been pushed to … Placeholders for the unfinished part, the …
Compatibility note: I just tagged commit f561f12 (right before the go-protobuf v2 switch) as … The next tag will happen after the migration has finished and Kubernetes protos have been verified to work with go-protobuf v2.
Are there any updates on this issue with the new v2 improvements that have been merged since?
Ah, I see now. For anyone else who is slow like me and still catching up on this: at this point we're just waiting for Kubernetes to migrate away from gogo so that they support the new proto registry: kubernetes/kubernetes#96564
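To make "the new proto registry" concrete, here is a small Go sketch of where go-protobuf v2 looks up message types. It illustrates the registry only, not anything Skycfg does; the full name is assumed to be k8s.io.api.apps.v1.Deployment, following the Kubernetes generated.proto packages:

```go
package main

import (
	"fmt"

	"google.golang.org/protobuf/reflect/protoregistry"
)

func main() {
	// go-protobuf v2 resolves message types through protoregistry.GlobalTypes.
	// gogo-generated code (which is what the checked-in Kubernetes structs use)
	// registers with gogo's own registry instead, so a v2 lookup cannot find
	// the Kubernetes types. (Nothing registers the name in this standalone
	// program either, so the error branch is taken here.)
	_, err := protoregistry.GlobalTypes.FindMessageByName("k8s.io.api.apps.v1.Deployment")
	if err != nil {
		fmt.Println("not in the go-protobuf v2 registry:", err)
		return
	}
	fmt.Println("found in the go-protobuf v2 registry")
}
```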
Commit e60b2f4 switched to Go Protobuf v2, but this breaks the K8S example.
Reverting to f561f12 does not exhibit the same panic.