Mux terraform-provider-sdk and terraform-plugin-framework #2170
Conversation
To allow us to migrate incrementally to terraform-plugin-framework, we need to:

- Run a protocol v5 => v6 translator; and
- Mux the SDKv2 and framework servers to serve a unified provider.

This gets us moving down that path by running the two in a compatible fashion.
Creates a `consts` package to hold the provider schema that will be shared between SDKv2 and the plugin framework to prevent typos and duplication.
Copies over the SDKv2 client configuration so the provider operates identically regardless of which mechanism is used.
Right now, there isn't a way to do automatic description decoration based on the schema attributes[1]. This is a problem because muxed provider types must return identical schemas. To work around this, I've added conditionals to the generator to skip the schema fields in the providers where we may see differences between the implementations, while keeping the decoration for all resources and data sources. For now, this strikes a balance between unblocking us and keeping the documentation nicely decorated. [1]: hashicorp/terraform-plugin-framework#625
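The muxing described above can be sketched with a simplified, self-contained analog. The real provider wires this up with HashiCorp's terraform-plugin-mux (a protocol v5 => v6 upgrade of the SDKv2 server, then a v6 mux); the types, names, and resource names below are purely illustrative of the routing behaviour, not the library's API:

```go
package main

import "fmt"

// providerServer is a hypothetical stand-in for one of the two muxed
// servers: the (protocol-upgraded) SDKv2 server or the
// terraform-plugin-framework server.
type providerServer struct {
	name      string
	resources map[string]bool // resource types this server implements
}

// mux routes each resource type to exactly one underlying server.
// This mirrors the behaviour muxing relies on: every resource is
// owned by one implementation, while both must expose identical
// provider-level schemas.
type mux struct {
	servers []providerServer
}

func (m mux) route(resourceType string) (string, error) {
	for _, s := range m.servers {
		if s.resources[resourceType] {
			return s.name, nil
		}
	}
	return "", fmt.Errorf("no server implements %q", resourceType)
}

func main() {
	// Hypothetical resource assignments for illustration only.
	m := mux{servers: []providerServer{
		{name: "sdkv2", resources: map[string]bool{"cloudflare_record": true}},
		{name: "framework", resources: map[string]bool{"cloudflare_example": true}},
	}}
	owner, _ := m.route("cloudflare_record")
	fmt.Println(owner) // sdkv2
}
```

From Terraform's perspective there is still a single provider binary; the mux layer decides internally which implementation answers each request.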
Acceptance tests all passing: https://github.com/cloudflare/terraform-provider-cloudflare/actions/runs/3963773242/jobs/6791986094
This functionality has been released in v3.33.0 of the Terraform Cloudflare Provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template. Thank you!
The `terraform-plugin-framework` is the next iteration of the provider framework from HashiCorp. It has been rebuilt from the ground up and introduces new approaches and workflows for managing resources from an internal perspective (more on why we care about this later). As the majority of this provider is built using the SDKv2, we aren't in a position to swap everything immediately, so we need a way to run both SDKs together. This is achieved by running a protocol v5 => v6 translator over the SDKv2 server and muxing it with the framework server to serve a unified provider.
HashiCorp has some documentation covering the migration and the differences between the two implementations. However, that doesn't tell the whole story: we have other dependencies that aren't covered, so it required a little more massaging to get working correctly. See individual commits like 074e217 for details on the trade-offs.
This PR looks a lot bigger than it is due to the need to move files and namespace packages to avoid conflicts. I would recommend reviewing it incrementally. At a high level, these are the steps we've taken:

- `internal` is now 4 packages:
  - `utils` covers helpers that will be shared between both provider implementations where possible.
  - `consts` is a new package to allow us to reuse common values between the two implementations and limit the chances of a typo or fat fingering of a value.
  - `framework` houses the `terraform-plugin-framework` implementation and all the resources built by that version of the tool.
  - `sdkv2provider` is the ol' faithful and the existing resources.
- The documentation generator now has conditionals to work around `terraform-plugin-framework` not yet supporting parity functionality. When providers are mux'd they must produce identical schemas or else a big warning is shown to end users.
- There is now a `service` directory for each implementation which, in future, will help us namespace resources by service and make lines of ownership much clearer, including for `terraform-plugin-framework` resources.

Why the effort? Why now?
To answer this, we need to understand a little about the SDKv2. The way SDKv2 is structured isn't really conducive to representing null or "unset" values consistently and reliably. You can use the experimental `ResourceData.GetRawConfig` to check whether the value is set, null, or unknown in the config, but writing it back as null isn't really supported.
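That set/null/unknown distinction can be sketched with a simplified tri-state value. The names below are illustrative stand-ins, not the SDK's own cty types that `GetRawConfig` actually returns:

```go
package main

import "fmt"

// rawValue is a simplified analog of a raw config attribute: it can
// be unknown (not yet computed), null (absent from config), or known
// with a concrete value.
type rawValue struct {
	unknown bool
	null    bool
	val     string
}

func describe(v rawValue) string {
	switch {
	case v.unknown:
		return "unknown"
	case v.null:
		return "null"
	default:
		return "known: " + v.val
	}
}

func main() {
	fmt.Println(describe(rawValue{null: true}))    // null
	fmt.Println(describe(rawValue{val: "true"}))   // known: true
	fmt.Println(describe(rawValue{unknown: true})) // unknown
}
```

The asymmetry the paragraph above describes is that the SDKv2 lets you *read* all three states this way, but gives you no supported path to *write* the null state back.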
`schema.ResourceData` is largely an abstraction over config, state, and plan all as one read/write map of strings to other strings. This hasn't been a true abstraction in a very long time, and it's largely kept alive as a compatibility shim. Values are always read through compatibility layers meant to imitate that abstraction (except for `GetRawConfig`, above, which is a bolted-on workaround) and are always written back through those layers as well. Unfortunately, `map[string]string` doesn't really provide a reliable way to model null or unset values that are distinct from the empty string. I know this sounds like it wouldn't apply because we're dealing with booleans, but after years of accruing behaviours, nothing is that straightforward anymore, sadly.

With this knowledge in hand, let's look at why this is a problem.
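The `map[string]string` limitation can be demonstrated with plain Go, no SDK required:

```go
package main

import "fmt"

// With a map[string]string model, an absent key and a key explicitly
// set to the zero value read back identically, so "unset" intent is
// easy to lose once values round-trip through defaults.
func main() {
	state := map[string]string{}

	// Reading a missing key yields the zero value "" ...
	fmt.Printf("%q\n", state["enabled"]) // ""

	// ... which is indistinguishable from a key explicitly set to "".
	state["enabled"] = ""
	fmt.Printf("%q\n", state["enabled"]) // ""

	// The comma-ok idiom can tell the two apart at read time, but
	// once a boolean's zero value is written back to state, the
	// original "unset" intent is gone.
	_, ok := state["enabled"]
	fmt.Println(ok) // true
}
```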
This first popped up for us when the Edge Rules Engine started onboarding new services, and those services needed to support API responses containing booleans in an unset (or missing), `true`, or `false` state, each with its own reasoning and purpose. While I don't agree entirely with the API design, it is a valid way to do things that we should be able to work with; however, as mentioned above, the SDKv2 provider couldn't. When a value isn't present in the response or read into state, it gets a Go-compatible zero value as the default. This showed up as the inability to unset values after they had been written to state as `false` (and vice versa).

The only solution we have to reliably use the three states of those boolean values is to migrate to the `terraform-plugin-framework`, which implements this correctly.
It's worth calling out that while this introduces support for muxing and using the newer framework, it does not implement the resource migration itself. This PR is intended as a backwards-compatible change to unblock that work, which we can evaluate on a case-by-case basis.
As this is all internal to the provider and should have no end-user impact, it will be released with our usual point releases.