Add GCP storage authentication #434
Conversation
Force-pushed from adb169b to 47d8654
Force-pushed from b49d836 to ab612b3
…ntity Added Support for Google Cloud Storage with Workload Identity as Source Provider. This enables the use of GCP without enabling S3 compatible access. Signed-off-by: pa250194 <pa250194@ncr.com>
Force-pushed from 1ee9206 to 6ff5970
Force-pushed from 4f2c77b to fa8c4ca
Force-pushed from dbd2930 to a6be9c8
Force-pushed from 1526d1c to 0b97151
I'm interested to know what led to the GCP client having resumable downloads -- it doesn't seem to be needed by the controller, which always fetches into a fresh temp directory.
Hi @squaremo, I was trying to make the GCP client closely resemble the Minio client that is already implemented, which has resumable downloads. But I can remove it, as I see what you mean that it is not needed by the controller.
Ah I see! That makes sense. I think that since it's not used by the controller, and not covered by tests (if I read them right), it's probably better to remove that bit for now in favour of simpler code. The CI failure looks like a transient problem, let's see what happens on the next push. Last thing: a belated first-time contributor high five ✋! Thank you for putting in time and effort on this.
Thank you! ✋🏾 I will make the changes and push them as soon as possible.
Force-pushed from 241e529 to 057c65e
Force-pushed from e7a843d to 38be5ed
Force-pushed from 052287f to 7c0d4c0
I just removed the resumable downloads and refactored the comments. Any other comments or changes will be appreciated. Thank you
To allow building a multi-platform container image using `buildx`. Various configuration flags allow for fine(r)-grain control over the build process:
- `BASE_IMG`: FQDN of the base image that should be used, without a tag.
- `BASE_TAG`: tag of the base image that should be used. Allows a checksum to be included.
- `BUILDX_PLATFORMS`: platforms to target for the final container image.
- `BUILDX_ARGS`: additional `docker buildx build` arguments, e.g. `--push` to push the result to a (local) image registry.
Signed-off-by: Hidde Beydals <hello@hidde.co>
To provide a better (contributing) experience to those with Apple machines, as determining the correct paths there is a bit harder. Signed-off-by: Hidde Beydals <hello@hidde.co>
This can be useful on machines where libgit2 is installed because other applications depend on it, but where the composition of this installation does not work properly with the controller. The reason the system version is still preferred is that it lowers the barrier for drive-by contributors, as a working set of (Git) dependencies should only really be required if you are going to perform work in that domain. Signed-off-by: Hidde Beydals <hello@hidde.co>
This moves the `libgit2` compilation to the image, to ensure it can be built on builders that aren't backed by AMD64. The image is structured in such a way that e.g. running nightly builds targeting a different Go version, or targeting a different OS vendor, would be possible in the future via build arguments. Signed-off-by: Hidde Beydals <hello@hidde.co>
This ensures the Dockerfile used for testing makes use of the same scratch image to compile `libgit2` as the actual application image. In a future iteration we should restructure our GitHub Action workflows to re-use the application image, saving us an additional Dockerfile and a duplicate build. Inspiration for this (which makes use of a local registry for the duration of the build) can be found at: https://github.com/fluxcd/golang-with-libgit2/blob/main/.github/workflows/build.yaml Signed-off-by: Hidde Beydals <hello@hidde.co>
As this isn't available on Darwin by default, unlike on most Linux distributions. Signed-off-by: Hidde Beydals <hello@hidde.co>
This commit adds a `ReconcileStrategy` field to the `HelmChart` resource, which allows defining when a new chart should be packaged and/or published if it originates from a `Bucket` or `GitRepository` resource. The two available strategies are:
- `ChartVersion`: creates a new artifact when the version of the Helm chart, as defined in the `Chart.yaml` from the Source, differs from the current version.
- `Revision`: creates a new artifact when the revision of the Source differs from the current revision.
For the `Revision` strategy, the (checksum part of the) revision of the artifact the chart originates from is added as SemVer metadata. A chart from a `GitRepository` with Artifact revision `main/f0faacd5164a875ebdbd9e3fab778f49c5aadbbc` and a chart with e.g. SemVer `0.1.0` will be published as `0.1.0+f0faacd5164a875ebdbd9e3fab778f49c5aadbbc`. A chart from a `Bucket` with Artifact revision `f0faacd5164a875ebdbd9e3fab778f49c5aadbbc` and a chart with e.g. SemVer `0.1.0` will be published as `0.1.0+f0faacd5164a875ebdbd9e3fab778f49c5aadbbc`. Signed-off-by: Dylan Arbour <arbourd@users.noreply.github.com>
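For illustration only, a `HelmChart` using the new strategy might look like the sketch below; the `reconcileStrategy` field name and the API version are assumptions based on the description above, and all resource names are placeholders.

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1  # assumed API version
kind: HelmChart
metadata:
  name: my-app
  namespace: flux-system
spec:
  # Path of the chart within the referenced Source.
  chart: ./charts/my-app
  sourceRef:
    kind: GitRepository
    name: my-app-repo
  interval: 5m
  # Revision: republish whenever the Source revision changes;
  # ChartVersion (the default): only republish when the version in Chart.yaml changes.
  reconcileStrategy: Revision
```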
The version was accidentally set to an invalid version, causing the API documentation generation to fail. Signed-off-by: Hidde Beydals <hello@hidde.co>
This includes a tiny fix for Darwin to ensure the generated `.pc` file includes the right paths. Signed-off-by: Hidde Beydals <hello@hidde.co>
Added Support for Google Cloud Storage with Workload Identity as Source Provider. This enables the use of GCP without enabling S3 compatible access. Signed-off-by: pa250194 <pa250194@ncr.com>
Added log for GCP provider auth error. Signed-off-by: pa250194 <pa250194@ncr.com>
…rce-controller into gcp-bucket-provider
Okay got it. I dropped the commit.
Thank you for taking care of the last nitpicks @pa250194 🙇 💯
You're welcome!
If applied, this PR will add support for Google Cloud Platform as a storage provider, without the need to enable S3 interoperability and use HMAC keys. As stated at https://cloud.google.com/storage/docs/authentication/hmackeys, there is a restriction of a maximum of 5 HMAC keys per service account in GCP. This PR enables the use of Workload Identity, which the GCP client handles automatically.
The GCP provider handles authentication in two ways. The first is that the GCP client library automatically checks for the presence of the GOOGLE_APPLICATION_CREDENTIALS environment variable. If this is not found, the GCP client library searches for the Google Application Credentials file in the config directory.
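As a rough sketch of this first way (the API version, field values, and bucket name below are assumptions; `provider: gcp` is the value that selects the GCP client), a `Bucket` relying purely on ambient credentials such as Workload Identity would omit any secret reference:

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1  # assumed API version
kind: Bucket
metadata:
  name: gcp-bucket
  namespace: flux-system
spec:
  provider: gcp                     # use the GCP storage client
  bucketName: my-artifacts          # placeholder bucket name
  endpoint: storage.googleapis.com
  interval: 5m
  # No secretRef: the GCP client library falls back to Application Default
  # Credentials (GOOGLE_APPLICATION_CREDENTIALS or Workload Identity).
```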
The second way to authenticate is by using a GCP service account key saved as a Kubernetes Secret. An example is as follows.
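A minimal sketch of that setup (the secret name, namespace, and the `serviceaccount` data key are illustrative assumptions):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: gcp-service-account         # placeholder name
  namespace: flux-system
type: Opaque
data:
  # Base64-encoded contents of the GCP service account JSON key file.
  serviceaccount: <BASE64_SERVICE_ACCOUNT_JSON>
---
apiVersion: source.toolkit.fluxcd.io/v1beta1  # assumed API version
kind: Bucket
metadata:
  name: gcp-bucket
  namespace: flux-system
spec:
  provider: gcp
  bucketName: my-artifacts
  endpoint: storage.googleapis.com
  interval: 5m
  secretRef:
    name: gcp-service-account       # points at the Secret above
```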
The service account secret is a base64-encoded string of the GCP service account JSON key file, which has the form:
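For reference, a standard GCP service account key file has roughly this shape (all values below are placeholders):

```json
{
  "type": "service_account",
  "project_id": "my-project",
  "private_key_id": "<KEY_ID>",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "source-controller@my-project.iam.gserviceaccount.com",
  "client_id": "<CLIENT_ID>",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "<CERT_URL>"
}
```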