Application using Kustomize with Helm cannot access private remote helm repo #10745
Comments
After digging into it more, I found that this seems to be a limitation of Kustomize itself, and there is no short-term support being added for private Helm repos (kubernetes-sigs/kustomize#4401 (comment)). It sounds like, in order to support this, ArgoCD would need a different application source type with fields for a Kustomize source and a Helm source at the same time, so it could pull the Helm source first and then run Kustomize.
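For context, the Kustomize feature in question is the helmCharts generator in kustomization.yaml: it can point at a remote chart repository, but it offers no field for credentials, and kustomize build --enable-helm does not receive ArgoCD's repository credentials. A minimal sketch of the construct (chart name, version, and repo URL are placeholders, not taken from this issue):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmCharts:
  - name: some-chart                          # placeholder chart name
    repo: https://charts.example.com/private  # placeholder private Helm repo
    version: 1.2.3
    releaseName: some-chart
    namespace: default
    valuesInline:
      replicaCount: 2
    # note: there is no username/password field available here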
Follow-up: I got a workaround working by doing the following:
@sandoichi is this workaround sufficient for you, or are you still requesting this as a feature? Otherwise we can probably close this issue.
@reginapizza I think this should be relabeled as a feature request. The workaround is sufficient at the moment, but it seems like a common enough thing to support passing multiple source repos into a single application. Or would it be better if we closed this and I opened a new feature-request issue?
That's ok, I can label this as an enhancement and keep this issue open. Anyone looking to do the same can use the workaround in the meantime.
+1 to this. It would even be nice if the …
+1 to this here too; having private Helm charts would be a game changer.
I wanted to provide a detailed workaround for future wanderers like me until this is officially supported in a different way. Disclaimer: once this issue is resolved there will probably be a more straightforward way to do this. The flow below explains how I got ArgoCD to work with private Helm charts from JFrog Artifactory when rendering a Kustomization. It uses @sandoichi's approach as a reference, as well as this blog on how to add a custom config plugin to ArgoCD that works with Kustomize and Helm.
apiVersion: v1
kind: Secret
metadata:
  name: ks-build-with-jfrog-helm-creds
  namespace: argocd
stringData:
  username: super-secret # The artifactory username
  password: top-secret # The artifactory password
apiVersion: v1
kind: ConfigMap
metadata:
  name: ks-build-with-jfrog-helm
data:
  plugin.yaml: |
    apiVersion: argoproj.io/v1alpha1
    kind: ConfigManagementPlugin
    metadata:
      name: ks-build-with-jfrog-helm
    spec:
      generate:
        command: [ "sh", "-c" ]
        args: [ "kustomize build --enable-helm --helm-command '/home/argocd/cmp-server/config/command.sh'" ]
  command.sh: |
    #!/bin/bash
    set -e
    args=("$@")
    # Only inject Artifactory credentials when kustomize asks helm to pull the chart
    if [ "${args[0]}" == "pull" ]; then
      extras="--username $ARTIFACTORY_USERNAME --password $ARTIFACTORY_PASSWORD"
    else
      extras=""
    fi
    helm $extras $@
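As a side note (not part of the original workaround): sidecar plugins can also declare a discovery rule so ArgoCD matches applications to the plugin automatically instead of requiring plugin.name on every Application. A sketch of what the plugin spec above would look like with a discover stanza added:

apiVersion: argoproj.io/v1alpha1
kind: ConfigManagementPlugin
metadata:
  name: ks-build-with-jfrog-helm
spec:
  discover:
    fileName: "./kustomization.yaml"   # auto-match apps whose directory contains a kustomization.yaml
  generate:
    command: [ "sh", "-c" ]
    args: [ "kustomize build --enable-helm --helm-command '/home/argocd/cmp-server/config/command.sh'" ]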
containers:
  - name: ks-build-with-jfrog-helm
    command: [/var/run/argocd/argocd-cmp-server] # Entrypoint should be Argo CD lightweight CMP server i.e. argocd-cmp-server
    image: alpine/k8s:1.25.16
    securityContext:
      runAsNonRoot: true
      runAsUser: 999
    env:
      - name: ARTIFACTORY_USERNAME
        valueFrom:
          secretKeyRef:
            name: ks-build-with-jfrog-helm-creds
            key: username
      - name: ARTIFACTORY_PASSWORD
        valueFrom:
          secretKeyRef:
            name: ks-build-with-jfrog-helm-creds
            key: password
      - name: HELM_CACHE_HOME
        value: /cmp-helm-working-dir
      - name: HELM_CONFIG_HOME
        value: /cmp-helm-working-dir
      - name: HELM_DATA_HOME
        value: /cmp-helm-working-dir
    volumeMounts:
      - mountPath: /var/run/argocd
        name: var-files
      - mountPath: /home/argocd/cmp-server/plugins
        name: plugins
      - mountPath: /home/argocd/cmp-server/config/plugin.yaml
        subPath: plugin.yaml
        name: ks-build-with-jfrog-helm
      - mountPath: /home/argocd/cmp-server/config/command.sh
        subPath: command.sh
        name: ks-build-with-jfrog-helm
      - mountPath: /tmp
        name: cmp-tmp
      - mountPath: /cmp-helm-working-dir
        name: cmp-helm-working-dir
volumes:
  - configMap:
      name: ks-build-with-jfrog-helm
      defaultMode: 0777
    name: ks-build-with-jfrog-helm
  - emptyDir: {}
    name: cmp-tmp
  - emptyDir: {}
    name: cmp-helm-working-dir
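The containers/volumes fragment above is meant to be merged into the argocd-repo-server Deployment. If the ArgoCD installation itself is managed with Kustomize, one way to apply it is a patch like the following (a sketch; file names are placeholders and the install manifest should be pinned to the version you actually run):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: argocd
resources:
  - https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml  # upstream install manifests
  - ks-build-with-jfrog-helm-creds-secret.yaml   # the Secret shown above
  - ks-build-with-jfrog-helm-configmap.yaml      # the ConfigMap shown above
patches:
  # repo-server-cmp-sidecar-patch.yaml wraps the containers/volumes fragment above
  # under spec.template.spec of a Deployment named argocd-repo-server
  - path: repo-server-cmp-sidecar-patch.yaml
    target:
      group: apps
      version: v1
      kind: Deployment
      name: argocd-repo-server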
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  finalizers:
    - resources-finalizer.argocd.argoproj.io
  name: external-dns
  namespace: argocd
spec:
  destination:
    name: in-cluster
    namespace: kube-system
  project: default
  source:
    path: path/to/folder/with/kustomization
    plugin:
      name: ks-build-with-jfrog-helm # this is how we tell the app to use the new plugin
    repoURL: https://github.com/SomeRepo
    targetRevision: main

And there you go: any valid kustomization at that path is now rendered by the plugin. With the following kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmCharts:
  - name: external-dns
    namespace: kube-system
    releaseName: external-dns
    repo: https://chewyinc.jfrog.io/artifactory/api/helm/helm-virtual
    valuesInline:
      foo: bar
    version: 6.20.1

I could deploy the chart as well as other resources.
Thank you very much @aguckenber-chwy! I've deployed your plugin using argocd-helm. I made a few improvements:
Here is my ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-kustomize-private-repo-script
  labels:
    app.kubernetes.io/part-of: argocd
data:
  command.sh: |
    #!/bin/bash
    set -e
    args=("$@")
    extras="--namespace $PARAM_NAMESPACE"
    # Only add registry credentials when helm is pulling the chart
    if [ "${args[0]}" == "pull" ]; then
      extras="$extras --username $GITLAB_HELM_CHARTS_USERNAME --password $GITLAB_HELM_CHARTS_PASSWORD"
    fi
    helm $extras $@

And here are my Helm values:

configs:
  cmp:
    create: true
    plugins:
      # This plugin is mostly based on https://github.com/argoproj/argo-cd/issues/10745#issuecomment-1949298357
      kustomize-private-repo:
        parameters:
          static:
            - name: NAMESPACE
              title: Namespace of the application
              required: true
              itemType: string
              collectionType: string
        generate:
          command: ["bash", "-c"]
          args: ["kustomize edit set namespace -- \"$PARAM_NAMESPACE\" && kustomize build --enable-helm --helm-command '/home/argocd/cmp-server/config/command.sh'"]
repoServer:
  extraContainers:
    - name: cmp-kustomize-private-repo
      command: ["/var/run/argocd/argocd-cmp-server"]
      image: |-
        {{ default .Values.global.image.repository .Values.repoServer.image.repository }}:{{ default (include "argo-cd.defaultTag" .) .Values.repoServer.image.tag }}
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - ALL
        runAsNonRoot: true
        runAsUser: 999
      env:
        - name: GITLAB_HELM_CHARTS_USERNAME
          valueFrom:
            secretKeyRef:
              name: argocd-finatix-gitlab-helm-creds
              key: username
        - name: GITLAB_HELM_CHARTS_PASSWORD
          valueFrom:
            secretKeyRef:
              name: argocd-finatix-gitlab-helm-creds
              key: password
      volumeMounts:
        - name: var-files
          mountPath: /var/run/argocd
        - name: plugins
          mountPath: /home/argocd/cmp-server/plugins
        - name: argocd-cmp-cm
          mountPath: /home/argocd/cmp-server/config/plugin.yaml
          subPath: kustomize-private-repo.yaml
        - name: argocd-kustomize-private-repo-script
          mountPath: /home/argocd/cmp-server/config/command.sh
          subPath: command.sh
  volumes:
    - name: argocd-kustomize-private-repo-script
      configMap:
        name: argocd-kustomize-private-repo-script
        defaultMode: 0777
    - name: argocd-cmp-cm
      configMap:
        name: argocd-cmp-cm

And in your ApplicationSet (or Apps) you can just define the namespace:

plugin:
  name: kustomize-private-repo
  parameters:
    - name: NAMESPACE
      string: project-demolitions

One note: I'm not sure if there is already a predefined variable for the namespace on the ArgoCD side, but I'm using a variable within my Helm chart that just templates it into that location in the ApplicationSet, so there's no duplicated config/code for me.
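Regarding that note: one way to avoid duplicating the namespace is to template the plugin parameter inside the ApplicationSet itself. A rough sketch, not the commenter's actual setup (repo URL, generator, and naming are placeholders; here the namespace is derived from the app folder name):

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: kustomize-private-repo-apps
  namespace: argocd
spec:
  generators:
    - git:
        repoURL: https://gitlab.example.com/org/deployments.git  # placeholder
        revision: main
        directories:
          - path: apps/*
  template:
    metadata:
      name: '{{path.basename}}'
    spec:
      project: default
      source:
        repoURL: https://gitlab.example.com/org/deployments.git  # placeholder
        targetRevision: main
        path: '{{path}}'
        plugin:
          name: kustomize-private-repo
          parameters:
            - name: NAMESPACE
              string: '{{path.basename}}'   # namespace follows the folder name
      destination:
        name: in-cluster
        namespace: '{{path.basename}}'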
I had the same issue, but #16623 (comment) fixed it as well.
Checklist:
I've pasted the output of argocd version.

Describe the bug
When using a Kustomize app that references a remote Helm chart in a private repo, credentials are not passed into it.
To Reproduce
✅ Configure 2 private repos in ArgoCD (a declarative sketch of such a repository Secret follows below). One repo will be the application source and hold the kustomization.yaml, and the other will be the private Helm repo that holds the Helm chart to use with Kustomize. For the purposes of this example, the repo with the kustomization.yaml will be called privateKustomizeRepo and the Helm chart repo will be called privateHelmRepo.
Verify in the ArgoCD UI that both repos have been connected to successfully.
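For reference, the repository configuration from the step above can also be expressed declaratively; a sketch of such a repository Secret for the private Helm repo, with placeholder values:

apiVersion: v1
kind: Secret
metadata:
  name: private-helm-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository   # marks this Secret as an ArgoCD repository
stringData:
  name: privateHelmRepo
  type: helm
  url: https://example.com/artifactory/api/helm/helm-virtual   # placeholder URL
  username: my-user    # placeholder credentials
  password: my-token

Note that this only registers the repo with ArgoCD itself; as described below, those credentials are not forwarded to Kustomize's Helm inflation.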
I tried 2 different methods to get past this, and both failed in exactly the same way:
✔️ Method 1: Letting ArgoCD use kustomize normally
Configure the argocd-cm to enable Helm inflation in Kustomize builds (see the argocd-cm sketch after these steps).
Create an appset that references a private github repo, with a path to the kustomization.yaml.
Create the kustomization.yaml in this private repo, at the specified path.
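For Method 1, enabling Helm chart inflation in ArgoCD's bundled Kustomize is typically done through kustomize.buildOptions in argocd-cm; a minimal sketch of that setting (not the reporter's exact ConfigMap):

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
data:
  # pass --enable-helm to every `kustomize build` the repo server runs
  kustomize.buildOptions: --enable-helm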
✔️ Method 2: Use a custom plugin
Create a plugin (a sketch of one possible plugin definition follows these steps):
Create an appset that references a private github repo, with a path to the kustomization.yaml, that uses our plugin and passes in all of the necessary env vars so that we can template our remote Helm chart from our private repo.
Create the kustomization.yaml in this private repo, at the specified path.
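For Method 2, the plugin definition used by the reporter isn't shown above. At the time of this report, one common way to register such a plugin was the (since-deprecated) configManagementPlugins entry in argocd-cm; a rough sketch, not the reporter's actual plugin:

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  # legacy-style config management plugin (later replaced by sidecar CMP plugins)
  configManagementPlugins: |
    - name: kustomize-with-helm
      generate:
        command: ["sh", "-c"]
        args: ["kustomize build --enable-helm ."]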
Expected behavior
I expected that ArgoCD would pass in the private helm repo credentials when using kustomize to pull the private helm chart. I can see in the UI that ArgoCD has made a successful connection to the private helm repo in the Repositories settings.
Version
Logs
If I run that same command locally, but pass in --username and --password with the github token that I used to configure the private helm chart repo in ArgoCD already, it works. So it would appear that, despite the repo already being properly configured in ArgoCD, these credentials are not being propagated into the kustomize execution.
I did see the section in the ArgoCD docs about remote kustomize bases, which I assumed might also apply to a remote helm repo, but it does not. My remote helm repo uses the same username/password token credential as the application repo that holds the kustomization file, but it still doesn't work.
If perhaps there is some way that I can access the repo username/password from within my custom plugin, I could then pass --username and --password as command-line args and maybe get around this, but I haven't found a way to do that yet.