ArgoCD start to DDoS related Helm Repository #12314
Comments
The feature request makes sense. We need the equivalent of what we are currently doing with […]. For the implementation, I feel we could cache the index.yaml in redis, and make subsequent requests for index.yaml with the […]. This all presumes jFrog Artifactory supports the […]. @fotto1 are you open to submitting a fix for this?
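The code links embedded in the comment above were lost in extraction, so purely as an illustration (not necessarily what was proposed here), a minimal sketch of a shared index.yaml cache keyed by repo URL with conditional revalidation could look like the following. All names (IndexCache, CachedIndex, GetIndex) are hypothetical, and the ETag/If-None-Match revalidation is an assumption about what the registry (e.g. Artifactory) would have to support:

```go
// Hypothetical sketch, not ArgoCD code: fetch index.yaml once, store it in a
// shared cache keyed by repository URL, and revalidate with a conditional
// request instead of re-downloading the full file every time.
package helmindex

import (
	"fmt"
	"io"
	"net/http"
)

// CachedIndex holds one downloaded index.yaml plus its ETag for revalidation.
type CachedIndex struct {
	ETag string
	Body []byte
}

// IndexCache could be backed by Redis; keyed by repository URL so all
// Applications that reference the same repo share one entry.
type IndexCache interface {
	Get(repoURL string) (CachedIndex, bool)
	Set(repoURL string, idx CachedIndex)
}

func GetIndex(client *http.Client, cache IndexCache, repoURL string) ([]byte, error) {
	req, err := http.NewRequest(http.MethodGet, repoURL+"/index.yaml", nil)
	if err != nil {
		return nil, err
	}
	prev, ok := cache.Get(repoURL)
	if ok && prev.ETag != "" {
		// Ask the server to answer 304 Not Modified if nothing changed.
		req.Header.Set("If-None-Match", prev.ETag)
	}
	resp, err := client.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	switch resp.StatusCode {
	case http.StatusNotModified:
		// Reuse the cached body instead of transferring ~20 MB again.
		return prev.Body, nil
	case http.StatusOK:
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			return nil, err
		}
		cache.Set(repoURL, CachedIndex{ETag: resp.Header.Get("ETag"), Body: body})
		return body, nil
	default:
		return nil, fmt.Errorf("failed to get index: %s", resp.Status)
	}
}
```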
@jessesuen sounds like a good proposal. Maybe you have a link somewhere to the merge request for the similar change on git repositories? Can you provide some guidance on which classes must be touched to implement the change you proposed? After that I will check whether I can do the change on my own.
We often encounter […]
Based on multiple discussions, also with my colleague @andrei-gavrila, we identified the underlying issue. It seems argocd uses a repo URL and a cache for every helm chart, which means every configured application has its own cache and stores its own index.yaml file.
Line 105 in 565aa8e
```go
type nativeHelmChart struct {
	chartCachePaths argoio.TempPaths
	repoURL         string
	creds           Creds
	repoLock        sync.KeyLock
	enableOci       bool
	indexCache      indexCache
	proxy           string
}
```
This is how the index is retrieved:
Line 239 in 565aa8e
```go
if !noCache && c.indexCache != nil {
	if err := c.indexCache.GetHelmIndex(c.repoURL, &data); err != nil && err != cache.ErrCacheMiss {
		log.Warnf("Failed to load index cache for repo: %s: %v", c.repoURL, err)
	}
}

if len(data) == 0 {
	start := time.Now()
	var err error
	data, err = c.loadRepoIndex()
	if err != nil {
		return nil, err
	}
	log.WithFields(log.Fields{"seconds": time.Since(start).Seconds()}).Info("took to get index")

	if c.indexCache != nil {
		if err := c.indexCache.SetHelmIndex(c.repoURL, data); err != nil {
			log.Warnf("Failed to store index cache for repo: %s: %v", c.repoURL, err)
		}
	}
}
```
Unfortunately, this is how it's really retrieved (no helm involvement):
Line 300 in 565aa8e
```go
func (c *nativeHelmChart) loadRepoIndex() ([]byte, error) {
	indexURL, err := getIndexURL(c.repoURL)
	if err != nil {
		return nil, err
	}

	req, err := http.NewRequest(http.MethodGet, indexURL, nil)
	if err != nil {
		return nil, err
	}
	if c.creds.Username != "" || c.creds.Password != "" {
		// only basic supported
		req.SetBasicAuth(c.creds.Username, c.creds.Password)
	}

	tlsConf, err := newTLSConfig(c.creds)
	if err != nil {
		return nil, err
	}

	tr := &http.Transport{
		Proxy:             proxy.GetCallback(c.proxy),
		TLSClientConfig:   tlsConf,
		DisableKeepAlives: true,
	}
	client := http.Client{Transport: tr}
	resp, err := client.Do(req)
	if err != nil {
		return nil, err
	}
	defer func() { _ = resp.Body.Close() }()

	if resp.StatusCode != http.StatusOK {
		return nil, errors.New("failed to get index: " + resp.Status)
	}
	return io.ReadAll(resp.Body)
}
```
No helm involvement, see the http.NewRequest(http.MethodGet, indexURL, nil) call, and yes, that means every helm chart has its own cache. The original feature commit by @alexmt that enabled caching in argocd included an environment variable to configure the cache lifetime, and the cache was set on the client (e.g. 1000 charts, one repo, one client -> one cache). Not sure why this has been changed, but the change explains our problem.
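As a rough sketch of the "one repo, one client, one cache" behavior described above (an assumption for illustration only, not the actual ArgoCD implementation; repoClient, clientFor, and GetIndex are hypothetical names):

```go
// Hypothetical sketch: clients are created once per repository URL and reused,
// so 1000 charts from the same repo share a single cached index.yaml instead
// of each Application fetching its own copy.
package helmclient

import "sync"

type repoClient struct {
	repoURL     string
	cachedIndex []byte // filled on first GetIndex, reused afterwards
	mu          sync.Mutex
}

var (
	clientsMu sync.Mutex
	clients   = map[string]*repoClient{} // keyed by repo URL, not by Application
)

// clientFor returns the shared client for a repository, creating it on first use.
func clientFor(repoURL string) *repoClient {
	clientsMu.Lock()
	defer clientsMu.Unlock()
	if c, ok := clients[repoURL]; ok {
		return c
	}
	c := &repoClient{repoURL: repoURL}
	clients[repoURL] = c
	return c
}

// GetIndex returns the cached index for this repo, downloading it only once.
// fetch stands in for the real HTTP download of index.yaml.
func (c *repoClient) GetIndex(fetch func(url string) ([]byte, error)) ([]byte, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.cachedIndex != nil {
		return c.cachedIndex, nil
	}
	data, err := fetch(c.repoURL + "/index.yaml")
	if err != nil {
		return nil, err
	}
	c.cachedIndex = data
	return data, nil
}
```

With a structure like this, 2048 Applications pointing at the same repository would reuse a single cached index instead of downloading 2048 copies.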
* fix: cache helm-index in Redis cluster
Signed-off-by: JenTing Hsiao <hsiaoairplane@gmail.com>
Signed-off-by: Dan Garfield <dan@codefresh.io>
Co-authored-by: Dan Garfield <dan@codefresh.io>
Checklist:
argocd version

Describe the bug
Our ArgoCD Applications have started to DDoS our Helm Repositories.
For our deployment we use gitops with an app of apps approach to create nearly 64 different ArgoCD Applications. For our gitops we have multiple branches and do multiple deployments on the same argocd environment in one AWS EKS cluster.
We have 32 deployments, which means 2048 ArgoCD Applications based on the same helm repository, plus 32 Argo CD Applications running as app of apps over gitops.
To Reproduce
[…] index.yaml file.

Expected behavior
Since ArgoCD always calls the same index.yaml file in the Helm Chart Repository, I would expect it to be cached in ArgoCD and used as a shared resource between ArgoCD Applications, not that every single ArgoCD Application fetches the index.yaml file itself.
Our index.yaml is 20.26 MB, and 2048 ArgoCD Applications call it once per hour for 12 hours. That makes 20.26 MB x 2048 x 12 = 497,909.76 MB ≈ 486.24 GB per day. That is also what we see in the access logs of our Helm Repository.
Our expectation is 20.26 MB x 24 x 60 (the file checked once per minute), which should result in 28.49 GB per day, provided the file is cached centrally.
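For reference, the reported traffic figures can be reproduced with a short calculation using only the numbers stated above:

```go
package main

import "fmt"

func main() {
	const indexMB = 20.26 // size of index.yaml as reported above

	// Observed: 2048 Applications each fetch the index once per hour for 12 hours.
	observed := indexMB * 2048 * 12
	fmt.Printf("observed: %.2f GB/day\n", observed/1024) // ~486.24 GB

	// Expected with a central cache: one fetch per minute, 24 hours a day.
	expected := indexMB * 24 * 60
	fmt.Printf("expected: %.2f GB/day\n", expected/1024) // ~28.49 GB
}
```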
Version