
Remote builds randomly timeout: hit 27s timeout running '/usr/bin/git fetch --depth=1 origin master' #3742

Closed
VackarAfzal opened this issue Mar 22, 2021 · 8 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@VackarAfzal

When running a build on an overlay containing multiple remote components, I randomly get timeouts

Expected output

Should build the overlay as expected

Actual output

It sometimes builds, but sometimes fails with this error message:

Error: accumulating resources: accumulation err='accumulating resources from 'git@***************?ref=master': evalsymlink failure on '*********************** 
no such file or directory': hit 27s timeout running '/usr/bin/git fetch --depth=1 origin master'

Kustomize version

{Version:kustomize/v4.0.5 GitCommit:9e8e7a7fe99ec9fbf801463e8607928322fc5245 BuildDate:2021-03-08T20:53:03Z GoOs:darwin GoArch:amd64}

Platform

Linux/macOS

Additional context

I upgraded from 2.x to 4.x, and then this behaviour appeared.

@VackarAfzal VackarAfzal added the kind/bug Categorizes issue or PR as related to a bug. label Mar 22, 2021
@mxrss2

mxrss2 commented May 20, 2021

https://github.com/kubernetes-sigs/kustomize/blob/017a0944383d28378ae7a2f063a630b6c124b16c/api/internal/git/gitrunner.go

@VackarAfzal this is the code that does this. It may be an opportunity for a PR to make the timeout overridable, since it's suboptimal not to be able to override it.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 18, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Sep 17, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@daxmc99

daxmc99 commented Mar 28, 2022

Adding ?timeout=90s to the query parameters can now fix this, per https://github.com/kubernetes-sigs/kustomize/blob/master/examples/remoteBuild.md

@Zia-Eurus

@daxmc99 how do you specify this timeout variable? Can you please give a sample example?

@jasonicarter

@daxmc99 how do you specify this timeout variable? Can you please give a sample example?

This is an Airbyte example, where timeout=90s is appended at the end (works for me):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - https://github.com/airbytehq/airbyte.git/kube/overlays/stable?ref=master&timeout=90s

daringcalf added a commit to daringcalf/argo-cd-kustomiza that referenced this issue Mar 29, 2023
nirs added a commit to nirs/ramen that referenced this issue Oct 4, 2023
Cloning https://github.com/stolostron/multicloud-operators-foundation.git
during `kubectl apply --kustomize` can fail with a timeout when using
slow network:

    $ kubectl apply -k test/addons/ocm-controller --context hub
    error: accumulating resources: accumulation err='accumulating resources from
    'https://github.com/stolostron/multicloud-operators-foundation.git/deploy/foundation/hub/overlays/ocm-controller?ref=main':
    URL is a git repository': hit 27s timeout running '/usr/bin/git fetch --depth=1
    https://github.com/stolostron/multicloud-operators-foundation.git main'

Turns out that the way to increase the timeout is to add a `timeout`
query parameter[1]. Use 300 seconds to avoid random failures when using
poor network.

[1] kubernetes-sigs/kustomize#3742

Signed-off-by: Nir Soffer <nsoffer@redhat.com>
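Applied to the overlay from that commit message, the fix is just the extra query parameter on the resource URL. A sketch of the resulting kustomization, not the exact patch (the 300-second value is the one chosen in the commit):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  # timeout=300s raises the default 27s git-fetch limit for slow networks
  - https://github.com/stolostron/multicloud-operators-foundation.git/deploy/foundation/hub/overlays/ocm-controller?ref=main&timeout=300s
```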
7 participants