
Various prebuild issues #6391

Closed · Tracked by #10361
shaal opened this issue Oct 25, 2021 · 11 comments
Labels: component: dashboard · feature: prebuilds · meta: never-stale · team: webapp · type: bug

Comments

shaal (Contributor) commented Oct 25, 2021

Bug description

I've been working a lot with prebuilds, and some issues are still not resolved:

  1. When creating a PR and running a prebuild for the first time, you can go to the project's Gitpod URL (e.g. https://gitpod.io/#https://github.com/drud/ddev) and see all the commands the prebuild is running.
    But after making new commits, you only see "Connecting to workspace logs...".
    A prebuild IS running in the background, but there is no visual indication of what is happening, and when the prebuild finishes, the workspace is reloaded in that browser tab.
    Related: Project Configuration - Run Prebuild always shows "Connecting to workspace Logs..." #6305
    [screenshot]

  2. I set up a webhook that sends me an email whenever a prebuild runs (https://github.com/shaal/DrupalPod/blob/ready-made-envs/.gitpod.yml#L8). Every time a prebuild runs I get 2 emails, so for some reason the prebuild is triggered and runs twice, but I don't know why (see the sketch after this list).

  3. (Noticeable on projects with long-running init tasks, and perhaps related to the previous issue of the prebuild running twice.)
    After manually triggering a prebuild (or pushing a commit), if you go to the project's Gitpod URL you see the prebuild page (as expected); after the prebuild finishes running, the workspace opens, but instead of the new workspace using the prebuild that just finished, it runs all of the prebuild's init tasks all over again.
    If you close this workspace and open a new one, the prebuild is truly loaded and only the command tasks run in the workspace.
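
For context, the kind of prebuild notification referenced in issue 2 can be sketched as a .gitpod.yml init task that calls a webhook once per prebuild. This is only an illustration with a placeholder endpoint, not the exact line from the DrupalPod config:

    tasks:
      - name: Setup
        init: |
          # Hypothetical notification endpoint; if the prebuild is triggered twice
          # per commit, this endpoint is hit twice as well.
          curl -fsS -X POST https://example.com/prebuild-hook -d "event=prebuild" || true
          # ... long-running setup commands follow here ...
        command: echo "workspace started"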

Steps to reproduce

  1. Fork a repo with long init tasks (e.g. https://github.com/shaal/DrupalPod)
  2. Create a commit and push it.
  3. Watch the prebuild process (issues 2 and 3) + workspace that opens right after.
  4. Create another commit and push it.
  5. Watch the prebuild process (issues 1 and 2 and 3) + workspace that opens right after.

Workspace affected

No response

Expected behavior

  1. A prebuild should only run once per commit (or manual trigger).
  2. A workspace should never re-run prebuild (init) tasks.
  3. The prebuild page should always display which commands are running while the prebuild runs.

Example repository

No response

Anything else?

No response

AlexTugarev (Member) commented:
@shaal, thanks for reporting!

We're looking into issues with flaky logs. /cc. @geropl

Let me try to reproduce the other two issues...

AlexTugarev added the feature: prebuilds, team: webapp, and type: bug labels on Oct 26, 2021
JanKoehnlein (Contributor) commented:
/schedule

roboquat (Contributor) commented:
@JanKoehnlein: Issue scheduled in the meta team (WIP: 0)

In response to this:

/schedule

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

AlexTugarev (Member) commented:
Quick update: I'm still trying to reproduce this with a fork of the DrupalPod repo. I'm currently trying to create a prebuild for ready-made-envs, which has failed for unknown reasons.

The error reads "headless task failed: context canceled", which would be a timeout?! The last log lines are:

Package operations: 28 installs, 0 updates, 0 removals
  - Installing dflydev/dot-access-data (v1.1.0): Extracting archive
  - Installing consolidation/output-formatters (4.1.2): Extracting archive
  - Installing consolidation/annotated-command (4.4.0): Extracting archive
  - Installing consolidation/log (2.0.2): Extracting archive
  - Installing consolidation/self-update (2.0.0): Extracting archive
  - Installing doctrine/inflector (1.4.4): Extracting archive
  - Installing doctrine/event-manager (1.1.1): Extracting archive
  - Installing doctrine/collections (1.6.8): Extracting archive
  - Installing doctrine/cache (1.12.1): Extracting archive
  - Installing doctrine/persistence (1.3.8): Extracting archive
  - Installing drupal/admin_toolbar (3.0.3): Extracting archive
  - Installing doctrine/common (2.13.3): Extracting archive
  - Installing drupal/devel (4.1.1): Extracting archive
  - Installing webmozart/path-util (2.3.0): Extracting archive
  - Installing webflo/drupal-finder (1.2.2): Extracting archive
  - Installing nikic/php-parser (v4.13.0): Extracting archive
  - Installing psy/psysh (v0.10.9): Extracting archive
  - Installing league/container (3.4.1): Extracting archive
  - Installing grasmash/yaml-expander (1.4.0): Extracting archive
  - Installing enlightn/security-checker (v1.9.0): Extracting archive
  - Installing grasmash/expander (1.0.0): Extracting archive
  - Installing consolidation/config (1.2.1): Extracting archive
  - Installing consolidation/site-alias (3.1.1): Extracting archive
  - Installing consolidation/site-process (4.1.0): Extracting archive
  - Installing consolidation/robo (3.0.6): Extracting archive
  - Installing consolidation/filter-via-dot-access-data (1.0.0): Extracting archive
  - Installing chi-teck/drupal-code-generator (1.33.1): Extracting archive
  - Installing drush/drush (10.6.1): Extracting archive

Given that, if that's the situation here, it would be logical to see the init tasks being executed on startup, simply because there is no successful prebuild that was tarred and uploaded.

On the "Connecting to workspace logs..." issue: while we still have the problem with short prebuilds, and @geropl already has an idea what's wrong there, I couldn't see that issue with more complex projects like this one. But there is one more thing: prebuilds are scheduled differently from regular workspaces, and startup times may vary. Unfortunately, the waiting state doesn't differentiate between the two.
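
For reference, a minimal .gitpod.yml sketch of the init-versus-command distinction above; the commands are only illustrative, and the lifecycle noted in the comments is general Gitpod task behavior rather than anything specific to this repository:

    tasks:
      - name: Build
        # init runs during the prebuild. If no successful prebuild snapshot exists
        # (e.g. because the headless task failed), it runs again on workspace start.
        init: composer install
        # command runs on every workspace start, whether or not a prebuild was used.
        command: echo "start the dev server here"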

shaal (Contributor, Author) commented Oct 27, 2021

@AlexTugarev if you have a few minutes, we can coordinate a quick call on the Gitpod Discord; I'll be happy to walk you through replicating each of the issues I wrote about.

jldec moved this to In Groundwork in 🍎 WebApp Team on Nov 4, 2021
shaal (Contributor, Author) commented Nov 16, 2021

I noticed that you can see in the UI that prebuilds are running twice (two prebuilds running for the same commit). I pushed a commit to a branch, which triggered the two simultaneous prebuilds:
[screenshot: two prebuilds listed for the same commit]

JanKoehnlein (Contributor) commented:
Sorry, as we're a bit swamped at the moment, I unfortunately have to move this to our backlog.

The chances of getting this fixed are much better if you file separate GitHub issues for the individual problems.

jankeromnes (Contributor) commented:
Many thanks for the in-depth debugging of how prebuilds work! 🙏

Based on #6391 (comment), I'm temporarily removing this from Groundwork, although I agree we should fix these problems if they still occur.

Additionally, I agree that it would be better to have separate issues for separate problems, so that we know which problems are already solved and which ones still need work.

jankeromnes moved this from In Groundwork to Todo in 🍎 WebApp Team on Nov 30, 2021
jldec removed this from 🍎 WebApp Team on Dec 30, 2021
stale bot commented Mar 2, 2022

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale bot added the meta: stale label on Mar 2, 2022
JanKoehnlein added the meta: never-stale label and removed the meta: stale label on Mar 2, 2022
geropl mentioned this issue on May 31, 2022
AlexTugarev (Member) commented:
Hey @shaal!
I retested most, hopefully all, of the issues reported here.

The log-related issues were already tackled in separate PRs, and while the situation is still not ideal, reported issue 1 has been resolved.

I retested push events on a fork of the posted repo, and added a similar notification to the init task, like so:

[Screenshot: Screen Shot 2022-06-10 at 09 52 48]
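
A rough sketch of what such an init-task notification can look like; the endpoint is a placeholder, and the actual command used in the test is the one shown in the screenshot above:

    tasks:
      - name: Init
        init: |
          # Placeholder webhook; a single prebuild per push should result in a single call.
          curl -fsS "https://example.com/notify?event=prebuild" || true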

This turns out to work as expected: the prebuild ran and the endpoint was called a single time:

[Screenshot: Screen Shot 2022-06-10 at 09 52 24]

With that, I'm going to close this issue. Let's create smaller issues and solve them separately if something else pops up.

shaal (Contributor, Author) commented Jun 10, 2022

Thank you! I'll report new issues if I see anything.
