
Service mode #33

Closed
aomarks opened this issue Mar 22, 2022 · 38 comments · Fixed by #536
@aomarks
Member

aomarks commented Mar 22, 2022

By default, Wireit assumes that scripts eventually exit by themselves. This works well for things like building or testing. But sometimes a script runs indefinitely, such as a server.

Setting "server": true will tell Wireit that a script runs indefinitely. This has the following effects:

  • It will always run. It will never be skipped or restored from cache.

  • If something depends on the server, Wireit won't wait for the server to exit before the dependent script starts running. Instead, it just waits for the server process to be spawned.

  • If a server script is run directly (e.g. npm run serve), then it will stay running until the user kills Wireit (e.g. Ctrl-C).

  • If a server script is run indirectly (e.g. npm run script-that-depends-on-server), then the server script will stay running until all scripts which transitively depend on it have finished.

  • In watch mode, Wireit will restart the server whenever a dependency changes. If this isn't required for a particular dependency (such as for static assets that the server does not cache), the dependency edge can be annotated with "restart": false.

```json
{
  "scripts": {
    "serve": "wireit"
  },
  "wireit": {
    "serve": {
      "command": "node lib/server.js",
      "server": true,
      "dependencies": [
        "build:server",
        {
          "script": "build:assets",
          "restart": false
        }
      ],
      "files": [],
      "output": []
    }
  }
}
```
@aomarks aomarks self-assigned this Mar 22, 2022
@aomarks aomarks moved this to 🧊 Icebox in Lit Project Board Apr 2, 2022
@justinfagnani
Collaborator

I hit this today: I somewhat naively configured a server script as a dependency of another server script (Firestore emulator as a dependency of a file server, file server as a dependency of app server, etc).

I did not get an error though - I just saw no progress on the wireit output. The script I ran was depending on a build step and a server and the output just stopped after the build completed. I thought the build might have hung.

In this case there aren't strict requirements that one server finish starting before another starts, so I can just make them all sibling dependencies of a step that won't complete. I've had previous projects where I might have needed to wait for some specific output from the server before considering it started, though.

@justinfagnani
Collaborator

What's your thought for how to do this? Something like a "daemon": true flag?

I could imagine that and maybe a "readyMatch" field that takes a regex string to look for in stdout to indicate a server is ready to count the dependency as complete.
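For concreteness, that proposal might look like this in a package.json (both `daemon` and `readyMatch` are hypothetical field names from this comment, not shipped syntax; the command and regex are made up):

```json
{
  "wireit": {
    "serve": {
      "command": "node lib/server.js",
      "daemon": true,
      "readyMatch": "Listening on port \\d+"
    }
  }
}
```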

@aomarks aomarks moved this from 🧊 Icebox to 🔥 Front Burner in Lit Project Board Apr 25, 2022
@justinfagnani
Collaborator

I did eventually run into a spot where I needed to start a server to run an integration test...

@ZachHaber

In addition to a server as a dependency, wireit should also expect a server as the end result of a command in general.

I was trying to set up wireit to work for running a server in watch mode for development, and the issue I ran into is that the watch mode doesn't start watching again until after the server is shut down, which doesn't work well for a dev environment.

For reference:

I've tried several different approaches to this problem at this point, with varying degrees of success.

nodemon is probably still the best overall, despite being slow. tsc-watch is faster, but fails to shut the server down properly if you save twice in quick succession, which then requires manual intervention. tsc's own watch mode has no way to run a command afterwards, which makes it unusable here.

I also really love the if-file-deleted feature as that solves a huge issue I've been frustrated with.

Package.json
```json
{
  "scripts": {
    "start:dev": "env-cmd -e dev node dist",
    "local:dev": "env-cmd --no-override -e dev,local npm run start",
    "local:prod": "env-cmd --no-override -e prod,local npm run start",
    "start": "wireit",
    "ts": "wireit"
  },
  "wireit": {
    "start": {
      "command": "node dist",
      "files": [
        "dist/**/*.js"
      ],
      "dependencies": [
        "ts"
      ]
    },
    "ts": {
      "command": "tsc --build --pretty",
      "clean": "if-file-deleted",
      "files": [
        "src/**/*.ts",
        "tsconfig.json"
      ],
      "output": [
        "dist/**",
        ".tsbuildinfo"
      ]
    }
  }
}
```

@aomarks
Member Author

aomarks commented Apr 27, 2022

I was trying to set up wireit to work for running a server in watch mode for development, and the issue I ran into is that the watch mode doesn't start watching again until after the server is shut down, which doesn't work well for a dev environment.

Yeah, this is a high priority issue.

In your case, would restarting your server on each watch iteration work?

I was also thinking that you often have some dependencies which require a restart, and some which don't. So maybe you declare like this:

```json
"wireit": {
  "my-server": {
    "command": "node my-server.js",
    "server": true,
    "dependencies": [
      {
        "name": "build:server-implementation",
        "restart-on-change": true
      },
      {
        "name": "build:assets",
        "restart-on-change": false
      }
    ]
  }
}
```

I also really love the if-file-deleted feature as that solves a huge issue I've been frustrated with.

🙂

@ZachHaber

I feel like restart-on-change would be problematic and redundant with "files".

I.e. "files": [] would indicate that it shouldn't ever restart. "files": ["*"] would presumably indicate that it'd restart when anything changes.

Then the logic would work with whatever "server" settings we have to indicate it's a continuous process (I'm not sure the best term). I.e. "files": [] in the build:assets command would function by telling it to just re-use whatever output. However if build:assets was a continuous process, then it would just keep it running until its files changed, and an empty array would signify that it will never change.

And of course, if there are dependencies for a continuous process, it'd process them in that order, since the individual dependencies would be specifying their own files and dependencies; and if the files for a longer-running process change, then kill the process and restart it. I suppose that if "files" isn't declared at all, the ideal would be to treat it as "if any dependencies indicate they changed, then restart the continuous process".
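To illustrate, the semantics proposed here might read like this in config (all behavior described is this comment's proposal, not shipped Wireit; script names are invented):

```json
{
  "wireit": {
    "serve": {
      "command": "node dist",
      "server": true,
      "files": ["dist/**/*.js"],
      "dependencies": ["build:assets"]
    },
    "build:assets": {
      "command": "node scripts/build-assets.js",
      "files": []
    }
  }
}
```

Under the proposal, `serve` would restart whenever `dist/**/*.js` changes, while `build:assets` with `"files": []` would never be considered changed and its output would always be reused.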

I could imagine that and maybe a "readyMatch" field that takes a regex string to look for in stdout to indicate a server is ready to count the dependency as complete.

I also think that having some declarative way of signifying when you know that a continuous process is fully started for whatever uses it as a dependency would be nice (in the cases mentioned above where you are chaining multiple continuous processes together).

aomarks added a commit to lit/lit.dev that referenced this issue May 6, 2022
- Bump to latest version of Wireit to bring in google/wireit#189, which makes it safe to have multiple concurrent Wireit processes handling overlapping build graphs.

- Added `output:[]` to some server commands, so that they can run concurrently given the above new restriction.

- Renamed `:not-site` scripts to `:assets`, and `:site` scripts to `:eleventy`. I think these are more intuitive names.

- Separated out `dev:build:assets` and `prod:build:assets` scripts. This means that watching dev no longer runs rollup, which was unnecessary because we serve tsc output directly in dev mode.

- Updated the top-level `dev` script:
  - Ensure that the server is built first
  - Simplified to take advantage of safe concurrent Wireit processes

I'll be working on google/wireit#33 next, which should simplify things even further.
@aomarks aomarks self-assigned this May 6, 2022
@aomarks
Member Author

aomarks commented May 6, 2022

I'm going to start working on this feature, and I just updated the description of this issue to describe how the feature will work.

@aomarks
Member Author

aomarks commented May 6, 2022

I also think that having some declarative way of signifying when you know that a continuous process is fully started for whatever uses it as a dependency would be nice (in the cases mentioned above where you are chaining multiple continuous processes together).

I could imagine that and maybe a "readyMatch" field that takes a regex string to look for in stdout to indicate a server is ready to count the dependency as complete.

Good ideas! These could potentially be follow up features for the basic server support, but here are some ideas to explore first to let us achieve the same thing by composing regular scripts on top of a server script.

cc @justinfagnani @ZachHaber

Wait for a TCP port to be listened on

```sh
npm i -D wait-on
```

```json
{
  "scripts": {
    "serve": "wireit",
    "serve:ready": "wireit"
  },
  "wireit": {
    "serve:ready": {
      "command": "wait-on tcp:8080",
      "dependencies": [
        "serve"
      ]
    },
    "serve": {
      "command": "node lib/server.js",
      "server": true,
      "dependencies": [
        "build:server",
        {
          "script": "build:assets",
          "restart": false
        }
      ]
    }
  }
}
```

Wait for the server to emit some stdout

```json
{
  "scripts": {
    "serve": "wireit",
    "serve:ready": "wireit"
  },
  "wireit": {
    "serve:ready": {
      "command": "tail -f -n+0 server.log | grep -q \"Server ready\"",
      "dependencies": [
        "serve"
      ]
    },
    "serve": {
      "command": "node lib/server.js | tee server.log",
      "server": true,
      "dependencies": [
        "build:server",
        {
          "script": "build:assets",
          "restart": false
        }
      ]
    }
  }
}
```

aomarks added a commit that referenced this issue May 9, 2022
Factor out a new `ScriptChildProcess` class from the `#executeCommandIfNeeded` method. This class represents a spawned child process.

This will be useful for [server mode](#33), because it gives us a more discrete object that represents a running child process. The lifecycle of running processes will become more complex in server mode, especially in watch mode where we'll be persisting child processes potentially across different build graphs. In a follow up PR, it will gain a `state` property and a `terminate` method.

Also includes a small tweak to logging. We used to log the "Running" message before we tried to get a `WorkerPool` slot, which could give the impression that something was running that wasn't yet, when concurrency was constrained.
@aomarks aomarks moved this from 🔥 Front Burner to ⏱ In Progress in Lit Project Board May 11, 2022
@aomarks aomarks changed the title Server mode Service mode May 20, 2022
aomarks added a commit that referenced this issue May 20, 2022
Previously, we had a single `ScriptExecution` class which handled all scripts. However, some scripts are significantly simpler than others. Namely, we now distinguish between "no-op" and "one-shot" scripts.

A "no-op" script is one that has no command defined. It just has some files and/or dependencies, and is mostly just a pass-through for fingerprints. There is no sense in writing fingerprint files, doing caching, acquiring locks, etc. for these scripts, so we no longer do that.

A "one-shot" script is a script with an actual command. It's called "one-shot" to distinguish it from an upcoming 3rd kind of script called a [service](#33).

**Note to reviewer**: The `OneShotExecution` class is almost identical to what used to be called `ScriptExecution`, but with some methods moved into a new `BaseExecution` class. The `NoOpExecution` class is where the meaningful changes are, because a lot of logic that used to run around reading/writing to the `.wireit` directory no longer does.
@appsforartists
Member

The correct answer is probably "wait until this is supported," but I just naively tried to work around this:

  "scripts": {
    "start": "concurrently \"wireit\" \"dhost site --bind-all\"",
    "build": "wireit"
  },
  "wireit": {
    "start": {
      "dependencies": [
        "build"
      ]
    },
    "build": {
      "command": "rollup --config ./rollup.config.js",

and got yelled at:

[0] ❌ package.json:5:14 This command should just be "wireit", as this script is configured in the wireit section.
[0]         "start": "concurrently \"wireit\" \"dhost site --bind-all\"",
[0]                  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

@aomarks
Member Author

aomarks commented May 27, 2022

The correct answer is probably "wait until this is supported," but I just naively tried to work around this:

  "scripts": {
    "start": "concurrently \"wireit\" \"dhost site --bind-all\"",
    "build": "wireit"
  },
  "wireit": {
    "start": {
      "dependencies": [
        "build"
      ]
    },
    "build": {
      "command": "rollup --config ./rollup.config.js",

and got yelled at:

[0] ❌ package.json:5:14 This command should just be "wireit", as this script is configured in the wireit section.
[0]         "start": "concurrently \"wireit\" \"dhost site --bind-all\"",
[0]                  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

I think I would suggest something like this for now:

```json
"scripts": {
  "start": "npm run build && concurrently \"npm run build --watch\" \"dhost site --bind-all\"",
  "build": "wireit"
},
"wireit": {
  "build": {
    "command": "rollup --config ./rollup.config.js"
  }
}
```

So, you get an initial build, followed by simultaneously building continuously and starting your server.

@SanderElias

I also just hit this; I'm working around it with nodemon. I don't like the nodemon solution, as it requires yet another terminal to be open.
Something to consider is other toolchains that have their own "watch & serve" options, like Angular's ng serve.
While an option like "server": true would work well in that scenario, the wording "server" might throw off some of the devs using such a tool.

@appsforartists
Member

appsforartists commented Jun 9, 2022

Have you considered how something like livereload/hot module replacement might work in wireit?

One common pattern is to use an environment variable (e.g. ROLLUP_WATCH) to decide whether or not to include a livereload client in the generated bundle. Without wireit, you might have something like this:

```json
"start": "concurrently \"rollup --config ./rollup.config.js --watch\" \"firebase serve --host 0.0.0.0\"",
"build": "rollup --config ./rollup.config.js",
```

where "start" is effectively "build" + --watch. (I'm imagining a future where static file hosting is also managed by wireit.)

This makes me wonder:

  • What is the recommendation for when to use an underlying tool's watch mode vs. wireit's? In its current form, it seems that the underlying tool's ability to inject a dev-only snippet like livereload gives it an advantage.
  • Should there be some conditionality in wireit, either by setting an env variable (WIREIT_WATCH), or by conditionally running/not running a rule depending on the presence of a watch flag?
  • How does flagging work between wireit rules? Should there be the ability to include --watch in a dependency rule?
  • Should the invocation assertion be relaxed, to allow:
      "start": "yarn run -TB wireit --watch"
    I've gone back and forth on this one, since it can perhaps also be modeled as:
      "start": "yarn run build --watch",
      "build": "yarn run -TB wireit"

aomarks added a commit that referenced this issue Nov 7, 2022
### Background

Currently, if script A depends on script B, then script B's fingerprint is automatically included in script A's fingerprint. That means if an input file for script B changes, which causes B to re-run, then script A will re-run too — _regardless of whether script B actually emitted different output._

This is a good and safe default because it means you aren't required to specify the input files for every script when those input files are generated by a dependency; Wireit assumes that any time script B runs, script A could be affected. However, it comes with the trade-off that scripts will sometimes re-run even when none of their input files changed.

### This PR

This PR adds a `"triggersRerun": false` setting that can be annotated on dependencies. This prevents the fingerprint from being inherited, so running B won't *necessarily* cause A to run as well.

This can be used for more optimal builds via fewer re-runs, but requires that input files are always fully specified.

This will also be very useful for [service mode](#33), because a `triggersRerun:false` dependency won't require a restart across watch iterations. (Restarts will happen only if the fingerprint is different between iterations). For example, the script that builds the service itself can be a regular dependency (causing restarts), but the script that builds the *assets* served by the script can be a `triggersRerun:false` dependency (not causing restarts).

Note in order to have a place to put this annotation, dependencies can now be objects. They can still be plain strings as well, but must be objects to receive annotations. (This also opens the door for other annotations on dependency edges we may want in the future. E.g. #23 is probably better expressed as a `"workspaces": true` annotation instead of a magic `$WORKSPACES:` prefix as previously planned).

Fixes #237

#### Example

In this example, `bundle` has a `triggersRerun: false` dependency on `build`. Importantly, it also includes `lib/**/*.js` in its `files` array, which are the specific outputs from `tsc` that `rollup` consumes. Including these input files wasn't necessary before, but with `triggersRerun: false` it is now critical.

The advantage of this configuration is that if `tsc` re-runs but doesn't produce different `.js` files (for example, if it only produced different `.d.ts` files), then `rollup` won't need to re-run. We've also excluded the `lib/test` directory, because we know test files aren't included in our bundles.

```json
{
  "scripts": {
    "build": "wireit",
    "bundle": "wireit"
  },
  "wireit": {
    "build": {
      "command": "tsc",
      "files": ["src/**/*.ts", "tsconfig.json"],
      "output": ["lib/**"]
    },
    "bundle": {
      "command": "rollup -c",
      "dependencies": [
        {
          "script": "build",
          "triggersRerun": false
        }
      ],
      "files": ["rollup.config.json", "lib/**/*.js", "!lib/test"],
      "output": ["dist/bundle.js"]
    }
  }
}
```
@aomarks
Member Author

aomarks commented Nov 8, 2022

@aomarks I tried again this morning. It seems a bit better, but I'm still experiencing a fair amount of freezes in my tools.
When this happens, it seems Wireit also stalls and stops monitoring completely.

@SanderElias I found a couple bugs from my own testing which could be related to the ones you have been seeing. Released 0.7.3-pre.3 with those fixes. Note this also includes #238 which is fairly useful with services too.

aomarks added a commit that referenced this issue Nov 8, 2022
Previously, we did not forward services to the next iteration in watch mode if there was a failure anywhere in the graph. Until #517, this didn't matter too much, since we'd always eventually shut down all services anyway.

However, now it matters, because by default we'll keep running services as long as the failure didn't affect it or its dependencies. The effect of this bug was that services would start up multiple times, causing e.g. a port conflict.

Now we always forward services, regardless of errors.

Part of #33
@thejustinwalsh

thejustinwalsh commented Nov 11, 2022

I gave this a test drive and am experiencing an issue when a service is a dependency of another service under watch mode; the dependency service doesn't appear to exit cleanly when pressing ctrl+c.

Here is a slimmed-down example of my setup...

...
  "scripts": {
    "build:prisma": "wireit",
    "dev:frontend": "wireit",
    "dev:backend": "wireit",
    "prisma": "cd src-tauri && cargo prisma"
  },
  "wireit": {
    "dev:frontend": {
      "command": "vite",
      "service": true
    },
    "dev:backend": {
      "command": "tauri dev --no-watch",
      "service": true,
      "files": [
        "src-tauri/src/**/*.rs",
        "src-tauri/build.rs",
        "src-tauri/tauri.conf.json"
      ],
      "packageLocks": [
        "Cargo.lock"
      ],
      "dependencies": [
        "build:prisma",
        "dev:frontend"
      ]
    },
    "build:prisma": {
      "command": "pnpm run prisma generate",
      "files": [
        "src-tauri/prisma/schema.prisma",
        "src-tauri/prisma/migrations/**/*"
      ],
      "output": [
        "src-tauri/src/db/mod.rs"
      ],
      "packageLocks": [
        "Cargo.lock"
      ]
    }
...

In the example above, if I terminate the dev:backend service (by quitting the gui application) and then press ctrl+c, everything appears to get cleaned up. However, if I press ctrl+c to shut down both without exiting the parent service first, then when I attempt to spin up the dev:backend service again with pnpm run dev:backend --watch, the port that the dev:frontend service was using is still bound.

Running `lsof -ti:1420` reveals that the vite process is still running in this circumstance.

Additionally, I have noticed that when you press ctrl+c to kill the service in watch mode, you do not get any output about the services shutting down, whereas if the service terminates from a user action, you get a nice feedback message about the event. It would be really slick if the SIGTERM path also gave output about the services shutting down, for peace of mind.

I haven't tried to repro this with other services, so it very well could be something specific to my setup.
Thanks for all your hard work, this is looking really great!

@aomarks
Member Author

aomarks commented Nov 11, 2022

Thank you @thejustinwalsh ! Can you confirm which version of wireit, and also which OS?

@SanderElias

SanderElias commented Nov 11, 2022

@aomarks I'm still running into issues. It is getting better, but it still errors out quite often.
I'm on a project that also has an Angular frontend. This frontend is handled by this wireit config:

```json
"start:sandbox": {
  "command": "ng serve sandbox --port 4201",
  "files": [
    "projects/sandbox/tsconfig.json",
    "projects/sandbox/src/**/*.ts"
  ],
  "service": true,
  "dependencies": [
    {
      "script": "build:feutils",
      "triggersRerun": true
    },
    {
      "script": "build:googlemaps",
      "triggersRerun": true
    }
  ]
},
```

When using watch-mode, the problem is that the service is not properly exited before the recompilation of the dependencies starts, at least not on every run. Once the thing is stalled, I need to restart it entirely to get it going again.

EDIT: I'm on pre.3

@thejustinwalsh

thejustinwalsh commented Nov 11, 2022

Thank you @thejustinwalsh ! Can you confirm which version of wireit, and also which OS?

Just moved to 0.7.3-pre.3 and am running MacOS 12.6.1 (21G217)

@aomarks
Member Author

aomarks commented Nov 11, 2022

Did you get the errors you described on pre.3?

@aomarks
Member Author

aomarks commented Nov 12, 2022

When using watch-mode, the problem is that the service is not properly exited before the recompilation of the dependencies starts, at least not on every run. Once the thing is stalled, I need to restart it entirely to get it going again.

Hey @SanderElias, thanks again for the bug reporting!

Why is it important in this example for the service to exit before compilation of the dependencies starts? What I would expect would happen given the current implementation and your config is this:

  1. wireit starts
  2. build:feutils and build:googlemaps run to completion
  3. start:sandbox starts
  4. "watching for file changes"
  5. a file that affects build:feutils changes
  6. build:feutils runs to completion
  7. start:sandbox stops
  8. start:sandbox starts

Are you saying that you expect step 7 to run before step 6? Why does this cause stalling?

@aomarks
Member Author

aomarks commented Nov 12, 2022

@SanderElias @thejustinwalsh I found an issue that's probably the cause of these reports -- indeed in some cases a second version of a service can sometimes start up before waiting for the first one to shut down. Working on fix now!

@aomarks
Member Author

aomarks commented Nov 12, 2022

@SanderElias @thejustinwalsh I just published 0.7.3-pre.5 which has a fix that hopefully addresses the problem of scripts not shutting down/starting up in the right order -- and also of needing to restart less often in general. Let me know if you get a chance to try it out!

@aomarks
Member Author

aomarks commented Nov 12, 2022

Also, if you were using triggersRerun: false before, then note the setting has been renamed to cascade: false since 0.7.3-pre.5. Also note that true is the default value, so there's no need to set it if you do want a change in a dependency to trigger a restart. (See #520)
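So a dependency that was previously annotated with `triggersRerun` would now be written like this (sketch; `build:assets` is a placeholder script name):

```json
"dependencies": [
  {
    "script": "build:assets",
    "cascade": false
  }
]
```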

@thejustinwalsh

Did you get the errors you described on pre3?

Yes, and on pre.1 as well; same results with both preview builds. I did not try pre.2, though.

@thejustinwalsh

Awesome I will check out pre5 and report back asap!

@SanderElias

@aomarks

  6. build:feutils runs to completion
  7. start:sandbox stops
  8. start:sandbox starts

Are you saying that you expect step 7 to run before step 6? Why does this cause stalling?

Yes. That is part of the problem. The start:sandbox script uses ng serve. This has its own watch mode, and it really doesn't like it when one of its dependencies gets recompiled while it is running. Killing it before compiling the dependencies does make sense in this case.
While thinking about it, I think it would be nice to have some control over this behavior, because it might depend on the tools that you are using and what makes more sense.
I'll give the next pre-version a run tomorrow, and will let you know what happens.

@aomarks
Member Author

aomarks commented Nov 13, 2022

Yes. That is part of the problem. The start:sandbox script uses ng serve. This has its own watch mode, and it really doesn't like it when one of its dependencies gets recompiled while it is running. Killing it before compiling the dependencies does make sense in this case.
While thinking about it, I think it would be nice to have some control over this behavior, because it might depend on the tools that you are using and what makes more sense.
I'll give the next pre-version a run tomorrow, and will let you know what happens.

Interesting, thanks! I'm trying to reproduce this problem locally -- I've got ng serve set up with a sample Angular project, but can you give me a little more information about what kind of files get generated by build:feutils?

When using watch-mode, the problem is that the service is not properly exited before the recompilation of the dependencies starts, at least not on every run. Once the thing is stalled, I need to restart it entirely to get it going again.

It seems like, even if it would be better to stop the service before building the dependency (e.g. to avoid some unnecessary churn, or to avoid some error messages), ng serve is still going to get automatically restarted afterwards -- so things should automatically get back to a good state eventually. So, I'm wondering what exactly is happening when ng serve is getting "stalled" -- do you mean that it stops responding to SIGINT, so the whole wireit process gets stuck?

@SanderElias

SanderElias commented Nov 13, 2022

@aomarks It is an Angular CLI workspace.
The build:feUtils script builds a lib by running ng build --project @libs/feUtils.

In an Angular project, you can add such a library with the command ng generate library @libs/feUtils

When it gets stalled, it just seems to freeze completely, and saving anything doesn't seem to trigger anything anymore.
When I ^C out, there is still a process running ng serve in my task list, and I need to kill it manually.

aomarks added a commit that referenced this issue Nov 14, 2022
#526)

In non-watch mode, if service A depends on service B, and service B exits unexpectedly, then service A is automatically killed. We do this because otherwise wireit will just continue running, even though there has been an error. We instead want to exit with a non-zero code.

Before this PR, we did the same thing in watch mode. However, this was wrong for 2 reasons:

1. If we're iteration N-1 which is about to be adopted into iteration N, our dependencies will sometimes *intentionally* restart, but we were treating this as though it was unexpected. This should not cause us to fail, since we'll either also restart very shortly (when cascade is true), or we'll just keep running (when cascade is false).
          
2. If we're iteration N and our dependency unexpectedly exits by itself, it's not actually helpful if we also aggressively exit. We can just keep running, and hopefully the user can fix the problem with the server dependency, which will then either cause us to restart or not depending on the cascade setting.

As part of this I found these other related issues, which were compounding the problem above and making it harder to debug:

- We were returning `undefined` when we detached, if the service was in its "failing" or "stopping" state, instead of the child process. This was compounding the issue described above, because if iteration N-1 was failing, iteration N would think the child wasn't running *at all*, and would not wait for it to fully shut down. We are now much stricter about communicating from iteration N-1 to iteration N that there is a running child process, regardless of the state.

- We didn't log when a service shut down because of a failure in its dependencies.

Part of #33
aomarks added a commit that referenced this issue Nov 14, 2022
#527)

You can now configure a service to only be considered ready after a line on `stdout` or `stderr` has matched a given regular expression:

```json
{
  "command": "node my-server.js",
  "service": {
    "readyWhen": {
      "lineMatches": "Server listening on port \\d+"
    }
  }
}
```

Part of #33
@aomarks aomarks mentioned this issue Nov 15, 2022
aomarks added a commit that referenced this issue Nov 15, 2022
Repository owner moved this from ⏱ In Progress to ✅ Done in Lit Project Board Nov 15, 2022
@aomarks
Member Author

aomarks commented Nov 15, 2022

Services support is now released in 0.7.3!

Docs are at https://github.com/google/wireit/#services

Changes from the pre-releases are the addition of readyWhen (determines when a service is considered ready/serving) and cascade (determines which dependencies cause a service to restart).
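A sketch of a config combining both new options, following the syntax from the commits above (the script names and the regex are illustrative, not from the repo):

```json
{
  "wireit": {
    "serve": {
      "command": "node lib/server.js",
      "service": {
        "readyWhen": {
          "lineMatches": "Server listening on port \\d+"
        }
      },
      "dependencies": [
        "build:server",
        {
          "script": "build:assets",
          "cascade": false
        }
      ]
    }
  }
}
```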

@SanderElias @thejustinwalsh @michaelwarren1106 Thank you for beta testing, was a big help!

@SanderElias In my projects I've found services to be stable enough to get this release out, and I haven't reproduced your issue yet, but assuming you still have it, let's continue discussing in a new issue and get to the bottom of it. Currently my suspicion is that we should focus on why wireit can't shut down ng serve.

alexspevak pushed a commit to opendesigndev/psd-ts that referenced this issue Jan 16, 2023
Use wireit to specify dependency relationships between build scripts of
each package, cache build outputs, incrementally rebuild scripts, and
watch for changes.

Furthermore, use npm-run-all2 to run Vite and wireit in watch mode for
@webtoon/psd-example-browser and @webtoon/psd-benchmark. This is needed
because wireit currently does not support "service" scripts.
(see: google/wireit#33)

Using wireit and npm-run-all2 makes our development NPM scripts (e.g.
`npm run start:browser`) more reliable. The scripts now "just work" in a
fresh local clone of the repo, and do not require arbitrary `sleep`
calls or polling delays to properly rebuild multiple packages.

Since wireit and npm-run-all2 handle our build orchestration, we can
remove packages like rollup-plugin-shell, rollup-plugin-watcher, and
nodedev that we previously used for watching the filesystem and running
build tools.