Service mode #33
I hit this today: I somewhat naively configured a server script as a dependency of another server script (Firestore emulator as a dependency of a file server, file server as a dependency of the app server, etc.). I did not get an error, though; I just saw no progress in the wireit output. The script I ran depended on a build step and a server, and the output just stopped after the build completed. I thought the build might have hung. In this case there aren't strict requirements that one server finish starting before another starts, so I can just make them all sibling dependencies of a step that won't complete. I've had previous projects where I might have needed to wait for some specific output from the server before considering it started, though.
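For illustration, a minimal sketch of that sibling-dependency workaround; the script names and the `wait-forever.js` placeholder for the never-completing step are hypothetical, not from the original comment:

```json
{
  "scripts": {
    "dev": "wireit"
  },
  "wireit": {
    "dev": {
      // Hypothetical never-exiting command; keeps the sibling servers alive.
      "command": "node wait-forever.js",
      // Servers run as siblings rather than as dependencies of each other.
      "dependencies": ["build", "firestore-emulator", "file-server", "app-server"]
    }
  }
}
```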
What's your thought for how to do this? Something like a …? I could imagine that, and maybe a ….
I did eventually run into a spot where I needed to start a server to run an integration test...
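That use case is the shape this issue's proposal would enable; a sketch, assuming the proposed `server` flag and hypothetical script names (without built-in support, the test would still need its own readiness check):

```json
{
  "wireit": {
    "test:integration": {
      // Hypothetical test command; needs the server running first.
      "command": "node test/integration.js",
      "dependencies": ["serve"]
    },
    "serve": {
      "command": "node lib/server.js",
      // The flag proposed in this issue.
      "server": true
    }
  }
}
```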
In addition to a server as a dependency, wireit should also support a server as the end result of a command in general. I was trying to set up wireit to run a server in watch mode for development, and the issue I ran into is that watch mode doesn't start watching again until after the server is shut down, which doesn't work well for a dev environment.

For reference: I've tried several different approaches to this problem at this point, with varying degrees of success. nodemon is probably the best overall, despite being slow. tsc-watch is faster, but fails to properly shut the server down if you save twice in quick succession, which then requires manual intervention. tsc's watch mode has no way to run a command after it, which makes it impossible to use. I also really love the …

Package.json:

```json
{
"scripts": {
"start:dev": "env-cmd -e dev node dist",
"local:dev": "env-cmd --no-override -e dev,local npm run start",
"local:prod": "env-cmd --nor-override -e prod,local npm run start",
"start": "wireit",
"ts": "wireit"
},
"wireit": {
"start": {
"command": "node dist",
"files": [
"dist/**/*.js"
],
"dependencies": [
"ts"
]
},
"ts": {
"command": "tsc --build --pretty",
"clean": "if-file-deleted",
"files": [
"src/**/*.ts",
"tsconfig.json"
],
"output": [
"dist/**",
".tsbuildinfo"
]
}
}
}
```
Yeah, this is a high-priority issue. In your case, would restarting your server on each watch iteration work? I was also thinking that you often have some dependencies which require a restart, and some which don't. So maybe you'd declare it like this:

```json
"wireit": {
"my-server": {
"command": "node my-server.js",
"server": true,
"dependencies": [
{
"name": "build:server-implementation",
"restart-on-change": true
},
{
"name": "build:assets",
"restart-on-change": false
}
]
}
}
```

🙂
I feel like `restart-on-change` would be problematic and redundant with `"files"`. The logic would instead work with whatever `"server"` setting we have to indicate that it's a continuous process (I'm not sure of the best term). And of course, if there are dependencies for a continuous process, it'd process them in that order, since the individual dependencies would specify their own files and dependencies; if the files for a long-running process change, then kill the process and restart it. I suppose that if `"files"` isn't declared at all, the ideal would be to treat it as "if any dependencies indicate they changed, then restart the continuous process."
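A sketch of the behavior this comment proposes, assuming a `server`-style flag and hypothetical script names and file patterns: the continuous process declares its own `files`, and a change to those files, rather than a per-edge annotation, is what triggers a kill-and-restart:

```json
{
  "wireit": {
    "serve": {
      "command": "node server.js",
      "server": true,
      // Changes to these files restart the process; a re-run of "build"
      // alone would not necessarily do so.
      "files": ["server.js", "config/**"],
      "dependencies": ["build"]
    }
  }
}
```

If `files` were omitted entirely, the fallback described above would be to restart whenever any dependency reports a change.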
I also think that having some declarative way of signifying when you know that a continuous process is fully started would be nice, for whatever uses it as a dependency (in the cases mentioned above where you are chaining multiple continuous processes together).
- Bump to latest version of Wireit to bring in google/wireit#189, which makes it safe to have multiple concurrent Wireit processes handling overlapping build graphs.
- Added `output:[]` to some server commands, so that they can run concurrently given the above new restriction.
- Renamed `:not-site` scripts to `:assets`, and `:site` scripts to `:eleventy`. I think these are more intuitive names.
- Separated out `dev:build:assets` and `prod:build:assets` scripts. This means that watching dev no longer runs rollup, which was unnecessary because we serve tsc output directly in dev mode.
- Updated the top-level `dev` script:
  - Ensure that the server is built first.
  - Simplified to take advantage of safe concurrent Wireit processes.

I'll be working on google/wireit#33 next, which should simplify things even further.
I'm going to start working on this feature, and I just updated the description of this PR to describe how the feature will work.
Good ideas! These could potentially be follow-up features for the basic server support, but here are some ideas to explore first, to let us achieve the same thing by composing regular scripts on top of a server script.

**Wait for a TCP port to be listened on** (`npm i -D wait-on`):

```json
{
"scripts": {
"build": "wireit"
},
"wireit": {
"serve:ready": {
"command": "wait-on tcp:8080",
"dependencies": [
"serve"
]
},
"serve": {
"command": "node lib/server.js",
"server": true,
"dependencies": [
"build:server",
{
"script": "build:assets",
"restart": false
}
]
}
}
}
```

**Wait for the server to emit some stdout**:

```json
{
"scripts": {
"build": "wireit"
},
"wireit": {
"serve:ready": {
"command": "tail -f -n+0 server.log | grep -q \"Server ready\"",
"dependencies": [
"serve"
]
},
"serve": {
"command": "node lib/server.js | tee server.log",
"server": true,
"dependencies": [
"build:server",
{
"script": "build:assets",
"restart": false
}
]
}
}
}
```
Factor out a new `ScriptChildProcess` class from the `#executeCommandIfNeeded` method. This class represents a spawned child process. This will be useful for [server mode](#33), because it gives us a more discrete object that represents a running child process. The lifecycle of running processes will become more complex in server mode, especially in watch mode, where we'll be persisting child processes potentially across different build graphs. In a follow-up PR, it will gain a `state` property and a `terminate` method. Also includes a small tweak to logging: we used to log the "Running" message before we tried to get a `WorkerPool` slot, which could give the impression that something was running that wasn't yet, when concurrency was constrained.
Previously, we had a single `ScriptExecution` class which handled all scripts. However, some scripts are significantly simpler than others. Namely, we now distinguish between "no-op" and "one-shot" scripts. A "no-op" script is one that has no command defined. It just has some files and/or dependencies, and is mostly just a pass-through for fingerprints. There is no sense in writing fingerprint files, doing caching, acquiring locks, etc. for these scripts, so we no longer do that. A "one-shot" script is a script with an actual command. It's called "one-shot" to distinguish it from an upcoming 3rd kind of script called a [service](#33). **Note to reviewer**: The `OneShotExecution` class is almost identical to what used to be called `ScriptExecution`, but with some methods moved into a new `BaseExecution` class. The `NoOpExecution` class is where the meaningful changes are, because a lot of logic that used to run around reading/writing to the `.wireit` directory no longer does.
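For illustration, a minimal sketch of a "no-op" script as defined here, with hypothetical script names: it has no `command`, so it is just a pass-through for the fingerprints of its dependencies:

```json
{
  "scripts": {
    "all": "wireit"
  },
  "wireit": {
    "all": {
      // No "command": nothing to execute, cache, fingerprint, or lock.
      "dependencies": ["build", "test"]
    }
  }
}
```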
The correct answer is probably "wait until this is supported," but I just naively tried to work around it like this:

```json
"scripts": {
"start": "concurrently \"wireit\" \"dhost site --bind-all\"",
"build": "wireit"
},
"wireit": {
"start": {
"dependencies": [
"build"
]
},
"build": {
"command": "rollup --config ./rollup.config.js", and got yelled at:
I think I would suggest something like this for now:

```json
"scripts": {
"start": "npm run build && concurrently \"npm run build watch\" \"dhost site --bind-all\"",
"build": "wireit"
},
"wireit": {
"build": {
"command": "rollup --config ./rollup.config.js",
}
}
```

So you get an initial build, followed by simultaneously building continuously and starting your server.
I also just hit this. I'm working around it with nodemon, but I don't like the nodemon solution, as it requires yet another terminal to be open.
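For context, the nodemon workaround being described probably looks something like this sketch (paths and script names are hypothetical): nodemon re-runs the server whenever the compiled output changes, in a terminal separate from the wireit watch, hence the complaint about needing two terminals:

```json
{
  "scripts": {
    // Run in its own terminal alongside the wireit watch.
    "dev:server": "nodemon --watch dist dist/index.js",
    "build": "wireit"
  }
}
```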
Have you considered how something like livereload/hot module replacement might work in wireit? One common pattern is to use an environment variable (e.g. …) where …. This makes me wonder: …
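As a rough sketch of that common pattern (the variable name and server path are hypothetical, since the original inline examples were lost): the dev entry point sets an environment variable that the server checks before enabling livereload:

```json
{
  "scripts": {
    "dev": "wireit"
  },
  "wireit": {
    "dev": {
      // Hypothetical flag read by the server to enable livereload/HMR.
      "command": "LIVE_RELOAD=true node lib/server.js",
      "server": true
    }
  }
}
```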
### Background

Currently, if script A depends on script B, then script B's fingerprint is automatically included in script A's fingerprint. That means if an input file for script B changes, which causes B to re-run, then script A will re-run too — _regardless of whether script B actually emitted different output._

This is a good and safe default, because it means you aren't required to specify the input files for every script when those input files are generated by a dependency; Wireit assumes that any time script B runs, script A could be affected. However, it comes with the trade-off that scripts will sometimes re-run even when none of their input files changed.

### This PR

This PR adds a `"triggersRerun": false` setting that can be annotated on dependencies. This prevents the fingerprint from being inherited, so running B won't *necessarily* cause A to run as well. This can be used for more optimal builds via fewer re-runs, but requires that input files are always fully specified.

This will also be very useful for [service mode](#33), because a `triggersRerun: false` dependency won't require a restart across watch iterations. (Restarts will happen only if the fingerprint is different between iterations.) For example, the script that builds the service itself can be a regular dependency (causing restarts), but the script that builds the *assets* served by the script can be a `triggersRerun: false` dependency (not causing restarts).

Note that in order to have a place to put this annotation, dependencies can now be objects. They can still be plain strings as well, but must be objects to receive annotations. (This also opens the door for other annotations on dependency edges we may want in the future. E.g. #23 is probably better expressed as a `"workspaces": true` annotation instead of a magic `$WORKSPACES:` prefix as previously planned.)

Fixes #237

#### Example

In this example, `bundle` has a `triggersRerun: false` dependency on `build`. Importantly, it also includes `lib/**/*.js` in its `files` array, which are the specific outputs from `tsc` that `rollup` consumes. Including these input files wasn't necessary before, but with `triggersRerun: false` it is now critical.

The advantage of this configuration is that if `tsc` re-runs but doesn't produce different `.js` files (for example, if it only produced different `.d.ts` files), then `rollup` won't need to re-run. We've also excluded the `lib/test` directory, because we know test files aren't included in our bundles.

```json
{
  "scripts": {
    "build": "wireit",
    "bundle": "wireit"
  },
  "wireit": {
    "build": {
      "command": "tsc",
      "files": ["src/**/*.ts", "tsconfig.json"],
      "output": ["lib/**"]
    },
    "bundle": {
      "command": "rollup -c",
      "dependencies": [
        {
          "script": "build",
          "triggersRerun": false
        }
      ],
      "files": ["rollup.config.json", "lib/**/*.js", "!lib/test"],
      "output": ["dist/bundle.js"]
    }
  }
}
```
@SanderElias I found a couple of bugs from my own testing which could be related to the ones you have been seeing. Released …
Previously, we did not forward services to the next iteration in watch mode if there was a failure anywhere in the graph. Until #517, this didn't matter too much, since we'd always eventually shut down all services anyway. However, now it matters, because by default we'll keep running services as long as the failure didn't affect it or its dependencies. The effect of this bug was that services would start up multiple times, causing e.g. a port conflict. Now we always forward services, regardless of errors. Part of #33
I gave this a test drive and am experiencing an issue when a service is a dependency of another service under watch mode; the dependency service doesn't appear to exit cleanly when pressing Ctrl-C. Here is a slimmed-down example of my setup:

```json
...
"scripts": {
"build:prisma": "wireit",
"dev:frontend": "wireit",
"dev:backend": "wireit",
"prisma": "cd src-tauri && cargo prisma"
},
"wireit": {
"dev:frontend": {
"command": "vite",
"service": true,
},
"dev:backend": {
"command": "tauri dev --no-watch",
"service": true,
"files": [
"src-tauri/src/**/*.rs",
"src-tauri/build.rs",
"src-tauri/tauri.conf.json"
],
"packageLocks": [
"Cargo.lock"
],
"dependencies": [
"build:prisma",
"dev:frontend"
]
},
"build:prisma": {
"command": "pnpm run prisma generate",
"files": [
"src-tauri/prisma/schema.prisma",
"src-tauri/prisma/migrations/**/*"
],
"output": [
"src-tauri/src/db/mod.rs"
],
"packageLocks": [
"Cargo.lock"
]
}
...
```

In the example above, if I terminate the …

Running …

Additionally, I have noticed that when you press Ctrl-C to kill the service in watch mode, you do not get any output about the services shutting down, whereas if the service terminates from user action, you get a nice feedback message about the event. It would be really slick if the SIGTERM gave output about the services shutting down, for peace of mind.

I haven't tried to repro this with other services, so it very well could be something specific to my setup.
Thank you @thejustinwalsh! Can you confirm which version of wireit, and also which OS?
@aomarks I'm still running into issues. It is getting better, but it still errors out quite often.

```json
"start:sandbox": {
"command": "ng serve sandbox --port 4201",
"files": [
"projects/sandbox/tsconfig.json",
"projects/sandbox/src/**/*.ts"
],
"service": true,
"dependencies": [
{
"script": "build:feutils", "triggersRerun": true
},
{
"script": "build:googlemaps", "triggersRerun": true
}
]
},
```

When using watch mode, the problem is that the service is not properly exited before the recompilation of the dependencies starts, at least not on every run. Once the thing is stalled, I need to restart it entirely to get it going again.

EDIT: I'm on pre.3
Just moved to …
Did you get the errors you described on pre3?
Hey @SanderElias, thanks again for the bug reporting! Why is it important in this example for the service to exit before compilation of the dependencies starts? What I would expect to happen, given the current implementation and your config, is this:

…
Are you saying that you expect step 7 to run before step 6? Why does this cause stalling?
@SanderElias @thejustinwalsh I found an issue that's probably the cause of these reports: indeed, in some cases a second version of a service can start up before waiting for the first one to shut down. Working on a fix now!
@SanderElias @thejustinwalsh I just published …
Also, if you were using …
Yes. As well as on pre1, same results with both preview builds. I did not try pre2 though.
Awesome, I will check out pre5 and report back ASAP!
Yes. That is part of the problem. The start-sandbox uses …
Interesting, thanks! I'm trying to reproduce this problem locally. I've got …

It seems like, even if it would be better to stop the service before building the dependency (e.g. to avoid some unnecessary churn, or to avoid some error messages), …
@aomarks It is an Angular CLI workspace. In an Angular project, you can add such a library with the command … When it gets stalled, it just seems to freeze completely, and saving anything doesn't seem to trigger anything anymore.
(#526) In non-watch mode, if service A depends on service B, and service B exits unexpectedly, then service A is automatically killed. We do this because otherwise wireit will just continue running, even though there has been an error. We instead want to exit with a non-zero code.

Before this PR, we did the same thing in watch mode. However, this was wrong for 2 reasons:

1. If we're iteration N-1, which is about to be adopted into iteration N, our dependencies will sometimes *intentionally* restart, but we were treating this as though it was unexpected. This should not cause us to fail, since we'll either also restart very shortly (when cascade is true), or we'll just keep running (when cascade is false).
2. If we're iteration N and our dependency unexpectedly exits by itself, it's not actually helpful if we also aggressively exit. We can just keep running, and hopefully the user can fix the problem with the server dependency, which will then either cause us to restart or not, depending on the cascade setting.

As part of this I found these other related issues, which were compounding the problem above and making it harder to debug:

- We were returning `undefined` when we detached, if the service was in its "failing" or "stopping" state, instead of the child process. This was compounding the issue described above, because if iteration N-1 was failing, iteration N would think the child wasn't running *at all*, and would not wait for it to fully shut down. We are now much stricter about communicating from iteration N-1 to iteration N that there is a running child process, regardless of the state.
- We didn't log when a service shut down because of a failure in its dependencies.

Part of #33
Services support is now released in …. Docs are at https://github.com/google/wireit/#services. Changes from the pre-releases are the addition of ….

@SanderElias @thejustinwalsh @michaelwarren1106 Thank you for beta testing, it was a big help!

@SanderElias In my projects I've found services to be stable enough to get this release out, and I haven't reproduced your issue yet, but assuming you still have it, let's continue discussing in a new issue and get to the bottom of it. Currently my suspicion is that we should focus on why wireit can't shut down …
Use wireit to specify dependency relationships between build scripts of each package, cache build outputs, incrementally rebuild scripts, and watch for changes. Furthermore, use npm-run-all2 to run Vite and wireit in watch mode for @webtoon/psd-example-browser and @webtoon/psd-benchmark. This is needed because wireit currently does not support "service" scripts. (see: google/wireit#33) Using wireit and npm-run-all2 makes our development NPM scripts (e.g. `npm run start:browser`) more reliable. The scripts now "just work" in a fresh local clone of the repo, and do not require arbitrary `sleep` calls or polling delays to properly rebuild multiple packages. Since wireit and npm-run-all2 handle our build orchestration, we can remove packages like rollup-plugin-shell, rollup-plugin-watcher, and nodedev that we previously used for watching the filesystem and running build tools.
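A sketch of what that composition might look like in a package.json (script names are hypothetical, not copied from the repo): `run-p` from npm-run-all2 runs wireit's watch mode and the Vite dev server in parallel:

```json
{
  "scripts": {
    "start:browser": "run-p watch:packages serve:browser",
    // wireit's watch mode keeps the packages rebuilding...
    "watch:packages": "npm run build watch",
    // ...while Vite serves the example app.
    "serve:browser": "vite",
    "build": "wireit"
  }
}
```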
By default, Wireit assumes that scripts eventually exit by themselves. This works well for things like building or testing. But sometimes a script runs indefinitely, such as a server.
Setting `"server": true` will tell Wireit that a script runs indefinitely. This has the following effects:

- It will always run. It will never be skipped or restored from cache.
- If something depends on the server, Wireit won't wait for the server to exit before the dependent script starts running. Instead, it just waits for the server process to be spawned.
- If a server script is run directly (e.g. `npm run serve`), then it will stay running until the user kills Wireit (e.g. Ctrl-C).
- If a server script is run indirectly (e.g. `npm run script-that-depends-on-server`), then the server script will stay running until all scripts which transitively depend on it have finished.
- In watch mode, Wireit will restart the server whenever a dependency changes. If this isn't required for a particular dependency (such as for static assets that the server does not cache), the dependency edge can be annotated with `"restart": false`.
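Putting the description together, a minimal sketch of the proposed configuration (script names are hypothetical; note that later comments in this thread use `"service": true` for the released form of this feature):

```json
{
  "scripts": {
    "serve": "wireit"
  },
  "wireit": {
    "serve": {
      "command": "node lib/server.js",
      "server": true,
      "dependencies": [
        // A change here restarts the server in watch mode...
        "build:server",
        {
          // ...but a change here does not.
          "script": "build:assets",
          "restart": false
        }
      ]
    }
  }
}
```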