workflow for server development #25 (Closed)

mathroc opened this issue Jan 12, 2016 · 64 comments
@mathroc commented Jan 12, 2016

When building a server (e.g. a web server that listens on port 3000), it would be useful to automatically kill the already-running server (after a successful re-compilation) before starting the new one. Otherwise, the new server cannot start because the old one is still listening on the same port.

Here is a log of what happens if the server is not killed:

# run rshello cargo watch run
$ cargo run
   Compiling hello_world v0.0.1 (file:///app)
     Running `target/debug/hello_world`

$ cargo run
   Compiling hello_world v0.0.1 (file:///app)
     Running `target/debug/hello_world`
thread '<main>' panicked at 'called `Result::unwrap()` on an `Err` value: Io(Error { repr: Os { code: 98, message: "Address already in use" } })', ../src/libcore/result.rs:738
Process didn't exit successfully: `target/debug/hello_world` (exit code: 101)
-> exit code: 101

$ cargo run
   Compiling hello_world v0.0.1 (file:///app)
     Running `target/debug/hello_world`
thread '<main>' panicked at 'called `Result::unwrap()` on an `Err` value: Io(Error { repr: Os { code: 98, message: "Address already in use" } })', ../src/libcore/result.rs:738
Process didn't exit successfully: `target/debug/hello_world` (exit code: 101)
@passcod (Member) commented Jan 13, 2016

Yes, this was already requested some time ago; I rejected it at the time. I've changed my mind now: this would be a good feature. I just don't really have the time to develop it. Leaving this open so someone can look at it if they want.

@ghost commented Jan 26, 2016

@passcod I can try to implement this, but can you give me some advice on how to stop an already-running run?

@passcod (Member) commented Jan 26, 2016

Kill the child processes using this: http://doc.rust-lang.org/std/process/struct.Child.html#method.kill
Or something; the code is a bit of a mess. Off the top of my head, I would probably have the cargo::run method listen on a channel or similar for kill events/messages. If a new run is requested, send the kill message. Any runs still in progress will receive the message and terminate.

But there might be a better way to do things :)
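
A rough sketch of that channel idea (hypothetical; run here stands in for whatever cargo-watch's real run method looks like), using std::sync::mpsc:

use std::process::Command;
use std::sync::mpsc::{channel, Receiver, TryRecvError};
use std::thread;
use std::time::Duration;

// Hypothetical stand-in for cargo-watch's run method: spawn the
// command, then poll a channel for a kill message while the child runs.
fn run(kill_rx: Receiver<()>) {
    let mut child = Command::new("cargo")
        .arg("run")
        .spawn()
        .expect("failed to spawn cargo run");

    loop {
        match kill_rx.try_recv() {
            // A new run was requested (or the sender went away):
            // terminate the current child and return.
            Ok(()) | Err(TryRecvError::Disconnected) => {
                let _ = child.kill();
                let _ = child.wait(); // reap the killed process
                return;
            }
            // No kill message yet: sleep briefly, then poll again.
            Err(TryRecvError::Empty) => thread::sleep(Duration::from_millis(100)),
        }
    }
}

fn main() {
    let (kill_tx, kill_rx) = channel();
    let runner = thread::spawn(move || run(kill_rx));

    // When a file change triggers a new run, signal the old one first.
    thread::sleep(Duration::from_secs(2));
    kill_tx.send(()).ok();
    runner.join().unwrap();
}

One gap this sketch shares with the discussion below: it never notices when the child exits on its own.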

@LukasKalbertodt (Contributor)

@ivegotasthma Are you coding on it right now? I need that feature too, so I would start implementing it now. But I don't want to steal your job here ;)

@ghost commented Jan 29, 2016

I am working on it, but I don't have experience with synchronization primitives in Rust and it's going a bit slow. Feel free to work on it. :)


@LukasKalbertodt (Contributor)

I just saw your commit. It looks like it could already work. Does it?
I will add a few comments if I notice something I would do in another way. I hope you don't mind :)

@ghost commented Jan 29, 2016

It doesn't work yet because I need a RwLock around the Config. When a new thread is spawned and the config is passed to it, the lifetime for the thread is invalid. I looked around on the web, and it's probably an issue related to shared mutability between threads.

Thanks for the feedback, it's very welcome. :)


@LukasKalbertodt (Contributor)

As it turns out, it's pretty difficult. This issue and #2 are not easily solvable. The reason is the API of std::process::Child; you can read the problem description in my SO question. We have to wait for the child process to know when to start the next one (e.g. cargo test after cargo build has finished). But if we wait, we can't kill it.

I see a few possibilities here:

  1. Use the wait-timeout crate and do some kind of busy waiting (see the sketch below). A timeout of 100ms, for example, would probably feel immediate to the user and still wouldn't cause a lot of CPU usage. The actual process of compiling or testing will take more CPU resources anyway. But this needs to be profiled.
  2. Write the platform-dependent code ourselves, as mentioned in the SO answer. This has the advantage of doing exactly what we want, but the obvious disadvantage of being really hard to program, test and maintain.
  3. Leave everything as it is right now. Not really an option for me, though.
  4. Think of some other crazy hack. For example: we wouldn't need to wait for our process if we only ever executed one process (like just cargo build). So if we created a temporary .sh file with all the commands that need to be executed, we could spawn a single process that runs this script. Then we would only have one process, and the "wait, then execute the next command" step would be handled by the sh script.
  5. As in 4, we could also just allow one command to be executed. Not backwards compatible and not really an option for me.

I hope everything is understandable. I wanted to ask everyone involved what you think of it. Personally, I would choose the first option because it's the easiest, and the busy-timeout-waiting probably won't cause any significant CPU load. The fourth option is worth thinking about too, IMO.
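
For concreteness, a minimal sketch of what option 1 could look like, assuming the wait-timeout crate (its ChildExt trait adds a wait_timeout method to std::process::Child); the 100ms value is the guess from point 1, not a measured figure:

use std::process::Command;
use std::time::Duration;
use wait_timeout::ChildExt; // adds wait_timeout to std::process::Child

fn main() {
    let mut child = Command::new("cargo")
        .arg("build")
        .spawn()
        .expect("failed to spawn cargo build");

    loop {
        // Wait up to 100ms for the child to exit: this is the "busy
        // waiting", short enough to feel immediate to the user.
        match child
            .wait_timeout(Duration::from_millis(100))
            .expect("wait failed")
        {
            Some(status) => {
                // Child finished; a real implementation would start
                // the next command (e.g. cargo test) here.
                println!("child exited with {}", status);
                break;
            }
            // Still running. A real implementation would check here
            // whether a file change requested a kill.
            None => continue,
        }
    }
}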

What do you think?

@LukasKalbertodt (Contributor)

Did some quick profiling:

fn main() {
    // Sleep duration in nanoseconds, taken from the first CLI argument.
    let ns: u32 = std::env::args().nth(1).unwrap().parse().unwrap();
    let dur = std::time::Duration::new(0, ns);

    // Sleep in a tight loop and watch the resulting CPU usage.
    loop {
        std::thread::sleep(dur);
    }
}

I don't even know if this is comparable to the wait-timeout crate, but it was easy. At 100_000 ns (= 100µs = 0.1ms) I start noticing CPU usage (1% in my KSysGuard), but it doesn't get worse with shorter durations. I suspect this is due to the kernel's scheduling: the process probably isn't actually woken up much faster than every 100µs.

If we decide on (1), I would do some more profiling, of course.

@passcod (Member) commented Jan 30, 2016

(2) is undesirable, I agree. (3) is obviously not a solution. (4) is interesting, but also platform-dependent. (1) seems the best of those five. After reading some more on the issue in the context of Rust, (1) seems like our only sane option until non-blocking process status checking lands in Rust. Someday.

If you're going to do some more profiling, could you check this scenario? Pseudocode:

use std::process::Command;
use std::thread::sleep;
use std::time::Duration;
use wait_timeout::ChildExt; // provides wait_timeout on Child

let mut child = Command::new("foo").spawn().unwrap();
loop {
    // Zero timeout: ask for the exit status without blocking...
    match child.wait_timeout(Duration::from_millis(0)).unwrap() {
        Some(_status) => break,
        // ...and do the one-second waiting ourselves.
        None => sleep(Duration::from_secs(1)),
    }
}

That is, what happens if, instead of letting wait-timeout sleep for us, we instruct it to return immediately and handle the waiting ourselves?

@LukasKalbertodt (Contributor)

I hadn't found the RFC issue, thanks for that.

And that's a good idea: we can compare letting wait-timeout sleep for us against sleeping ourselves. Although I honestly don't expect any difference.

Is it decided then? :P Shall I start implementing it with wait-timeout now?

@passcod (Member) commented Jan 30, 2016

It is indeed decided :)

passcod added a commit that referenced this issue Jan 31, 2016
Refactor into more idiomatic and cleaner code

Fix #2
Fix #33
Do most of the work towards #25
@passcod (Member) commented Jan 31, 2016

Implemented in v3.1.0. I've tested it by using cargo watch run to watch cargo watch's own source. It works (yay!) but it's a bit mad, as both the child and parent cargo watch instances react to changes…

@passcod (Member) commented Jan 31, 2016

Never mind, 3.1.0 doesn't work for this. It seems to try to kill the child process, but fails somehow, then assumes the process was killed and tries to start further runs… which of course hit the dreaded "address in use" error.

I've tried:

  • Sending SIGKILL after SIGTERM, to force the process to terminate (after a suitable delay, like 250ms or 500ms or even 1s).
  • Calling child.kill() after sending the SIGTERM, i.e. same as above but without using libc for the SIGKILL.
  • Calling child.wait() after both these things to let the process finish correctly.
  • Not sending SIGKILL and then calling child.wait().
  • Replacing SIGTERM by SIGINT.
  • Not using libc at all, just child.kill(), i.e. reverting to the code just after merging "Big rewrite of the application" (#30).

Annoyingly, sending a signal (SIGKILL, SIGTERM, SIGINT…) to the child process directly from the command line works.

passcod reopened this Jan 31, 2016
@LukasKalbertodt (Contributor)

Wow, this is strange. I thought the process was killed but the port wasn't freed yet; but indeed, it looks like the process is still running. I just checked: child.kill() returns Ok(()). Maybe this is even a bug in std or wait-timeout, or some incompatibility between the two.

The documentation says that the method should only be used with great care. Maybe it somehow disturbs the kill method? Maybe @alexcrichton knows something about that?

But I was actually pretty surprised that you merged my PR already, since you had a few remarks. Sending a signal like SIGTERM or SIGINT first could be nice, but I'd rather avoid unsafe code.

And I did not understand your comment completely: of all the things you tried, nothing worked? What happened when calling wait()?

@passcod (Member) commented Jan 31, 2016

I had some comments, but realised I could merge and implement them myself, no need to drag it on :) So the current release sends a SIGTERM if on Unix, and a normal kill (I suppose? Not sure how it works) on Windows.

Of all the things I tried and listed, nothing worked indeed. Same behaviour every time. Cargo watch goes to kill the child process, thinks it got it, and proceeds to run the next command. In the meantime, the child has not been killed, hence the problem.

I haven't tested any of this on Windows, so I don't know if it works there.

@passcod (Member) commented Jan 31, 2016

One thing I found is that one can use libc::kill with a signal set to 0 to find out the status of a process in a non-blocking manner. (It's actually documented in the kill(2) manpage.) Not sure it's actually useful when we have wait-timeout, but good to know.
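
A minimal sketch of that check, assuming the libc crate (kill with signal 0 performs the existence and permission checks of kill(2) but delivers no signal):

use std::io;

// Returns true if the process with the given pid still exists,
// without blocking.
fn process_alive(pid: libc::pid_t) -> bool {
    // Safety: kill with signal 0 only queries the process; nothing is sent.
    if unsafe { libc::kill(pid, 0) } == 0 {
        true
    } else {
        // ESRCH means "no such process"; anything else (e.g. EPERM)
        // means the process exists but we may not signal it.
        io::Error::last_os_error().raw_os_error() != Some(libc::ESRCH)
    }
}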

@passcod (Member) commented Jan 31, 2016

Going to bed nowish, I'll pick this up again tomorrow. Feel free to experiment! :)

@LukasKalbertodt (Contributor)

The docs say that kill() sends SIGKILL, not SIGTERM. Not sure if you just made a typo, but SIGKILL should end the process immediately...

And yeah, sure, no hurry. Time zones make open source more difficult :P

@passcod (Member) commented Jan 31, 2016

I made a commit after merging that uses libc on Unix to send SIGTERM to the process instead of using child.kill() (which does use SIGKILL).

@LukasKalbertodt (Contributor)

OK, so I guess the problem is that cargo run is a completely different process from the actual user executable. After digging through some cargo code, I found out that cargo essentially starts the user process with

Command::new(...)./* args...*/.status()

So similar to our code (= no magic involved).

Furthermore, I just tested and verified with a small example: if process A starts process B and then waits for it (with status(), for example), and process A is then killed with SIGKILL, process B keeps running. So this is the whole problem. Sending SIGTERM or SIGINT doesn't help either.

I'm not yet sure how to correctly solve this problem. Maybe killing a whole process tree or something. Will look into that soon.
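
That test can be reproduced with a small sketch, where sh stands in for cargo run and sleep for the user's binary (assuming a Unix shell and the pgrep tool; the "; echo done" forces sh to fork sleep as a child instead of exec-ing it):

use std::process::Command;
use std::thread::sleep;
use std::time::Duration;

fn main() {
    // Process A: a shell that spawns process B (sleep) and waits for it.
    let mut shell = Command::new("sh")
        .arg("-c")
        .arg("sleep 300; echo done")
        .spawn()
        .unwrap();

    sleep(Duration::from_millis(500));

    // Kill A with SIGKILL (what child.kill() sends on Unix)...
    shell.kill().unwrap();
    shell.wait().unwrap();

    // ...and B is still alive: pgrep still finds it.
    let out = Command::new("pgrep")
        .args(["-f", "sleep 300"])
        .output()
        .unwrap();
    println!("surviving pids: {}", String::from_utf8_lossy(&out.stdout));
}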

@passcod (Member) commented Jan 31, 2016

Ah! Yes, that would explain it. Well done figuring it out.

@alexcrichton

The caveat in wait-timeout is mainly just indicating that once a process has been successfully waited on (either through that crate or the standard library), you can no longer consider the pid valid to do further operations like wait, kill, etc.

The behavior you may be seeing here is that Cargo spawns subprocesses which may not be getting killed. When you kill the parent it doesn't automatically kill the children, so they'll still be running around. Although after reading this thread it's not clear to me what the failure mode is, so this may not be the case either.

@passcod (Member) commented Jan 31, 2016

Yes, that is the behaviour we're seeing. Now we just want to kill the process tree.

One thing I have found, though it may be overkill: create a subprocess that calls setsid to make a new process group, have that process start cargo run, and cargo run starts the user's application. Then we could use killpg to terminate the entire process group without affecting the main cargo watch process (a sketch follows below).

Alternatively, the command-line utility pkill can be used to kill processes with a particular parent id. We could use syscalls to replicate that ability. The problem I see there is that between the process lookup and our actual killing the processes, those may already have terminated and the process ids reused somewhere else, so we'd run the risk of killing random processes on the machine.
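
A rough sketch of the setsid idea above, assuming the libc crate and today's std API (pre_exec; at the time of this thread the same hook was called before_exec):

use std::os::unix::process::CommandExt; // pre_exec
use std::process::Command;

fn main() -> std::io::Result<()> {
    let mut cmd = Command::new("cargo");
    cmd.arg("run");

    // Between fork and exec, put the child into a new session (and
    // thus a new process group), detached from cargo watch's own.
    unsafe {
        cmd.pre_exec(|| {
            if libc::setsid() == -1 {
                return Err(std::io::Error::last_os_error());
            }
            Ok(())
        });
    }

    let child = cmd.spawn()?;
    // After setsid, the group id equals the child's pid.
    let pgid = child.id() as libc::pid_t;

    // ...later, on a file change, signal the whole group, so both
    // cargo run and the user's binary receive it.
    unsafe {
        libc::killpg(pgid, libc::SIGTERM);
    }
    Ok(())
}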

@passcod (Member) commented Jan 31, 2016

I'm wondering how Ctrl-C works, as it obviously kills the entire thing; it doesn't leave a dangling process. Sending SIGINT to the cargo run process didn't seem to work. Maybe we ought to discuss this with the Cargo people.

@sedrik commented Nov 23, 2016

Hi all

I wrote a small script for this as I needed it myself (I did not know about cargo-watch at the time). Until it has been solved in cargo-watch, feel free to use my script.

My solution to the issue is to pass the target binary as an argument to the script, thereby sidestepping cargo run in this particular case.

Hope it helps and hopefully we get support for this in cargo-watch soon.

https://users.rust-lang.org/t/small-script-to-watch-and-re-launch-rust-code

@robinst (Contributor) commented Nov 23, 2016

@sedrik You can use watchexec (written in Rust) instead, like this:

watchexec --restart "cargo run"

@sedrik commented Nov 24, 2016

Interesting. If I find the need to switch, I will look into it; my current setup uses my script. But thanks for telling me about watchexec =)

@passcod (Member) commented Dec 29, 2016

This will be the next feature added. I don't have an ETA, but it's at the top of the todo list.

@bb010g commented Mar 27, 2017

Would duct be able to help here?

@passcod (Member) commented Mar 27, 2017

Oh that is perfect! I was just thinking something like that would be needed, but I wasn't looking forward to implementing it myself. That really helps. I'll work on replacing all the executy bits in Cargo Watch with duct. Thanks @bb010g!

passcod self-assigned this Mar 27, 2017
@passcod (Member) commented Mar 28, 2017

Preliminary testing is promising. Barring any big surprises, support for this will (finally!) land in the next release, within a few days.

@passcod (Member) commented Mar 29, 2017

There's new support for this in version 4.0.0, which I just released.

passcod closed this as completed Mar 29, 2017
@mathroc (Author) commented Mar 30, 2017

Hi @passcod, I just had some time to do some basic testing but I can't get it to work; maybe I'm missing something obvious… I'm not sure where the problem is. Here is what I can observe with this small project:

// main.rs
#![feature(plugin)]
#![plugin(rocket_codegen)]

extern crate rocket;

#[get("/")]
fn index() -> &'static str {
    "Hello, world!"
}

fn main() {
    rocket::ignite().mount("/", routes![index]).launch();
}
# Cargo.toml
[package]
name = "hello-rocket"
version = "0.1.0"
authors = ["Mathieu Rochette <mathieu@texthtml.net>"]

[dependencies]
rocket = "0.2.3"
rocket_codegen = "0.2.3"

mathieu in ~/Projects/hello-rocket on master%
⚡ cargo watch -x run
[Watching for changes... Ctrl-C to stop]
[Running 'cargo run']
Finished dev [unoptimized + debuginfo] target(s) in 0.0 secs
Running target/debug/hello-rocket
🔧 Configured for development.
=> address: localhost
=> port: 8000
=> log: normal
=> workers: 4
🛰 Mounting '/':
=> GET /
🚀 Rocket has launched from http://localhost:8000...

http://localhost:8000/ now prints "Hello, world!"

I replace Hello, world! in main.rs with Hello, rust!

[Killing running command]

[Running 'cargo run']
Compiling hello-rocket v0.1.0 (file:///home/mathieu/Projects/hello-rocket)
Finished dev [unoptimized + debuginfo] target(s) in 1.75 secs
Running target/debug/hello-rocket
🔧 Configured for development.
=> address: localhost
=> port: 8000
=> log: normal
=> workers: 4
🛰 Mounting '/':
=> GET /
🚀 Rocket has launched from http://localhost:8000...

but http://localhost:8000/ still prints "Hello, world!" :(

I replace Hello, rust! in main.rs with Hello, you!

[Killing running command]

[Running 'cargo run']
Compiling hello-rocket v0.1.0 (file:///home/mathieu/Projects/hello-rocket)
Finished dev [unoptimized + debuginfo] target(s) in 1.70 secs
Running target/debug/hello-rocket
🔧 Configured for development.
=> address: localhost
=> port: 8000
=> log: normal
=> workers: 4
🛰 Mounting '/':
=> GET /
Error: Failed to start server.
thread 'main' panicked at 'Address already in use (os error 98)', /home/mathieu/.cargo/registry/src/github.com-1ecc6299db9ec823/rocket-0.2.3/src/rocket.rs:555
note: Run with RUST_BACKTRACE=1 for a backtrace.

http://localhost:8000/ still prints "Hello, world!" :(
and there are 2 instances running:

mathieu in ~
⚡ pgrep -a hello
15080 target/debug/hello-rocket
15189 target/debug/hello-rocket

I replace Hello, you! with Hello, darkness!

ERROR:cargo_watch::schedule: Error trying to check status of job, abort.
ERROR:cargo_watch::schedule: command ["/bin/sh", "-c", "cargo run"] exited with code 101

cargo-watch now exits and leaves behind the last 2 instances:

mathieu in ~
⚡ pgrep -a hello
15080 target/debug/hello-rocket
15189 target/debug/hello-rocket

I hope this can be helpful in fixing it; please tell me if I can do more debugging.

PS:

mathieu in ~/Projects/hello-rocket on master%
⚡ cargo-watch --version
cargo-watch 4.0.0
mathieu in ~/Projects/hello-rocket on master%
⚡ rustc --version
rustc 1.17.0-nightly (ccce2c6eb 2017-03-27)

@passcod (Member) commented Mar 30, 2017

Hmm.

When killing:

  1. Cargo Watch tells Duct to kill the job Handle
  2. Duct does this:
    1. Internally, the Handle calls .kill() on the HandleInner enum
    2. Which calls .kill() on its inner value…
    3. …eventually after going through the "Duct call tree" it arrives at the leaf HandleInner, and calls .kill() on that
    4. Which calls .kill() on a shared_child::SharedChild instance
    5. Which uses the libstd Process implementation, which sends a SIGKILL through libc's kill. But what should really be done is to use libc's killpg (process-group kill), which is why watchexec has a custom Process implementation just for that.

@passcod (Member) commented Mar 31, 2017

Opened an issue on Duct for this.

@passcod (Member) commented Apr 18, 2017

I've written a new version that calls watchexec by translating cargo watch's options and defaults, so given that watchexec has this issue covered, technically this is fixed. It's a bit of a cheat, though.

Check it out: https://github.com/passcod/cargo-watch/tree/just-wrap-watchexec
You can install with this lengthy command:

$ cargo install --git https://github.com/passcod/cargo-watch --branch just-wrap-watchexec

And if you don't have it already, you'll also need to run cargo install watchexec.

@mathroc (Author) commented Apr 18, 2017

Hey @passcod, I just tried the just-wrap-watchexec branch but it does not seem to work; cargo watch -x run does nothing:

$ time cargo watch -x run
  mathieu in ~/Projects/hello-rocket on master%
⚡ cargo install --git https://github.com/passcod/cargo-watch --branch just-wrap-watchexec --force
    Updating git repository `https://github.com/passcod/cargo-watch`
  Installing cargo-watch v5.0.0 (https://github.com/passcod/cargo-watch?branch=just-wrap-watchexec#0308fd0b)
    Updating registry `https://github.com/rust-lang/crates.io-index`
   Compiling unicode-width v0.1.4
   Compiling libc v0.2.21
   Compiling winapi v0.2.8
   Compiling vec_map v0.7.0
   Compiling unicode-segmentation v1.1.0
   Compiling regex-syntax v0.3.9
   Compiling log v0.3.7
   Compiling winapi-build v0.1.1
   Compiling term_size v0.3.0
   Compiling ansi_term v0.9.0
   Compiling utf8-ranges v0.1.3
   Compiling kernel32-sys v0.2.2
   Compiling strsim v0.6.0
   Compiling thread-id v2.0.0
   Compiling thread_local v0.2.7
   Compiling atty v0.2.2
   Compiling memchr v0.1.11
   Compiling bitflags v0.8.2
   Compiling aho-corasick v0.5.3
   Compiling clap v2.23.2
   Compiling regex v0.1.80
   Compiling env_logger v0.3.5
   Compiling cargo-watch v5.0.0 (https://github.com/passcod/cargo-watch?branch=just-wrap-watchexec#0308fd0b)
    Finished release [optimized] target(s) in 89.27 secs
   Replacing /home/mathieu/.cargo/bin/cargo-watch
mathieu in ~/Projects/hello-rocket on master%
⚡ cargo run
    Finished dev [unoptimized + debuginfo] target(s) in 0.0 secs
     Running `target/debug/hello-rocket`
🔧  Configured for development.
    => address: localhost
    => port: 8000
    => log: normal
    => workers: 4
🛰  Mounting '/':
    => GET /
🚀  Rocket has launched from http://localhost:8000...
^C
mathieu in ~/Projects/hello-rocket on master%
⚡ time cargo watch -x run
real    0m0.027s
user    0m0.016s
sys     0m0.011s

passcod added a commit that referenced this issue Apr 18, 2017
Probably help debugging (see comments on #25)
@passcod (Member) commented Apr 18, 2017

Run with RUST_LOG=cargo_watch=debug to see the options it generates for watchexec. I've also added some better erroring, so it should be less silent if it fails to run watchexec. Also, silly question: you do have watchexec installed and in your PATH, right?

@mathroc (Author) commented Apr 18, 2017

Not silly at all, that was it!
It's still not working, but this is a problem with watchexec now:

RUST_LOG=cargo_watch=debug cargo watch -x run

mathieu in ~/Projects/hello-rocket on master%
⚡ RUST_LOG=cargo_watch=debug cargo watch -x run
INFO:cargo_watch: Filters: []
INFO:cargo_watch: Settings: ["--debug"]
INFO:cargo_watch: Watches: []
INFO:cargo_watch: Commands: ["cargo run"]
*** glob converted to regex: Glob { glob: "**/target/**", re: "(?-u)^(?:/?|.*/)target(?:/?|/.*)$", opts: GlobOptions { case_insensitive: false, literal_separator: true }, tokens: Tokens([RecursivePrefix, Literal('t'), Literal('a'), Literal('r'), Literal('g'), Literal('e'), Literal('t'), RecursiveSuffix]) }
*** built glob set; 0 literals, 0 basenames, 0 extensions, 0 prefixes, 0 suffixes, 0 required extensions, 1 regexes
*** Loaded "/home/mathieu/Projects/hello-rocket/.gitignore"
*** Adding ignore: "*/.*/*"
*** Adding ignore: "**/.DS_Store"
*** Adding ignore: "*.pyc"
*** Adding ignore: "*.swp"
*** glob converted to regex: Glob { glob: "*/.*/*", re: "(?-u)^.*/\..*/.*$", opts: GlobOptions { case_insensitive: false, literal_separator: false }, tokens: Tokens([ZeroOrMore, Literal('/'), Literal('.'), ZeroOrMore, Literal('/'), ZeroOrMore]) }
*** built glob set; 0 literals, 0 basenames, 2 extensions, 0 prefixes, 1 suffixes, 0 required extensions, 1 regexes
thread 'main' panicked at 'unable to create watcher: Io(Error { repr: Os { code: 28, message: "No space left on device" } })', /checkout/src/libcore/result.rs:859
note: Run with RUST_BACKTRACE=1 for a backtrace.
ERROR:cargo_watch: Oh no! Watchexec exited with: signal: 6
mathieu in ~/Projects/hello-rocket on master%
⚡ watchexec -r cargo run
thread 'main' panicked at 'unable to create watcher: Io(Error { repr: Os { code: 28, message: "No space left on device" } })', /checkout/src/libcore/result.rs:859
note: Run with RUST_BACKTRACE=1 for a backtrace.
Aborted
mathieu in ~/Projects/hello-rocket on master%
⚡ watchexec -r --force-poll 50 cargo run
*** Polling for changes every 50 ms
Finished dev [unoptimized + debuginfo] target(s) in 0.0 secs
Running target/debug/hello-rocket
🔧 Configured for development.
=> address: localhost
=> port: 8000
=> log: normal
=> workers: 4
🛰 Mounting '/':
=> GET /
🚀 Rocket has launched from http://localhost:8000...
Finished dev [unoptimized + debuginfo] target(s) in 0.0 secs
Running target/debug/hello-rocket
🔧 Configured for development.
=> address: localhost
=> port: 8000
=> log: normal
=> workers: 4
🛰 Mounting '/':
=> GET /
🚀 Rocket has launched from http://localhost:8000...
Compiling hello-rocket v0.1.0 (file:///home/mathieu/Projects/hello-rocket)
Compiling hello-rocket v0.1.0 (file:///home/mathieu/Projects/hello-rocket)
Finished dev [unoptimized + debuginfo] target(s) in 1.51 secs
Running target/debug/hello-rocket
🔧 Configured for development.
=> address: localhost
=> port: 8000
=> log: normal
=> workers: 4
🛰 Mounting '/':
=> GET /
🚀 Rocket has launched from http://localhost:8000...

watchexec -r --force-poll 50 cargo run kinda works, but it seems to be triggered even when there are no changes 😔

passcod added a commit that referenced this issue Apr 18, 2017
Noticed I forgot this for #25, oh no
@passcod (Member) commented Apr 18, 2017

Ah, that might be an inotify (or similar) watch-limit issue. If you are on Linux, see: https://github.com/passcod/cargo-watch#linux-if-it-fails-to-watch-some-deep-directories-but-not-others and see notify-rs/notify#103 for the upstream issue.
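
If it is the watch limit, the usual remedy (the standard inotify fix; the exact value below is just a common choice) is to raise it:

$ sudo sysctl fs.inotify.max_user_watches=524288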

@Boscop commented Apr 20, 2017

When I run cargo watch -x run on my server project and edit and save a source file, it says [Killing running command] but doesn't actually do anything, so the old server exe keeps running.
watchexec --restart "cargo run" seems to work, but is there a way to do this with cargo watch too?

@passcod (Member) commented Apr 21, 2017

Yes, if you use this version: https://github.com/passcod/cargo-watch/tree/just-wrap-watchexec it will literally just wrap watchexec and do exactly that. You're using the v4 version (from the master branch), which does not have a fix for this issue (yet).

@Boscop commented Apr 22, 2017

watchexec --restart "cargo run" doesn't watch the files in the other crates in the same workspace, though, which the current crate depends on.
Is there a way to run it on the top-level folder of the workspace (so that it watches all files of the workspace) with an argument saying which crate to run?
I tried watchexec --restart "cargo run --bin foo" but it doesn't work…

@passcod (Member) commented Apr 22, 2017

@Boscop please ask watchexec questions on the watchexec repo :) and if you want to ask workspace-related questions about cargo-watch, open a new issue, thanks. Also check out the Troubleshooting section of the README.

@passcod (Member) commented Apr 27, 2017

Finally released properly in 5.0.0.
