Exceptions dropped in multithreaded code #17532
Comments
Exceptions in threads are silently dropped right now. This is known and will be addressed, hopefully in the next major update to threading.
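For concreteness, a minimal sketch of the kind of code this report is about (illustrative only; on the Julia versions current at the time, the `DomainError` raised inside the loop was never reported to the user):

```julia
using Base.Threads

xs = fill(-1.0, 8)
@threads for i in eachindex(xs)
    xs[i] = sqrt(xs[i])   # DomainError for negative input
end
# On affected versions the loop finished with no error message and
# xs was simply left unchanged.
```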
Thanks @kpamnany. I could not find an associated issue so I'll leave this open, but please close if I've overlooked something.
Can we just put a …
Referenced in a Celeste commit (#582), "add a logger that includes exception traceback, and wrap `Thread.@threads` loops with it":
* `Log.exception()` is like `Log.error()`, but includes exception information. Inspired by Python's `logging.exception()` and such.
* Since exceptions inside threads are silently swallowed right now (JuliaLang/julia#17532), always wrap the body of a `Thread.@threads` call with a try-catch that manually logs the exception.
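A minimal sketch of that workaround pattern: catch exceptions inside the `@threads` body and log them after the loop. The function name `logging_threaded_map!` is made up for illustration, and Julia's standard `@error` logging macro stands in for Celeste's `Log.exception()`.

```julia
using Base.Threads

function logging_threaded_map!(f, xs)
    errs = Any[]                      # exceptions collected from all iterations
    errs_lock = ReentrantLock()
    @threads for i in eachindex(xs)
        try
            xs[i] = f(xs[i])
        catch err
            # record the exception (and its backtrace) instead of letting it vanish
            lock(errs_lock) do
                push!(errs, (err, catch_backtrace()))
            end
        end
    end
    # report everything that was caught once the loop has finished
    for (err, bt) in errs
        @error "exception in threaded loop" exception = (err, bt)
    end
    return xs
end
```

With this wrapper, `logging_threaded_map!(sqrt, fill(-1.0, 8))` logs each `DomainError` instead of dropping it, and the loop still visits every element.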
Referenced in a later commit:
* Print a warning if an error occurs in the threaded loop (helps JuliaLang#17532).
* Make recursive threaded loops "work" (fix JuliaLang#18335). The proper fix will be tracked by JuliaLang#21017.
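For reference, a sketch of what a "recursive threaded loop" in the sense of #18335 looks like: a function containing a `@threads` loop that calls itself from inside the loop body, so `@threads` ends up nested inside `@threads`. The `tree_sum` name and the cutoff value are invented for illustration.

```julia
using Base.Threads

function tree_sum(xs, lo = 1, hi = length(xs))
    hi - lo < 1024 && return sum(@view xs[lo:hi])   # small ranges: plain serial sum
    mid = (lo + hi) >>> 1
    parts = zeros(2)
    @threads for k in 1:2
        # each iteration recurses, so another @threads loop runs inside this one
        parts[k] = k == 1 ? tree_sum(xs, lo, mid) : tree_sum(xs, mid + 1, hi)
    end
    return parts[1] + parts[2]
end
```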
This has been fixed.
I'm afraid this isn't fixed, …
??
Apologies! I just realised as well that I need to wait() on the thread. Thanks :)
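That matches how task failures surface in general: an exception raised inside a task is only rethrown once the task is waited on or fetched. A small sketch (assumes a Julia version with `Threads.@spawn`, i.e. 1.3 or later):

```julia
t = Threads.@spawn error("boom")   # nothing is printed when the task fails
try
    wait(t)                        # the failure resurfaces here as a TaskFailedException
catch ex
    @warn "background task failed" exception = ex
end
```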
Is this a consequence of #17388, or another problem?