error: display_handler panic on launch #38

Closed
titus-anromedonn opened this issue Dec 30, 2019 · 18 comments · Fixed by #41

Comments

@titus-anromedonn commented Dec 30, 2019

Immediately after launching, the following error shows up.

thread 'display_handler' panicked at 'overflow when subtracting durations', src/libcore/option.rs:1185:5

note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace.

thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Any', src/libcore/result.rs:1165:5
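For readers unfamiliar with the pattern: the second panic is a consequence of the first. Below is a minimal, hypothetical reduction (not the actual bandwhich code) of how a panicking worker thread turns into the main thread's `Result::unwrap()` panic:

```rust
use std::thread;

fn main() {
    // The worker thread panics first (here with the same message as the report)...
    let handle = thread::spawn(|| {
        panic!("overflow when subtracting durations");
    });
    // ...then joining it returns Err, and unwrapping that Err produces the
    // second panic: "called `Result::unwrap()` on an `Err` value: Any".
    handle.join().unwrap();
}
```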

@imsnif (Owner) commented Dec 30, 2019

Oh my! @titus-anromedonn - are you using mac/linux? Which version? Does this always happen? How are you running it?

@titus-anromedonn (Author) commented Dec 30, 2019

> Oh my! @titus-anromedonn - are you using mac/linux? Which version? Does this always happen? How are you running it?

@imsnif Happens every time I launch. At first I thought it was because I was running inside of tmux, but it crashes inside normal terminal windows as well.

I installed via cargo. Cargo and rust were installed via rustup.

I've been tailing syslog and the only thing I'm seeing is my network devices going into promiscuous mode. If there is a file you would like me to tail in order to retrieve more detailed information, please let me know.

Run:

sudo ~/home/.cargo/bin/what

System info:

os:             Ubuntu 18.04.3
rustc:          1.40.0 (73528e339 2019-12-16)
rustup:         1.21.1 (7832b2ebe 2019-12-20)
what:           0.5.1

@ebroto (Collaborator) commented Dec 30, 2019

Thanks for the report @titus-anromedonn! Could you run it like this:

RUST_BACKTRACE=full sudo ~/home/.cargo/bin/what

This will give more information about where the error happens.

@titus-anromedonn (Author)

> Thanks for the report @titus-anromedonn! Could you run it like this:
>
> RUST_BACKTRACE=full sudo ~/home/.cargo/bin/what
>
> This will give more information about where the error happens.

@ebroto

thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Any', src/libcore/result.rs:1165:5
stack backtrace:
   0:     0x55a4cbdc9e14 - backtrace::backtrace::libunwind::trace::h65597d255cb1398b
                               at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.40/src/backtrace/libunwind.rs:88
   1:     0x55a4cbdc9e14 - backtrace::backtrace::trace_unsynchronized::hd4f479d7150ec4a0
                               at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.40/src/backtrace/mod.rs:66
   2:     0x55a4cbdc9e14 - std::sys_common::backtrace::_print_fmt::h015072984a2b172c
                               at src/libstd/sys_common/backtrace.rs:77
   3:     0x55a4cbdc9e14 - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::h6df05d3335f32194
                               at src/libstd/sys_common/backtrace.rs:61
   4:     0x55a4cbdec3ec - core::fmt::write::h1f444f4312eb6c27
                               at src/libcore/fmt/mod.rs:1028
   5:     0x55a4cbdc6037 - std::io::Write::write_fmt::h8d147888220078ef
                               at src/libstd/io/mod.rs:1412
   6:     0x55a4cbdcc38e - std::sys_common::backtrace::_print::h8a6df0fa81d6af62
                               at src/libstd/sys_common/backtrace.rs:65
   7:     0x55a4cbdcc38e - std::sys_common::backtrace::print::h6f05b4733407e509
                               at src/libstd/sys_common/backtrace.rs:50
   8:     0x55a4cbdcc38e - std::panicking::default_hook::{{closure}}::h0d0a23bd02315dd8
                               at src/libstd/panicking.rs:188
   9:     0x55a4cbdcc081 - std::panicking::default_hook::h8d15a9aecb4efac6
                               at src/libstd/panicking.rs:205
  10:     0x55a4cbdcca8b - std::panicking::rust_panic_with_hook::hbe174577402a475d
                               at src/libstd/panicking.rs:464
  11:     0x55a4cbdcc62e - std::panicking::continue_panic_fmt::h4d855dad868accf3
                               at src/libstd/panicking.rs:373
  12:     0x55a4cbdcc516 - rust_begin_unwind
                               at src/libstd/panicking.rs:302
  13:     0x55a4cbde8cee - core::panicking::panic_fmt::hdeb7979ab6591473
                               at src/libcore/panicking.rs:139
  14:     0x55a4cbde8de7 - core::result::unwrap_failed::h054dd680e6fcd38b
                               at src/libcore/result.rs:1165
  15:     0x55a4cbc64a7e - what::try_main::h16d9f4317f9417d0
  16:     0x55a4cbc61ffa - what::main::hde4295a1949f4cc9
  17:     0x55a4cbc54a73 - std::rt::lang_start::{{closure}}::hbc24c5ba38563b48
  18:     0x55a4cbdcc4b3 - std::rt::lang_start_internal::{{closure}}::h6ea535ec5c50fc3e
                               at src/libstd/rt.rs:48
  19:     0x55a4cbdcc4b3 - std::panicking::try::do_call::h631c6408dfccc6f5
                               at src/libstd/panicking.rs:287
  20:     0x55a4cbdd089a - __rust_maybe_catch_panic
                               at src/libpanic_unwind/lib.rs:78
  21:     0x55a4cbdccf6d - std::panicking::try::hab539b2d1255d635
                               at src/libstd/panicking.rs:265
  22:     0x55a4cbdccf6d - std::panic::catch_unwind::hd5e0a26424bd7f34
                               at src/libstd/panic.rs:396
  23:     0x55a4cbdccf6d - std::rt::lang_start_internal::h3bdc4c7d98181bf9
                               at src/libstd/rt.rs:47
  24:     0x55a4cbc65912 - main
  25:     0x7faccc1bfb97 - __libc_start_main
  26:     0x55a4cbc1a93a - _start
  27:                0x0 - <unknown>

@ebroto (Collaborator) commented Dec 30, 2019

Thanks!

The backtrace seems to point somewhere else 🤔, but looking at the relevant thread, we are doing this:
https://github.com/imsnif/what/blob/c9b9025577d0e1f831c9ebb054a5ac00b7a89796/src/main.rs#L161

I think we would get this panic if render_duration is greater than one second.

https://github.com/rust-lang/rust/blob/9d6f87184e5116cf4a96f6686eb67994f19908a5/src/libcore/time.rs#L751-L758
https://github.com/rust-lang/rust/blob/9d6f87184e5116cf4a96f6686eb67994f19908a5/src/libcore/time.rs#L417-L418

Does it make sense @imsnif? I can open a PR to fix it. Unfortunately I would not be able to test it as I can't reproduce it.
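As an aside, here is a minimal sketch of the failure mode described above, using made-up values rather than anything from the actual code: `Duration`'s `Sub` impl is `checked_sub(...).expect(...)`, so subtracting a longer render duration from the one-second tick panics with exactly the reported message.

```rust
use std::time::Duration;

fn main() {
    let tick = Duration::from_secs(1);
    // Hypothetical value: pretend one render pass took 1.5 seconds.
    let render_duration = Duration::from_millis(1500);

    // `Duration - Duration` is `checked_sub(rhs).expect("overflow when subtracting durations")`,
    // so this subtraction panics with exactly the message from the report.
    let _sleep_for = tick - render_duration;
}
```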

@titus-anromedonn (Author) commented Dec 30, 2019

> Thanks!
>
> The backtrace seems to point somewhere else 🤔, but looking at the relevant thread, we are doing this:
>
> https://github.com/imsnif/what/blob/c9b9025577d0e1f831c9ebb054a5ac00b7a89796/src/main.rs#L161
>
> I think we would get this panic if render_duration is greater than one second.
>
> https://github.com/rust-lang/rust/blob/9d6f87184e5116cf4a96f6686eb67994f19908a5/src/libcore/time.rs#L751-L758
> https://github.com/rust-lang/rust/blob/9d6f87184e5116cf4a96f6686eb67994f19908a5/src/libcore/time.rs#L417-L418
>
> Does it make sense @imsnif? I can open a PR to fix it.

@ebroto Just fyi, I have several hundred, if not thousand, network connections open at any given time.

So I would not be surprised to find the render time exceeding some threshold you guys have set internally.

@imsnif (Owner) commented Dec 30, 2019

> @ebroto Just fyi, I have several hundred, if not thousand, network connections open at any given time.
>
> So I would not be surprised to find the render time exceeding some threshold you guys have set internally.

Wow! Okay.
@ebroto: what fix did you have in mind?
@titus-anromedonn: does this also happen when you use the --raw flag? If so, does it also happen when you pipe the output with --raw to a file? e.g. sudo what --raw > /tmp/log?

I'm asking because right now the display loop is a little hacky: it is invoked every 1 second and then displays whatever happened in that second. If it takes more than 1 second to display, this might be the issue we're seeing here.
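A rough sketch of the loop shape described above (the names and structure are illustrative assumptions, not the actual source): if rendering overruns the one-second tick, the subtraction feeding `park_timeout` overflows and the display thread panics.

```rust
use std::thread::park_timeout;
use std::time::{Duration, Instant};

// Illustrative display loop: render once per second, then sleep off the remainder.
fn display_loop(render: impl Fn()) {
    loop {
        let render_start = Instant::now();
        render(); // draw whatever happened during the last second
        let render_duration = render_start.elapsed();
        // If rendering took longer than 1s, this subtraction overflows and panics.
        park_timeout(Duration::from_secs(1) - render_duration);
    }
}
```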

@ebroto (Collaborator) commented Dec 30, 2019

Wow :) That would probably be more than what the tool has been exposed to before. There are some performance improvements pending, but I think that, for now, to fix this panic we could just avoid parking the thread.
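One possible shape for that fix, sketched here as an assumption rather than a description of the eventual PR: compute the remaining time with `checked_sub` and skip the park entirely when rendering has already used up the tick.

```rust
use std::thread::park_timeout;
use std::time::{Duration, Instant};

// Sketch of a non-panicking per-tick sleep: only park if there is time left.
fn sleep_off_remainder(render_start: Instant) {
    let render_duration = render_start.elapsed();
    if let Some(remaining) = Duration::from_secs(1).checked_sub(render_duration) {
        park_timeout(remaining);
    }
    // Otherwise rendering already consumed the whole tick; start the next frame immediately.
}
```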

@imsnif (Owner) commented Dec 30, 2019

Sorry @ebroto - my curiosity got the better of me. :) Let me know if you need anything from me.

@ebroto (Collaborator) commented Dec 30, 2019

Hey @imsnif, sorry, I was answering the previous message by titus-anromedonn; I forgot the mention and we commented at the same time :)

@titus-anromedonn (Author)

@ebroto @imsnif

I tried with and without piping the output. Both still cause the panic.

# what  --raw > /tmp/test-what.log
thread 'display_handler' panicked at 'overflow when subtracting durations', src/libcore/option.rs:1185:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace.

I suspect it is my large pool of connections that is causing the issue. Unfortunately, I cannot shut down my internet connections at this time to test and verify, but I will try to replicate this issue in a VM a bit later.

@imsnif (Owner) commented Dec 30, 2019

This was accidentally closed when I merged @ebroto's PR - so reopening it until we can confirm the fix. I'm not 100% certain this will work (because if rendering takes 30s...)

@ebroto (Collaborator) commented Dec 30, 2019

@titus-anromedonn could you test with the latest release? The fix related to the render_duration has been applied :)

@titus-anromedonn (Author)

@ebroto I noticed that you guys created a release but did not push to crates.io. Any plans on doing that, or should I just test from the source packed with the GitHub release?

@ebroto (Collaborator) commented Dec 30, 2019

Oh right, I forgot to mention that the name has been changed to bandwhich :)

@titus-anromedonn (Author)

@ebroto Ahh awesome! Can confirm that it works now!

Thanks for the quick response guys!

@ebroto (Collaborator) commented Dec 30, 2019

Cool! Thank you for reporting the issue :)

ebroto closed this as completed Dec 30, 2019
@imsnif (Owner) commented Dec 30, 2019

@titus-anromedonn - I would be very interested in hearing more about your experiences running bandwhich under that kind of load. If you feel like sharing here or privately: aram@poor.dev

Thanks!
