This repository has been archived by the owner on Nov 29, 2023. It is now read-only.

Ursa-proxy, tokio-runtime-worker: internal error: entered unreachable code: the default fallback added in `Router::new` matches everything #522

Closed

heldrida opened this issue Apr 24, 2023 · 14 comments

@heldrida
Member
heldrida commented Apr 24, 2023

Description

On a fresh install of the Docker stack on a VPS, an error shows up that might be similar to, or the same as, the previously reported "tokio-runtime-worker: internal error: entered unreachable code: the default fallback added in `Router::new` matches everything".

Because the Docker stack is set to restart on failure, the issue does not seem to recur immediately after a restart. I left the service running with no interactions other than an initial /ping, and when I checked back I saw the error in the logs again.

Environment

  • Ubuntu 22.04 LTS

Demo

full-node-ursa-proxy-1  | thread 'tokio-runtime-worker' panicked at 'internal error: entered unreachable code: the default fallback added in `Router::new` matches everything', /usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/axum-0.6.16/src/routing/mod.rs:318:25
full-node-ursa-proxy-1  | note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
full-node-ursa-proxy-1 exited with code 0

After leaving the server running for ≥15 minutes following the initial /ping request:

full-node-ursa-proxy-1  |   2023-04-24T10:37:46.178222Z  INFO ursa_proxy::core::handler: Cache miss for ping
full-node-ursa-proxy-1  |     at crates/ursa-proxy/src/core/handler.rs:172
full-node-ursa-proxy-1  |     in axum_tracing_opentelemetry::middleware::trace_extractor::HTTP request with otel.name: GET /*path, http.client_ip: , http.flavor: 1.1, http.host: localhost, http.method: GET, http.route: /*path, http.scheme: HTTP, http.target: /ping, http.user_agent: curl/7.81.0, otel.kind: server, trace_id: fed2f655f1f050e7e585dc9c262dc293
full-node-ursa-proxy-1  |
full-node-ursa-proxy-1  |   2023-04-24T10:37:46.178242Z  INFO ursa_proxy::core::handler: Sending request to http://full-node-ursa-1:4069/ping
full-node-ursa-proxy-1  |     at crates/ursa-proxy/src/core/handler.rs:180
full-node-ursa-proxy-1  |     in axum_tracing_opentelemetry::middleware::trace_extractor::HTTP request with otel.name: GET /*path, http.client_ip: , http.flavor: 1.1, http.host: localhost, http.method: GET, http.route: /*path, http.scheme: HTTP, http.target: /ping, http.user_agent: curl/7.81.0, otel.kind: server, trace_id: fed2f655f1f050e7e585dc9c262dc293
full-node-ursa-proxy-1  |
full-node-ursa-proxy-1  |   2023-04-24T10:37:46.179994Z  INFO tower_http::trace::on_response: finished processing request, latency: 1797 μs, status: 200, response_headers: {"content-type": "text/plain; charset=utf-8", "content-length": "4", "date": "Mon, 24 Apr 2023 10:37:46 GMT"}
full-node-ursa-proxy-1  |     at /usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/tower-http-0.4.0/src/trace/on_response.rs:254
full-node-ursa-proxy-1  |     in axum_tracing_opentelemetry::middleware::trace_extractor::HTTP request with otel.name: GET /*path, http.client_ip: , http.flavor: 1.1, http.host: localhost, http.method: GET, http.route: /*path, http.scheme: HTTP, http.target: /ping, http.user_agent: curl/7.81.0, otel.kind: server, trace_id: fed2f655f1f050e7e585dc9c262dc293
full-node-ursa-proxy-1  |
full-node-ursa-1        |   2023-04-24T10:40:23.688659Z  INFO ursa_network::service: Starting random kademlia walk
full-node-ursa-1        |     at crates/ursa-network/src/service.rs:998
full-node-ursa-1        |
full-node-ursa-1        |   2023-04-24T10:45:23.689696Z  INFO ursa_network::service: Starting random kademlia walk
full-node-ursa-1        |     at crates/ursa-network/src/service.rs:998
full-node-ursa-1        |
full-node-grafana-1     | logger=cleanup t=2023-04-24T10:45:25.419807558Z level=info msg="Completed cleanup jobs" duration=48.74374ms
full-node-ursa-proxy-1  |   2023-04-24T10:45:34.520918Z  INFO ursa_proxy::core::handler: Cache miss for cdn-cgi/trace
full-node-ursa-proxy-1  |     at crates/ursa-proxy/src/core/handler.rs:172
full-node-ursa-proxy-1  |     in axum_tracing_opentelemetry::middleware::trace_extractor::HTTP request with otel.name: GET /*path, http.client_ip: , http.flavor: 1.1, http.host: speed.cloudflare.com, http.method: GET, http.route: /*path, http.scheme: HTTP, http.target: /cdn-cgi/trace, http.user_agent: Mozilla/5.0, otel.kind: server, trace_id: ee3ac5d1def04eba98e75a5f27b77057
full-node-ursa-proxy-1  |
full-node-ursa-proxy-1  |   2023-04-24T10:45:34.520947Z  INFO ursa_proxy::core::handler: Sending request to http://full-node-ursa-1:4069/cdn-cgi/trace
full-node-ursa-proxy-1  |     at crates/ursa-proxy/src/core/handler.rs:180
full-node-ursa-proxy-1  |     in axum_tracing_opentelemetry::middleware::trace_extractor::HTTP request with otel.name: GET /*path, http.client_ip: , http.flavor: 1.1, http.host: speed.cloudflare.com, http.method: GET, http.route: /*path, http.scheme: HTTP, http.target: /cdn-cgi/trace, http.user_agent: Mozilla/5.0, otel.kind: server, trace_id: ee3ac5d1def04eba98e75a5f27b77057
full-node-ursa-proxy-1  |
full-node-ursa-proxy-1  |   2023-04-24T10:45:34.522178Z  INFO tower_http::trace::on_response: finished processing request, latency: 1290 μs, status: 404, response_headers: {"content-length": "0", "date": "Mon, 24 Apr 2023 10:45:34 GMT"}
full-node-ursa-proxy-1  |     at /usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/tower-http-0.4.0/src/trace/on_response.rs:254
full-node-ursa-proxy-1  |     in axum_tracing_opentelemetry::middleware::trace_extractor::HTTP request with otel.name: GET /*path, http.client_ip: , http.flavor: 1.1, http.host: speed.cloudflare.com, http.method: GET, http.route: /*path, http.scheme: HTTP, http.target: /cdn-cgi/trace, http.user_agent: Mozilla/5.0, otel.kind: server, trace_id: ee3ac5d1def04eba98e75a5f27b77057
full-node-ursa-proxy-1  |
full-node-ursa-proxy-1  | OpenTelemetry trace error occurred. Exporter jaeger encountered the following error(s): thrift agent failed with not open
full-node-ursa-1        |   2023-04-24T10:46:13.626051Z  WARN ursa_network::service: Private NAT detected. Nodes should be publically accessable on 4890(udp) and 6009(tcp), as well as standard http(80) and https(443)! Falling back temporarily to public relay address on bootstrap node /ip4/159.223.211.234/tcp/6009/p2p/12D3KooWDji7xMLia6GAsyr4oiEFD2dd3zSryqNhfxU3Grzs1r9p/p2p-circuit/p2p/12D3KooWFKPpNHtV6LH9xk6paNLWYML9GcdscDnbDKdnHeDscAoY
full-node-ursa-1        |     at crates/ursa-network/src/service.rs:435
full-node-ursa-1        |
full-node-ursa-proxy-1  |   2023-04-24T10:48:31.921456Z  INFO ursa_proxy::core::handler: Cache miss for cdn-cgi/trace
full-node-ursa-proxy-1  |     at crates/ursa-proxy/src/core/handler.rs:172
full-node-ursa-proxy-1  |     in axum_tracing_opentelemetry::middleware::trace_extractor::HTTP request with otel.name: GET /*path, http.client_ip: , http.flavor: 1.1, http.host: speed.cloudflare.com, http.method: GET, http.route: /*path, http.scheme: HTTP, http.target: /cdn-cgi/trace, http.user_agent: Mozilla/5.0, otel.kind: server, trace_id: 29aa852786ffb4af027194594ced7ec0
full-node-ursa-proxy-1  |
full-node-ursa-proxy-1  |   2023-04-24T10:48:31.921486Z  INFO ursa_proxy::core::handler: Sending request to http://full-node-ursa-1:4069/cdn-cgi/trace
full-node-ursa-proxy-1  |     at crates/ursa-proxy/src/core/handler.rs:180
full-node-ursa-proxy-1  |     in axum_tracing_opentelemetry::middleware::trace_extractor::HTTP request with otel.name: GET /*path, http.client_ip: , http.flavor: 1.1, http.host: speed.cloudflare.com, http.method: GET, http.route: /*path, http.scheme: HTTP, http.target: /cdn-cgi/trace, http.user_agent: Mozilla/5.0, otel.kind: server, trace_id: 29aa852786ffb4af027194594ced7ec0
full-node-ursa-proxy-1  |
full-node-ursa-proxy-1  |   2023-04-24T10:48:31.922753Z  INFO tower_http::trace::on_response: finished processing request, latency: 1307 μs, status: 404, response_headers: {"content-length": "0", "date": "Mon, 24 Apr 2023 10:48:31 GMT"}
full-node-ursa-proxy-1  |     at /usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/tower-http-0.4.0/src/trace/on_response.rs:254
full-node-ursa-proxy-1  |     in axum_tracing_opentelemetry::middleware::trace_extractor::HTTP request with otel.name: GET /*path, http.client_ip: , http.flavor: 1.1, http.host: speed.cloudflare.com, http.method: GET, http.route: /*path, http.scheme: HTTP, http.target: /cdn-cgi/trace, http.user_agent: Mozilla/5.0, otel.kind: server, trace_id: 29aa852786ffb4af027194594ced7ec0
full-node-ursa-proxy-1  |
full-node-ursa-1        |   2023-04-24T10:50:23.690698Z  INFO ursa_network::service: Starting random kademlia walk
full-node-ursa-1        |     at crates/ursa-network/src/service.rs:998
full-node-ursa-1        |
full-node-ursa-proxy-1  | thread 'tokio-runtime-worker' panicked at 'internal error: entered unreachable code: the default fallback added in `Router::new` matches everything', /usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/axum-0.6.16/src/routing/mod.rs:318:25
full-node-ursa-proxy-1  | note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
full-node-ursa-proxy-1 exited with code 0
full-node-ursa-1        |   2023-04-24T10:55:23.692716Z  INFO ursa_network::service: Starting random kademlia walk
full-node-ursa-1        |     at crates/ursa-network/src/service.rs:998
full-node-ursa-1        |
full-node-grafana-1     | logger=cleanup t=2023-04-24T10:55:25.392828672Z level=info msg="Completed cleanup jobs" duration=20.938464ms
full-node-ursa-1        |   2023-04-24T10:55:27.745711Z  WARN libp2p_gossipsub::behaviour: GRAFT: ignoring request from direct peer 12D3KooWLYSxQxmvTarcqu3DPR2zfVpG1dp2K8uFXag5RsSXxsLa
full-node-ursa-1        |     at /usr/local/cargo/git/checkouts/rust-libp2p-98135dbcf5b63918/d8de86e/protocols/gossipsub/src/behaviour.rs:1391
full-node-ursa-1        |
full-node-ursa-1        |   2023-04-24T10:56:38.242828Z  INFO ursa_network::service: Public Nat verified! Public listening address: /ip4/144.126.204.31/udp/4890/quic-v1/p2p/12D3KooWFKPpNHtV6LH9xk6paNLWYML9GcdscDnbDKdnHeDscAoY
full-node-ursa-1        |     at crates/ursa-network/src/service.rs:452
full-node-ursa-1        |
full-node-ursa-1        |   2023-04-24T10:57:07.475747Z  WARN libp2p_gossipsub::handler: Dial upgrade error Timeout
full-node-ursa-1        |     at /usr/local/cargo/git/checkouts/rust-libp2p-98135dbcf5b63918/d8de86e/protocols/gossipsub/src/handler.rs:578
full-node-ursa-1        |
full-node-ursa-proxy-1  | thread 'tokio-runtime-worker' panicked at 'internal error: entered unreachable code: the default fallback added in `Router::new` matches everything', /usr/local/cargo/registry/src/github.com-1ecc6299db9ec823/axum-0.6.16/src/routing/mod.rs:318:25
full-node-ursa-proxy-1  | note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
full-node-ursa-proxy-1 exited with code 0
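
For context, every request in the traces above is matched by the proxy's catch-all route (the `http.route: /*path` spans). A minimal sketch of that router shape on axum 0.6, illustrative only and not the actual ursa-proxy source, looks like this:

use axum::{extract::Path, routing::get, Router};
use std::net::SocketAddr;

// Illustrative stand-in; ursa-proxy forwards the request upstream here.
async fn proxy_handler(Path(path): Path<String>) -> String {
    format!("would proxy /{path}")
}

#[tokio::main]
async fn main() {
    // A single wildcard route; anything else is handled by the default
    // fallback that Router::new installs, which is the code path named
    // in the panic message.
    let app = Router::new().route("/*path", get(proxy_handler));
    let addr = SocketAddr::from(([0, 0, 0, 0], 8080));
    axum::Server::bind(&addr)
        .serve(app.into_make_service())
        .await
        .unwrap();
}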

Notes

Checklist

  • I have ensured that my version is up-to-date
  • I have ensured that my issue is reproducible
  • I have ensured that my issue is not a duplicate
@heldrida
Member Author

The same issue happens natively, running as a systemd service. Below I've shared some logs to help with troubleshooting.

These are the STDERR error messages; keep in mind that they are appended continuously as the service restarts, just like in the Docker Compose setup.

Error: Could not find config file
Error: Could not find config file
thread 'tokio-runtime-worker' panicked at 'internal error: entered unreachable code: the default fallback added in `Router::new` matches everything', /root/.cargo/registry/src/github.com-1ecc6299db9ec823/axum-0.6.16/src/routing/mod.rs:318:25
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
thread 'tokio-runtime-worker' panicked at 'internal error: entered unreachable code: the default fallback added in `Router::new` matches everything', /root/.cargo/registry/src/github.com-1ecc6299db9ec823/axum-0.6.16/src/routing/mod.rs:318:25
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
thread 'tokio-runtime-worker' panicked at 'internal error: entered unreachable code: the default fallback added in `Router::new` matches everything', /root/.cargo/registry/src/github.com-1ecc6299db9ec823/axum-0.6.16/src/routing/mod.rs:318:25
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
thread 'tokio-runtime-worker' panicked at 'internal error: entered unreachable code: the default fallback added in `Router::new` matches everything', /root/.cargo/registry/src/github.com-1ecc6299db9ec823/axum-0.6.16/src/routing/mod.rs:318:25
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
thread 'tokio-runtime-worker' panicked at 'internal error: entered unreachable code: the default fallback added in `Router::new` matches everything', /root/.cargo/registry/src/github.com-1ecc6299db9ec823/axum-0.6.16/src/routing/mod.rs:318:25
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
thread 'tokio-runtime-worker' panicked at 'internal error: entered unreachable code: the default fallback added in `Router::new` matches everything', /root/.cargo/registry/src/github.com-1ecc6299db9ec823/axum-0.6.16/src/routing/mod.rs:318:25
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
thread 'tokio-runtime-worker' panicked at 'internal error: entered unreachable code: the default fallback added in `Router::new` matches everything', /root/.cargo/registry/src/github.com-1ecc6299db9ec823/axum-0.6.16/src/routing/mod.rs:318:25
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

The stdout messages are also appended. Note that the systemd service, like the Docker stack, is configured to restart on failure.

  2023-04-24T13:24:37.714433Z  INFO ursa_proxy::core: Listening on 0.0.0.0:80
    at crates/ursa-proxy/src/core/mod.rs:60

  2023-04-24T13:24:37.714855Z  INFO ursa_proxy::core: Listening on 0.0.0.0:443
    at crates/ursa-proxy/src/core/mod.rs:60

  2023-04-24T13:24:37.742254Z  INFO ursa_proxy::core: Shutting down servers
    at crates/ursa-proxy/src/core/mod.rs:85

  2023-04-24T13:24:37.742430Z  INFO ursa_proxy: Proxy shut down successfully
    at crates/ursa-proxy/src/main.rs:76

  2023-04-24T13:24:37.750533Z  INFO ursa_proxy::core: Listening on 0.0.0.0:80
    at crates/ursa-proxy/src/core/mod.rs:60

  2023-04-24T13:24:37.750919Z  INFO ursa_proxy::core: Listening on 0.0.0.0:443
    at crates/ursa-proxy/src/core/mod.rs:60

  2023-04-24T13:24:37.773621Z  INFO ursa_proxy::core::handler: Cache miss for ping
    at crates/ursa-proxy/src/core/handler.rs:172
    in axum_tracing_opentelemetry::middleware::trace_extractor::HTTP request with otel.name: GET /*path, http.client_ip: , http.flavor: 2.0, http.host: , http.method: GET, http.route: /*path, http.scheme: https, http.target: /ping, http.user_agent: curl/7.74.0, otel.kind: server, trace_id: 897b819e2f903f8d60de2d0b59242546

  2023-04-24T13:24:37.773651Z  INFO ursa_proxy::core::handler: Sending request to http://127.0.0.1:4069/ping
    at crates/ursa-proxy/src/core/handler.rs:180
    in axum_tracing_opentelemetry::middleware::trace_extractor::HTTP request with otel.name: GET /*path, http.client_ip: , http.flavor: 2.0, http.host: , http.method: GET, http.route: /*path, http.scheme: https, http.target: /ping, http.user_agent: curl/7.74.0, otel.kind: server, trace_id: 897b819e2f903f8d60de2d0b59242546

  2023-04-24T13:24:37.774160Z  INFO tower_http::trace::on_response: finished processing request, latency: 560 μs, status: 200, response_headers: {"content-type": "text/plain; charset=utf-8", "content-length": "4", "date": "Mon, 24 Apr 2023 13:24:37 GMT"}
    at /root/.cargo/registry/src/github.com-1ecc6299db9ec823/tower-http-0.4.0/src/trace/on_response.rs:254
    in axum_tracing_opentelemetry::middleware::trace_extractor::HTTP request with otel.name: GET /*path, http.client_ip: , http.flavor: 2.0, http.host: , http.method: GET, http.route: /*path, http.scheme: https, http.target: /ping, http.user_agent: curl/7.74.0, otel.kind: server, trace_id: 897b819e2f903f8d60de2d0b59242546

  2023-04-24T13:25:22.664344Z  INFO ursa_proxy::core: Listening on 0.0.0.0:80
    at crates/ursa-proxy/src/core/mod.rs:60

  2023-04-24T13:25:22.664706Z  INFO ursa_proxy::core: Listening on 0.0.0.0:443
    at crates/ursa-proxy/src/core/mod.rs:60

  2023-04-24T13:25:44.664888Z  INFO ursa_proxy::core: Listening on 0.0.0.0:80
    at crates/ursa-proxy/src/core/mod.rs:60

  2023-04-24T13:25:44.665323Z  INFO ursa_proxy::core: Listening on 0.0.0.0:443
    at crates/ursa-proxy/src/core/mod.rs:60

  2023-04-24T13:26:18.414428Z  INFO ursa_proxy::core: Listening on 0.0.0.0:80
    at crates/ursa-proxy/src/core/mod.rs:60

  2023-04-24T13:26:18.414754Z  INFO ursa_proxy::core: Listening on 0.0.0.0:443
    at crates/ursa-proxy/src/core/mod.rs:60

  2023-04-24T13:26:40.414956Z  INFO ursa_proxy::core: Listening on 0.0.0.0:80
    at crates/ursa-proxy/src/core/mod.rs:60

  2023-04-24T13:26:40.415358Z  INFO ursa_proxy::core: Listening on 0.0.0.0:443
    at crates/ursa-proxy/src/core/mod.rs:60

  2023-04-24T13:34:38.163942Z  INFO ursa_proxy::core: Listening on 0.0.0.0:80
    at crates/ursa-proxy/src/core/mod.rs:60

  2023-04-24T13:34:38.164295Z  INFO ursa_proxy::core: Listening on 0.0.0.0:443
    at crates/ursa-proxy/src/core/mod.rs:60

  2023-04-24T13:51:16.415369Z  INFO ursa_proxy::core: Listening on 0.0.0.0:80
    at crates/ursa-proxy/src/core/mod.rs:60

  2023-04-24T13:51:16.415783Z  INFO ursa_proxy::core: Listening on 0.0.0.0:443
    at crates/ursa-proxy/src/core/mod.rs:60

  2023-04-24T14:03:51.913977Z  INFO ursa_proxy::core: Listening on 0.0.0.0:80
    at crates/ursa-proxy/src/core/mod.rs:60

  2023-04-24T14:03:51.914382Z  INFO ursa_proxy::core: Listening on 0.0.0.0:443
    at crates/ursa-proxy/src/core/mod.rs:60

The journal

Apr 24 13:23:38 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: Started Ursa-proxy, for the Decentralized Content Delivery Network (DCDN).
Apr 24 13:23:38 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: ursa-proxy.service: Main process exited, code=exited, status=1/FAILURE
Apr 24 13:23:38 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: ursa-proxy.service: Failed with result 'exit-code'.
Apr 24 13:23:53 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: ursa-proxy.service: Scheduled restart job, restart counter is at 1.
Apr 24 13:23:53 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: Stopped Ursa-proxy, for the Decentralized Content Delivery Network (DCDN).
Apr 24 13:23:53 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: Started Ursa-proxy, for the Decentralized Content Delivery Network (DCDN).
Apr 24 13:23:53 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: ursa-proxy.service: Main process exited, code=exited, status=1/FAILURE
Apr 24 13:23:53 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: ursa-proxy.service: Failed with result 'exit-code'.
Apr 24 13:23:59 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: Stopped Ursa-proxy, for the Decentralized Content Delivery Network (DCDN).
Apr 24 13:24:37 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: Started Ursa-proxy, for the Decentralized Content Delivery Network (DCDN).
Apr 24 13:24:37 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: Stopping Ursa-proxy, for the Decentralized Content Delivery Network (DCDN)...
Apr 24 13:24:37 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: ursa-proxy.service: Succeeded.
Apr 24 13:24:37 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: Stopped Ursa-proxy, for the Decentralized Content Delivery Network (DCDN).
Apr 24 13:24:37 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: Started Ursa-proxy, for the Decentralized Content Delivery Network (DCDN).
Apr 24 13:25:07 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: ursa-proxy.service: Main process exited, code=killed, status=6/ABRT
Apr 24 13:25:07 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: ursa-proxy.service: Failed with result 'signal'.
Apr 24 13:25:22 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: ursa-proxy.service: Scheduled restart job, restart counter is at 1.
Apr 24 13:25:22 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: Stopped Ursa-proxy, for the Decentralized Content Delivery Network (DCDN).
Apr 24 13:25:22 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: Started Ursa-proxy, for the Decentralized Content Delivery Network (DCDN).
Apr 24 13:25:29 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: ursa-proxy.service: Main process exited, code=killed, status=6/ABRT
Apr 24 13:25:29 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: ursa-proxy.service: Failed with result 'signal'.
Apr 24 13:25:44 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: ursa-proxy.service: Scheduled restart job, restart counter is at 2.
Apr 24 13:25:44 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: Stopped Ursa-proxy, for the Decentralized Content Delivery Network (DCDN).
Apr 24 13:25:44 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: Started Ursa-proxy, for the Decentralized Content Delivery Network (DCDN).
Apr 24 13:26:03 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: ursa-proxy.service: Main process exited, code=killed, status=6/ABRT
Apr 24 13:26:03 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: ursa-proxy.service: Failed with result 'signal'.
Apr 24 13:26:18 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: ursa-proxy.service: Scheduled restart job, restart counter is at 3.
Apr 24 13:26:18 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: Stopped Ursa-proxy, for the Decentralized Content Delivery Network (DCDN).
Apr 24 13:26:18 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: Started Ursa-proxy, for the Decentralized Content Delivery Network (DCDN).
Apr 24 13:26:25 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: ursa-proxy.service: Main process exited, code=killed, status=6/ABRT
Apr 24 13:26:25 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: ursa-proxy.service: Failed with result 'signal'.
Apr 24 13:26:40 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: ursa-proxy.service: Scheduled restart job, restart counter is at 4.
Apr 24 13:26:40 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: Stopped Ursa-proxy, for the Decentralized Content Delivery Network (DCDN).
Apr 24 13:26:40 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: Started Ursa-proxy, for the Decentralized Content Delivery Network (DCDN).
Apr 24 13:34:22 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: ursa-proxy.service: Main process exited, code=killed, status=6/ABRT
Apr 24 13:34:22 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: ursa-proxy.service: Failed with result 'signal'.
Apr 24 13:34:38 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: ursa-proxy.service: Scheduled restart job, restart counter is at 5.
Apr 24 13:34:38 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: Stopped Ursa-proxy, for the Decentralized Content Delivery Network (DCDN).
Apr 24 13:34:38 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: Started Ursa-proxy, for the Decentralized Content Delivery Network (DCDN).
Apr 24 13:51:01 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: ursa-proxy.service: Main process exited, code=killed, status=6/ABRT
Apr 24 13:51:01 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: ursa-proxy.service: Failed with result 'signal'.
Apr 24 13:51:16 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: ursa-proxy.service: Scheduled restart job, restart counter is at 6.
Apr 24 13:51:16 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: Stopped Ursa-proxy, for the Decentralized Content Delivery Network (DCDN).
Apr 24 13:51:16 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: Started Ursa-proxy, for the Decentralized Content Delivery Network (DCDN).
Apr 24 14:03:36 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: ursa-proxy.service: Main process exited, code=killed, status=6/ABRT
Apr 24 14:03:36 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: ursa-proxy.service: Failed with result 'signal'.
Apr 24 14:03:51 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: ursa-proxy.service: Scheduled restart job, restart counter is at 7.
Apr 24 14:03:51 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: Stopped Ursa-proxy, for the Decentralized Content Delivery Network (DCDN).
Apr 24 14:03:51 debian-s-8vcpu-16gb-intel-fra1-01 systemd[1]: Started Ursa-proxy, for the Decentralized Content Delivery Network (DCDN)
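
For anyone reproducing this, the same information can be tailed live. A sketch, assuming the unit name from the journal above and the log paths mentioned later in this thread:

# Follow the systemd journal for the proxy unit
journalctl -u ursa-proxy.service -f

# Tail the files the native install writes stdout/stderr to
tail -f /var/log/ursa-proxy/output.log /var/log/ursa-proxy/diagnostic.log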

@kckeiks
Collaborator

kckeiks commented Apr 24, 2023

Might be related to tokio-rs/axum#1955. It was opened a couple of days ago. We're using axum version 0.6.1.

@kckeiks
Collaborator

kckeiks commented Apr 24, 2023

@heldrida I haven't been able to replicate this.

@davidpdrsn

> We're using axum version 0.6.1

`axum = { version = "0.6.1", features = ["multipart", "headers"] }` means 0.6.1 or any newer semver-compatible release (anything below 0.7.0); Cargo will pick the newest version that matches. If you want exactly 0.6.1 you have to write `version = "=0.6.1"`.

This error you're seeing comes from a version after 0.6.13. While it's not exactly the same error as tokio-rs/axum#1955, it is related for sure. If you find a way to reproduce it I'd love to hear about it! I've been trying to repro it for days without any luck 😕
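
To make the difference concrete, a Cargo.toml sketch (the features list is the one quoted above; the stopgap pin assumes, per the note about 0.6.13, that the panic was introduced after that release):

# Caret requirement (the default): resolves to the newest 0.6.x >= 0.6.1,
# which is how 0.6.16 ended up in this build.
axum = { version = "0.6.1", features = ["multipart", "headers"] }

# Exact requirement: only 0.6.1 is ever used.
axum = { version = "=0.6.1", features = ["multipart", "headers"] }

A temporary workaround while a fix is pending could be `cargo update -p axum --precise 0.6.13`, which rewrites only Cargo.lock and leaves the manifest untouched.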

@heldrida
Member Author

heldrida commented Apr 24, 2023

Unlucky me, as I keep hitting this issue every time. The following are the latest lines from journalctl; as you can see, it keeps ending with status=6/ABRT.

Apr 24 18:49:33 debian-s-8vcpu-16gb-intel-lon1-01 systemd[1]: ursa-proxy.service: Scheduled restart job, restart counter is at 9.
Apr 24 18:49:33 debian-s-8vcpu-16gb-intel-lon1-01 systemd[1]: Stopped Ursa-proxy, for the Decentralized Content Delivery Network (DCDN).
Apr 24 18:49:33 debian-s-8vcpu-16gb-intel-lon1-01 systemd[1]: Started Ursa-proxy, for the Decentralized Content Delivery Network (DCDN).
Apr 24 18:49:33 debian-s-8vcpu-16gb-intel-lon1-01 systemd[1]: ursa-proxy.service: Main process exited, code=exited, status=1/FAILURE
Apr 24 18:49:33 debian-s-8vcpu-16gb-intel-lon1-01 systemd[1]: ursa-proxy.service: Failed with result 'exit-code'.
Apr 24 18:49:49 debian-s-8vcpu-16gb-intel-lon1-01 systemd[1]: ursa-proxy.service: Scheduled restart job, restart counter is at 10.
Apr 24 18:49:49 debian-s-8vcpu-16gb-intel-lon1-01 systemd[1]: Stopped Ursa-proxy, for the Decentralized Content Delivery Network (DCDN).
Apr 24 18:49:49 debian-s-8vcpu-16gb-intel-lon1-01 systemd[1]: Started Ursa-proxy, for the Decentralized Content Delivery Network (DCDN).
Apr 24 18:49:49 debian-s-8vcpu-16gb-intel-lon1-01 systemd[1]: ursa-proxy.service: Main process exited, code=exited, status=1/FAILURE
Apr 24 18:49:49 debian-s-8vcpu-16gb-intel-lon1-01 systemd[1]: ursa-proxy.service: Failed with result 'exit-code'.
Apr 24 18:50:01 debian-s-8vcpu-16gb-intel-lon1-01 systemd[1]: Stopped Ursa-proxy, for the Decentralized Content Delivery Network (DCDN).
Apr 24 18:50:40 debian-s-8vcpu-16gb-intel-lon1-01 systemd[1]: Started Ursa-proxy, for the Decentralized Content Delivery Network (DCDN).
Apr 24 18:50:40 debian-s-8vcpu-16gb-intel-lon1-01 systemd[1]: Stopping Ursa-proxy, for the Decentralized Content Delivery Network (DCDN)...
Apr 24 18:50:40 debian-s-8vcpu-16gb-intel-lon1-01 systemd[1]: ursa-proxy.service: Succeeded.
Apr 24 18:50:40 debian-s-8vcpu-16gb-intel-lon1-01 systemd[1]: Stopped Ursa-proxy, for the Decentralized Content Delivery Network (DCDN).
Apr 24 18:50:40 debian-s-8vcpu-16gb-intel-lon1-01 systemd[1]: Started Ursa-proxy, for the Decentralized Content Delivery Network (DCDN).
Apr 24 18:52:15 debian-s-8vcpu-16gb-intel-lon1-01 systemd[1]: ursa-proxy.service: Main process exited, code=killed, status=6/ABRT
Apr 24 18:52:15 debian-s-8vcpu-16gb-intel-lon1-01 systemd[1]: ursa-proxy.service: Failed with result 'signal'.

This occurred after:

  • Creating a VPS (Debian 11)
  • Running the install script: curl https://get.fleek.network | bash
  • Selecting the "Native" install

As in the original reports, stdout is at /var/log/ursa-proxy/output.log and stderr at /var/log/ursa-proxy/diagnostic.log:

thread 'tokio-runtime-worker' panicked at 'internal error: entered unreachable code: the default fallback added in `Router::new` matches everything', /root/.cargo/registry/src/github.com-1ecc6299db9ec823/axum-0.6.16/src/routing/mod.rs:318:25
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
thread 'tokio-runtime-worker' panicked at 'internal error: entered unreachable code: the default fallback added in `Router::new` matches everything', /root/.cargo/registry/src/github.com-1ecc6299db9ec823/axum-0.6.16/src/routing/mod.rs:318:25
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

@kckeiks
Collaborator

kckeiks commented Apr 24, 2023

> Unlucky me, as I keep hitting this issue every time. […]

It's interesting that this issue doesn't appear when I run the binaries locally without containerization 🤔. I'm on a Mac, btw.

@davidpdrsn

I have found a minimal reproduction for the bug in axum: tokio-rs/axum#1955 (comment)

I expect to have a fix out by the end of the week.

@kckeiks
Collaborator

kckeiks commented Apr 24, 2023

> I have found a minimal reproduction for the bug in axum: tokio-rs/axum#1955 (comment)
>
> I expect to have a fix out by the end of the week.

Awesome! Thank you!

@davidpdrsn

I believe tokio-rs/axum#1958 should fix the issue. Are you able to test it?
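
One way to test an unreleased fix like this is a Cargo git patch; a sketch (the exact git ref is an assumption, point it at the branch or commit from tokio-rs/axum#1958):

# In the workspace Cargo.toml: override the crates.io release with git.
# Add a rev/branch pointing at the actual PR commit before using.
[patch.crates-io]
axum = { git = "https://github.com/tokio-rs/axum" }
# If the build then complains about mismatched axum-core types,
# axum-core may need the same treatment.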

@heldrida
Member Author

@davidpdrsn Just to let you know that I'm having a look and will run a few tests in the context of our project. Thanks for your time and the PR! I'll report back here...

@heldrida
Member Author

@davidpdrsn looks good so far, thank you!

@davidpdrsn

Great! I'll get a release out some time this week.

@davidpdrsn

The fix was just released in 0.6.17 🎉
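
Since a caret requirement like "0.6.1" already permits 0.6.17, picking up the fix should only need a lockfile update, for example:

# Bump just axum in Cargo.lock to the patched release
cargo update -p axum --precise 0.6.17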

@heldrida
Member Author

> The fix was just released in 0.6.17 🎉

Thanks! That was super quick 🔥
