
[BUG] - __PROTONWIRE_KEY_MAP: bad array subscript #304

Closed · 8 tasks done
tsjk opened this issue Apr 8, 2024 · 1 comment · Fixed by #316
Labels: api/metadata (Metadata API), bug (Issue is a bug), s/keep-open (Keep the issue/PR open and prevent auto closing)


tsjk commented Apr 8, 2024

Version

7.5.0

Credential and Server Validation

  • I have verified that the servers I am trying to connect to are available under my plan.
  • I have verified that my generated WireGuard private keys are valid and that required features (NetShield Ad-blocker, VPN Accelerator, etc.) are enabled.
  • I am using a valid server name (either fully qualified DNS name like nl-free-127.protonvpn.net or server name like NL#1) as mentioned in the docs.

System Architecture

x86_64

Kernel Version

6.1.0-18-amd64

Running on a NAS?

No

Runtime

podman

Version of Runtime

host:
  arch: amd64
  buildahVersion: 1.35.3
  cgroupControllers:
  - cpuset
  - cpu
  - io
  - memory
  - hugetlb
  - pids
  - rdma
  - misc
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: Unknown
    path: /home/linuxbrew/.linuxbrew/bin/conmon
    version: 'conmon version 2.1.10, commit: '
  cpuUtilization:
    idlePercent: 88.98
    systemPercent: 5.56
    userPercent: 5.46
  cpus: 8
  databaseBackend: boltdb
  distribution:
    codename: bookworm
    distribution: debian
    version: "12"
  eventLogger: journald
  freeLocks: 1936
  hostname: almadbz
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 6.1.0-18-amd64
  linkmode: dynamic
  logDriver: journald
  memFree: 3820220416
  memTotal: 16677240832
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: Unknown
      path: /home/linuxbrew/.linuxbrew/opt/podman/libexec/podman/aardvark-dns
      version: aardvark-dns 1.10.0
    package: Unknown
    path: /home/linuxbrew/.linuxbrew/opt/podman/libexec/podman/netavark
    version: netavark 1.10.3
  ociRuntime:
    name: crun
    package: Unknown
    path: /home/linuxbrew/.linuxbrew/bin/crun
    version: |-
      crun version 1.14.4
      commit: a220ca661ce078f2c37b38c92e66cf66c012d9c1
      rundir: /run/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  pasta:
    executable: ""
    package: ""
    version: ""
  remoteSocket:
    exists: true
    path: unix:///run/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: ""
    selinuxEnabled: false
  serviceIsRemote: true
  slirp4netns:
    executable: /home/linuxbrew/.linuxbrew/bin/slirp4netns
    package: Unknown
    version: |-
      slirp4netns version 1.2.3
      commit: c22fde291bb35b354e6ca44d13be181c76a0a432
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.5
  swapFree: 17151553536
  swapTotal: 17179865088
  uptime: 1327h 41m 22.00s (Approximately 55.29 days)
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  localhost:5000:
    Blocked: false
    Insecure: true
    Location: localhost:5000
    MirrorByDigestOnly: false
    Mirrors: []
    Prefix: localhost:5000
    PullFromMirror: ""
  search:
  - docker.io
  - quay.io
  - registry.fedoraproject.org
store:
  configFile: /home/linuxbrew/.linuxbrew/etc/containers/storage.conf
  containerStore:
    number: 12
    paused: 0
    running: 12
    stopped: 0
  graphDriverName: zfs
  graphOptions:
    zfs.fsname: rpool/var/lib/podman-storage
    zfs.mountopt: nodev
  graphRoot: /var/lib/containers/storage
  graphRootAllocated: 277798191104
  graphRootUsed: 67239936
  graphStatus:
    Compression: lz4
    Parent Dataset: rpool/var/lib/podman-storage
    Parent Quota: "no"
    Space Available: "707746996224"
    Space Used By Parent: "7006674944"
    Zpool: rpool
    Zpool Health: ONLINE
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 150
  runRoot: /run/containers/storage
  transientStore: false
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 5.0.1
  Built: 1711974511
  BuiltTime: Mon Apr  1 14:28:31 2024
  GitCommit: 946d055df324e4ed6c1e806b561af4740db4fea9-dirty
  GoVersion: go1.22.1
  Os: linux
  OsArch: linux/amd64
  Version: 5.0.1

systemd 252 (252.22-1~deb12u1)
+PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified

My configuration

# From .env:
# I - Image location, e.g. local/protonwire:latest
# S - Sources directory (with Dockerfile, natpmp-info-reporter-service.sh, etc)
# THIS - Runtime location (location of .env, private.key, etc)
# PROTONVPN_SERVER - ProtonVPN server

services:
  protonwire:
    build: "${S}"
    cap_add:
      - NET_ADMIN
    container_name: protonwire
    environment:
      DEBUG: "0"
      KILL_SWITCH: "0"
      PROTONVPN_SERVER: "${PROTONVPN_SERVER}"
      TZ: "Etc/UTC"
    image: "${I}"
    init: true
    healthcheck:
      test: ["CMD", "/bin/bash", "-c", "/usr/bin/protonwire check --container --silent || exit 1"]
      interval: 120s
      start_period: 20s
    network_mode: bridge
    #ports:
    #  - 127.0.0.1:3000:3000  # Example service to expose to localhost
    restart: never
    sysctls:
      - net.ipv4.conf.all.rp_filter=2
      - net.ipv6.conf.all.disable_ipv6=1
    volumes:
      - type: tmpfs
        target: /tmp
      - type: bind
        source: ${THIS}/private.key
        target: /etc/protonwire/private-key
        read_only: true

  protonwire-natpmpc:
    command:
    - |
      __update_natpmp() { { date "+%Y-%m-%d %H:%M:%S"; natpmpc -a 1 0 udp 60 -g 10.2.0.1 && natpmpc -a 1 0 tcp 60 -g 10.2.0.1 || return 1; } 2>&1 | ts "[%Y-%m-%dT%H:%M:%S] |" >> "/log/protonwire-natpmpc.log"; };
      __trim_log() { [[ $$(wc -l "/log/protonwire-natpmpc.log" | awk -F " " '{ print $$1 }') -lt $$((128 * 1024)) ]] || sed 1,$$(( (128 * 1024) - (96 * 1024) ))d "/log/protonwire-natpmpc.log" | sponge "/log/protonwire-natpmpc.log"; };
      __socat_server() { [[ ! -f "/natpmp-info-reporter-service.sh" || ! -s "/natpmp-info-reporter-service.sh" ]] || socat TCP-LISTEN:1009,bind=$$(ip -j address show eth0 | jq -r ".[] | .addr_info[].local"),crlf,reuseaddr,fork SYSTEM:"/bin/bash -c /natpmp-info-reporter-service.sh" & };
      __sleep() { [[ "$${1}" =~ ^[0-9][0-9]*$$ ]] && { local -i T=$$(( $$(date "+%s") + $${1} )); while [[ $$(date "+%s") -lt $${T} ]]; do sleep 1; done; }; }
      __socat_server; { while true; do __update_natpmp || break; __trim_log; __sleep 45; done; }
    container_name: protonwire-natpmpc
    depends_on:
      protonwire:
        condition: service_healthy
    entrypoint: ["/bin/bash", "-c"]
    environment:
      TZ: "Etc/UTC"
    healthcheck:
      test:
       - CMD-SHELL
       - |
         [ $$(( $$(date "+%s") - $$(stat -c "%Y" "/log/protonwire-natpmpc.log") )) -lt 120 ] &&
           tail -n 46 "/log/protonwire-natpmpc.log" | grep -q -E "Public IP address : [0-9]{1,3}(\.[0-9]{1,3}){3}\s*$$" &&
           tail -n 46 "/log/protonwire-natpmpc.log" | grep -q -E "Mapped public port [1-9][0-9]* protocol UDP to local port 0 liftime 60\s*$$" &&
           tail -n 46 "/log/protonwire-natpmpc.log" | grep -q -E "Mapped public port [1-9][0-9]* protocol TCP to local port 0 liftime 60\s*$$"
      interval: 120s
      start_period: 23s
    image: "${I}"
    init: true
    network_mode: service:protonwire
    restart: never
    volumes:
      - type: bind
        source: ${S}/natpmp-info-reporter-service.sh
        target: /natpmp-info-reporter-service.sh
      - type: bind
        source: /tmp/protonwire-natpmpc-log
        target: /log

version: '2.3'
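
For what it's worth, the same check the healthcheck runs can be invoked by hand; a minimal sketch, assuming the container name protonwire from the config above:

# Sketch: run the container's healthcheck command manually and print its exit code.
podman exec protonwire /usr/bin/protonwire check --container --silent; echo $?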

Whitelisting API endpoints

I am not using an ad-blocking DNS server or gateway

Troubleshooting & Runtime

  • WireGuard is supported by my kernel
  • I have read the FAQ and Troubleshooting docs.
  • I am using the latest stable version

Container/Pod/systemd log output with DEBUG=1 or --debug flag

N/A

Any additional info

The error happens now and then. This time it happened when I started the container, and since I recently updated my fork to the latest version, I thought I'd give the project some additional love by reporting a bug.
The error makes the script die, and in the default setup that kills the container. On my system, having this container die is not acceptable, which is one reason I forked the project and made some changes.
As you can see, the connection does come up after a while. So I wonder whether the error could be handled in the protonwire script so that it doesn't crash. Here are my logs (without debug; if you really need debug output I'll try to reproduce the error with it, but it doesn't happen on demand...):

[NOTICE  ] Skipped validating default IPCHECK_URL
[INFO    ] Refresing server metadata (for <PROTONVPN_SERVER>)
[SUCCESS ] Successfully refreshed server metadata
[SUCCESS ] Server <PROTONVPN_SERVER> is online
[SUCCESS ] Using PrivateKeyFile - /etc/protonwire/private-key
[SUCCESS ] net.ipv4.conf.all.rp_filter is already set to 2
[NOTICE  ] Creating WireGuard Interface - protonwire0
[INFO    ] Setting WireGuard interface address - 10.2.0.2
[INFO    ] Setting WireGuard interface MTU to 1480
[SUCCESS ] Configured WireGuard private key from /etc/protonwire/private-key
/usr/bin/protonwire: line 1963: __PROTONWIRE_KEY_MAP: bad array subscript
[ERROR   ] Setting WireGuard peer () on interface failed!
[ERROR   ] Failed to connect to <PROTONVPN_SERVER>
[INFO    ] No existing routes found in table 51821 (IPv4)
[INFO    ] Removing WireGuard interface
entrypoint.sh: protonwire script died (with exit code 3) - disconnecting...
[INFO    ] No existing routes found in table 51821 (IPv4)
[NOTICE  ] WireGuard interface 'protonwire0' does not exist
entrypoint.sh: protonwire script died (with exit code 3) - will try to reconnect in 20 seconds...
[NOTICE  ] Skipped validating default IPCHECK_URL
[SUCCESS ] Server <PROTONVPN_SERVER> is online
[SUCCESS ] Using PrivateKeyFile - /etc/protonwire/private-key
[SUCCESS ] net.ipv4.conf.all.rp_filter is already set to 2
[NOTICE  ] Creating WireGuard Interface - protonwire0
[INFO    ] Setting WireGuard interface address - 10.2.0.2
[INFO    ] Setting WireGuard interface MTU to 1480
[SUCCESS ] Configured WireGuard private key from /etc/protonwire/private-key
/usr/bin/protonwire: line 1963: __PROTONWIRE_KEY_MAP: bad array subscript
[ERROR   ] Setting WireGuard peer () on interface failed!
[ERROR   ] Failed to connect to <PROTONVPN_SERVER>
[INFO    ] No existing routes found in table 51821 (IPv4)
[INFO    ] Removing WireGuard interface
entrypoint.sh: protonwire script died (with exit code 133) - disconnecting...
[INFO    ] No existing routes found in table 51821 (IPv4)
[NOTICE  ] WireGuard interface 'protonwire0' does not exist
entrypoint.sh: protonwire script died (with exit code 133) - will try to reconnect in 20 seconds...
[NOTICE  ] Skipped validating default IPCHECK_URL
[SUCCESS ] Server <PROTONVPN_SERVER> is online
[SUCCESS ] Using PrivateKeyFile - /etc/protonwire/private-key
[SUCCESS ] net.ipv4.conf.all.rp_filter is already set to 2
[NOTICE  ] Creating WireGuard Interface - protonwire0
[INFO    ] Setting WireGuard interface address - 10.2.0.2
[INFO    ] Setting WireGuard interface MTU to 1480
[SUCCESS ] Configured WireGuard private key from /etc/protonwire/private-key
/usr/bin/protonwire: line 1963: __PROTONWIRE_KEY_MAP: bad array subscript
[ERROR   ] Setting WireGuard peer () on interface failed!
[ERROR   ] Failed to connect to <PROTONVPN_SERVER>
[INFO    ] No existing routes found in table 51821 (IPv4)
[INFO    ] Removing WireGuard interface
entrypoint.sh: protonwire script died (with exit code 256) - disconnecting...
[INFO    ] No existing routes found in table 51821 (IPv4)
[NOTICE  ] WireGuard interface 'protonwire0' does not exist
entrypoint.sh: protonwire script died (with exit code 256) - will try to reconnect in 20 seconds...
[NOTICE  ] Skipped validating default IPCHECK_URL
[SUCCESS ] Server <PROTONVPN_SERVER> is online
[SUCCESS ] Using PrivateKeyFile - /etc/protonwire/private-key
[SUCCESS ] net.ipv4.conf.all.rp_filter is already set to 2
[NOTICE  ] Creating WireGuard Interface - protonwire0
[INFO    ] Setting WireGuard interface address - 10.2.0.2
[INFO    ] Setting WireGuard interface MTU to 1480
[SUCCESS ] Configured WireGuard private key from /etc/protonwire/private-key
/usr/bin/protonwire: line 1963: __PROTONWIRE_KEY_MAP: bad array subscript
[ERROR   ] Setting WireGuard peer () on interface failed!
[ERROR   ] Failed to connect to <PROTONVPN_SERVER>
[INFO    ] No existing routes found in table 51821 (IPv4)
[INFO    ] Removing WireGuard interface
entrypoint.sh: protonwire script died (with exit code 379) - disconnecting...
[INFO    ] No existing routes found in table 51821 (IPv4)
[NOTICE  ] WireGuard interface 'protonwire0' does not exist
entrypoint.sh: protonwire script died (with exit code 379) - will try to reconnect in 20 seconds...
[NOTICE  ] Skipped validating default IPCHECK_URL
[SUCCESS ] Server <PROTONVPN_SERVER> is online
[SUCCESS ] Using PrivateKeyFile - /etc/protonwire/private-key
[SUCCESS ] net.ipv4.conf.all.rp_filter is already set to 2
[NOTICE  ] Creating WireGuard Interface - protonwire0
[INFO    ] Setting WireGuard interface address - 10.2.0.2
[INFO    ] Setting WireGuard interface MTU to 1480
[SUCCESS ] Configured WireGuard private key from /etc/protonwire/private-key
[INFO    ] WireGuard interface is configured with peer - //REDACTED//
[INFO    ] Bringing WireGuard interface up
[SUCCESS ] Configured fwmark on WireGuard interface to - 0xca6d
[NOTICE  ] Creating routes (IPv4)
[SUCCESS ] DNS is is set to 10.2.0.1 via /etc/resolv.conf
[SUCCESS ] Successfully configured DNS (protonwire)
[INFO    ] Verifying connection
[ERROR   ] Failed to resolve DNS domain (https://protonwire-api.vercel.app/v1/client/ip)
[ERROR   ] Retry (1/5) after 2 seconds
[ERROR   ] Failed to resolve DNS domain (https://protonwire-api.vercel.app/v1/client/ip)
[ERROR   ] Retry (2/5) after 2 seconds
[SUCCESS ] Connected to <PROTONVPN_SERVER> (as <IP_ADDRESS>)
[SUCCESS ] Connection verified!
[INFO    ] Checking status - every 120 seconds
[SUCCESS ] Connected to <PROTONVPN_SERVER> (as <IP_ADDRESS>)
[SUCCESS ] Connected to <PROTONVPN_SERVER> (as <IP_ADDRESS>)
...
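
For context, bash reports "bad array subscript" when an associative array is expanded with an empty key (on bash versions that reject empty subscripts). A minimal, hypothetical sketch of what line 1963 likely hits (the array name is real; deriving an empty key from a failed server lookup is an assumption, consistent with the empty peer name in "Setting WireGuard peer ()"):

#!/usr/bin/env bash
# Hypothetical reproduction; how the key ends up empty is an assumption.
declare -A __PROTONWIRE_KEY_MAP
key=""                                    # e.g. a metadata lookup that matched nothing
echo "${__PROTONWIRE_KEY_MAP[${key}]}"    # bash: __PROTONWIRE_KEY_MAP: bad array subscript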

Code of Conduct & PII Redaction

  • I agree to follow this project's Code of Conduct.
  • I have removed any sensitive personally identifying information (PII) and secrets from this issue report.
@tsjk added the bug label Apr 8, 2024

tsjk commented Apr 11, 2024

I wonder if this happens when the downloaded JSON is strange.
Perhaps we need to validate that the JSON actually contains nodes?

diff --git a/protonwire b/protonwire
index 22b4a33..3d569a9 100755
--- a/protonwire
+++ b/protonwire
@@ -852,7 +852,7 @@ function protonvpn_fetch_metadata() {
         # Save to a backup file as download can fail or be corrupted
         if [[ $curl_rc == "0" ]]; then
             # Ensure file is json formatted and valid
-            if jq --exit-status '.Nodes' "${__PROTONWIRE_SRV_INFO_FILE}.bak" >/dev/null 2>&1; then
+            if [[ $(jq '.Nodes | length' "${__PROTONWIRE_SRV_INFO_FILE}.bak" 2> /dev/null) -gt 0 ]]; then
                 if mv --force "${__PROTONWIRE_SRV_INFO_FILE}.bak" "${__PROTONWIRE_SRV_INFO_FILE}"; then
                     log_success "Successfully refreshed server metadata"

But I'm just guessing here. I'm going to try that on the next restart of the container.
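
For comparison, here is a variant that keeps jq's --exit-status style while also requiring .Nodes to be a non-empty array; this is a sketch of the same idea, not a tested patch:

# Sketch: --exit-status makes jq return non-zero when the filter yields false or null,
# so this fails for missing, null, empty, or non-array .Nodes, and for invalid JSON.
if jq --exit-status '(.Nodes | type == "array") and (.Nodes | length > 0)' \
    "${__PROTONWIRE_SRV_INFO_FILE}.bak" >/dev/null 2>&1; then
    : # proceed with the mv --force step as in the existing code
fi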

@tprasadtp added the api/metadata and s/keep-open labels Apr 15, 2024
@tprasadtp self-assigned this Apr 18, 2024
@tprasadtp linked pull request #316 on Jun 17, 2024 that will close this issue