Helm-chart - client pod not starting up #18

Closed
dberardo-com opened this issue Mar 3, 2023 · 27 comments

@dberardo-com

dberardo-com commented Mar 3, 2023


What steps did you take and what happened:

I have installed the Helm chart from https://github.com/angelnu/helm-charts/tree/main/charts/apps/pod-gateway, following the instructions at https://docs.k8s-at-home.com/guides/pod-gateway/

The chart installs correctly, but when deploying the test pod from https://docs.k8s-at-home.com/guides/pod-gateway/#test-deployment I run into a timeout in the sidecar init-container (see below)

What did you expect to happen:

Expected an output similar to the one in #15 (comment)

Anything else you would like to add:

Using k3s, single-node cluster with flannel

Additional Information:

this is the output of the gateway-init sidecar container:

+ ip route
10.42.0.0/24 dev eth0 proto kernel scope link src 10.42.0.13 
10.42.0.0/16 via 10.42.0.1 dev eth0 
++ cut -d ' ' -f 1
+ K8S_DNS_IP=10.43.0.10
++ dig +short vpn-gateway-pod-gateway.pod-gateway.svc.cluster.local @10.43.0.10
+ GATEWAY_IP=';; connection timed out; no servers could be reached'
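Note what happened in that last line: `dig` prints its timeout diagnostic on stdout, so the init script stores it in GATEWAY_IP as if it were an address. A minimal sketch of telling the two apart; the guard itself is my illustration, not code from gateway_init.sh:

```shell
# The value below is copied from the log above; when dig can't reach the
# DNS server, the variable holds a diagnostic string rather than an IP.
GATEWAY_IP=';; connection timed out; no servers could be reached'
case "$GATEWAY_IP" in
  *'connection timed out'*|'') echo "DNS lookup failed - check K8S_DNS_IP" ;;
  *) echo "gateway resolved to $GATEWAY_IP" ;;
esac
```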
@dberardo-com
Author

UPDATE

OK, I have found the exact same issue documented at https://docs.k8s-at-home.com/guides/pod-gateway/#routed-pod-fails-to-init

That solves the issue above, but I still get this error now:

...


+ echo 'Get dynamic IP'
+ dhclient -v -cf /etc/dhclient.conf vxlan0
Internet Systems Consortium DHCP Client 4.4.3
Copyright 2004-2022 Internet Systems Consortium.
All rights reserved.
For info, please visit https://www.isc.org/software/dhcp/

/etc/dhclient.conf line 3: semicolon expected.
link-timeout 10;
              ^
/etc/dhclient.conf line 4: semicolon expected.
reboot 
 ^
Listening on LPF/vxlan0/52:22:39:e1:6b:ec
Sending on   LPF/vxlan0/52:22:39:e1:6b:ec
Sending on   Socket/fallback
DHCPDISCOVER on vxlan0 to 255.255.255.255 port 67 interval 1
DHCPDISCOVER on vxlan0 to 255.255.255.255 port 67 interval 2
DHCPDISCOVER on vxlan0 to 255.255.255.255 port 67 interval 2
DHCPDISCOVER on vxlan0 to 255.255.255.255 port 67 interval 1
DHCPDISCOVER on vxlan0 to 255.255.255.255 port 67 interval 2
DHCPDISCOVER on vxlan0 to 255.255.255.255 port 67 interval 2
DHCPDISCOVER on vxlan0 to 255.255.255.255 port 67 interval 1
DHCPDISCOVER on vxlan0 to 255.255.255.255 port 67 interval 2
DHCPDISCOVER on vxlan0 to 255.255.255.255 port 67 interval 2
DHCPDISCOVER on vxlan0 to 255.255.255.255 port 67 interval 2
DHCPDISCOVER on vxlan0 to 255.255.255.255 port 67 interval 1
DHCPDISCOVER on vxlan0 to 255.255.255.255 port 67 interval 2
DHCPDISCOVER on vxlan0 to 255.255.255.255 port 67 interval 2
DHCPDISCOVER on vxlan0 to 255.255.255.255 port 67 interval 2
DHCPDISCOVER on vxlan0 to 255.255.255.255 port 67 interval 2
DHCPDISCOVER on vxlan0 to 255.255.255.255 port 67 interval 2
DHCPDISCOVER on vxlan0 to 255.255.255.255 port 67 interval 2
DHCPDISCOVER on vxlan0 to 255.255.255.255 port 67 interval 1
No DHCPOFFERS received.
No working leases in persistent database - sleeping.
+ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0@if199954: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 47950 qdisc noqueue state UP group default 
    link/ether ae:b9:74:cb:db:43 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.42.0.14/24 brd 10.42.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::acb9:74ff:fecb:db43/64 scope link 
       valid_lft forever preferred_lft forever
5: vxlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 47900 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 52:22:39:e1:6b:ec brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5022:39ff:fee1:6bec/64 scope link 
       valid_lft forever preferred_lft forever
+ ip route
10.42.0.0/24 dev eth0 proto kernel scope link src 10.42.0.14 
10.42.0.0/16 via 10.42.0.1 dev eth0 
10.42.0.12 via 10.42.0.1 dev eth0 
10.43.0.0/16 via 10.42.0.1 dev eth0 
+ ping -c 1 172.16.0.1
ping: sendto: Network unreachable
PING 172.16.0.1 (172.16.0.1): 56 data bytes


---

@dberardo-com
Author

So the important part missing in the log above is the default route. Looking at the k8s-at-home doc:

The important part is that the default gateway and the DNS are set to 172.16.0.1, which is the default IP of the gateway POD in the vxlan network. If this is the case then you are ready for the (optional) VPN setup.

and indeed this is not there:

+ ip route
10.42.0.0/24 dev eth0 proto kernel scope link src 10.42.0.14
10.42.0.0/16 via 10.42.0.1 dev eth0
10.42.0.12 via 10.42.0.1 dev eth0
10.43.0.0/16 via 10.42.0.1 dev eth0

@dberardo-com
Author

dberardo-com commented Mar 3, 2023

When running this line manually: https://github.com/angelnu/pod-gateway/blob/main/bin/gateway_init.sh#L30

the script returns:

$ ip link add vxlan0 type vxlan id $VXLAN_ID dev eth0 dstport 0
RTNETLINK answers: File exists

It seems like the vxlan exists, but not the route. If I try to delete the vxlan before adding it, then the error from above disappears:
bash-5.1# ip route
10.42.0.0/24 dev eth0 proto kernel scope link src 10.42.0.19
10.42.0.0/16 via 10.42.0.1 dev eth0
10.42.0.12 via 10.42.0.1 dev eth0
10.43.0.0/16 via 10.42.0.1 dev eth0
bash-5.1# ip link del vxlan0
bash-5.1# ip link del vxlan0
Cannot find device "vxlan0"
bash-5.1# ip link add vxlan0 type vxlan id $VXLAN_ID dev eth0 dstport 0
bash-5.1# ip route
10.42.0.0/24 dev eth0 proto kernel scope link src 10.42.0.19
10.42.0.0/16 via 10.42.0.1 dev eth0
10.42.0.12 via 10.42.0.1 dev eth0
10.43.0.0/16 via 10.42.0.1 dev eth0

But as you can see from the last ip route, the default route is not applied... how come?
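For context, the missing entry is the one the init script should add last: a default route via the gateway's vxlan address, derived from the chart's VXLAN_IP_NETWORK setting as `${VXLAN_IP_NETWORK}.1`. A sketch of that derivation; the echoes stand in for the real `ip route` calls, which would need NET_ADMIN:

```shell
# Chart default; the gateway pod takes the .1 address in the vxlan subnet.
VXLAN_IP_NETWORK="172.16.0"
GATEWAY="${VXLAN_IP_NETWORK}.1"
# What a successful init would effectively run (shown, not executed):
echo "ip route del default"
echo "ip route add default via ${GATEWAY}"
```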

@arana198

arana198 commented Mar 3, 2023

I'm not sure what your settings are, but I spent months trying to get pod-gateway working and finally got it working. Here's my values.yaml:

image:
  tag: v1.8.1

routed_namespaces:
 - mediaserver

settings:
  VPN_INTERFACE: "tun0"
  VPN_BLOCK_OTHER_TRAFFIC: true
  VPN_TRAFFIC_PORT: 443
  NOT_ROUTED_TO_GATEWAY_CIDRS: "10.0.0.0/8"

addons:
  vpn:
    enabled: true
    type: gluetun
    env:
      - name: VPN_SERVICE_PROVIDER
        value: "nordvpn"
      - name: VPN_TYPE
        value: "openvpn"
      - name: OPENVPN_PROTOCOL
        value: "tcp"
 
    securityContext:
      capabilities:
        add:
          - NET_ADMIN
          
    networkPolicy:
      enabled: false
      policyTypes:
        - Ingress
        - Egress
      ingress:
        - from:
            # Only allow ingress from K8S
            - ipBlock:
                cidr: 10.0.0.0/8
      egress:
        # Allow only VPN traffic to Internet
        - to:
          - ipBlock:
              cidr: 0.0.0.0/0
          ports:
            # VPN traffic port - change if your provider uses a different port
            - port: 443
              protocol: TCP
        - to:
            # Allow traffic within K8S - change if your K8S cluster uses a different CIDR
          - ipBlock:
              cidr: 10.0.0.0/8

@dberardo-com
Author

Thanks for the quick response. Are you using Calico or Flannel?

@arana198

arana198 commented Mar 3, 2023

I am using Flannel with Canal

@dberardo-com
Author

dberardo-com commented Mar 3, 2023

Alright, in my case I am trying to set up the gateway first without any VPN, so I'm using this:

settings:
  NOT_ROUTED_TO_GATEWAY_CIDRS: "10.42.0.0/16 10.43.0.0/16"

as per the documentation.

But that seems not to work, because the init container somehow does not manage to set up a default gateway within the container... I wonder why. I am now trying with your setup, i.e. setting the image.tag value and:

NOT_ROUTED_TO_GATEWAY_CIDRS: "10.0.0.0/8"

but I guess this will not make any difference. The same goes for the VPN part; I guess that should not solve the core issue.


Indeed, it didn't make any difference. Do you also get this strange error/warning in the log?

For info, please visit https://www.isc.org/software/dhcp/

/etc/dhclient.conf line 3: semicolon expected.
link-timeout 10;
              ^
/etc/dhclient.conf line 4: semicolon expected.
reboot 

I cannot find a way to override that file, but it seems that the option is not supported: https://raspberrypi.stackexchange.com/questions/135208/semicolon-required-in-etc-dhcp-dhclient-conf

I think this might be the issue, because that file ends with this:

interface "vxlan0"
{
  request subnet-mask,
          broadcast-address,
          routers;
          #domain-name-servers;
  require routers,
          subnet-mask;
          #domain-name-servers;
}

and perhaps this is why the interface/default route is not correctly created.


Here I found the same error reported by someone else: k8s-at-home/charts#1633 (comment)
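For reference, recent ISC dhclient releases reject `link-timeout` (and then choke on the statement after it), which matches the parse errors in the log above. A hypothetical dhclient.conf restricted to statements the dhclient 4.4 manual documents might look like this; an untested sketch, not the chart's actual file:

```
timeout 10;
retry 10;

interface "vxlan0"
{
  request subnet-mask, broadcast-address, routers;
  require routers, subnet-mask;
}
```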

dberardo-com added a commit to dberardo-com/pod-gateway that referenced this issue Mar 3, 2023
@arana198

arana198 commented Mar 4, 2023

Can you post your full config (values.yaml) file?

@dberardo-com
Author

Can you post your full config (values.yaml) file?

These are literally the only values I have changed:

image:
  tag: v1.8.1
routed_namespaces:
- vpn
settings:
  NOT_ROUTED_TO_GATEWAY_CIDRS: 10.42.0.0/16 10.43.0.0/16

So the full values file is:

DNS: 172.16.0.1
DNSPolicy: None
addons:
  vpn:
    enabled: false
    networkPolicy:
      egress:
      - ports:
        - port: 1194
          protocol: UDP
        to:
        - ipBlock:
            cidr: 0.0.0.0/0
      - to:
        - ipBlock:
            cidr: 10.0.0.0/8
      enabled: true
    type: openvpn
clusterName: cluster.local
common:
  addons:
    codeserver:
      args:
      - --auth
      - none
      enabled: false
      env: {}
      git:
        deployKey: ""
        deployKeyBase64: ""
        deployKeySecret: ""
      image:
        pullPolicy: IfNotPresent
        repository: ghcr.io/coder/code-server
        tag: 4.10.0
      ingress:
        annotations: {}
        enabled: false
        hosts:
        - host: code.chart-example.local
          paths:
          - path: /
            pathType: Prefix
        labels: {}
        tls: []
      securityContext:
        runAsUser: 0
      service:
        annotations: {}
        enabled: true
        labels: {}
        ports:
          codeserver:
            enabled: true
            port: 12321
            protocol: TCP
            targetPort: 12321
        type: ClusterIP
      volumeMounts: []
      workingDir: ""
    netshoot:
      enabled: false
      env: {}
      image:
        pullPolicy: IfNotPresent
        repository: ghcr.io/nicolaka/netshoot
        tag: v0.9
      securityContext:
        capabilities:
          add:
          - NET_ADMIN
    vpn:
      additionalVolumeMounts: []
      args: []
      enabled: false
      env: {}
      gluetun:
        image:
          pullPolicy: IfNotPresent
          repository: docker.io/qmcgaw/gluetun
          tag: v3.32.0
      livenessProbe: {}
      networkPolicy:
        annotations: {}
        enabled: false
        labels: {}
        podSelectorLabels: {}
      scripts: {}
      securityContext:
        capabilities:
          add:
          - NET_ADMIN
          - SYS_MODULE
      type: gluetun
  affinity: {}
  args: []
  automountServiceAccountToken: true
  command: []
  configMaps:
    config:
      annotations: {}
      data: {}
      enabled: false
      labels: {}
  controller:
    annotations: {}
    cronjob:
      concurrencyPolicy: Forbid
      failedJobsHistory: 1
      schedule: '*/20 * * * *'
      startingDeadlineSeconds: 30
      successfulJobsHistory: 1
    enabled: true
    labels: {}
    replicas: 1
    revisionHistoryLimit: 3
    rollingUpdate: {}
    type: deployment
  dnsConfig: {}
  enableServiceLinks: true
  envFrom: []
  global:
    annotations: {}
    labels: {}
  hostAliases: []
  hostNetwork: false
  image: {}
  imagePullSecrets: []
  ingress:
    main:
      annotations: {}
      enabled: false
      hosts:
      - host: chart-example.local
        paths:
        - path: /
          pathType: Prefix
          service:
            name: null
            port: null
      labels: {}
      primary: true
      tls: []
  initContainers: {}
  lifecycle: {}
  nodeSelector: {}
  persistence:
    config:
      accessMode: ReadWriteOnce
      enabled: false
      readOnly: false
      retain: false
      size: 1Gi
      type: pvc
    shared:
      enabled: false
      mountPath: /shared
      type: emptyDir
  podAnnotations: {}
  podLabels: {}
  podSecurityContext: {}
  probes:
    liveness:
      custom: false
      enabled: true
      spec:
        failureThreshold: 3
        initialDelaySeconds: 0
        periodSeconds: 10
        timeoutSeconds: 1
      type: TCP
    readiness:
      custom: false
      enabled: true
      spec:
        failureThreshold: 3
        initialDelaySeconds: 0
        periodSeconds: 10
        timeoutSeconds: 1
      type: TCP
    startup:
      custom: false
      enabled: true
      spec:
        failureThreshold: 30
        initialDelaySeconds: 0
        periodSeconds: 5
        timeoutSeconds: 1
      type: TCP
  resources: {}
  route:
    main:
      annotations: {}
      enabled: false
      hostnames: []
      kind: HTTPRoute
      labels: {}
      parentRefs:
      - group: gateway.networking.k8s.io
        kind: Gateway
        name: null
        namespace: null
        sectionName: null
      rules:
      - backendRefs:
        - group: ""
          kind: Service
          name: null
          namespace: null
          port: null
          weight: 1
        matches:
        - path:
            type: PathPrefix
            value: /
  secrets:
    secret:
      annotations: {}
      enabled: false
      labels: {}
      stringData: {}
  securityContext: {}
  service:
    main:
      annotations: {}
      enabled: true
      ipFamilies: []
      labels: {}
      ports:
        http:
          enabled: true
          extraSelectorLabels: {}
          primary: true
          protocol: HTTP
      primary: true
      type: ClusterIP
  serviceAccount:
    annotations: {}
    create: false
    name: ""
  serviceMonitor:
    main:
      annotations: {}
      enabled: false
      endpoints:
      - interval: 1m
        path: /metrics
        port: http
        scheme: http
        scrapeTimeout: 10s
      labels: {}
      selector: {}
      serviceName: '{{ include "bjw-s.common.lib.chart.names.fullname" $ }}'
  sidecars: {}
  termination: {}
  tolerations: []
  topologySpreadConstraints: []
  volumeClaimTemplates: []
image:
  pullPolicy: IfNotPresent
  repository: ghcr.io/angelnu/pod-gateway
  tag: v1.8.1
publicPorts: null
routed_namespaces:
- vpn
settings:
  DNS_LOCAL_CIDRS: local
  NOT_ROUTED_TO_GATEWAY_CIDRS: 10.42.0.0/16 10.43.0.0/16
  VPN_BLOCK_OTHER_TRAFFIC: false
  VPN_INTERFACE: tun0
  VPN_LOCAL_CIDRS: 10.0.0.0/8 192.168.0.0/16
  VPN_TRAFFIC_PORT: 1194
  VXLAN_GATEWAY_FIRST_DYNAMIC_IP: 20
  VXLAN_ID: 42
  VXLAN_IP_NETWORK: 172.16.0
webhook:
  gatewayAnnotation: setGateway
  gatewayAnnotationValue: null
  gatewayDefault: true
  gatewayLabel: setGateway
  gatewayLabelValue: null
  image:
    pullPolicy: IfNotPresent
    repository: ghcr.io/angelnu/gateway-admision-controller
    tag: v3.8.0
  namespaceSelector:
    custom: {}
    label: routed-gateway
    type: label
  replicas: 1
  strategy:
    type: RollingUpdate

@arana198

arana198 commented Mar 4, 2023

Just checking that your routed pods are not in the same namespace as pod-gateway? I.e., pod-gateway is in a different namespace to sonarr/radarr/any other routed pod.

@dberardo-com
Author

dberardo-com commented Mar 4, 2023

Just checking that your routed pods are not in the same namespace as pod-gateway? I.e., pod-gateway is in a different namespace to sonarr/radarr/any other routed pod.

Done. I have followed this procedure: https://docs.k8s-at-home.com/guides/pod-gateway/#pod-gateway-helm-release

So I have 2 different namespaces: "vpn" for the routed pod and "pod-gateway" for the gateway container (which is healthy, up and running).


Here the gateway namespace:

(screenshot)

Here the pod namespace:

(screenshot)

@arana198

arana198 commented Mar 6, 2023

What helm chart are you using?

k8s-at-home never worked for me. I am using angelnu/pod-gateway (https://angelnu.github.io/helm-charts) and the above issue you had was resolved for me

@dberardo-com
Author

Might it be that the chart only works on Ubuntu/Debian systems? Or is CentOS also supported?

What helm chart are you using?

k8s-at-home never worked for me. I am using angelnu/pod-gateway (https://angelnu.github.io/helm-charts) and the above issue you had was resolved for me

I am using the same chart that you have linked:

(screenshot)

@DanielHouston

I'm hitting the same issue here. I have been using RHEL 9 nodes, but just spun up an Ubuntu 22.04.2 node to test whether this is a node OS issue. Unfortunately, I am seeing the issue on this node as well.

@DanielHouston

DanielHouston commented Apr 6, 2023

@dberardo-com I resolved this error in my env: I turned off DOT and the firewall for the gluetun pod.
This obviously has its own implications, and I'm not sure exactly what it is about the firewall that causes problems, but with:

      - name: VPN_SERVICE_PROVIDER
        value: "nordvpn"
      - name: VPN_TYPE
        value: "openvpn"
      - name: OPENVPN_PROTOCOL
        value: "tcp"
      - name: FIREWALL
        value: "off"
      - name: DOT
        value: "off"

I have the sidecar connecting to the gateway and the test pod connecting out through the gateway.

@dberardo-com
Author

Hi @DanielHouston, I could give this a try. Are the FIREWALL and DOT variables the only ones that need to be set? And also: are those vars specific to the pod-gateway chart, or are they related to your VPN provider?

Also: I have got the pod-gateway running on another cluster, but I can't manage to get the VPN client up. It seems that the chart variables are doing nothing. Do you also face the same issue? I have reported this problem here: #12 (comment)

@DanielHouston

I've actually got the whole setup working now using nordvpn and gluetun; helm values as follows:

#
# IMPORTANT NOTE
#
# This chart inherits from our common library chart. You can check the default values/options here:
# https://github.com/k8s-at-home/library-charts/tree/main/charts/stable/common/values.yaml
#

image:
  # -- image repository of the gateway and inserted helper containers
  repository: ghcr.io/angelnu/pod-gateway
  # -- image pull policy of the gateway and inserted helper containers
  pullPolicy: IfNotPresent
  # -- image tag of the gateway and inserted helper containers
  # @default -- chart.appVersion
  tag:

# -- IP address of the DNS server within the vxlan tunnel.
# All mutated PODs will get this as their DNS server.
# It must match VXLAN_GATEWAY_IP in settings.sh
DNS: 172.16.0.1

# -- The DNSPolicy to apply to the POD. Only when set to "None" will the
# DNS value above apply. To avoid altering POD DNS (i.e., to allow
# initContainers to use DNS before the VXLAN is up), set to "ClusterFirst"
DNSPolicy: None

# -- cluster name used to derive the gateway full name
clusterName: "cluster.local"

# -- Namespaces that might contain routed PODs and therefore
# require a copy of the generated settings configmap.
routed_namespaces:
- vpn-test

settings:
  # -- IPs not sent to the POD gateway but to the default K8S.
  # Multiple CIDRs can be specified using blanks as separator.
  # Example for Calico: "172.22.0.0/16 172.24.0.0/16"
  #
  # This is needed, for example, in case your CNI does
  # not add a non-default rule for the K8S addresses (Flannel does).
  NOT_ROUTED_TO_GATEWAY_CIDRS: "10.0.0.0/8"

  # -- Vxlan ID to use
  VXLAN_ID: 42
  # -- VXLAN needs an /24 IP range not conflicting with K8S and local IP ranges
  VXLAN_IP_NETWORK: "172.16.0"
  # -- Keep a range of IPs for static assignment in nat.conf
  VXLAN_GATEWAY_FIRST_DYNAMIC_IP: 20

  # -- If using a VPN, interface name created by it
  VPN_INTERFACE: tun0
  # -- Prevent non VPN traffic to leave the gateway
  VPN_BLOCK_OTHER_TRAFFIC: true
  # -- If VPN_BLOCK_OTHER_TRAFFIC is true, allow VPN traffic over this port
  VPN_TRAFFIC_PORT: 1194
  # -- Traffic to these IPs will be sent through the K8S gateway
  VPN_LOCAL_CIDRS: "10.0.0.0/8 192.168.0.0/16"

  # -- DNS queries to these domains will be resolved by K8S DNS instead of
  # the default (typically the VPN client changes it)
  DNS_LOCAL_CIDRS:

# -- settings to expose ports, usually through a VPN provider.
# NOTE: if you change it you will need to manually restart the gateway POD
# publicPorts:
# - hostname: qbittorrent
#   IP: 10
#   ports:
#   - type: udp
#     port: 18289
#   - type: tcp
#     port: 18289

# -- settings to expose ports with IPv6, usually through a VPN provider.
# NOTE: if you change it you will need to manually restart the gateway POD
publicPortsV6:
# - hostname: qbittorrent
#   IP: 10
#   ports:
#   - type: udp
#     port: 18289
#   - type: tcp
#     port: 18289

addons:
  vpn:
    enabled: true
    type: gluetun
    env:
      - name: VPN_SERVICE_PROVIDER
        value: "nordvpn"
      - name: VPN_TYPE
        value: "openvpn"
      - name: OPENVPN_PROTOCOL
        value: "tcp"
      - name: FIREWALL
        value: "off"
      - name: DOT
        value: "off"
    securityContext:
      capabilities:
        add:
          - NET_ADMIN
          - NET_RAW
          

# -- The webhook is used to mutate the PODs matching the given
# namespace labels. It inserts init and sidecar helper containers
# that connect to the gateway pod created by this chart.
# @default -- See below
webhook:
  image:
    # -- image repository of the webhook
    repository: ghcr.io/angelnu/gateway-admision-controller
    # -- image pullPolicy of the webhook
    pullPolicy: IfNotPresent
    # -- image tag of the webhook
    tag: v3.8.0

  # -- number of webhook instances to deploy
  replicas: 1

  # -- strategy for updates
  strategy:
    type: RollingUpdate

  # -- Selector for namespace.
  # All pods in this namespace will get evaluated by the webhook.
  # **IMPORTANT**: Do not select the namespace where the webhook
  # is deployed to or you will get locking issues.
  namespaceSelector:
    type: label
    label: "routed-gateway"
    custom: {}
      # matchExpressions:
      # - key: notTouch
      #   operator: NotIn
      #   values: ["1"]

  # -- default behaviour for new PODs in the evaluated namespace
  gatewayDefault: true

  # -- label name to check when evaluating POD. If true the POD
  # will get the gateway. If not set setGatewayDefault will apply.
  gatewayLabel: setGateway

  # -- label value to check when evaluating POD. If set, the POD
  # with the gatewayLabel's value that matches, will get the
  # gateway. If not set gatewayLabel boolean value will apply.
  gatewayLabelValue:

  # -- annotation name to check when evaluating POD. If true the POD
  # will get the gateway. If not set setGatewayDefault will apply.
  gatewayAnnotation: setGateway

  # -- annotation value to check when evaluating POD. If set, the POD
  # with the gatewayAnnotation value that matches, will get the gateway.
  # If not set gatewayAnnotation boolean value will apply.
  gatewayAnnotationValue:

I believe these don't differ too much from the defaults, outside of the FIREWALL and DOT changes (I think I had to add the extra NET_RAW capability for CRI-O on RHEL 9 as well).

@dberardo-com
Author

Thanks a lot, I will check out these settings and see if they do work!

@DanielHouston

DanielHouston commented Apr 12, 2023

No worries; note that I am also adding the secrets for auth directly to the Deployment, after the helm install:

    - name: gluetun
      image: docker.io/qmcgaw/gluetun:v3.32.0
      env:
        - name: VPN_SERVICE_PROVIDER
          value: nordvpn
        - name: VPN_TYPE
          value: openvpn
        - name: OPENVPN_PROTOCOL
          value: udp
        - name: OPENVPN_USER
          value: xxx
        - name: OPENVPN_PASSWORD
          value: yyy
        - name: SERVER_REGIONS
          value: abc
        - name: SERVER_HOSTNAMES
          value: zzz.nordvpn.com

And note that the username and password, in nordvpn's case, are not the ones used to auth to your account in the UI, but are instead your Service Credentials, which you can find on their site after logging in.
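If you'd rather not keep the Service Credentials in the Deployment spec in plain text, they could live in a Secret instead. A sketch with made-up names (gluetun-auth and the key names are illustrative, not from the chart):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: gluetun-auth        # illustrative name
type: Opaque
stringData:
  OPENVPN_USER: "<service-credential-user>"
  OPENVPN_PASSWORD: "<service-credential-password>"
---
# ...and in the gluetun container spec, reference it instead of literal values:
# env:
#   - name: OPENVPN_USER
#     valueFrom:
#       secretKeyRef: { name: gluetun-auth, key: OPENVPN_USER }
#   - name: OPENVPN_PASSWORD
#     valueFrom:
#       secretKeyRef: { name: gluetun-auth, key: OPENVPN_PASSWORD }
```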

@dberardo-com
Author

Alright, how do you do that? Directly from the helm chart settings, or did you have to create a custom Deployment before installing the chart and tag it as an install hook (https://helm.sh/docs/topics/charts_hooks/#writing-a-hook)?

Or did you edit the deployment manually after install?

@DanielHouston

Presumably either would work, but I actually created it manually, as I'm self-managing my manifests (only using Helm to generate the manifests with --dry-run, à la:
helm install -n vpn-gateway -f helm-values.yaml vpn-gateway angelnu/pod-gateway --dry-run > vpn-gateway.yaml)

@warent

warent commented Apr 16, 2023

@DanielHouston's config worked for me, and I can confirm that both DNS over TLS and the firewall had to be disabled for me. With DOT enabled, the gateway complained that the DNS had errors and it couldn't bind. With the firewall enabled, all pods with the sidecar complained that they could not connect to the gateway and that there were no DHCP leases available.

Super strange and kind of a bummer it doesn't work with those enabled, but not a huge deal. Happy to finally have automatic VPN sidecars for any pods needed. Having all these pieces connected and working together is just really cool!

@dberardo-com
Author

When you talk about a disabled firewall, are you talking about some sort of internal firewall service of the containers, or should the firewall on the actual physical server be disabled?

And why would the pods need their own firewall to work? That's a bit unclear to me.

@warent

warent commented Apr 18, 2023

I'm referring to the ENV for the Gluetun service:

- name: FIREWALL
  value: "off"
- name: DOT
  value: "off"

https://github.com/qdm12/gluetun/wiki/Firewall-options
https://github.com/qdm12/gluetun/wiki/Explanations

@warent

warent commented Apr 18, 2023

Ah, interestingly, this document shows a variable I hadn't seen:

FIREWALL_INPUT_PORTS
Comma separated list of ports to allow through the default interface. This seems needed for Kubernetes sidecars.

https://github.com/qdm12/gluetun/wiki/Firewall-options

@DanielHouston you might find this interesting; it may allow us to enable the firewall with the correct configuration.
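If FIREWALL_INPUT_PORTS behaves as the wiki describes, a middle ground might be to leave gluetun's firewall on and open only the ports the pod-gateway helpers use. Untested sketch; the port choices (DHCP on 67/udp and DNS on 53, which the gateway serves to routed pods) are my assumption:

```yaml
env:
  - name: FIREWALL
    value: "on"
  - name: FIREWALL_INPUT_PORTS
    # assumed: DHCP and DNS served by the gateway over the vxlan -
    # verify against the gluetun wiki before relying on this
    value: "67,53"
```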

@angelnu
Owner

angelnu commented Jun 11, 2023

Going over my repositories today. If this is still relevant, @dberardo-com, could you please test with the latest chart?

@angelnu
Owner

angelnu commented Dec 3, 2023

Please reopen if this is still present with the latest version.

@angelnu angelnu closed this as completed Dec 3, 2023