External Network: Provide standard ways to consume lighthouse from external network #600

Closed
mkimuram opened this issue Aug 10, 2021 · 2 comments
Labels: enhancement (New feature or request), wontfix (This will not be worked on)

Comments

@mkimuram (Contributor)
What would you like to be added:
Standard ways to consume lighthouse from an external network.

Why is this needed:
To use Submariner for the external network use case, it would be useful if users could resolve DNS names inside the clusters from the external network.

mkimuram added the enhancement label on Aug 10, 2021
@mkimuram (Contributor, Author) commented Aug 10, 2021

This issue is to discuss how to provide such a feature.

Ideas:
(1) Directly access lighthouse via a global ingress IP

  • Cluster side configuration:
    • Create a Service to expose lighthouse
cat << EOF | kubectl --kubeconfig kubeconfig.cluster-a apply -f -
apiVersion: v1
kind: Service
metadata:
  namespace: submariner-operator
  name: submariner-lighthouse-cluster-a
spec:
  ports:
  - name: udp
    port: 53
    protocol: UDP
    targetPort: 53
  selector:
    app: submariner-lighthouse-coredns
EOF
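A quick sanity check (not part of the original steps) that the Service exists and that its selector matched the lighthouse CoreDNS pods:
kubectl --kubeconfig kubeconfig.cluster-a -n submariner-operator get svc,endpoints submariner-lighthouse-cluster-a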
  • Create a ServiceExport to assign a global ingress IP to the Service
cat << EOF | kubectl --kubeconfig kubeconfig.cluster-a apply -f -
kind: ServiceExport
apiVersion: multicluster.x-k8s.io/v1alpha1
metadata:
  namespace: submariner-operator
  name: submariner-lighthouse-cluster-a
EOF
  • Check the global ingress IP
kubectl get --kubeconfig kubeconfig.cluster-a globalingressip submariner-lighthouse-cluster-a -n submariner-operator
NAME                              IP
submariner-lighthouse-cluster-a   242.0.255.252
  • Consumer side configuration:
    • Confirm that DNS names inside the cluster can be resolved by querying lighthouse from the external network
nslookup http.default.svc.clusterset.local 242.0.255.252
Server:         242.0.255.252
Address:        242.0.255.252#53

Name:   http.default.svc.clusterset.local
Address: 242.1.255.253

Users can't simply switch their DNS configuration to 242.0.255.252, because that server won't return records from their original DNS servers. Users need some other mechanism to consult this DNS server conditionally.
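For example, a consumer host that runs a local dnsmasq could forward only clusterset.local queries to lighthouse (a sketch; 192.168.122.1 stands in for the consumer's original DNS server, as in idea (2) below):
# /etc/dnsmasq.d/clusterset.conf on the consumer host
# Forward clusterset.local queries to lighthouse's global ingress IP
server=/clusterset.local/242.0.255.252
# Everything else goes to the original DNS server
server=192.168.122.1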

(2) Provide a DNS server that forwards to both lighthouse and the original DNS server as upstreams

  • Cluster side configuration (PoC only: a proper image and configuration would be needed in production):
    • Create a ConfigMap for dnsmasq with the upstream DNS servers
dnsip=192.168.122.1
lighthousednsip=$(kubectl get svc --kubeconfig kubeconfig.cluster-a -n submariner-operator submariner-lighthouse-coredns -o jsonpath='{.spec.clusterIP}')

cat << EOF > upstreamservers
server=/svc.clusterset.local/$lighthousednsip
server=$dnsip
EOF
kubectl create configmap external-dnsmasq --kubeconfig kubeconfig.cluster-a -n submariner-operator --from-file=upstreamservers
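Optionally, verify the rendered upstream configuration that dnsmasq will read:
kubectl --kubeconfig kubeconfig.cluster-a -n submariner-operator get configmap external-dnsmasq -o jsonpath='{.data.upstreamservers}'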
  • Create a Deployment that runs dnsmasq
cat << EOF | kubectl --kubeconfig kubeconfig.cluster-a apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns-cluster-a
  namespace: submariner-operator
  labels:
    app: external-dns-cluster-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: external-dns-cluster-a
  template:
    metadata:
      labels:
        app: external-dns-cluster-a
    spec:
      containers:
      - name: dnsmasq
        image: registry.access.redhat.com/ubi8/ubi-minimal:latest
        ports:
        - containerPort: 53
        command: [ "/bin/sh", "-c", "microdnf install -y dnsmasq; ln -s /upstreamservers /etc/dnsmasq.d/upstreamservers; dnsmasq -k" ]
        volumeMounts:
        - name: upstreamservers
          mountPath: /upstreamservers
      volumes:
        - name: upstreamservers
          configMap:
            name: external-dnsmasq
EOF
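Before exposing it, optionally confirm that the pod is running and that dnsmasq started (kubectl logs on a Deployment picks one of its pods):
kubectl --kubeconfig kubeconfig.cluster-a -n submariner-operator get pods -l app=external-dns-cluster-a
kubectl --kubeconfig kubeconfig.cluster-a -n submariner-operator logs deploy/external-dns-cluster-a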
  • Expose the Deployment as a Service and export the Service
cat << EOF | kubectl --kubeconfig kubeconfig.cluster-a apply -f -
apiVersion: v1
kind: Service
metadata:
  namespace: submariner-operator
  name: external-dns-cluster-a
spec:
  ports:
  - name: udp
    port: 53
    protocol: UDP
    targetPort: 53
  selector:
    app: external-dns-cluster-a
EOF

cat << EOF | kubectl --kubeconfig kubeconfig.cluster-a apply -f -
kind: ServiceExport
apiVersion: multicluster.x-k8s.io/v1alpha1
metadata:
  namespace: submariner-operator
  name: external-dns-cluster-a
EOF

kubectl --kubeconfig kubeconfig.cluster-a get globalingressip external-dns-cluster-a -n submariner-operator
NAME                     IP
external-dns-cluster-a   242.0.255.251
  • Consumer side configuration:
    • Replace the DNS server in /etc/resolv.conf
# cat /etc/resolv.conf 
#nameserver 192.168.122.1
nameserver 242.0.255.251
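On consumer hosts where /etc/resolv.conf is managed by systemd-resolved, the same change can be made per link instead of editing the file directly (a sketch assuming the uplink interface is eth0):
resolvectl dns eth0 242.0.255.251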
  • Test name resolution for cluster, local, and internet names:
nslookup http.default.svc.clusterset.local 
Server:         242.0.255.251
Address:        242.0.255.251#53

Name:   http.default.svc.clusterset.local
Address: 242.1.255.253

nslookup test-vm
Server:         242.0.255.251
Address:        242.0.255.251#53

Name:   test-vm
Address: 192.168.122.142

nslookup github.com
Server:         242.0.255.251
Address:        242.0.255.251#53

Non-authoritative answer:
Name:   github.com
Address: 140.82.114.3

stale bot commented Dec 8, 2021

This issue has been automatically marked as stale because it has not had activity for 60 days. It will be closed if no further activity occurs. Please make a comment if this issue/pr is still valid. Thank you for your contributions.

stale bot added the wontfix label on Dec 8, 2021
stale bot closed this as completed on Dec 16, 2021