peering: expose servers over K8s service #1371

Closed
wants to merge 24 commits

24 commits
7c39c91
default partitions works with load balancer service
ndhanushkodi Jul 24, 2022
8200f92
nodeport works if external ip's are added to the firewall rules. need…
ndhanushkodi Jul 24, 2022
6c9e058
debug acceptance: set externalService to Nodeport and pass bool to co…
ndhanushkodi Jul 24, 2022
af1ea8e
eks load balancer health checks failed because 8501 isn't open when t…
ndhanushkodi Jul 25, 2022
7846bec
poll-server-external-service should be false by default
ndhanushkodi Jul 25, 2022
d430707
internal ip's work with the flattened networks in eks and gke
ndhanushkodi Jul 26, 2022
c520c53
to be reverted: test fix
ndhanushkodi Jul 26, 2022
4a6febc
scope this to support helm chart service only, add helm unit tests
ndhanushkodi Jul 26, 2022
6fdb6b8
attempt to get the peering connect test to use the nodePort config fo…
ndhanushkodi Jul 26, 2022
1a8fcd1
fix helm gen
ndhanushkodi Jul 26, 2022
82c0ac8
configure the service on the acceptor side, and update the service name
ndhanushkodi Jul 26, 2022
1ddec89
use new consul image that has generate token updates
ndhanushkodi Jul 26, 2022
f3888de
try running just peering tests
ndhanushkodi Jul 26, 2022
6cd03a6
actually only run peering tests, and use ent image for consul
ndhanushkodi Jul 26, 2022
4aabc9d
change the name in a few more places and add unit tests
ndhanushkodi Jul 26, 2022
5b18912
actually use a proper ent image :(((((((((((
ndhanushkodi Jul 26, 2022
c5197a6
update acceptance tests to use multiple server instances, except for …
ndhanushkodi Jul 27, 2022
f812dca
add more unit tests, make helm values merge the right way
ndhanushkodi Jul 27, 2022
dd7ebfc
add longer timeouts, enable on aks
ndhanushkodi Jul 27, 2022
2add121
add the right helm values
ndhanushkodi Jul 27, 2022
20d2a03
update service name
ndhanushkodi Jul 27, 2022
c345dac
update helm docs and connect inject tests
ndhanushkodi Jul 27, 2022
8ebf79a
try with less aggressive backoff in image
ndhanushkodi Jul 27, 2022
d6330fb
enable more tests with less aggressive backoff
ndhanushkodi Jul 27, 2022
15 changes: 14 additions & 1 deletion .circleci/config.yml
Original file line number Diff line number Diff line change
@@ -101,6 +101,8 @@ commands:
${ENABLE_ENTERPRISE:+-enable-enterprise} \
-enable-multi-cluster \
-debug-directory="$TEST_RESULTS/debug" \
-run TestPeering_Connect \
-run TestPeering_ConnectNamespaces \
-consul-k8s-image=<< parameters.consul-k8s-image >>
then
echo "Tests in ${pkg} failed, aborting early"
@@ -132,6 +134,8 @@ commands:
-enable-multi-cluster \
${ENABLE_ENTERPRISE:+-enable-enterprise} \
-debug-directory="$TEST_RESULTS/debug" \
-run TestPeering_Connect \
-run TestPeering_ConnectNamespaces \
-consul-k8s-image=<< parameters.consul-k8s-image >>

jobs:
@@ -706,7 +710,7 @@ jobs:
- run: mkdir -p $TEST_RESULTS

- run-acceptance-tests:
additional-flags: -kubeconfig="$primary_kubeconfig" -secondary-kubeconfig="$secondary_kubeconfig" -disable-peering -enable-transparent-proxy
additional-flags: -kubeconfig="$primary_kubeconfig" -secondary-kubeconfig="$secondary_kubeconfig" -enable-transparent-proxy

- store_test_results:
path: /tmp/test-results
@@ -1004,6 +1008,15 @@ workflows:
context: consul-ci
requires:
- dev-upload-docker
- acceptance-gke-1-20:
requires:
- dev-upload-docker
- acceptance-eks-1-19:
requires:
- dev-upload-docker
- acceptance-aks-1-21:
requires:
- dev-upload-docker
nightly-acceptance-tests:
triggers:
- schedule:
2 changes: 1 addition & 1 deletion acceptance/framework/k8s/deploy.go
@@ -96,7 +96,7 @@ func CheckStaticServerConnectionMultipleFailureMessages(t *testing.T, options *k
expectedOutput = expectedSuccessOutput
}

retrier := &retry.Timer{Timeout: 80 * time.Second, Wait: 2 * time.Second}
retrier := &retry.Timer{Timeout: 160 * time.Second, Wait: 2 * time.Second}
Contributor Author:
We need to retry this connection for longer: the peering connection must first be established, the exported service must propagate to the importing side, and the dialer is likely retrying several times to reach the leader, so it makes sense that this takes longer than usual.


args := []string{"exec", "deploy/" + sourceApp, "-c", sourceApp, "--", "curl", "-vvvsSf"}
args = append(args, curlArgs...)
1 change: 1 addition & 0 deletions acceptance/tests/partitions/partitions_connect_test.go
@@ -123,6 +123,7 @@ func TestPartitions_Connect(t *testing.T) {
serverHelmValues["global.adminPartitions.service.nodePort.https"] = "30000"
serverHelmValues["meshGateway.service.type"] = "NodePort"
serverHelmValues["meshGateway.service.nodePort"] = "30100"
serverHelmValues["server.exposeService.type"] = "NodePort"
}

releaseName := helpers.RandomName()
34 changes: 27 additions & 7 deletions acceptance/tests/peering/peering_connect_namespaces_test.go
@@ -5,6 +5,7 @@ import (
"fmt"
"strconv"
"testing"
"time"

terratestk8s "github.com/gruntwork-io/terratest/modules/k8s"
"github.com/hashicorp/consul-k8s/acceptance/framework/consul"
@@ -13,6 +14,7 @@ import (
"github.com/hashicorp/consul-k8s/acceptance/framework/k8s"
"github.com/hashicorp/consul-k8s/acceptance/framework/logger"
"github.com/hashicorp/consul/api"
"github.com/hashicorp/consul/sdk/testutil/retry"
"github.com/hashicorp/go-version"
"github.com/stretchr/testify/require"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -76,7 +78,7 @@ func TestPeering_ConnectNamespaces(t *testing.T) {
"global.peering.enabled": "true",
"global.enableConsulNamespaces": "true",

"global.image": "thisisnotashwin/consul@sha256:446aad6e02f66e3027756dfc0d34e8e6e2b11ac6ec5637b134b34644ca7cda64",
"global.image": "ndhanushkodi/consul-dev@sha256:61b02ac369cc13db6b9af8808b7e3a811bcdc9a09c95ddac0da931f81743091c",

"global.tls.enabled": "false",
"global.tls.httpsOnly": strconv.FormatBool(c.ACLsAndAutoEncryptEnabled),
@@ -95,8 +97,10 @@

"controller.enabled": "true",

"dns.enabled": "true",
"dns.enableRedirection": strconv.FormatBool(cfg.EnableTransparentProxy),
"dns.enabled": "true",
"dns.enableRedirection": strconv.FormatBool(cfg.EnableTransparentProxy),
"server.replicas": "3",
"server.bootstrapExpect": "3",
ndhanushkodi marked this conversation as resolved.
}

staticServerPeerHelmValues := map[string]string{
@@ -110,14 +114,18 @@ func TestPeering_ConnectNamespaces(t *testing.T) {
staticServerPeerHelmValues["server.exposeGossipAndRPCPorts"] = "true"
staticServerPeerHelmValues["meshGateway.service.type"] = "NodePort"
staticServerPeerHelmValues["meshGateway.service.nodePort"] = "30100"
staticServerPeerHelmValues["server.exposeService.type"] = "NodePort"
staticServerPeerHelmValues["server.exposeService.nodePort.grpc"] = "30200"
staticServerPeerHelmValues["server.replicas"] = "1"
staticServerPeerHelmValues["server.bootstrapExpect"] = "1"
}

releaseName := helpers.RandomName()

helpers.MergeMaps(staticServerPeerHelmValues, commonHelmValues)
helpers.MergeMaps(commonHelmValues, staticServerPeerHelmValues)
Contributor:
Can we swap the order here so that we deploy the static-server cluster and the static-client cluster with staticServerPeerValues and staticClientPeerValues? As written it is confusing: both deploys use the common values, but we should not mutate the common values between the two deploys.

Contributor Author:

I removed the server.replicas: 3 setting and made it a non-kind-specific config instead!


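The concern above hinges on MergeMaps mutating its first argument. A minimal sketch of that assumed behavior (the real helpers.MergeMaps may differ in signature or details):

```go
package main

import "fmt"

// mergeMaps mirrors the assumed semantics of helpers.MergeMaps: entries
// from src are copied into dst in place, overwriting duplicate keys.
func mergeMaps(dst, src map[string]string) {
	for k, v := range src {
		dst[k] = v
	}
}

func main() {
	common := map[string]string{"global.peering.enabled": "true", "server.replicas": "3"}
	perPeer := map[string]string{"server.replicas": "1"}

	// Merging per-peer overrides into the common map mutates common,
	// which is why reusing it across two deploys is risky.
	mergeMaps(common, perPeer)
	fmt.Println(common["server.replicas"]) // "1" — the per-peer value wins
}
```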
// Install the first peer where static-server will be deployed in the static-server kubernetes context.
staticServerPeerCluster := consul.NewHelmCluster(t, staticServerPeerHelmValues, staticServerPeerClusterContext, cfg, releaseName)
staticServerPeerCluster := consul.NewHelmCluster(t, commonHelmValues, staticServerPeerClusterContext, cfg, releaseName)
staticServerPeerCluster.Create(t)

staticClientPeerHelmValues := map[string]string{
@@ -128,12 +136,16 @@ func TestPeering_ConnectNamespaces(t *testing.T) {
staticClientPeerHelmValues["server.exposeGossipAndRPCPorts"] = "true"
staticClientPeerHelmValues["meshGateway.service.type"] = "NodePort"
staticClientPeerHelmValues["meshGateway.service.nodePort"] = "30100"
staticClientPeerHelmValues["server.exposeService.type"] = "NodePort"
staticClientPeerHelmValues["server.exposeService.nodePort.grpc"] = "30200"
staticServerPeerHelmValues["server.replicas"] = "1"
staticServerPeerHelmValues["server.bootstrapExpect"] = "1"
}

helpers.MergeMaps(staticClientPeerHelmValues, commonHelmValues)
helpers.MergeMaps(commonHelmValues, staticClientPeerHelmValues)

// Install the second peer where static-client will be deployed in the static-client kubernetes context.
staticClientPeerCluster := consul.NewHelmCluster(t, staticClientPeerHelmValues, staticClientPeerClusterContext, cfg, releaseName)
staticClientPeerCluster := consul.NewHelmCluster(t, commonHelmValues, staticClientPeerClusterContext, cfg, releaseName)
staticClientPeerCluster.Create(t)

// Create the peering acceptor on the client peer.
@@ -142,6 +154,14 @@ func TestPeering_ConnectNamespaces(t *testing.T) {
k8s.KubectlDelete(t, staticClientPeerClusterContext.KubectlOptions(t), "../fixtures/bases/peering/peering-acceptor.yaml")
})

// Ensure the secret is created.
timer := &retry.Timer{Timeout: 1 * time.Minute, Wait: 1 * time.Second}
retry.RunWith(timer, t, func(r *retry.R) {
acceptorSecretResourceVersion, err := k8s.RunKubectlAndGetOutputE(t, staticClientPeerClusterContext.KubectlOptions(t), "get", "peeringacceptor", "server", "-o", "jsonpath={.status.secret.resourceVersion}")
require.NoError(r, err)
require.NotEmpty(r, acceptorSecretResourceVersion)
})

// Copy secret from client peer to server peer.
k8s.CopySecret(t, staticClientPeerClusterContext, staticServerPeerClusterContext, "api-token")

37 changes: 27 additions & 10 deletions acceptance/tests/peering/peering_connect_test.go
@@ -5,6 +5,7 @@ import (
"fmt"
"strconv"
"testing"
"time"

terratestk8s "github.com/gruntwork-io/terratest/modules/k8s"
"github.com/hashicorp/consul-k8s/acceptance/framework/consul"
@@ -13,6 +14,7 @@ import (
"github.com/hashicorp/consul-k8s/acceptance/framework/k8s"
"github.com/hashicorp/consul-k8s/acceptance/framework/logger"
"github.com/hashicorp/consul/api"
"github.com/hashicorp/consul/sdk/testutil/retry"
"github.com/hashicorp/go-version"
"github.com/stretchr/testify/require"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -49,7 +51,7 @@ func TestPeering_Connect(t *testing.T) {
commonHelmValues := map[string]string{
"global.peering.enabled": "true",

"global.image": "thisisnotashwin/consul@sha256:446aad6e02f66e3027756dfc0d34e8e6e2b11ac6ec5637b134b34644ca7cda64",
"global.image": "ndhanushkodi/consul-dev@sha256:61b02ac369cc13db6b9af8808b7e3a811bcdc9a09c95ddac0da931f81743091c",

"global.tls.enabled": "false",
"global.tls.httpsOnly": strconv.FormatBool(c.ACLsAndAutoEncryptEnabled),
@@ -64,8 +66,10 @@

"controller.enabled": "true",

"dns.enabled": "true",
"dns.enableRedirection": strconv.FormatBool(cfg.EnableTransparentProxy),
"dns.enabled": "true",
"dns.enableRedirection": strconv.FormatBool(cfg.EnableTransparentProxy),
"server.replicas": "3",
"server.bootstrapExpect": "3",
ndhanushkodi marked this conversation as resolved.
}

staticServerPeerHelmValues := map[string]string{
@@ -79,14 +83,18 @@ func TestPeering_Connect(t *testing.T) {
staticServerPeerHelmValues["server.exposeGossipAndRPCPorts"] = "true"
staticServerPeerHelmValues["meshGateway.service.type"] = "NodePort"
staticServerPeerHelmValues["meshGateway.service.nodePort"] = "30100"
staticServerPeerHelmValues["server.exposeService.type"] = "NodePort"
staticServerPeerHelmValues["server.exposeService.nodePort.grpc"] = "30200"
staticServerPeerHelmValues["server.replicas"] = "1"
staticServerPeerHelmValues["server.bootstrapExpect"] = "1"
}

releaseName := helpers.RandomName()

helpers.MergeMaps(staticServerPeerHelmValues, commonHelmValues)
helpers.MergeMaps(commonHelmValues, staticServerPeerHelmValues)
ndhanushkodi marked this conversation as resolved.

// Install the first peer where static-server will be deployed in the static-server kubernetes context.
staticServerPeerCluster := consul.NewHelmCluster(t, staticServerPeerHelmValues, staticServerPeerClusterContext, cfg, releaseName)
staticServerPeerCluster := consul.NewHelmCluster(t, commonHelmValues, staticServerPeerClusterContext, cfg, releaseName)
staticServerPeerCluster.Create(t)

staticClientPeerHelmValues := map[string]string{
@@ -97,22 +105,31 @@ func TestPeering_Connect(t *testing.T) {
staticClientPeerHelmValues["server.exposeGossipAndRPCPorts"] = "true"
staticClientPeerHelmValues["meshGateway.service.type"] = "NodePort"
staticClientPeerHelmValues["meshGateway.service.nodePort"] = "30100"
staticClientPeerHelmValues["server.exposeService.type"] = "NodePort"
staticClientPeerHelmValues["server.exposeService.nodePort.grpc"] = "30200"
staticClientPeerHelmValues["server.replicas"] = "1"
staticClientPeerHelmValues["server.bootstrapExpect"] = "1"
}

helpers.MergeMaps(staticClientPeerHelmValues, commonHelmValues)
helpers.MergeMaps(commonHelmValues, staticClientPeerHelmValues)

// Install the second peer where static-client will be deployed in the static-client kubernetes context.
staticClientPeerCluster := consul.NewHelmCluster(t, staticClientPeerHelmValues, staticClientPeerClusterContext, cfg, releaseName)
staticClientPeerCluster := consul.NewHelmCluster(t, commonHelmValues, staticClientPeerClusterContext, cfg, releaseName)
staticClientPeerCluster.Create(t)

// Create the peering acceptor on the client peer.
k8s.KubectlApply(t, staticClientPeerClusterContext.KubectlOptions(t), "../fixtures/bases/peering/peering-acceptor.yaml")
helpers.Cleanup(t, cfg.NoCleanupOnFailure, func() {
k8s.KubectlDelete(t, staticClientPeerClusterContext.KubectlOptions(t), "../fixtures/bases/peering/peering-acceptor.yaml")
})
acceptorSecretResourceVersion, err := k8s.RunKubectlAndGetOutputE(t, staticClientPeerClusterContext.KubectlOptions(t), "get", "peeringacceptor", "server", "-o", "jsonpath={.status.secret.resourceVersion}")
require.NoError(t, err)
require.NotEmpty(t, acceptorSecretResourceVersion)

// Ensure the secret is created.
timer := &retry.Timer{Timeout: 1 * time.Minute, Wait: 1 * time.Second}
retry.RunWith(timer, t, func(r *retry.R) {
acceptorSecretResourceVersion, err := k8s.RunKubectlAndGetOutputE(t, staticClientPeerClusterContext.KubectlOptions(t), "get", "peeringacceptor", "server", "-o", "jsonpath={.status.secret.resourceVersion}")
require.NoError(r, err)
require.NotEmpty(r, acceptorSecretResourceVersion)
})

// Copy secret from client peer to server peer.
k8s.CopySecret(t, staticClientPeerClusterContext, staticServerPeerClusterContext, "api-token")
1 change: 1 addition & 0 deletions acceptance/tests/vault/vault_partitions_test.go
@@ -338,6 +338,7 @@ func TestVault_Partitions(t *testing.T) {
serverHelmValues["global.adminPartitions.service.nodePort.https"] = "30000"
serverHelmValues["meshGateway.service.type"] = "NodePort"
serverHelmValues["meshGateway.service.nodePort"] = "30100"
serverHelmValues["server.exposeService.type"] = "NodePort"
}

helpers.MergeMaps(serverHelmValues, commonHelmValues)
2 changes: 1 addition & 1 deletion charts/consul/templates/connect-inject-clusterrole.yaml
@@ -19,7 +19,7 @@ rules:
- get
{{- end }}
- apiGroups: [ "" ]
resources: [ "pods", "endpoints", "services", "namespaces" ]
resources: [ "pods", "endpoints", "services", "namespaces", "nodes" ]
verbs:
- "get"
- "list"
9 changes: 8 additions & 1 deletion charts/consul/templates/connect-inject-deployment.yaml
@@ -11,6 +11,8 @@
{{- if .Values.global.lifecycleSidecarContainer }}{{ fail "global.lifecycleSidecarContainer has been renamed to global.consulSidecarContainer. Please set values using global.consulSidecarContainer." }}{{ end }}
{{ template "consul.validateVaultWebhookCertConfiguration" . }}
{{- template "consul.reservedNamesFailer" (list .Values.connectInject.consulNamespaces.consulDestinationNamespace "connectInject.consulNamespaces.consulDestinationNamespace") }}
{{- $serverEnabled := (or (and (ne (.Values.server.enabled | toString) "-") .Values.server.enabled) (and (eq (.Values.server.enabled | toString) "-") .Values.global.enabled)) -}}
{{- $serverExposeServiceEnabled := (or (and (ne (.Values.server.exposeService.enabled | toString) "-") .Values.server.exposeService.enabled) (and (eq (.Values.server.exposeService.enabled | toString) "-") (or .Values.global.peering.enabled .Values.global.adminPartitions.enabled))) -}}
# The deployment for running the Connect sidecar injector
apiVersion: apps/v1
kind: Deployment
@@ -129,6 +131,7 @@ spec:
-consul-k8s-image="{{ default .Values.global.imageK8S .Values.connectInject.image }}" \
-release-name="{{ .Release.Name }}" \
-release-namespace="{{ .Release.Namespace }}" \
-resource-prefix={{ template "consul.fullname" . }} \
-listen=:8080 \
{{- if .Values.connectInject.transparentProxy.defaultEnabled }}
-default-enable-transparent-proxy=true \
@@ -137,6 +140,11 @@
{{- end }}
{{- if .Values.global.peering.enabled }}
-enable-peering=true \
{{- if (eq .Values.global.peering.tokenGeneration.serverAddresses.source "") }}
{{- if (and $serverEnabled $serverExposeServiceEnabled) }}
-poll-server-expose-service=true \
ndhanushkodi marked this conversation as resolved.
{{- end }}
{{- end }}
{{- end }}
{{- if .Values.global.openshift.enabled }}
-enable-openshift \
@@ -146,7 +154,6 @@
{{- else }}
-transparent-proxy-default-overwrite-probes=false \
{{- end }}
-resource-prefix={{ template "consul.fullname" . }} \
{{- if (and .Values.dns.enabled .Values.dns.enableRedirection) }}
-enable-consul-dns=true \
{{- end }}
63 changes: 63 additions & 0 deletions charts/consul/templates/expose-servers-service.yaml
@@ -0,0 +1,63 @@
{{- $serverEnabled := (or (and (ne (.Values.server.enabled | toString) "-") .Values.server.enabled) (and (eq (.Values.server.enabled | toString) "-") .Values.global.enabled)) -}}
{{- $serverExposeServiceEnabled := (or (and (ne (.Values.server.exposeService.enabled | toString) "-") .Values.server.exposeService.enabled) (and (eq (.Values.server.exposeService.enabled | toString) "-") (or .Values.global.peering.enabled .Values.global.adminPartitions.enabled))) -}}
{{- if (and $serverEnabled $serverExposeServiceEnabled) }}

# Service with an external IP to reach Consul servers.
# Used for exposing gRPC port for peering and ports for client partitions to discover servers.
apiVersion: v1
kind: Service
metadata:
name: {{ template "consul.fullname" . }}-expose-servers
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "consul.name" . }}
chart: {{ template "consul.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
component: server
annotations:
{{- if .Values.server.exposeService.annotations }}
{{ tpl .Values.server.exposeService.annotations . | nindent 4 | trim }}
{{- end }}
spec:
type: "{{ .Values.server.exposeService.type }}"
ports:
{{- if (or (not .Values.global.tls.enabled) (not .Values.global.tls.httpsOnly)) }}
Contributor Author:
Do we want to include port 8500? On EKS, load balancers health check the first port in the service. Previously, when only 8501 was listed and TLS was disabled, the load balancer marked all the servers as unhealthy endpoints. So unless we always require TLS, we need to keep port 8500.

Contributor Author:
I still need to add TLS helm tests based on a decision on this comment.

- name: http
port: 8500
targetPort: 8500
{{ if (and (eq .Values.server.exposeService.type "NodePort") .Values.server.exposeService.nodePort.http) }}
nodePort: {{ .Values.server.exposeService.nodePort.http }}
{{- end }}
{{- end }}
{{- if .Values.global.tls.enabled }}
- name: https
port: 8501
targetPort: 8501
{{ if (and (eq .Values.server.exposeService.type "NodePort") .Values.server.exposeService.nodePort.https) }}
nodePort: {{ .Values.server.exposeService.nodePort.https }}
{{- end }}
{{- end }}
- name: serflan
port: 8301
targetPort: 8301
{{ if (and (eq .Values.server.exposeService.type "NodePort") .Values.server.exposeService.nodePort.serf) }}
nodePort: {{ .Values.server.exposeService.nodePort.serf }}
{{- end }}
- name: rpc
port: 8300
targetPort: 8300
{{ if (and (eq .Values.server.exposeService.type "NodePort") .Values.server.exposeService.nodePort.rpc) }}
nodePort: {{ .Values.server.exposeService.nodePort.rpc }}
{{- end }}
- name: grpc
port: 8503
targetPort: 8503
{{ if (and (eq .Values.server.exposeService.type "NodePort") .Values.server.exposeService.nodePort.grpc) }}
nodePort: {{ .Values.server.exposeService.nodePort.grpc }}
{{- end }}
selector:
app: {{ template "consul.name" . }}
release: "{{ .Release.Name }}"
component: server
{{- end }}
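The $serverEnabled and $serverExposeServiceEnabled expressions at the top of this template implement the chart's tri-state boolean pattern: an explicit true/false wins, while the sentinel "-" falls back to a computed default (here, whether peering or admin partitions are enabled). A sketch of the same resolution logic, with illustrative names:

```go
package main

import "fmt"

// triState resolves a Helm-style tri-state value: "true"/"false" are
// explicit settings, and the sentinel "-" defers to the supplied default.
func triState(value string, fallback bool) bool {
	if value != "-" {
		return value == "true"
	}
	return fallback
}

func main() {
	peering, adminPartitions := true, false
	// server.exposeService.enabled defaults to "-", so the expose-servers
	// service is created whenever peering or admin partitions are enabled.
	fmt.Println(triState("-", peering || adminPartitions)) // true
	fmt.Println(triState("false", peering))                // explicit override wins: false
}
```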