This repository has been archived by the owner on Feb 12, 2024. It is now read-only.

Feature/ldap #70

Closed
wants to merge 17 commits into from

Conversation

AyadiAmen
Contributor

What this PR does / why we need it:

Which issue this PR fixes

This pull request adds the connection to ldap server for authentication.

Special notes for your reviewer:

Please check the comments in the mentioned issue (#45) for a better understanding of all the changes and configurations.

Checklist


  • DCO signed
  • Chart Version bumped
  • Variables are documented in the README.md

@AyadiAmen AyadiAmen requested review from banzo and fzalila August 4, 2020 08:13
@AyadiAmen AyadiAmen added the enhancement New feature or request label Aug 4, 2020
@banzo
Contributor

banzo commented Aug 7, 2020

@AyadiAmen Can you test that this feature is disabled by default via the values.yaml flag?

@AyadiAmen
Contributor Author

@banzo I'll test that later today and I'll let you know ASAP.

@iammoen
Contributor

iammoen commented Aug 10, 2020

Does the readiness probe still work with this? We have been going down the path of LDAP auth and found that the initial admin identity needs to be the node in order to use the secured NiFi API.

@AyadiAmen
Contributor Author

@iammoen Yes, it works. Since the last commit you can pass your “Initial Admin Identity” in the variable auth.ldap.admin in the values.yaml file. After deployment there will be two users in the users.xml file: the node identity and whatever you pass in auth.ldap.admin. The latter will have the necessary permissions to create and add policies, as it is the “Initial Admin Identity”.
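As a minimal values.yaml sketch of the setting described here (the key matches this PR's values file; the DN is only an example):

```yaml
auth:
  ldap:
    # "Initial Admin Identity": ends up in users.xml alongside the node identity
    # and gets the permissions to create and add policies.
    admin: cn=admin,dc=example,dc=com   # example DN, adjust to your directory
```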

@alexnuttinck alexnuttinck self-requested a review August 17, 2020 12:57
Contributor

@alexnuttinck alexnuttinck left a comment

Hello @AyadiAmen, thanks for your work on this feature :)! Can you address my comments before we merge?
Once that is fixed, I will try your new feature on my laptop.

#### Configure the cluster security:

- **LDAP**: Enable LDAP to secure the cluster and add user/password authentication. When LDAP is enabled, make sure to set the variables `properties.isSecure` and `properties.clusterSecure` to **true**, set `properties.httpPort` to **null**, and set `properties.httpsPort` to **9443**.
- Also, to use LDAP you need to set the **namespace** and the **release name** you're going to use in advance, in `properties.namespace` and `properties.release` respectively.
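The settings listed above can be sketched as a values.yaml fragment (the namespace and release names are placeholders):

```yaml
properties:
  isSecure: true
  clusterSecure: true
  httpPort: null
  httpsPort: 9443
  namespace: default   # placeholder: the namespace you will deploy into
  release: nifi        # placeholder: the Helm release name you will use
```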
Contributor

The release name can be retrieved with .Release.Name, so the user doesn't need to add it to the values file.
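A sketch of what that could look like in the properties template, assuming Helm's built-in .Release.Name and .Release.Namespace objects replace the two values-file entries:

```
{{- /* Sketch: built-in release objects instead of properties.release / properties.namespace */}}
nifi.security.keystore=/opt/nifi/nifi-current/conf/{{ .Release.Name }}-nifi-0.{{ .Release.Name }}-nifi-headless.{{ .Release.Namespace }}.svc.cluster.local/keystore.jks
```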

Contributor

Why is the namespace name needed?

Contributor

Also, I propose adding LDAP as a sub-section, since it's one of the possible auth methods. You could also mention that NiFi requires SSL to be enabled in order to use any auth method.

<property name="Manager DN">{{.Values.auth.ldap.admin}}</property>
<property name="Manager Password">{{.Values.auth.ldap.pass}}</property>
<property name="TLS - Keystore">/opt/nifi/nifi-current/conf/localhost/keystore.jks</property>
<property name="TLS - Keystore Password">keystorePasswdfadi</property>
Contributor

It should be possible to pass all these values via the values.yaml. You can nevertheless keep ldap.enabled set to false by default and add default values for the LDAP properties.
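A sketch of such defaults; ldap.enabled and authStrategy match this PR's values file, while keystorePasswd is a hypothetical key standing in for the password hard-coded in the diff above:

```yaml
auth:
  ldap:
    enabled: false          # feature off by default
    authStrategy: SIMPLE
    keystorePasswd: ""      # hypothetical key: set by the user when enabling TLS
```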

<property name="TLS - Client Auth"></property>
<property name="TLS - Protocol"></property>
<property name="TLS - Shutdown Gracefully"></property>
<property name="Authentication Strategy">SIMPLE</property>
Contributor

All these values should be passed via the values.yaml.

@@ -144,6 +146,18 @@ nifi.sensitive.props.algorithm=PBEWITHMD5AND256BITAES-CBC-OPENSSL
nifi.sensitive.props.provider=BC
nifi.sensitive.props.additional.keys=

{{if .Values.auth.ldap.enabled}}
nifi.security.keystore=/opt/nifi/nifi-current/conf/{{.Values.properties.release}}-nifi-0.{{.Values.properties.release}}-nifi-headless.{{.Values.properties.namespace}}.svc.cluster.local/keystore.jks
Contributor

By hard-coding nifi-0, this will not work for a NiFi cluster. You can retrieve this value with hostname -f, and the variable should be injected into the properties file.
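A minimal sketch of that injection, assuming a startup script rewrites a placeholder in nifi.properties; the placeholder name and the /tmp path are illustrative, not what the chart actually generates:

```shell
#!/bin/sh
# Resolve this pod's fully qualified hostname so the keystore path is correct
# for every replica, not only <release>-nifi-0.
FQDN="$(hostname -f)"

# Stand-in for the chart's generated nifi.properties (illustrative path):
cat > /tmp/nifi.properties <<'EOF'
nifi.security.keystore=/opt/nifi/nifi-current/conf/NODE_FQDN/keystore.jks
nifi.security.keystoreType=jks
EOF

# Inject the per-pod hostname in place of the placeholder.
sed -i "s|NODE_FQDN|${FQDN}|" /tmp/nifi.properties
grep '^nifi.security.keystore=' /tmp/nifi.properties
```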

@@ -144,6 +146,18 @@ nifi.sensitive.props.algorithm=PBEWITHMD5AND256BITAES-CBC-OPENSSL
nifi.sensitive.props.provider=BC
nifi.sensitive.props.additional.keys=

{{if .Values.auth.ldap.enabled}}
nifi.security.keystore=/opt/nifi/nifi-current/conf/{{.Values.properties.release}}-nifi-0.{{.Values.properties.release}}-nifi-headless.{{.Values.properties.namespace}}.svc.cluster.local/keystore.jks
nifi.security.keystoreType=jks
Contributor

via values.yaml

values.yaml Outdated
@@ -17,8 +17,8 @@ image:
# pullSecret: myRegistrKeySecretName

securityContext:
runAsUser: 1000
fsGroup: 1000
runAsUser: 0
Contributor

Not acceptable! We can't use the root user.

values.yaml Outdated
isSecure: false # switch to true if the cluster is secured ( if you're using Ldap for example )
webProxyHost:
webHttpsHost:
release: nifi # if you're using the secured cluster, provide the release-name here before deploying.
Contributor

Use hostname -f in your scripts, so the user does not need to set these variables.

values.yaml Outdated
needClientAuth: false
provenanceStorage: "8 GB"
siteToSite:
secure: false
secure: true
Contributor

Should this be false by default?

httpPort: 8080 # switch to null ( 8080 by default )if the cluster is secured ( if you're using Ldap )
httpsPort: null # switch to 9443 ( null by default ) if the cluster is secured ( if you're using Ldap )
clusterPort: 8443
clusterSecure: false # switch to true if the cluster is secured ( if you're using Ldap )
Contributor

This should be usable even if we don't use LDAP auth.

type: NodePort
httpPort: 8080
httpsPort: 443
type: LoadBalancer
Contributor

-> NodePort

Contributor Author

There's an issue accessing NiFi when it runs as a secured cluster: accessing it with NodePort is not possible for now, and LoadBalancer might be a solution.

searchFilter: CN=john
admin: cn=admin,dc=example,dc=com
searchFilter: (objectClass=*)
UserIdentityAttribute: cn
Contributor

Should this variable be lower camel case like all the other ones?

Contributor

I confirm, @AyadiAmen: SSL, UserIdentity, AuthStrategy, IdentityStrategy, and AuthExpiration should be lower camel case.

<property name="TLS - Shutdown Gracefully"></property>
<property name="Referral Strategy">FOLLOW</property>
<property name="Authentication Strategy">{{.Values.auth.ldap.AuthStrategy}}</property>
<property name="Manager DN">{{.Values.auth.ldap.admin}}</property>
Contributor

I was also thinking that maybe this should be a different LDAP user from the initial admin. Since this one will be used to bind to the LDAP server, it would probably be a service account of some kind, whereas the initial admin is likely to be a proper user.
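A sketch of that split, using hypothetical auth.ldap.managerDn / managerPassword keys (not in this PR) for the bind service account, distinct from the initial admin:

```yaml
auth:
  ldap:
    admin: cn=alice,dc=example,dc=com          # initial admin: a real user
    managerDn: cn=svc-nifi,dc=example,dc=com   # hypothetical key: bind service account
    managerPassword: "--"                      # hypothetical key: its credential
```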

@AyadiAmen AyadiAmen linked an issue Aug 26, 2020 that may be closed by this pull request
@makeacode
Contributor

Not to hijack this PR, but I've also created my own PR #76 that configures TLS support (using nifi-toolkit) and OIDC authentication (tested using ADFS).

@alexnuttinck
Contributor

@makeacode Once #76 is merged, we can use this PR to work on the LDAP auth :)

@makeacode
Contributor

Theoretically, once TLS is enabled the auth should be pretty simple... theoretically.

@piotron

piotron commented Sep 22, 2020

Practically, it's quite simple too. There are a few quirks (like adding proper permissions manually in the UI), but this may be because of our configuration changes and the hybrid nature of our NiFi cluster.
However, we managed to spin up a secure NiFi cluster with LDAP integration today.

@alexnuttinck
Contributor

> Practically, it's quite simple too. There are a few quirks (like adding proper permissions manually in the UI), but this may be because of our configuration changes and the hybrid nature of our NiFi cluster.
> However, we managed to spin up a secure NiFi cluster with LDAP integration today.

Thanks for your feedback @piotron!

Could you share your configuration setup? It would help to improve the README section about LDAP integration with NiFi.

@alexnuttinck
Contributor

We set up NiFi with LDAP and it seems to work: we can see in their respective logs that NiFi and OpenLDAP communicate with each other. But we face a new error: we can't reach the UI, as described in #72... Any ideas?

@alexnuttinck
Contributor

Current work is on the feature branch: https://github.com/cetic/helm-nifi/tree/feature/atuh

@piotron

piotron commented Sep 25, 2020

@alexnuttinck Do you have this issue with the browser or with NiFi/MiNiFi agents? (We noticed that MiNiFi agents cannot proxy through the load balancer properly and had to work around it with a proxy setup.)

Have you set up webProxyHost properly (including the NiFi listening port, e.g. nifi.test:9443)? If your entries for httpsPort (both of them) look like this:

httpsPort: 9443

then your entry for webProxyHost should also contain domain:port, because the statefulset does not set it.

Another possibility: your UI cannot open port 443 because of security settings. Ports below 1024 need specific capabilities to be opened by a non-root user.
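The fix described here boils down to a values.yaml fragment like this (the host name is the example used throughout this thread):

```yaml
properties:
  httpsPort: 9443
  webProxyHost: nifi.test:9443   # domain:port, since the statefulset does not append the port
service:
  httpsPort: 9443                # expose the same secure port on the UI service
```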

@alexnuttinck
Contributor

> @alexnuttinck Do you have this issue with the browser or with NiFi/MiNiFi agents? (We noticed that MiNiFi agents cannot proxy through the load balancer properly and had to work around it with a proxy setup.)
>
> Have you set up webProxyHost properly (including the NiFi listening port, e.g. nifi.test:9443)? If your entries for httpsPort (both of them) look like this:
>
> httpsPort: 9443
>
> then your entry for webProxyHost should also contain domain:port, because the statefulset does not set it.
>
> Another possibility: your UI cannot open port 443 because of security settings. Ports below 1024 need specific capabilities to be opened by a non-root user.

@piotron thanks for the reply!

This is a browser issue. OK, I didn't set up the port for the webProxyHost, only the domain. Let's have a try.

@AyadiAmen
Contributor Author

Thank you @alexnuttinck and @piotron,

I tried nifi.test:9443 but it didn't work. I even tried passing nifi.test:9443 in webProxyHost and it's still not working; the only way I can reach the UI now is by forwarding port 9443 and using localhost.

@piotron

piotron commented Sep 25, 2020

@AyadiAmen Could you paste the whole list of allowed domains? It should show all allowed ports and combinations for access.

@AyadiAmen
Contributor Author

@piotron

Valid host headers are [empty] or:
127.0.0.1
127.0.0.1:9443
localhost
localhost:9443
[::1]
[::1]:9443
nifi-0.nifi-headless.default.svc.cluster.local
nifi-0.nifi-headless.default.svc.cluster.local:9443
172.17.0.9
172.17.0.9:9443
nifi.test

@piotron
Copy link

piotron commented Sep 25, 2020

@AyadiAmen And your webProxyHost is exactly nifi.test:9443 in values.yml? Which branch are you using for this test?

@AyadiAmen
Contributor Author

@piotron I tried with webProxyHost: nifi.test, webProxyHost: nifi.test:9443, and webProxyHost: nifi.test, nifi.test:9443, on both the feature/atuh and feature/ldap branches.

@piotron

piotron commented Sep 25, 2020

@AyadiAmen If possible, please share the values.yml you used (omitting internal data, of course), and any overrides.
From what I see, it's somehow stripping the secure port, and this may then be caused by some other flag.

@AyadiAmen
Contributor Author

@piotron

---
# Number of nifi nodes
replicaCount: 1

## Set default image, imageTag, and imagePullPolicy.
## ref: https://hub.docker.com/r/apache/nifi/
##
image:
  repository: apache/nifi
  tag: "1.11.4"
  pullPolicy: IfNotPresent

  ## Optionally specify an imagePullSecret.
  ## Secret must be manually created in the namespace.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ##
  # pullSecret: myRegistrKeySecretName

securityContext:
  runAsUser: 1000
  fsGroup: 1000

sts:
  # Parallel podManagementPolicy for faster bootstrap and teardown. Default is OrderedReady.
  podManagementPolicy: Parallel
  AntiAffinity: soft
  hostPort: null
  pod:
    annotations:
      security.alpha.kubernetes.io/sysctls: net.ipv4.ip_local_port_range=10000 65000
      #prometheus.io/scrape: "true"      

## Useful if using any custom secrets
## Pass in some secrets to use (if required)
# secrets:
# - name: myNifiSecret
#   keys:
#     - key1
#     - key2
#   mountPath: /opt/nifi/secret

## Useful if using any custom configmaps
## Pass in some configmaps to use (if required)
# configmaps:
#   - name: myNifiConf
#     keys:
#       - myconf.conf
#     mountPath: /opt/nifi/custom-config


properties:
  # use externalSecure for when inbound SSL is provided by nginx-ingress or other external mechanism
  externalSecure: false
  isNode: true
  httpPort: null
  httpsPort: 9443
  webProxyHost: nifi.test
  clusterPort: 6007
  clusterSecure: true
  needClientAuth: false
  provenanceStorage: "8 GB"
  siteToSite:
    port: 10000
  authorizer: managed-authorizer
  # use properties.safetyValve to pass explicit 'key: value' pairs that overwrite other configuration
  safetyValve:
    #nifi.variable.registry.properties: "${NIFI_HOME}/example1.properties, ${NIFI_HOME}/example2.properties"
    nifi.web.http.network.interface.default: eth0
    # listen to loopback interface so "kubectl port-forward ..." works
    nifi.web.http.network.interface.lo: lo

## Include additional libraries in the Nifi containers by using the postStart handler
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/
# postStart: /opt/nifi/psql; wget -P /opt/nifi/psql https://jdbc.postgresql.org/download/postgresql-42.2.6.jar

# Nifi User Authentication
auth:
  admin: CN=admin, OU=NIFI
  SSL:
    keystorePasswd: env:PASS
    truststorePasswd: env:PASS
  ldap:
    enabled: true
    host: ldap://openldap:389
    searchBase: --
    admin: --
    pass: --
    searchFilter: (objectClass=*)
    userIdentityAttribute: cn
    authStrategy: SIMPLE 
    identityStrategy: USE_DN
    authExpiration: 12 hours

  oidc:
    enabled: false
    discoveryUrl:
    clientId:
    clientSecret:
    claimIdentifyingUser: email

## Expose the nifi service to be accessed from outside the cluster (LoadBalancer service).
## or access it from within the cluster (ClusterIP service). Set the service type and the port to serve it.
## ref: http://kubernetes.io/docs/user-guide/services/
##

# headless service
headless:
  type: ClusterIP
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"

# ui service
service:
  type: LoadBalancer
  httpPort: 8080
  httpsPort: 443
  annotations: {}
    # loadBalancerIP:
    ## Load Balancer sources
    ## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
    ##
    # loadBalancerSourceRanges:
    # - 10.10.10.0/24
    ## OIDC authentication requires "sticky" session on the LoadBalancer for JWT to work properly...but AWS doesn't like it on creation
    # sessionAffinity: ClientIP
    # sessionAffinityConfig:
    #   clientIP:
    #     timeoutSeconds: 10800

  # Enables additional port/ports to nifi service for internal processors
  processors:
    enabled: false
    ports:
      - name: processor01
        port: 7001
        targetPort: 7001
        #nodePort: 30701
      - name: processor02
        port: 7002
        targetPort: 7002
        #nodePort: 30702

## Configure Ingress based on the documentation here: https://kubernetes.io/docs/concepts/services-networking/ingress/
##
ingress:
  enabled: true
  annotations: {}
  tls: []
  hosts: [nifi.test]
  path: /
  # If you want to change the default path, see this issue https://github.com/cetic/helm-nifi/issues/22

# Amount of memory to give the NiFi java heap
jvmMemory: 2g

# Separate image for tailing each log separately and checking zookeeper connectivity
sidecar:
  image: busybox
  tag: "1.32.0"

## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
  enabled: false

  # When creating persistent storage, the NiFi helm chart can either reference an already-defined
  # storage class by name, such as "standard" or can define a custom storage class by specifying
  # customStorageClass: true and providing the "storageClass", "storageProvisioner" and "storageType".
  # For example, to use SSD storage on Google Compute Engine see values-gcp.yaml
  #
  # To use a storage class that already exists on the Kubernetes cluster, we can simply reference it by name.
  # For example:
  # storageClass: standard
  #
  # The default storage class is used if this variable is not set.

  accessModes:  [ReadWriteOnce]
  ## Storage Capacities for persistent volumes
  configStorage:
    size: 100Mi
  authconfStorage:
    size: 100Mi
  # Storage capacity for the 'data' directory, which is used to hold things such as the flow.xml.gz, configuration, state, etc.
  dataStorage:
    size: 1Gi
  # Storage capacity for the FlowFile repository
  flowfileRepoStorage:
    size: 10Gi
  # Storage capacity for the Content repository
  contentRepoStorage:
    size: 10Gi
  # Storage capacity for the Provenance repository. When changing this, one should also change the properties.provenanceStorage value above, also.
  provenanceRepoStorage:
    size: 10Gi
  # Storage capacity for nifi logs
  logStorage:
    size: 5Gi

## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #  cpu: 100m
  #  memory: 128Mi
  # requests:
  #  cpu: 100m
#  memory: 128Mi

logresources:
  requests:
    cpu: 10m
    memory: 10Mi
  limits:
    cpu: 50m
    memory: 50Mi

nodeSelector: {}

tolerations: []

initContainers: {}
  # foo-init:  # <- will be used as container name
  #   image: "busybox:1.30.1"
  #   imagePullPolicy: "IfNotPresent"
  #   command: ['sh', '-c', 'echo this is an initContainer']
#   volumeMounts:
#     - mountPath: /tmp/foo
#       name: foo

extraVolumeMounts: []

extraVolumes: []

## Extra containers
extraContainers: []

terminationGracePeriodSeconds: 30

## Extra environment variables that will be pass onto deployment pods
env: []

# ca server details
# Setting this true would create a nifi-toolkit based ca server
# The ca server will be used to generate self-signed certificates required setting up secured cluster
ca:
  ## If true, enable the nifi-toolkit certificate authority
  enabled: true
  persistence:
    enabled: true
  server: ""
  port: 9090
  token: sixteenCharacters
  admin:
    cn: admin

# ------------------------------------------------------------------------------
# Zookeeper:
# ------------------------------------------------------------------------------
zookeeper:
  ## If true, install the Zookeeper chart
  ## ref: https://github.com/kubernetes/charts/tree/master/incubator/zookeeper
  enabled: true
  ## If the Zookeeper Chart is disabled a URL and port are required to connect
  url: ""
  port: 2181

# ------------------------------------------------------------------------------
# Nifi registry:
# ------------------------------------------------------------------------------
registry:
  ## If true, install the Nifi registry
  enabled: true
  url: ""
  port: 80
  ## Add values for the nifi-registry here
  ## ref: https://github.com/dysnix/charts/blob/master/nifi-registry/values.yaml

@piotron

piotron commented Sep 25, 2020

@AyadiAmen please try this file

---
# Number of nifi nodes
replicaCount: 1

## Set default image, imageTag, and imagePullPolicy.
## ref: https://hub.docker.com/r/apache/nifi/
##
image:
  repository: apache/nifi
  tag: "1.11.4"
  pullPolicy: IfNotPresent

  ## Optionally specify an imagePullSecret.
  ## Secret must be manually created in the namespace.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ##
  # pullSecret: myRegistrKeySecretName

securityContext:
  runAsUser: 1000
  fsGroup: 1000

sts:
  # Parallel podManagementPolicy for faster bootstrap and teardown. Default is OrderedReady.
  podManagementPolicy: Parallel
  AntiAffinity: soft
  hostPort: null
  pod:
    annotations:
      security.alpha.kubernetes.io/sysctls: net.ipv4.ip_local_port_range=10000 65000
      #prometheus.io/scrape: "true"      

## Useful if using any custom secrets
## Pass in some secrets to use (if required)
# secrets:
# - name: myNifiSecret
#   keys:
#     - key1
#     - key2
#   mountPath: /opt/nifi/secret

## Useful if using any custom configmaps
## Pass in some configmaps to use (if required)
# configmaps:
#   - name: myNifiConf
#     keys:
#       - myconf.conf
#     mountPath: /opt/nifi/custom-config


properties:
  # use externalSecure for when inbound SSL is provided by nginx-ingress or other external mechanism
  externalSecure: false
  isNode: true
  httpPort: null
  httpsPort: 9443
  webProxyHost: nifi.test:9443
  clusterPort: 6007
  clusterSecure: true
  needClientAuth: false
  provenanceStorage: "8 GB"
  siteToSite:
    port: 10000
  authorizer: managed-authorizer
  # use properties.safetyValve to pass explicit 'key: value' pairs that overwrite other configuration
  safetyValve:
    #nifi.variable.registry.properties: "${NIFI_HOME}/example1.properties, ${NIFI_HOME}/example2.properties"
    nifi.web.http.network.interface.default: eth0
    # listen to loopback interface so "kubectl port-forward ..." works
    nifi.web.http.network.interface.lo: lo

## Include additional libraries in the Nifi containers by using the postStart handler
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/
# postStart: /opt/nifi/psql; wget -P /opt/nifi/psql https://jdbc.postgresql.org/download/postgresql-42.2.6.jar

# Nifi User Authentication
auth:
  admin: CN=admin, OU=NIFI
  SSL:
    keystorePasswd: env:PASS
    truststorePasswd: env:PASS
  ldap:
    enabled: true
    host: ldap://openldap:389
    searchBase: --
    admin: --
    pass: --
    searchFilter: (objectClass=*)
    userIdentityAttribute: cn
    authStrategy: SIMPLE 
    identityStrategy: USE_DN
    authExpiration: 12 hours

  oidc:
    enabled: false
    discoveryUrl:
    clientId:
    clientSecret:
    claimIdentifyingUser: email

## Expose the nifi service to be accessed from outside the cluster (LoadBalancer service).
## or access it from within the cluster (ClusterIP service). Set the service type and the port to serve it.
## ref: http://kubernetes.io/docs/user-guide/services/
##

# headless service
headless:
  type: ClusterIP
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"

# ui service
service:
  type: LoadBalancer
  httpPort: 8080
  httpsPort: 9443
  annotations: {}
    # loadBalancerIP:
    ## Load Balancer sources
    ## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
    ##
    # loadBalancerSourceRanges:
    # - 10.10.10.0/24
    ## OIDC authentication requires "sticky" session on the LoadBalancer for JWT to work properly...but AWS doesn't like it on creation
    # sessionAffinity: ClientIP
    # sessionAffinityConfig:
    #   clientIP:
    #     timeoutSeconds: 10800

  # Enables additional port/ports to nifi service for internal processors
  processors:
    enabled: false
    ports:
      - name: processor01
        port: 7001
        targetPort: 7001
        #nodePort: 30701
      - name: processor02
        port: 7002
        targetPort: 7002
        #nodePort: 30702

## Configure Ingress based on the documentation here: https://kubernetes.io/docs/concepts/services-networking/ingress/
##
ingress:
  enabled: true
  annotations: {}
  tls: []
  hosts: [nifi.test]
  path: /
  # If you want to change the default path, see this issue https://github.com/cetic/helm-nifi/issues/22

# Amount of memory to give the NiFi java heap
jvmMemory: 2g

# Separate image for tailing each log separately and checking zookeeper connectivity
sidecar:
  image: busybox
  tag: "1.32.0"

## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
  enabled: false

  # When creating persistent storage, the NiFi helm chart can either reference an already-defined
  # storage class by name, such as "standard" or can define a custom storage class by specifying
  # customStorageClass: true and providing the "storageClass", "storageProvisioner" and "storageType".
  # For example, to use SSD storage on Google Compute Engine see values-gcp.yaml
  #
  # To use a storage class that already exists on the Kubernetes cluster, we can simply reference it by name.
  # For example:
  # storageClass: standard
  #
  # The default storage class is used if this variable is not set.

  accessModes:  [ReadWriteOnce]
  ## Storage Capacities for persistent volumes
  configStorage:
    size: 100Mi
  authconfStorage:
    size: 100Mi
  # Storage capacity for the 'data' directory, which is used to hold things such as the flow.xml.gz, configuration, state, etc.
  dataStorage:
    size: 1Gi
  # Storage capacity for the FlowFile repository
  flowfileRepoStorage:
    size: 10Gi
  # Storage capacity for the Content repository
  contentRepoStorage:
    size: 10Gi
  # Storage capacity for the Provenance repository. When changing this, one should also change the properties.provenanceStorage value above, also.
  provenanceRepoStorage:
    size: 10Gi
  # Storage capacity for nifi logs
  logStorage:
    size: 5Gi

## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #  cpu: 100m
  #  memory: 128Mi
  # requests:
  #  cpu: 100m
#  memory: 128Mi

logresources:
  requests:
    cpu: 10m
    memory: 10Mi
  limits:
    cpu: 50m
    memory: 50Mi

nodeSelector: {}

tolerations: []

initContainers: {}
  # foo-init:  # <- will be used as container name
  #   image: "busybox:1.30.1"
  #   imagePullPolicy: "IfNotPresent"
  #   command: ['sh', '-c', 'echo this is an initContainer']
#   volumeMounts:
#     - mountPath: /tmp/foo
#       name: foo

extraVolumeMounts: []

extraVolumes: []

## Extra containers
extraContainers: []

terminationGracePeriodSeconds: 30

## Extra environment variables that will be pass onto deployment pods
env: []

# ca server details
# Setting this true would create a nifi-toolkit based ca server
# The ca server will be used to generate self-signed certificates required setting up secured cluster
ca:
  ## If true, enable the nifi-toolkit certificate authority
  enabled: true
  persistence:
    enabled: true
  server: ""
  port: 9090
  token: sixteenCharacters
  admin:
    cn: admin

# ------------------------------------------------------------------------------
# Zookeeper:
# ------------------------------------------------------------------------------
zookeeper:
  ## If true, install the Zookeeper chart
  ## ref: https://github.com/kubernetes/charts/tree/master/incubator/zookeeper
  enabled: true
  ## If the Zookeeper Chart is disabled a URL and port are required to connect
  url: ""
  port: 2181

# ------------------------------------------------------------------------------
# Nifi registry:
# ------------------------------------------------------------------------------
registry:
  ## If true, install the Nifi registry
  enabled: true
  url: ""
  port: 80
  ## Add values for the nifi-registry here
  ## ref: https://github.com/dysnix/charts/blob/master/nifi-registry/values.yaml

and call https://nifi.test:9443

@AyadiAmen
Contributor Author

@piotron still not accessible :/

@piotron

piotron commented Sep 25, 2020

@AyadiAmen This may be a wild guess, but try disabling ingress and returning it to the default, because that's now the only thing that differs.

@AyadiAmen
Contributor Author

@piotron And how would I access it without ingress? I'll try anyway, but I think ingress should be the solution here.

@piotron

piotron commented Sep 25, 2020

@AyadiAmen Maybe ingress is the way to go, but for now we managed to get it to work with a separate LoadBalancer service; this way it presents its own certificates.

Our setup is K3s + MetalLB.

@makeacode
Contributor

@AyadiAmen I'm definitely not a NiFi expert, but when I was working on adding TLS support it seemed that NiFi wants/needs to do the termination itself, and it cannot be done at the ingress level (unless ingress can do some trickery I'm not aware of). I used EKS for my development/testing, exposed NiFi through a LoadBalancer, and set the URL I wanted to access it by as webProxyHost. After that I just added a DNS CNAME pointing the webProxyHost value to the load balancer URL so all the values would match up.

@jrote1

jrote1 commented Oct 21, 2020

What is the status of this LDAP support? The current implementation is a bit limiting, especially when upgrading the chart, as it wipes my configuration.

@alexnuttinck
Contributor

Done in #107. I'm closing this PR.

@banzo banzo deleted the feature/ldap branch April 11, 2022 11:52
Labels
enhancement New feature or request

Successfully merging this pull request may close these issues.

can't access UI with secured cluster
7 participants