This repository has been archived by the owner on Feb 12, 2024. It is now read-only.

can't access UI with secured cluster #72

Closed
AyadiAmen opened this issue Aug 26, 2020 · 5 comments
Labels
bug Something isn't working

Comments

@AyadiAmen
Contributor

Describe the bug

Following issue #45: when authentication is enabled I can't access the UI. I receive either:

System Error

The request contained an invalid host header [abc.com] in the request [/nifi].

Check for request manipulation or third-party intercept.

Valid host headers are [empty] or:

127.0.0.1 127.0.0.1:9443 ....

or:

503 service temporarily unavailable 

openresty/1.15.8.2

Version of Helm and Kubernetes:

Helm: "v3.0.2"

Kubernetes: "v1.17.1"

What happened:

NiFi UI is unreachable

After the NiFi update "Allow whitelisting expected Host values", NiFi only accepts requests whose Host header contains an expected value. Currently, the expected values are driven by the .host properties in nifi.properties.

This looks similar to the issue we're having, so quoting the following guidance:

<< You will need a stable network identity that you can use to configure as your "proxy" in advance. For example in a testing scenario where you have access to the kubernetes cluster you can simply use "localhost" as the name of the proxy and use kubernetes port forward to tunnel requests from the localhost to your individual nodes (only one node at a time).

Another option that could better work for non-local use cases is to use a LoadBalancer service in front of the nodes and configure DNS to point to your LoadBalancer IP. If you want to do this in advance it is possible to create floating IPs and preconfigure DNS for it at almost any cloud provider. Then add the configured DNS to nifi.web.proxy.host property when starting your cluster. If setting up DNS is not an option you can use the IP directly. If setting up the IP in advance is not an option you may use an arbitrary hostname as the proxy host and add that hostname to your hosts file (or dnsmasq or company dns) to point to the dynamically generated LoadBalancer IP after NiFi started up. >>
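For the LoadBalancer option above, a minimal sketch of what such a service could look like; the name, namespace, labels, and ports below are hypothetical and need to match what the chart actually deploys:

apiVersion: v1
kind: Service
metadata:
  name: nifi-lb          # hypothetical name
  namespace: nifi        # hypothetical namespace
spec:
  type: LoadBalancer
  selector:
    app: nifi            # must match the labels on the NiFi pods
  ports:
    - name: https
      port: 443
      targetPort: 9443   # must match nifi.web.https.port

The external IP this service receives is what the quoted guidance suggests pointing DNS at, with the resulting hostname added to nifi.web.proxy.host.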

I created a hostname for the minikube IP in the /etc/hosts file and preconfigured that DNS name in the nifi.web.proxy.host property in nifi.properties (also nifi.web.proxy.context.path and nifi.web.https.host), but I still ended up with one or the other of the errors above (I also tried the IP address directly, not only the DNS name).

What you expected to happen:

To access the NiFi UI with a DNS name that I pass in the ingress config and in the webProxyHost variable.

How to reproduce it (as minimally and precisely as possible):

  • Clone the branch feature/ldap.
  • In the values.yaml file: enable LDAP and pass its config, change the http/https ports (httpPort/httpsPort), and set the variables isSecure and clusterSecure to true (see the values sketch after this list).
  • Give your minikube IP a DNS name in the /etc/hosts file and pass that name in the webProxyHost variable.
  • Enable ingress and set its .host variable to your DNS name.
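
A hedged sketch of the values.yaml excerpt those steps describe; the key names follow the ones used in this issue (isSecure, clusterSecure, webProxyHost) and nifi.example.com stands in for whatever DNS name you mapped in /etc/hosts:

isSecure: true
clusterSecure: true

service:
  httpPort: 8080                   # hypothetical port values
  httpsPort: 9443

webProxyHost: nifi.example.com     # the name mapped to the minikube IP

ldap:
  enabled: true
  host: ldap://openldap:389        # hypothetical LDAP endpoint

ingress:
  enabled: true
  host: nifi.example.com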

Anything else we need to know:

In the ingress.yaml file I changed {{- $ingressPort := .Values.service.httpPort -}} to {{- $ingressPort := .Values.service.httpsPort -}}, but when I try to access the DNS name it doesn't work either (the browser downloads a file instead of rendering the UI).
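
For context, a hedged sketch of where that change sits in templates/ingress.yaml; only the $ingressPort line is the actual edit described above, and the surrounding template (including the service name helper) is abridged and illustrative:

{{- $ingressPort := .Values.service.httpsPort -}}
...
      paths:
        - path: /
          backend:
            serviceName: {{ include "nifi.fullname" . }}   # illustrative service name
            servicePort: {{ $ingressPort }}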

AyadiAmen added the bug label Aug 26, 2020
AyadiAmen linked a pull request Aug 26, 2020 that will close this issue
@AyadiAmen
Contributor Author

Reopening the issue and relating it to #70 and #76.

After adding TLS support and securing NiFi, the UI becomes unreachable; I believe we are having the same issue here.

A secured NiFi cluster can only be reached through localhost (by default) or through specific whitelisted proxy addresses set via nifi.web.proxy.host in nifi.properties (required by NiFi for security purposes). Even after setting nifi.web.proxy.host, we still stumble into either a 503 openresty error or a "The request contained an invalid host header [address] in the request [/]. Check for request manipulation or third-party intercept." error, while both NiFi and LDAP work as they should, with no errors in the logs.

Potential solution:
Reconfigure the reverse proxy between the pod and the browser.

Since request manipulation or a third-party intercept might be causing the problem, the proxy (ingress controller, Traefik, ...) between the pod and the web browser might be the third party manipulating the address we pass in the browser. As long as NiFi receives a different Host value from the proxy than the one we set in nifi.web.proxy.host, the NiFi user interface will remain unreachable.
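
If the ingress controller is indeed rewriting the Host header, one possible workaround with the nginx ingress controller is its upstream-vhost annotation, which fixes the Host value sent to the backend; a sketch, with nifi.example.com standing in for the value whitelisted in nifi.web.proxy.host:

ingress:
  enabled: true
  annotations:
    # Force the Host header sent upstream to match the whitelisted proxy host.
    nginx.ingress.kubernetes.io/upstream-vhost: "nifi.example.com"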

alexnuttinck mentioned this issue Sep 25, 2020
@alexnuttinck
Contributor

I can reproduce the same bug on minikube and GKE.

@iammoen
Contributor

iammoen commented Sep 25, 2020

Does the SSL cert generated for the node have the hostname you are trying to use listed as a Subject Alternative Name? We ran into an issue where the NiFi Toolkit 1.12, running in server mode as a CA, didn't generate a cert with the appropriate SANs. Not sure this is what is happening to you, but I thought I should bring it up.
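
If the node certificates come from cert-manager instead of the toolkit CA, a hedged sketch of a Certificate that puts the browser-facing hostname in the SANs (all names below are hypothetical):

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: nifi-node
  namespace: nifi
spec:
  secretName: nifi-node-tls
  issuerRef:
    name: nifi-ca-issuer       # hypothetical Issuer
    kind: Issuer
  dnsNames:
    - nifi.example.com         # the ingress/proxy hostname the browser uses
    - nifi-0.nifi-headless.nifi.svc.cluster.local   # hypothetical node FQDN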

@Subv
Contributor

Subv commented Oct 11, 2020

The issue (besides the {{- $ingressPort := .Values.service.httpsPort -}} change) seems to be that the ingress is trying to communicate with the secured NiFi over HTTP instead of HTTPS. In my case, adding the HTTPS backend annotation to the ingress worked (I'm using the nginx ingress controller):

ingress:
  enabled: true
  annotations:
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"

@AyadiAmen
Contributor Author

This issue has been resolved by commit dbc0712.
