User \"system:anonymous\" cannot get path \"/apis/constraints.gatekeeper.sh/v1beta1// #330

Closed · Fixed by #364
steve-heslouin opened this issue Mar 1, 2022 · 8 comments

steve-heslouin commented Mar 1, 2022

Hello, I wanted to try your dashboard so I could run it for my company, but that didn't work.

On my local machine I used:

docker run -v ~/.kube/config:/home/gpm/.kube/config -p 8080:8080 quay.io/sighup/gatekeeper-policy-manager:v0.5.1

It loaded my kubeconfig file correctly, but when I click "Get constraints status" it gives me the following error:

(403) Reason: Forbidden HTTP response headers: HTTPHeaderDict({'Audit-Id': '2b44e6b7-e43e-449b-a608-83d4347fde9e', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'X-Content-Type-Options': 'nosniff', 'X-Kubernetes-Pf-Flowschema-Uid': 'e8915a47-326c-450c-af43-f297c36367a6', 'X-Kubernetes-Pf-Prioritylevel-Uid': '38bf4b6b-d54e-4289-b143-c225265af301', 'Date': 'Tue, 01 Mar 2022 13:36:38 GMT', 'Content-Length': '225'}) HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/apis/constraints.gatekeeper.sh/v1beta1//\"","reason":"Forbidden","details":{},"code":403}
[2022-03-01 13:27:34 +0000] [1] [INFO] Starting gunicorn 20.1.0
[2022-03-01 13:27:34 +0000] [1] [INFO] Listening at: http://0.0.0.0:8080 (1)
[2022-03-01 13:27:34 +0000] [1] [INFO] Using worker: gthread
[2022-03-01 13:27:34 +0000] [9] [INFO] Booting worker with pid: 9
[2022-03-01 13:27:34 +0000] [11] [INFO] Booting worker with pid: 11
[2022-03-01 13:27:38,405] INFO: RUNNING WITH AUTHENTICATION DISABLED
[2022-03-01 13:27:38,405] INFO: RUNNING WITH AUTHENTICATION DISABLED
[2022-03-01 13:27:38,407] INFO: Attempting init with KUBECONFIG from path '~/.kube/config'
[2022-03-01 13:27:38,407] INFO: Attempting init with KUBECONFIG from path '~/.kube/config'
[2022-03-01 13:27:38,691] ERROR: [Errno 2] No such file or directory: 'aws-iam-authenticator'
[2022-03-01 13:27:38,693] ERROR: [Errno 2] No such file or directory: 'aws-iam-authenticator'

I have Gatekeeper 3.7 installed on my EKS cluster and it's up and running.

gatekeeper-system   gatekeeper-audit-59d4b6fd4c-lw8hj                1/1     Running   0          82d
gatekeeper-system   gatekeeper-controller-manager-66f474f785-448pz   1/1     Running   0          82d
gatekeeper-system   gatekeeper-controller-manager-66f474f785-895ng   1/1     Running   0          82d
gatekeeper-system   gatekeeper-controller-manager-66f474f785-cl8sz   1/1     Running   0          82d

We use STS assume-role and the aws-auth mechanism provided by AWS. Could that be the issue? It seems the client runs as anonymous by default, and of course we don't grant anonymous users access in our clusters.

Thanks


ralgozino commented Mar 1, 2022

Hello @steve-heslouin
You seem to be using https://github.com/kubernetes-sigs/aws-iam-authenticator to authenticate to EKS. GPM's Docker image doesn't include the aws-iam-authenticator binary.

You could try also mounting the binary somewhere in the image's $PATH together with the KUBECONFIG when you run the Docker image; maybe that's enough.
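For example, something like this might work (a sketch; it assumes aws-iam-authenticator sits at /usr/local/bin/aws-iam-authenticator on your host and that /usr/local/bin is in the image's $PATH):

docker run \
  -v ~/.kube/config:/home/gpm/.kube/config \
  -v /usr/local/bin/aws-iam-authenticator:/usr/local/bin/aws-iam-authenticator:ro \
  -p 8080:8080 \
  quay.io/sighup/gatekeeper-policy-manager:v0.5.1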

Another option is to build your own image for GPM that adds the aws-iam-authenticator binary, with a Dockerfile like:

FROM curlimages/curl:7.81.0 AS downloader
# -L is needed: GitHub release downloads are served through a redirect
RUN curl -L https://github.com/kubernetes-sigs/aws-iam-authenticator/releases/download/v0.5.5/aws-iam-authenticator_0.5.5_linux_amd64 --output /tmp/aws-iam-authenticator
RUN chmod +x /tmp/aws-iam-authenticator

FROM quay.io/sighup/gatekeeper-policy-manager:v0.5.1
COPY --from=downloader --chown=root:root /tmp/aws-iam-authenticator /usr/local/bin/
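Then build and run it (the image tag here is just an example):

docker build -t gpm-with-aws-iam-authenticator .
docker run -v ~/.kube/config:/home/gpm/.kube/config -p 8080:8080 gpm-with-aws-iam-authenticator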

@steve-heslouin (author):

@ralgozino Thanks a lot for your feedback, let me try it and I will let you know ;)

ralgozino added the question label Mar 7, 2022
ralgozino self-assigned this Mar 7, 2022
@steve-heslouin (author):

@ralgozino
So I tried it, and after mounting the config and credentials files I no longer get the error related to aws-iam-authenticator.

Instead, I get this error:

ERROR: exec: plugin api version client.authentication.k8s.io/v1beta1 does not match client.authentication.k8s.io/v1alpha

Do you have an idea where that could come from?

Thanks a lot :)

@ralgozino (member):

What version of kubectl do you have installed?

Could you please check your kubeconfig and see if you have a section like this:

# [...]
users:
- name: kubernetes-admin
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "REPLACE_ME_WITH_YOUR_CLUSTER_ID"
        - "-r"
        - "REPLACE_ME_WITH_YOUR_ROLE_ARN"
  # no client certificate/key needed here!

If you do, check that the apiVersion is client.authentication.k8s.io/v1beta1.
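A quick way to list the exec apiVersion of every user entry in your kubeconfig (assuming kubectl is available):

kubectl config view --raw -o jsonpath='{.users[*].user.exec.apiVersion}'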

I think it could be a mismatch between the Kubernetes client version included in GPM and your kubeconfig format.

I would suggest trying the unstable tag of GPM instead of v0.5.1; it includes a newer version of the Kubernetes client.

@steve-heslouin (author):

OK, so I edited my kubeconfig to target v1beta1 instead and that worked great: I logged in and saw my constraints.
Thanks a lot @ralgozino! I will be testing it in dev environments to see how it goes, and if it works well, we will surely propose PRs to help with any improvements ;)
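(For anyone hitting the same issue, a one-line sketch of that edit; it assumes v1alpha1 appears only in the entries you want to change, and -i.bak keeps a backup of the original file:

sed -i.bak 's|client.authentication.k8s.io/v1alpha1|client.authentication.k8s.io/v1beta1|' ~/.kube/config
)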

@ralgozino (member):

Great to hear that!

We'll be waiting for your feedback 🙂


steve-heslouin commented Mar 9, 2022

- name: arn:aws:eks:......
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-east-1
      - eks
      - get-token
      - --cluster-name
      - cluster_name
      command: aws
      env:
      - name: AWS_PROFILE
        value: dev
      interactiveMode: IfAvailable
      provideClusterInfo: false

Turns out I also had some clusters that rely on the AWS CLI to authenticate, and after adding it to your image that also worked great. Maybe it'd be cool to have those two binaries installed by default?


ralgozino commented Mar 9, 2022

I need to think about it. As a first thought, I would prefer not to include them, so as not to couple GPM's version to AWS's tooling. What we can do instead is add documentation on how to do it, so that everyone can easily build the image with the versions they need.
EDIT: If we included the AWS binaries, we would need to include all the other possible options as well; the exec option in kubeconfig files is generic and allows you to use any executable.
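For reference, a sketch of a custom image that adds the AWS CLI v2 on top of GPM (assumptions: the base image is glibc-based with curl and unzip available, and it normally runs as a gpm user; adjust to the actual base image):

FROM quay.io/sighup/gatekeeper-policy-manager:v0.5.1
USER root
# Download and install the AWS CLI v2 (x86_64), then clean up the installer
RUN curl -L "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o /tmp/awscliv2.zip \
 && unzip -q /tmp/awscliv2.zip -d /tmp \
 && /tmp/aws/install \
 && rm -rf /tmp/awscliv2.zip /tmp/aws
# drop back to the unprivileged user assumed above
USER gpm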

ralgozino added the documentation, enhancement, and good first issue labels Apr 26, 2022
ralgozino added this to the v1.0.0 milestone May 5, 2022
ralgozino added a commit that referenced this issue May 12, 2022