Using APIServerFlags? #1136

Closed

kylegoch opened this issue Feb 15, 2018 · 11 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@kylegoch

I saw this:
https://github.com/kubernetes-incubator/kube-aws/blob/226827e26fc76df62ab94333dc3b1b7bbeb769cd/core/controlplane/config/templates/cloud-config-controller#L2088

And was wondering where exactly I could set these extra APIServer flags. The documentation doesn't mention them at all, so I wasn't sure.

Thanks!

@whereisaaron

Me too, @kylegoch. I want to add these flags so that API extensions work, but I couldn't find any mapping from cluster.yaml to APIServerFlags in the code or documentation.

    - --requestheader-client-ca-file=/etc/kubernetes/ssl/ca.pem
    - --requestheader-allowed-names=
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User

@ktateish

I found a way to use .APIServerFlags: you can set it with kube-aws plugins.
The test code test/integration/plugin_test.go in the kube-aws repo will help you understand how to use it.

  1. Create the aggregator plugin directory
    Place a plugins directory under the directory that contains cluster.yaml, then create the aggregator plugin directory inside it:
$ mkdir -p plugins/aggregator
  2. Create plugin.yaml
$ cat > plugins/aggregator/plugin.yaml <<'EOF'
metadata:
  name: aggregator
  version: 0.0.1
spec:
  values:
  configuration:
    cloudformation:
      stacks:
        controlPlane:
          resources:
            append:
              inline: "{}"
        nodePool:
          resources:
            append:
              inline: "{}"
        root:
          resources:
            append:
              inline: "{}"
    kubernetes:
      apiserver:
        flags:
        - name: "requestheader-client-ca-file"
          value: "/etc/kubernetes/ssl/ca.pem"
        - name: "requestheader-allowed-names"
          value: ""
        - name: "requestheader-extra-headers-prefix"
          value: "X-Remote-Extra-"
        - name: "requestheader-group-headers"
          value: "X-Remote-Group"
        - name: "requestheader-username-headers"
          value: "X-Remote-User"
        - name: "proxy-client-cert-file"
          value: "/etc/kubernetes/ssl/proxy-client.pem"
        - name: "proxy-client-key-file"
          value: "/etc/kubernetes/ssl/proxy-client-key.pem"
    node:
      roles:
        controller:
          storage:
            files:
            - path: /etc/kubernetes/ssl/proxy-client.pem
              permissions: 0644
              contents:
                source:
                  path: assets/proxy-client.pem
            - path: /etc/kubernetes/ssl/proxy-client-key.pem
              permissions: 0600
              contents:
                source:
                  path: assets/proxy-client-key.pem
EOF
  3. Place the assets (the certs for the proxy client)
$ mkdir plugins/aggregator/assets
$ cp /path/to/proxy-client.pem  /path/to/proxy-client-key.pem plugins/aggregator/assets
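At this point the plugin directory should look like this (a sketch of the expected layout, given the steps above):
$ find plugins -type f
plugins/aggregator/plugin.yaml
plugins/aggregator/assets/proxy-client.pem
plugins/aggregator/assets/proxy-client-key.pem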
  4. Enable the plugin in cluster.yaml
# Add the following lines to your cluster.yaml
kubeAwsPlugins:
  aggregator:
    enabled: true
  5. Render the stack, then validate and update, as sketched below.
    (I'm not sure the render stack step is needed.)
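For reference, that last step looks roughly like this, assuming the usual kube-aws workflow (depending on your setup you may need flags such as --s3-uri):
$ kube-aws render stack   # possibly unnecessary, per the note above
$ kube-aws validate
$ kube-aws update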

@whereisaaron

Nice @ktateish! That looks a lot nicer than patching the generated stack like I am doing!

@ktateish are you using kube-aws 0.9.9 or a master build?

@mumoshu this looks good. Has @ktateish used plugins as intended? Anything we might have missed?

@ktateish

I'm using 0.9.9 (plus the following patch to fix #1171):

--- a/core/nodepool/cluster/cluster.go
+++ b/core/nodepool/cluster/cluster.go
@@ -200,7 +201,6 @@ func (c *ClusterRef) validateWorkerRootVolume(ec2Svc ec2CreateVolumeService) err
 	workerRootVolume := &ec2.CreateVolumeInput{
 		DryRun:           aws.Bool(true),
 		AvailabilityZone: aws.String(c.Subnets[0].AvailabilityZone),
-		Iops:             aws.Int64(int64(c.RootVolume.IOPS)),
 		Size:             aws.Int64(int64(c.RootVolume.Size)),
 		VolumeType:       aws.String(c.RootVolume.Type),
 	}
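For anyone wanting to reproduce this, applying the patch is a matter of saving the diff above and using git apply (a sketch; the v0.9.9 tag name is an assumption):
$ git clone https://github.com/kubernetes-incubator/kube-aws
$ cd kube-aws
$ git checkout v0.9.9                 # assumed tag name for the 0.9.9 release
$ git apply /path/to/fix-1171.patch   # the diff quoted above, saved locally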

@ArchiFleKs

Could we generate a CA in addition to the ca.pem for everything that needs to use the API aggregator, as per kubernetes-sigs/metrics-server#22?

I'm not sure it is related, but I'm trying to use the AWS Service Catalog, and the catalog API server also needs --requestheader-client-ca-file=/etc/kubernetes/ssl/ca.pem, which is supposed to be a different CA from the default cluster CA.

@mumoshu

mumoshu commented Jun 11, 2018

@mumoshu this looks good, has @ktateish used plugins as intended? Anything we might have missed?

Definitely yes! I believe I have not documented the feature at all. I'm surprised and impressed that @ktateish managed to use it 👍

FYI:

render stack, validate and update
(I'm not sure render stack is needed)

render stack isn't needed. You can already see .Plugins references across the cloud-configs, which consume plugins on kube-aws validate/update/up.
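If you want to see those references for yourself, something like this against the kube-aws source tree should surface them (a sketch; the template path is taken from the link at the top of this issue):
$ grep -rn '\.Plugins' core/controlplane/config/templates/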

@mumoshu

mumoshu commented Jun 11, 2018

@ArchiFleKs We don't have a feature to instruct kube-aws to generate arbitrary certs and keys according to the configuration, if that's what you're asking for. But I definitely see the value in it!

Would you mind raising a dedicated feature request for that?

Anyway, you'd need to generate your certs and keys on your own by running openssl, cfssl, or the like.
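For example, a minimal openssl sketch that produces a dedicated requestheader CA plus a proxy-client cert signed by it (all file names and subjects here are illustrative, not anything kube-aws expects):
$ # dedicated CA for API aggregation, separate from the cluster CA
$ openssl genrsa -out requestheader-ca-key.pem 2048
$ openssl req -x509 -new -nodes -key requestheader-ca-key.pem \
    -days 3650 -subj "/CN=aggregator-ca" -out requestheader-ca.pem
$ # proxy-client key and cert, signed by that CA
$ openssl genrsa -out proxy-client-key.pem 2048
$ openssl req -new -key proxy-client-key.pem \
    -subj "/CN=aggregator-proxy-client" -out proxy-client.csr
$ openssl x509 -req -in proxy-client.csr -CA requestheader-ca.pem \
    -CAkey requestheader-ca-key.pem -CAcreateserial -days 365 \
    -out proxy-client.pem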

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 24, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 24, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
