Error: kops upgrade is required #193

Closed · kthompson opened this issue May 6, 2021 · 8 comments

Comments
@kthompson

When creating a new Terraform config with the kops provider enabled, I get the error "Error: kops upgrade is required". I have the latest version of kops installed, so I am not sure where this error is coming from.

Example config:

terraform {
  required_providers {
    kops = {
      source = "eddycharly/kops"
      //version = "1.19.0-alpha.6"
      version = "1.18.0-beta.2"
    }
  }
}
provider "aws" {
  region = "us-west-1"
}
provider "kops" {
  state_store = "s3://redacted"
  // optionally set up your cloud provider access config
  aws {
    profile = "default"
  }
}
resource "aws_vpc" "test-k8s" {
  cidr_block       = "10.10.0.0/16"
  instance_tenancy = "default"
  tags = {
    Name = "test-k8s"
  }
}
resource "aws_subnet" "k8s-1" {
  vpc_id     = aws_vpc.test-k8s.id
  cidr_block = "10.10.1.0/24"
  availability_zone = "us-west-1b"
  tags = {
    Name = "k8s-1"
  }
}
resource "aws_subnet" "k8s-2" {
  vpc_id     = aws_vpc.test-k8s.id
  cidr_block = "10.10.2.0/24"
  availability_zone = "us-west-1b"
  tags = {
    Name = "k8s-2"
  }
}
resource "aws_subnet" "k8s-3" {
  vpc_id     = aws_vpc.test-k8s.id
  cidr_block = "10.10.3.0/24"
  availability_zone = "us-west-1a"
  tags = {
    Name = "k8s-3"
  }
}
resource "aws_route53_zone" "private" {
  name = "test-k8s"
  vpc {
    vpc_id = aws_vpc.test-k8s.id
  }
}
resource "kops_cluster" "cluster" {
  name                 = "test-k8s"
  admin_ssh_key        = file("id_ed25519.pub")
  cloud_provider       = "aws"
  //kubernetes_version   = "stable"
  dns_zone             = "test-k8s"
  network_id           =  aws_vpc.test-k8s.id
  iam {
   legacy = false
  }
  networking {
    calico {}
  }
  topology {
    masters = "private"
    nodes   = "private"
    dns {
      type = "Private"
    }
  }
  # cluster subnets
  subnet {
    name        = "test-private-0"
    provider_id = aws_subnet.k8s-1.id
    type        = "Private"
    zone        = "us-west-1b"
    region      = "us-west-1"
  }
  subnet {
    name        = "test-private-1"
    provider_id = aws_subnet.k8s-2.id
    type        = "Private"
    zone        = "us-west-1b"
    region      = "us-west-1"
  }
  subnet {
    name        = "test-private-2"
    provider_id = aws_subnet.k8s-3.id
    type        = "Private"
    zone        = "us-west-1a"
    region      = "us-west-1"
  }
  # etcd clusters
  etcd_cluster {
    name            = "main"
    member {
      name             = "master-0"
      instance_group   = "master-0"
    }
    member {
      name             = "master-1"
      instance_group   = "master-1"
    }
    member {
      name             = "master-2"
      instance_group   = "master-2"
    }
  }
  etcd_cluster {
    name            = "events"
    member {
      name             = "master-0"
      instance_group   = "master-0"
    }
    member {
      name             = "master-1"
      instance_group   = "master-1"
    }
    member {
      name             = "master-2"
      instance_group   = "master-2"
    }
  }
}
resource "kops_instance_group" "master-0" {
  cluster_name = kops_cluster.cluster.name
  name         = "master-0"
  role         = "Master"
  min_size     = 1
  max_size     = 1
  machine_type = "t2.micro"
  subnets      = ["test-private-0"]
  depends_on   = [kops_cluster.cluster]
}
resource "kops_instance_group" "master-1" {
  cluster_name = kops_cluster.cluster.name
  name         = "master-1"
  role         = "Master"
  min_size     = 1
  max_size     = 1
  machine_type = "t2.micro"
  subnets      = ["test-private-1"]
  depends_on   = [kops_cluster.cluster]
}
resource "kops_instance_group" "master-2" {
  cluster_name = kops_cluster.cluster.name
  name         = "master-2"
  role         = "Master"
  min_size     = 1
  max_size     = 1
  machine_type = "t2.micro"
  subnets      = ["test-private-2"]
  depends_on   = [kops_cluster.cluster]
}
resource "kops_cluster_updater" "updater" {
  cluster_name = kops_cluster.cluster.name
  keepers = {
    cluster  = kops_cluster.cluster.id,
    master-0 = kops_instance_group.master-0.id,
    master-1 = kops_instance_group.master-1.id,
    master-2 = kops_instance_group.master-2.id
  }
  rolling_update {
    skip                = false
    fail_on_drain_error = true
    fail_on_validate    = true
    validate_count      = 1
  }
  validate {
    skip = false
  }
  # ensures rolling update happens after the cluster and instance groups are up to date
  depends_on   = [
    kops_cluster.cluster,
    kops_instance_group.master-0,
    kops_instance_group.master-1,
    kops_instance_group.master-2
  ]
}
@kthompson changed the title from "Error: Error: kops upgrade is required" to "Error: kops upgrade is required" on May 6, 2021
@eddycharly (Owner)

Hello @kthompson, thanks for raising the issue.

Are you showing the real config here?
I don't think kOps can start a cluster without at least a worker instance group (you seem to have only masters).
Also, remember that you will need internet access (either through an internet gateway or a NAT gateway).
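
As a minimal sketch of such a worker group (modeled on the master-0 block above; the name, sizing, and machine type are illustrative, not part of the original report):

resource "kops_instance_group" "nodes" {
  cluster_name = kops_cluster.cluster.name
  name         = "nodes"
  role         = "Node"
  min_size     = 1
  max_size     = 2
  machine_type = "t2.micro"
  # reuse the subnet names declared in the kops_cluster resource
  subnets      = ["test-private-0", "test-private-1", "test-private-2"]
  depends_on   = [kops_cluster.cluster]
}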

Regarding your error, it looks like you created the cluster first with a newer version of kOps and enabled Terraform management afterwards.
The provider is compiled with kOps 1.18.3 and does not rely on the kOps version installed locally.

If that's the case, kOps detects that the kOps version embedded in the provider is lower than the kOps CLI version used to create the cluster.
By default it does not allow using an older version, though this can be configured.
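
For example (a sketch, assuming the cluster was originally created with a kOps 1.19 CLI), pinning the provider to a matching 1.19-series release in the required_providers block above would bring the two versions back in line:

terraform {
  required_providers {
    kops = {
      source  = "eddycharly/kops"
      # assumed: this release embeds a kOps version at least as new as the
      # CLI that created the cluster
      version = "1.19.0-alpha.6"
    }
  }
}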

Please confirm your use case and I'll see how I can help with it.

@kthompson (Author)

kthompson commented May 6, 2021

For the most part I just copied the example from Basic while I was researching whether this Terraform provider would be an option for us. I now realize the Basic example also does not have an instance group with the Node role. Let me try to wipe everything and add the node instance group.

@kthompson (Author)

kthompson commented May 6, 2021

Ok, so I made a new test repo; again, I am just looking to test out the provider. Now I am getting a different error that I am unable to figure out.

The test repo is here:
https://github.com/kthompson/kops-cluster-test/blob/main/main.tf
In the repo I am using Terraform 0.15.2.

The new error is:

╷
│ Error: file already exists
│
│   with kops_cluster.cluster,
│   on main.tf line 63, in resource "kops_cluster" "cluster":
│   63: resource "kops_cluster" "cluster" {
│

I am trying to look through the code and I imagine it's failing somewhere in here, but I can't really tell where:

func CreateCluster(name, adminSshKey string, spec kops.ClusterSpec, clientset simple.Clientset) (*Cluster, error) {

@eddycharly (Owner)

From the error, it looks like the files in the state store already exist.

provider "kops" {
  # Configuration options
  state_store = "kevin-test-k8s-cluster"
}

Your state store should be something like s3://<name of your bucket>. Can you look in the bucket and make sure the files don't exist before creating the cluster?
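
For example (a sketch, assuming the bucket itself is actually named kevin-test-k8s-cluster):

provider "kops" {
  # prefix the bucket name with the s3:// scheme
  state_store = "s3://kevin-test-k8s-cluster"
}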

@kthompson (Author)

Ok, I updated the state store (it was empty). Now the cluster resource is created, as well as the instance groups; however, kops_cluster_updater.updater is now failing:

kops_instance_group.master-0: Creating...
kops_instance_group.master-1: Creating...
kops_instance_group.master-2: Creating...
kops_instance_group.nodes: Creation complete after 0s [id=cluster.example.com/nodes]
kops_instance_group.master-1: Creation complete after 1s [id=cluster.example.com/master-1]
kops_instance_group.master-0: Creation complete after 1s [id=cluster.example.com/master-0]
kops_instance_group.master-2: Creation complete after 1s [id=cluster.example.com/master-2]
kops_cluster_updater.updater: Creating...
╷
│ Error: spec.subnets[0]: Not found: "subnet-0e69ac93cc8922f0c"
│
│   with kops_cluster_updater.updater,
│   on main.tf line 210, in resource "kops_cluster_updater" "updater":
│  210: resource "kops_cluster_updater" "updater" {
│
╵

@eddycharly (Owner)

eddycharly commented May 6, 2021

Yeah, kOps abstracts the subnet concept.
You should use the name you declared in your cluster, not the AWS subnet ID directly.

resource "kops_cluster" "cluster" {
  ...
  # cluster subnets
  subnet {
    name        = "test-private-0"
    provider_id = aws_subnet.k8s-1.id
    type        = "Private"
    zone        = "us-west-1b"
    region      = "us-west-1"
  }
  ...
}

resource "kops_instance_group" "master-0" {
  ...
  # use the name of the subnet you used when defining the cluster, not the aws subnet id below
  subnets      = ["test-private-0"]
  ...
}

@kthompson (Author)

Making that change seemed to help. Feel free to copy my test case as an AWS example if you would like, and feel free to close this issue. Thanks for the help 👍

@eddycharly (Owner)

Glad it helped!

I will take time to improve the examples with a fully working sample tomorrow.
