
feat(disruption): add node notready controller #1755

Closed
2 changes: 1 addition & 1 deletion kwok/apis/crds/karpenter.kwok.sh_kwoknodeclasses.yaml
@@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
-    controller-gen.kubebuilder.io/version: v0.16.3
+    controller-gen.kubebuilder.io/version: v0.16.4
Review comment:

Can we (you) split this bump into its own commit / explain more about the context?

Author:

This bump was created by running make verify and then presumably by go generate ./... - as this is my first time working on karpenter I wasn't sure if I should commit this change. Happy to remove the bumps if it is deemed not necessary.

Reviewer:

(I would make the autogenerated changes a separate commit, then it is easy to omit if appropriate)

name: kwoknodeclasses.karpenter.kwok.sh
spec:
group: karpenter.kwok.sh
9 changes: 8 additions & 1 deletion kwok/charts/crds/karpenter.sh_nodeclaims.yaml
@@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
-    controller-gen.kubebuilder.io/version: v0.16.3
+    controller-gen.kubebuilder.io/version: v0.16.4
name: nodeclaims.karpenter.sh
spec:
group: karpenter.sh
@@ -275,6 +275,13 @@ spec:
If left undefined, the controller will wait indefinitely for pods to be drained.
pattern: ^([0-9]+(s|m|h))+$
type: string
unreachableTimeout:
default: Never
description: |-
unreachableTimeout is the duration the controller will wait
before terminating a node, measured from when the node is tainted unreachable
pattern: ^(([0-9]+(s|m|h))+)|(Never)$
type: string
required:
- nodeClassRef
- requirements
2 changes: 1 addition & 1 deletion kwok/charts/crds/karpenter.sh_nodepools.yaml
@@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
-    controller-gen.kubebuilder.io/version: v0.16.3
+    controller-gen.kubebuilder.io/version: v0.16.4
name: nodepools.karpenter.sh
spec:
group: karpenter.sh
4 changes: 2 additions & 2 deletions kwok/charts/values.yaml
@@ -45,7 +45,7 @@ podDisruptionBudget:
name: karpenter
maxUnavailable: 1
# -- SecurityContext for the pod.
podSecurityContext:
podSecurityContext:
fsGroup: 65536
# -- PriorityClass name for the pod.
priorityClassName: system-cluster-critical
@@ -91,7 +91,7 @@ controller:
# -- Repository path to the controller image.
repository: ""
# -- Tag of the controller image.
tag: ""
tag: ""
# -- SHA256 digest of the controller image.
digest: ""
# -- Additional environment variables for the controller pod.
9 changes: 8 additions & 1 deletion pkg/apis/crds/karpenter.sh_nodeclaims.yaml
@@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
-    controller-gen.kubebuilder.io/version: v0.16.3
+    controller-gen.kubebuilder.io/version: v0.16.4
name: nodeclaims.karpenter.sh
spec:
group: karpenter.sh
@@ -273,6 +273,13 @@ spec:
If left undefined, the controller will wait indefinitely for pods to be drained.
pattern: ^([0-9]+(s|m|h))+$
type: string
unreachableTimeout:
default: Never
description: |-
unreachableTimeout is the duration the controller will wait
before terminating a node, measured from when the node is tainted unreachable
pattern: ^(([0-9]+(s|m|h))+)|(Never)$
type: string
required:
- nodeClassRef
- requirements
2 changes: 1 addition & 1 deletion pkg/apis/crds/karpenter.sh_nodepools.yaml
@@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
-    controller-gen.kubebuilder.io/version: v0.16.3
+    controller-gen.kubebuilder.io/version: v0.16.4
name: nodepools.karpenter.sh
spec:
group: karpenter.sh
8 changes: 8 additions & 0 deletions pkg/apis/v1/nodeclaim.go
@@ -73,6 +73,14 @@ type NodeClaimSpec struct {
	// +kubebuilder:validation:Schemaless
	// +optional
	ExpireAfter NillableDuration `json:"expireAfter,omitempty"`
	// UnreachableTimeout is the duration the controller will wait
	// before terminating a node, measured from when the node is tainted unreachable
	// +kubebuilder:default:="Never"
	// +kubebuilder:validation:Pattern=`^(([0-9]+(s|m|h))+)|(Never)$`
	// +kubebuilder:validation:Type="string"
	// +kubebuilder:validation:Schemaless
	// +optional
	UnreachableTimeout NillableDuration `json:"unreachableTimeout,omitempty"`
}

// A node selector requirement with min values is a selector that contains values, a key, an operator that relates the key and values
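For context, the new field would sit alongside the other disruption knobs in a NodeClaim spec. An illustrative fragment (names and sibling values are placeholders, not from the PR):

```yaml
apiVersion: karpenter.sh/v1
kind: NodeClaim
metadata:
  name: example            # hypothetical
spec:
  nodeClassRef:            # required; values here are placeholders
    group: karpenter.kwok.sh
    kind: KWOKNodeClass
    name: default
  requirements: []         # required; abbreviated
  expireAfter: Never
  # Delete this NodeClaim once its Node has carried the
  # node.kubernetes.io/unreachable taint for 30 minutes.
  unreachableTimeout: 30m
```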
1 change: 1 addition & 0 deletions pkg/apis/v1/zz_generated.deepcopy.go


2 changes: 2 additions & 0 deletions pkg/controllers/controllers.go
@@ -37,6 +37,7 @@ import (
	"sigs.k8s.io/karpenter/pkg/controllers/nodeclaim/expiration"
	nodeclaimgarbagecollection "sigs.k8s.io/karpenter/pkg/controllers/nodeclaim/garbagecollection"
	nodeclaimlifecycle "sigs.k8s.io/karpenter/pkg/controllers/nodeclaim/lifecycle"
	"sigs.k8s.io/karpenter/pkg/controllers/nodeclaim/notready"
	podevents "sigs.k8s.io/karpenter/pkg/controllers/nodeclaim/podevents"
	nodepoolcounter "sigs.k8s.io/karpenter/pkg/controllers/nodepool/counter"
	nodepoolhash "sigs.k8s.io/karpenter/pkg/controllers/nodepool/hash"
@@ -68,6 +69,7 @@ func NewControllers(
		provisioning.NewNodeController(kubeClient, p),
		nodepoolhash.NewController(kubeClient),
		expiration.NewController(clock, kubeClient),
		notready.NewController(kubeClient),
		informer.NewDaemonSetController(kubeClient, cluster),
		informer.NewNodeController(kubeClient, cluster),
		informer.NewPodController(kubeClient, cluster),
100 changes: 100 additions & 0 deletions pkg/controllers/nodeclaim/notready/controller.go
@@ -0,0 +1,100 @@
/*
Copyright The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package notready

import (
	"context"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	controllerruntime "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/log"
	"sigs.k8s.io/controller-runtime/pkg/manager"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"
	"sigs.k8s.io/karpenter/pkg/metrics"

	nodeclaimutil "sigs.k8s.io/karpenter/pkg/utils/nodeclaim"

	corev1 "k8s.io/api/core/v1"

	v1 "sigs.k8s.io/karpenter/pkg/apis/v1"
)

// Controller is a NodeClaim controller that deletes a NodeClaim when its Node
// has been unreachable for longer than spec.unreachableTimeout
type Controller struct {
	kubeClient client.Client
}

// NewController constructs the not-ready disruption controller
func NewController(kubeClient client.Client) *Controller {
	return &Controller{
		kubeClient: kubeClient,
	}
}

func (c *Controller) Reconcile(ctx context.Context, nodeClaim *v1.NodeClaim) (reconcile.Result, error) {
	if nodeClaim.Spec.UnreachableTimeout.Duration == nil {
		return reconcile.Result{}, nil
	}

	node, err := nodeclaimutil.NodeForNodeClaim(ctx, c.kubeClient, nodeClaim)
	if err != nil {
		return reconcile.Result{}, nodeclaimutil.IgnoreDuplicateNodeError(nodeclaimutil.IgnoreNodeNotFoundError(err))
	}

	for _, taint := range node.Spec.Taints {
		if taint.Key == corev1.TaintNodeUnreachable {
			if taint.TimeAdded != nil {
				durationSinceTaint := time.Since(taint.TimeAdded.Time)
				if durationSinceTaint > *nodeClaim.Spec.UnreachableTimeout.Duration {
					// if the node is unreachable for too long, delete the nodeclaim
					if err := c.kubeClient.Delete(ctx, nodeClaim); err != nil {
Review comment:

  • Should something happen to the .status of the NodeClaim before deletion?
  • Should we record an Event regarding the NodeClaim with the Node as related? (I would)

						log.FromContext(ctx).V(0).Error(err, "Failed to delete NodeClaim", "node", node.Name)
						return reconcile.Result{}, err
					}
					log.FromContext(ctx).V(0).Info("Deleted NodeClaim because the node has been unreachable for more than unreachableTimeout", "node", node.Name)
					metrics.NodeClaimsDisruptedTotal.With(prometheus.Labels{
						metrics.ReasonLabel:       metrics.UnreachableReason,
						metrics.NodePoolLabel:     nodeClaim.Labels[v1.NodePoolLabelKey],
						metrics.CapacityTypeLabel: nodeClaim.Labels[v1.CapacityTypeLabelKey],
					}).Inc()
					return reconcile.Result{}, nil
				} else {
					// If the node is unreachable and the time since it became unreachable is less than the configured timeout,
					// we requeue to prevent the node from remaining in an unreachable state indefinitely
					log.FromContext(ctx).V(1).Info("Node has been unreachable for less than unreachableTimeout, requeueing", "node", node.Name)
					return reconcile.Result{RequeueAfter: *nodeClaim.Spec.UnreachableTimeout.Duration}, nil
				}
			}
		}
	}

	return reconcile.Result{}, nil
}

func (c *Controller) Register(_ context.Context, m manager.Manager) error {
	builder := controllerruntime.NewControllerManagedBy(m)
	return builder.
		Named("nodeclaim.notready").
		For(&v1.NodeClaim{}).
		Watches(
			&corev1.Node{},
			nodeclaimutil.NodeEventHandler(c.kubeClient),
		).
		Complete(reconcile.AsReconciler(m.GetClient(), c))
}
135 changes: 135 additions & 0 deletions pkg/controllers/nodeclaim/notready/suite_test.go
@@ -0,0 +1,135 @@
/*
Copyright The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package notready_test

import (
	"context"
	"testing"
	"time"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	clock "k8s.io/utils/clock/testing"
	"sigs.k8s.io/controller-runtime/pkg/cache"
	"sigs.k8s.io/controller-runtime/pkg/client"

	"sigs.k8s.io/karpenter/pkg/apis"
	v1 "sigs.k8s.io/karpenter/pkg/apis/v1"
	"sigs.k8s.io/karpenter/pkg/controllers/nodeclaim/notready"
	"sigs.k8s.io/karpenter/pkg/metrics"
	"sigs.k8s.io/karpenter/pkg/operator/options"
	"sigs.k8s.io/karpenter/pkg/test"
	. "sigs.k8s.io/karpenter/pkg/test/expectations"
	"sigs.k8s.io/karpenter/pkg/test/v1alpha1"
	. "sigs.k8s.io/karpenter/pkg/utils/testing"
)

var ctx context.Context
var notReadyController *notready.Controller
var env *test.Environment
var fakeClock *clock.FakeClock

func TestAPIs(t *testing.T) {
	ctx = TestContextWithLogger(t)
	RegisterFailHandler(Fail)
	RunSpecs(t, "NotReady Controller Suite")
}

var _ = BeforeSuite(func() {
	fakeClock = clock.NewFakeClock(time.Now())
	env = test.NewEnvironment(test.WithCRDs(apis.CRDs...), test.WithCRDs(v1alpha1.CRDs...), test.WithFieldIndexers(func(c cache.Cache) error {
		return c.IndexField(ctx, &corev1.Node{}, "spec.providerID", func(obj client.Object) []string {
			return []string{obj.(*corev1.Node).Spec.ProviderID}
		})
	}))
	ctx = options.ToContext(ctx, test.Options())
	notReadyController = notready.NewController(env.Client)
})

var _ = AfterSuite(func() {
	Expect(env.Stop()).To(Succeed(), "Failed to stop environment")
})

var _ = BeforeEach(func() {
	ctx = options.ToContext(ctx, test.Options())
	fakeClock.SetTime(time.Now())
})

var _ = AfterEach(func() {
	ExpectCleanedUp(ctx, env.Client)
})

var _ = Describe("NotReady", func() {
	var nodePool *v1.NodePool
	var nodeClaim *v1.NodeClaim
	var node *corev1.Node
	BeforeEach(func() {
		nodePool = test.NodePool()
		nodeClaim, node = test.NodeClaimAndNode(v1.NodeClaim{
			ObjectMeta: metav1.ObjectMeta{
				Labels: map[string]string{v1.NodePoolLabelKey: nodePool.Name},
			},
			Spec: v1.NodeClaimSpec{
				UnreachableTimeout: v1.MustParseNillableDuration("10m"),
			},
		})
		metrics.NodeClaimsDisruptedTotal.Reset()
	})
	It("should remove NodeClaim when the node has an unreachable taint for over the UnreachableTimeout duration", func() {
		node.Spec.Taints = []corev1.Taint{
			{
				Key:       corev1.TaintNodeUnreachable,
				Effect:    corev1.TaintEffectNoSchedule,
				TimeAdded: &metav1.Time{Time: fakeClock.Now().Add(-12 * time.Minute)},
			},
		}
		ExpectApplied(ctx, env.Client, nodeClaim, node)
		ExpectObjectReconciled(ctx, env.Client, notReadyController, nodeClaim)
		ExpectMetricCounterValue(metrics.NodeClaimsDisruptedTotal, 1, map[string]string{
			metrics.ReasonLabel: metrics.UnreachableReason,
			"nodepool":          nodePool.Name,
		})
		ExpectNotFound(ctx, env.Client, nodeClaim)
	})
	It("should not remove NodeClaim if unreachable taint is less than the UnreachableTimeout duration", func() {
		node.Spec.Taints = []corev1.Taint{
			{
				Key:       corev1.TaintNodeUnreachable,
				Effect:    corev1.TaintEffectNoSchedule,
				TimeAdded: &metav1.Time{Time: fakeClock.Now().Add(-7 * time.Minute)},
			},
		}
		ExpectApplied(ctx, env.Client, nodeClaim, node)
		ExpectObjectReconciled(ctx, env.Client, notReadyController, nodeClaim)
		nodeClaim = ExpectExists(ctx, env.Client, nodeClaim)
	})
	It("should not remove the NodeClaim when UnreachableTimeout is disabled", func() {
		nodeClaim.Spec.UnreachableTimeout = v1.MustParseNillableDuration("Never")
		node.Spec.Taints = []corev1.Taint{
			{
				Key:       corev1.TaintNodeUnreachable,
				Effect:    corev1.TaintEffectNoSchedule,
				TimeAdded: &metav1.Time{Time: fakeClock.Now().Add(-12 * time.Minute)},
			},
		}
		ExpectApplied(ctx, env.Client, nodeClaim, node)
		ExpectObjectReconciled(ctx, env.Client, notReadyController, nodeClaim)
		nodeClaim = ExpectExists(ctx, env.Client, nodeClaim)
	})
})
1 change: 1 addition & 0 deletions pkg/metrics/constants.go
@@ -33,6 +33,7 @@ const (
	// Reasons for CREATE/DELETE shared metrics
	ProvisionedReason = "provisioned"
	ExpiredReason     = "expired"
	UnreachableReason = "unreachable"
)

// DurationBuckets returns a []float64 of default threshold values for duration histograms.
@@ -3,7 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
-    controller-gen.kubebuilder.io/version: v0.16.3
+    controller-gen.kubebuilder.io/version: v0.16.4
name: testnodeclasses.karpenter.test.sh
spec:
group: karpenter.test.sh