test: Set num-nodes flag #16176
@@ -21,6 +21,7 @@ import (
 	"fmt"
 	"os"
 	"os/exec"
+	"strconv"
 	"strings"

 	"github.com/octago/sflags/gen/gpflag"
@@ -187,11 +188,6 @@ func (t *Tester) addNodeIG() error {
 		return err
 	}

-	// Skip this function for non gce clusters
-	if cluster.Spec.LegacyCloudProvider != "gce" {
-		return nil
-	}
-
 	igs, err := kops.GetInstanceGroups("kops", cluster.Name, nil)
 	if err != nil {
 		return err
@@ -203,6 +199,15 @@ func (t *Tester) addNodeIG() error {
 			ig = v
 		}
 	}
+	numNodes := int(*ig.Spec.MaxSize) // we assume that MinSize = MaxSize; this is true for e2e testing
Review thread on the numNodes line:

[Review comment] This will be the size of the last node InstanceGroup. Should we instead sum the max size of all node InstanceGroups?

[Reply] No, ginkgo wants the number of all nodes except the master/control-plane IG. When we use kops for e2e testing we assume there are exactly two IGs (one master and one worker). If we added the master IG to the count, you'd have an extra node that you can't schedule workloads onto without tweaking manifests to tolerate control-plane taints.

[Reply] Ah, I was thinking of the case where there are multiple node IGs. This tester code can be used in cluster configurations that have multiple node IGs, so I don't know whether your assumption always holds. (A summing variant is sketched after this diff.)
+	klog.Infof("Setting -num-nodes=%v", numNodes)
+	t.TestArgs += " -num-nodes=" + strconv.Itoa(numNodes)
+
+	// Skip the rest of this function for non gce clusters
+	if cluster.Spec.LegacyCloudProvider != "gce" {
+		return nil
+	}
+
 	nodeTag := gce.TagForRole(cluster.ObjectMeta.Name, unversioned.InstanceGroupRoleNode)
 	klog.Infof("Setting --node-tag=%s", nodeTag)
 	t.TestArgs += " --node-tag=" + nodeTag
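
Following up on the review thread above: a variant that sums MaxSize across every node-role InstanceGroup, rather than taking whichever IG the loop saw last, might look like the sketch below. It is illustrative only, not the PR's code: the helper name countNodes is made up, and it assumes the kops API types already used in this diff (the package aliased as "unversioned", with Spec.MaxSize as a pointer, hence the nil guard that the PR's direct dereference does not have).

// countNodes is a hypothetical helper: it sums MaxSize over all
// node-role InstanceGroups instead of using only the last one seen,
// so master/control-plane IGs never inflate the count.
func countNodes(igs []*unversioned.InstanceGroup) int {
	numNodes := 0
	for _, ig := range igs {
		if ig.Spec.Role != unversioned.InstanceGroupRoleNode {
			continue // skip master/control-plane groups
		}
		if ig.Spec.MaxSize != nil {
			numNodes += int(*ig.Spec.MaxSize) // still assumes MinSize == MaxSize
		}
	}
	return numNodes
}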
[Review comment] I tried to adjust the prow job generator and it caused a big mess. We run Ubuntu by default, so the test would need to run most of the time. It is easier to skip it; I will open a PR in k/k now to park this behind a Feature tag.
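
For context on "park this behind a Feature tag": in k/k e2e, tests whose names carry a [Feature:...] marker are skipped by default (CI jobs typically pass --ginkgo.skip=\[Feature:.+\]) and only run in jobs that explicitly focus on them. A minimal sketch, with the tag name ExampleFeature invented for illustration:

package e2e

import "github.com/onsi/ginkgo"

// Sketch: a test gated behind a [Feature:...] tag. Default e2e jobs
// skip names matching \[Feature:.+\]; a dedicated job opts in with
// --ginkgo.focus=\[Feature:ExampleFeature\].
var _ = ginkgo.Describe("[Feature:ExampleFeature]", func() {
	ginkgo.It("runs only in jobs that focus on this feature", func() {
		// test body
	})
})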