Issue when updating the number of replicas, no effect on the number of primary pods #574
I am posting a second message with some extra info on this. I tried to use the canary release with an HPA, but it does not change the behavior and the scenario I described above is still reproducible. The only impact of the HPA was that it spun up fewer canary pods. It's strange this has never been raised before, unless I have a misconfiguration somewhere, but apart from the scaling issue the canary deployment works well.
Have you added the HPA reference to the canary spec?
Yes I did (FYI, avd is the app name); see my autoscaling.yaml and canary.yaml.
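For context, a minimal sketch of a canary spec that references an HPA via autoscalerRef, roughly matching the setup described here; the namespace, port, and analysis values are assumptions, not the actual attached files:

apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: avd
  namespace: default        # assumed namespace
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: avd
  autoscalerRef:            # reference to the HPA defined in autoscaling.yaml
    apiVersion: autoscaling/v2beta2
    kind: HorizontalPodAutoscaler
    name: avd
  service:
    port: 8080              # assumed service port
  analysis:
    interval: 1m            # assumed analysis settings
    threshold: 5
    maxWeight: 50
    stepWeight: 10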
Actually, on second thought, I think that when an HPA is enabled the replica count is managed by the HPA rather than taken from the deployment spec. But it does not explain why it does not work when I don't use an HPA 🤔
You should remove the replicas field from the deployment spec when using an HPA.
For now, if you don't want to use autoscaling, you can use a dummy HPA with minReplicas and maxReplicas set to the same value.
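For illustration, a minimal sketch of such a dummy HPA, assuming the target Deployment is named avd and a fixed count of 4 replicas (the name, replica count, and CPU target are assumptions):

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: avd
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: avd
  minReplicas: 4            # same value as maxReplicas, so the HPA never actually scales
  maxReplicas: 4
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 99   # effectively a no-op threshold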
I tried to scale up and down using the same values for min and max replicas, and that works. I will leave the issue open so you can investigate why it does not work without a "dummy HPA". Thanks for your support!
Hello there, we just ran into this issue as well on v1.0.0. Is there any progress on this?
The same for me; it seems that Flagger doesn't track the spec.replicas change when an HPA is not used.
It may be caused by how Flagger detects changes in the spec:

// HasTargetChanged returns true if the canary deployment pod spec has changed
func (c *DeploymentController) HasTargetChanged(cd *flaggerv1.Canary) (bool, error) {
	targetName := cd.Spec.TargetRef.Name
	canary, err := c.kubeClient.AppsV1().Deployments(cd.Namespace).Get(context.TODO(), targetName, metav1.GetOptions{})
	if err != nil {
		return false, fmt.Errorf("deployment %s.%s get query error: %w", targetName, cd.Namespace, err)
	}
	return hasSpecChanged(cd, canary.Spec.Template)
}

For deployments, only changes in the pod template are recognized; a change to spec.replicas is simply ignored.
Hey @stefanprodan,
Just stumbled over this issue. |
Awesome that this is fixed so fast, @somtochiama, thanks!
Hi,
I noticed that when I change the number of replicas to scale up the app, it does not have the desired effect, as if Flagger were interfering and hijacking the new setting for the canary pods only. I did not try scaling down, but I assume it is the same.
PS: I do not use an HPA (I plan to try one out at a later stage).
Scenario to reproduce:
Let's say I have replicaCount set to 2 and I did not change it. When I deployed v1.1 of my app, there were 2 canary pods and 2 primary pods during the analysis, and at the end of the promotion there were 0 canary pods and 2 primary pods. That's fine and was expected.
Now, I increase the number of replicas to 4 and canary deploy a new version, v1.2. There were 4 canary pods and 2 primary pods during the analysis; I guess that's fine as I don't use an HPA. But at the end of the promotion, I still have 0 canary pods and only 2 primary pods.
To go further, I then deployed v1.3, keeping the number of replicas at 4. There was 1 canary pod (not sure why not 2 here?) and 2 primary pods during the analysis, and at the end of the promotion 0 canary pods and 2 primary pods. So the value 4 is totally ignored, and the behavior is quite different; I can't explain why there was 1 canary pod and not 2.
As a last test, I disabled Flagger and tried the same scenario again (i.e. replicas increased from 2 to 4) and it was fine: the new setting was taken into account, ending up with 4 pods of my app.
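For reference, a minimal sketch of the kind of change involved, assuming the app is a plain Deployment named avd (the name, labels, image, and port are assumptions); only spec.replicas is bumped from 2 to 4:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: avd
spec:
  replicas: 4               # bumped from 2; without an HPA, Flagger does not propagate this to the avd-primary deployment
  selector:
    matchLabels:
      app: avd
  template:
    metadata:
      labels:
        app: avd
    spec:
      containers:
      - name: avd
        image: example/avd:1.2   # hypothetical image tag
        ports:
        - containerPort: 8080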
I was able to reproduce the exact same scenario on my local Kubernetes cluster and also on an AWS sandbox Kubernetes cluster.
Flagger version used (the latest one): 1.0.0 RC4
Can you help?