Max surge configuration #111
Unfortunately, there is no way to specify how many pods can be created at a time. The current implementation spins up one pod at a time and doesn't take advantage of a Deployment or StatefulSet. While this simplifies key migration, it does slow things down if you're growing your cluster considerably.
I don't see it working as you described. It spins up pod after pod; it doesn't destroy any existing ones. EDIT: how many pods will the operator create during an update before it starts deleting anything?
If you're growing your cluster, why would it delete pods? Also, where did I say anything about how pods are deleted?
Ok, sorry, it appears we misunderstood each other then. I'm interested in the case where I change, for example, the CPU/memory limits of my Redis pods and apply the change via Helm.
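For illustration, the change described above would look roughly like this in a chart's values file. This is only a sketch: the `redis.resources` key path is an assumption, and the real key names depend on the chart in use.

```yaml
# values.yaml (illustrative; the exact keys depend on the chart in use)
redis:
  resources:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: "1"
      memory: 2Gi
```

Applying a change like this with `helm upgrade` alters the pod template, so every existing pod eventually has to be replaced.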
That's going to be controlled by the Kubernetes scheduler, not the operator. I would imagine PodDisruptionBudgets will come into play too. I'm trying to dig up some docs on the specifics of altering memory and CPU limits but am not finding much helpful information. https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/ is about all I've found thus far. I'm definitely curious to learn about your findings though, so please post back with your results. :)
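For reference, a PodDisruptionBudget limits how many pods may be taken down by voluntary disruptions (drains, evictions) at once. A minimal sketch, assuming an `app: redis` label; the selector and count are illustrative:

```yaml
# pdb.yaml (sketch; selector and minAvailable are illustrative)
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: redis-pdb
spec:
  minAvailable: 2        # keep at least 2 Redis pods up during voluntary disruptions
  selector:
    matchLabels:
      app: redis
```

Note that a PDB only constrains voluntary evictions; it does not control how many new pods are created, which is what maxSurge governs for Deployments.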
Is it possible to specify how many new pods can be created at one time? Similar to the maxSurge setting for rollingUpdate in Kubernetes Deployments.
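For comparison, this is how maxSurge is expressed on a standard Kubernetes Deployment, which is the behavior the question asks the operator to emulate. The values here are illustrative:

```yaml
# deployment.yaml (standard Kubernetes Deployment; values are illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  replicas: 6
  selector:
    matchLabels:
      app: example
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2          # up to 2 pods above the desired count may exist during a rollout
      maxUnavailable: 0    # no existing pod is removed until a replacement is ready
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: example
          image: nginx:1.25
```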