Max surge configuration #111

Open
mgrzeszczak opened this issue Aug 4, 2023 · 5 comments

Comments

@mgrzeszczak

Is it possible to specify how many new pods can be created at one time? Something similar to the maxSurge setting for rollingUpdate in Kubernetes Deployments.
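
For reference, this is the standard Deployment-level knob I mean (a generic Kubernetes example; the name, labels, and image are placeholders, not anything the operator exposes):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example            # placeholder name
spec:
  replicas: 9
  selector:
    matchLabels:
      app: example          # placeholder label
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2           # at most 2 pods above the desired replica count during an update
      maxUnavailable: 0     # never drop below the desired replica count
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: example
          image: redis:7    # placeholder image
```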

@cin
Contributor

cin commented Aug 4, 2023

Unfortunately, there is no way to specify how many pods can be created at a time. The current implementation spins up one pod at a time and doesn't take advantage of a Deployment or StatefulSet. While this simplifies key migration, it does slow things down if you're growing your cluster considerably.

@mgrzeszczak
Author

mgrzeszczak commented Aug 4, 2023

I don't see it working as you described. It spins up pod after pod, but it doesn't destroy any of the existing ones.
At what point will it start destroying some of the existing pods?

EDIT:
Let's assume I have a cluster of 3 master nodes with replication factor = 2.
That gives me 9 pods (3 masters × 3 pods per shard).

How many pods will the operator create during an update before it starts deleting anything?

@cin
Contributor

cin commented Aug 4, 2023

If you're growing your cluster, why would it delete pods? Also, where did I say anything about how pods are deleted?

@mgrzeszczak
Author

Ok, sorry, it appears we misunderstood each other then. I'm interested in the case where I change, for example, the CPU/memory limits of my Redis pods and apply the change via Helm.
Because we have limits on the namespace, I need to know how many additional pods will be created during such an update.
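
To illustrate, the namespace limits I'm referring to are enforced by something like this ResourceQuota (the name, namespace, and numbers here are just hypothetical examples):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: redis-quota        # hypothetical name
  namespace: redis         # hypothetical namespace
spec:
  hard:
    pods: "12"             # hard cap on pod count in the namespace
    limits.cpu: "24"       # total CPU limits across all pods
    limits.memory: 48Gi    # total memory limits across all pods
```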

@cin
Contributor

cin commented Aug 4, 2023

That's going to be controlled by the Kubernetes scheduler, not the operator. I would imagine PodDisruptionBudgets will come into play too. I'm trying to dig up some docs on the specifics of altering memory and CPU limits but am not finding much helpful info; https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-constraint-namespace/ is about all I've found thus far. I'm definitely curious to hear about your findings, so please post back with your results. :)
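
As a sketch of the kind of PodDisruptionBudget I mean (the name and label selector are hypothetical and would need to match whatever labels your Redis pods actually carry):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: redis-pdb              # hypothetical name
spec:
  minAvailable: 8              # example: always keep 8 of the 9 pods running during voluntary disruptions
  selector:
    matchLabels:
      app: redis-cluster       # hypothetical label; match your actual pod labels
```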
