Support for Multiple Kyma Runtime Clusters in a single VPC #18772
Comments
Could we add a requirement to choose which node network subnets to use? Or even the option to provide one broad range and have Kyma slice that big range into smaller ranges, one for each of the clusters?
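A minimal sketch of what such slicing could look like, assuming a single configured super-range that is cut into equally sized per-cluster node subnets (the /16 and /22 sizes below are only illustrative assumptions):

```python
import ipaddress

def slice_vpc_range(vpc_cidr: str, cluster_prefix: int, count: int):
    """Split one broad VPC range into per-cluster node subnets.

    vpc_cidr       -- the broad range handed to Kyma (example value only)
    cluster_prefix -- prefix length of each per-cluster node subnet
    count          -- how many clusters to carve out
    """
    vpc = ipaddress.ip_network(vpc_cidr)
    subnets = vpc.subnets(new_prefix=cluster_prefix)
    return [next(subnets) for _ in range(count)]

# Example: a /16 VPC sliced into /22 node subnets for 4 clusters.
for i, subnet in enumerate(slice_vpc_range("10.250.0.0/16", 22, 4)):
    print(f"cluster-{i}: nodes {subnet}")
```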
Besides the already mentioned points, we would like to be able to configure the size of the VPC as well as of the clusters. The initial idea was to use … Not sure how fine-granular you want to expose these things as configuration options, but as these are mostly based on assumptions, the exact numbers might change.
An additional requirement is that we can peer two (or more) of these VPCs together to extend a single installation beyond 64 clusters.
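For illustration, a ceiling like the 64 clusters mentioned above falls directly out of the chosen prefix lengths; the sizes used below (a /16 VPC, /22 per-cluster node subnets) are assumptions, not the actual configuration:

```python
def clusters_per_vpc(vpc_prefix: int, cluster_prefix: int) -> int:
    """Number of equally sized per-cluster subnets that fit into one VPC range."""
    return 2 ** (cluster_prefix - vpc_prefix)

# Assumed sizes: a /16 VPC sliced into /22 per-cluster node subnets.
print(clusters_per_vpc(16, 22))      # 64 clusters per VPC
# Peering a second, identically sized VPC doubles the capacity.
print(2 * clusters_per_vpc(16, 22))  # 128 clusters across two peered VPCs
```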
Kyma clusters don't benefit from this feature. This feature is required to support Gardener in the context of Korifi and depends on certain constraints that can be met in Korifi but not in Kyma. The main use case is sharing backing services between Korifi and CF Classic by providing private L3 connectivity to our existing multi-tenant backing service networks via peering.

Since the backing service networks do not implement any tenant isolation, tenant isolation is expected to be done by packet egress filtering in the client, i.e. the Kubernetes clusters (see the sketch after this comment). This requires a separation of admin and dev roles: a developer must not be allowed to overwrite the network policies that implement egress filtering. Kyma violates this constraint. Although there might be a more complex way to implement ingress filtering on the backing services side, this is difficult as long as pods/containers (of different tenants, as in Korifi) share IP addresses because of SNAT.

I don't see a need for private cluster communication in our current context. The reason we want to have multiple clusters within the same VPC is to simplify peering and IPAM management. The clusters are still set up without externally routable pod addresses (although Gardener does support this, it would only make sense when using IPv6). Communication between clusters can be done privately within that VPC, but the clusters still require edge gateways for egress and ingress. The main role for Kyma in that scenario might be the reuse of existing Kyma functionality.

The feature needs:
I would expect that Kyma already has machinery for shoot manipulation that can be used or extended for 2. Therefore, the goals should not focus on Kyma as the supported target but on Gardener.
Unfortunately, we cannot use that feature for Kyma. We expect existing Kyma functionality to help implement that feature. NOTE: We would need bigger investments into BTP networking across different groups to enable sharing (non-public) backing services across all runtimes, i.e. Kyma and CF (Classic and Korifi). A project like Super-6 could solve that issue.
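As a rough illustration of the egress-filtering constraint discussed above, here is a minimal sketch (using the Kubernetes Python client; the namespace, CIDR, and policy name are made-up placeholders) of a deny-by-default egress policy that only allows traffic to one backing-service network. The point of the constraint is that developers must not be able to modify or delete such a policy:

```python
from kubernetes import client, config

def apply_egress_allowlist(namespace: str, backing_service_cidr: str):
    """Create an egress policy that only allows traffic to one backing-service CIDR.

    All names and CIDRs here are illustrative placeholders, not the actual setup.
    """
    config.load_kube_config()  # or load_incluster_config() when running inside a cluster
    policy = client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="tenant-egress-allowlist", namespace=namespace),
        spec=client.V1NetworkPolicySpec(
            pod_selector=client.V1LabelSelector(),  # select all pods in the namespace
            policy_types=["Egress"],                # deny all egress except the rules below
            egress=[
                client.V1NetworkPolicyEgressRule(
                    to=[client.V1NetworkPolicyPeer(
                        ip_block=client.V1IPBlock(cidr=backing_service_cidr)
                    )]
                )
            ],
        ),
    )
    client.NetworkingV1Api().create_namespaced_network_policy(namespace, policy)

# Example (placeholder values): allow egress only to the shared backing-service network.
# apply_egress_allowlist("tenant-a", "10.10.0.0/16")
```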
Currently, each Kyma Runtime cluster is provisioned in its own private VPC, which isolates resources but limits inter-cluster connectivity and complicates peering with shared backing services (e.g., databases, message queues). This Epic introduces the ability to create multiple Kyma Runtime clusters within a single VPC.
By enabling inter-cluster connectivity while maintaining private network access (no internet exposure) and simplifying resource peering (e.g., one-time VPC peering for shared services), this feature addresses critical customer requirements for scalability and flexibility.
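For context on the "one-time VPC peering" step, a minimal sketch of what that could look like on AWS with boto3; the region and VPC IDs are placeholders, and the actual hyperscaler and tooling are not specified in this Epic:

```python
import boto3

def peer_shared_services_vpc(region: str, kyma_vpc_id: str, services_vpc_id: str) -> str:
    """Request and accept a peering between the shared Kyma VPC and a backing-services VPC.

    All IDs below are placeholders; this assumes both VPCs live in the same account and region.
    """
    ec2 = boto3.client("ec2", region_name=region)
    peering = ec2.create_vpc_peering_connection(
        VpcId=kyma_vpc_id,          # the single VPC hosting multiple Kyma clusters
        PeerVpcId=services_vpc_id,  # the VPC with the shared backing services
    )
    peering_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]
    ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=peering_id)
    return peering_id

# peer_shared_services_vpc("eu-central-1", "vpc-0kymaexample", "vpc-0servicesexample")
```

Note that this only establishes the peering connection; route table entries on both sides would still be needed for traffic to flow.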
Goals

Requirements
- A parameter (VPC Name) in the Kyma Runtime provisioning flow:

Acceptance Criteria
- … when the VPC Name parameter is omitted.

Out of Scope
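To make the requirement concrete, a sketch of how such a parameter might surface in a provisioning request; the endpoint, the field name vpcName, and all values are purely hypothetical, since the Epic does not define the actual API shape:

```python
import requests

# Hypothetical provisioning payload; "vpcName" is an assumed field name for the
# "VPC Name" parameter described above, not an actual provisioning API field.
payload = {
    "name": "team-a-cluster-01",
    "region": "eu-central-1",
    "parameters": {
        "vpcName": "shared-landscape-vpc",  # omit to keep today's one-VPC-per-cluster behavior
    },
}

# Placeholder endpoint and token; the real provisioning flow is not specified here.
response = requests.post(
    "https://provisioning.example.com/runtimes",
    json=payload,
    headers={"Authorization": "Bearer <token>"},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```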