add Nutanix support
tuxtof committed Jan 6, 2024
1 parent 58e5c03 commit e20d899
Showing 5 changed files with 199 additions and 3 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -16,7 +16,7 @@ You can refer to the [official documentation website](https://kamaji.clastix.io/
| [Equinix/Packet](https://github.com/kubernetes-sigs/cluster-api-provider-packet) ([technical considerations](docs/providers-packet.md)) | += v0.7.2 |
| [KubeVirt](https://github.com/kubernetes-sigs/cluster-api-provider-kubevirt) ([technical considerations](docs/providers-kubevirt.md)) | += 0.1.7 |
| [Metal³](https://github.com/metal3-io/cluster-api-provider-metal3) ([technical considerations](docs/providers-metal3.md)) | += 1.4.0 |
| Nutanix | _In road-map_ |
| [Nutanix](https://github.com/nutanix-cloud-native/cluster-api-provider-nutanix) ([technical considerations](docs/providers-nutanix.md)) | += 1.2.4 |
| [OpenStack](https://github.com/kubernetes-sigs/cluster-api-provider-openstack) | += 0.8.0 |
| Tinkerbell | _In road-map_ |
| [vSphere](https://github.com/kubernetes-sigs/cluster-api-provider-vsphere) ([technical considerations](docs/providers-vsphere.md)) | += 1.7.0 |
2 changes: 2 additions & 0 deletions config/control-plane-components.yaml
@@ -4038,13 +4038,15 @@ rules:
- infrastructure.cluster.x-k8s.io
resources:
- kubevirtclusters
- nutanixclusters
- packetclusters
verbs:
- patch
- apiGroups:
- infrastructure.cluster.x-k8s.io
resources:
- kubevirtclusters/status
- nutanixclusters/status
- packetclusters/status
verbs:
- patch
2 changes: 2 additions & 0 deletions config/rbac/role.yaml
@@ -54,13 +54,15 @@ rules:
- infrastructure.cluster.x-k8s.io
resources:
- kubevirtclusters
- nutanixclusters
- packetclusters
verbs:
- patch
- apiGroups:
- infrastructure.cluster.x-k8s.io
resources:
- kubevirtclusters/status
- nutanixclusters/status
- packetclusters/status
verbs:
- patch
6 changes: 4 additions & 2 deletions controllers/kamajicontrolplane_controller_cluster_patch.go
@@ -37,6 +37,8 @@ func (r *KamajiControlPlaneReconciler) patchCluster(ctx context.Context, cluster
return r.patchGenericCluster(ctx, cluster, endpoint, port, true)
case "Metal3Cluster":
return r.checkGenericCluster(ctx, cluster, endpoint, port)
case "NutanixCluster":
return r.patchGenericCluster(ctx, cluster, endpoint, port, true)
case "OpenStackCluster":
return r.patchOpenStackCluster(ctx, cluster, endpoint, port)
case "PacketCluster":
@@ -63,8 +65,8 @@ func (r *KamajiControlPlaneReconciler) checkOrPatchVSphereCluster(ctx context.Co
return nil
}

//+kubebuilder:rbac:groups=infrastructure.cluster.x-k8s.io,resources=kubevirtclusters;packetclusters,verbs=patch
//+kubebuilder:rbac:groups=infrastructure.cluster.x-k8s.io,resources=kubevirtclusters/status;packetclusters/status,verbs=patch
//+kubebuilder:rbac:groups=infrastructure.cluster.x-k8s.io,resources=kubevirtclusters;nutanixclusters;packetclusters,verbs=patch
//+kubebuilder:rbac:groups=infrastructure.cluster.x-k8s.io,resources=kubevirtclusters/status;nutanixclusters/status;packetclusters/status,verbs=patch

func (r *KamajiControlPlaneReconciler) patchGenericCluster(ctx context.Context, cluster capiv1beta1.Cluster, endpoint string, port int64, patchStatus bool) error {
infraCluster := unstructured.Unstructured{}
190 changes: 190 additions & 0 deletions docs/providers-nutanix.md
@@ -0,0 +1,190 @@
# Kamaji and Nutanix

The Kamaji Control Plane provider is able to create a _Nutanix_-backed Kubernetes cluster by providing Kamaji Control Planes.

```
NAME READY SEVERITY REASON SINCE MESSAGE
Cluster/capi-quickstart True 5m42s
├─ClusterInfrastructure - NutanixCluster/capi-quickstart
├─ControlPlane - KamajiControlPlane/kamaji-nutanix-127
└─Workers
└─MachineDeployment/capi-quickstart-md-0 True 68s
└─3 Machines... True 5m13s See capi-quickstart-md-0-nfz4l-7hkx7, capi-quickstart-md-0-nfz4l-8wj6v, ...
```
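
The tree above is standard `clusterctl describe` output; assuming the manifests below have been applied, a command along these lines reproduces it:

```
clusterctl describe cluster capi-quickstart
```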

## Example manifests

This example needs a Service Load Balancer (MetalLB, Kube-VIP, ...) and [CAAPH](https://github.com/kubernetes-sigs/cluster-api-addon-provider-helm) installed in your management cluster.
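
As a possible starting point, recent `clusterctl` releases register the Nutanix infrastructure provider, the Kamaji control plane provider, and CAAPH (under the `helm` addon name), so the management cluster can be initialized in one shot; the flags below assume your `clusterctl` version ships those registrations:

```
clusterctl init --infrastructure nutanix --control-plane kamaji --addon helm
```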

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
labels:
cluster.x-k8s.io/cluster-name: capi-quickstart
name: capi-quickstart
spec:
clusterNetwork:
apiServerPort: 6443
pods:
cidrBlocks:
- 192.168.0.0/16
controlPlaneRef:
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KamajiControlPlane
name: kamaji-nutanix-127
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: NutanixCluster
name: capi-quickstart
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: NutanixCluster
metadata:
name: capi-quickstart
spec:
controlPlaneEndpoint:
    host: 0.0.0.0 # will be automatically patched by the Kamaji controller
    port: 0 # will be automatically patched by the Kamaji controller
---
apiVersion: controlplane.cluster.x-k8s.io/v1alpha1
kind: KamajiControlPlane
metadata:
name: kamaji-nutanix-127
namespace: default
spec:
apiServer:
extraArgs:
- --cloud-provider=external
controllerManager:
extraArgs:
- --cloud-provider=external
dataStoreName: default
addons:
coreDNS: { }
kubeProxy: { }
kubelet:
cgroupfs: systemd
preferredAddressTypes:
- ExternalIP
- InternalIP
- Hostname
network:
# serviceAddress: 10.83.1.2 # can be statically assigned
serviceType: LoadBalancer
deployment:
replicas: 2
version: 1.27.8
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: NutanixMachineTemplate
metadata:
name: capi-quickstart-worker
spec:
template:
spec:
bootType: legacy
cluster:
name: cloud-dev
type: name
image:
name: ubuntu-2204-kube-v1.27.8
type: name
memorySize: 4Gi
providerID: nutanix://mycluster-m1
subnet:
- name: capi
type: name
systemDiskSize: 40Gi
vcpuSockets: 2
vcpusPerSocket: 1
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
name: capi-quickstart-md-0
spec:
template:
spec:
joinConfiguration:
nodeRegistration:
kubeletExtraArgs:
eviction-hard: nodefs.available<10%,nodefs.inodesFree<5%,imagefs.available<15%,memory.available<100Mi,imagefs.inodesFree<10%
tls-cipher-suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
cloud-provider: external
postKubeadmCommands:
- echo "after kubeadm call" > /var/log/postkubeadm.log
preKubeadmCommands:
- echo "before kubeadm call" > /var/log/prekubeadm.log
- hostnamectl set-hostname "{{ ds.meta_data.hostname }}"
users:
- lockPassword: false
name: capiuser
sshAuthorizedKeys:
        - ssh-ed25519 XXXXXXXXXX # Replace with your SSH public key if you want direct access to worker nodes
sudo: ALL=(ALL) NOPASSWD:ALL
verbosity: 10
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
labels:
cluster.x-k8s.io/cluster-name: capi-quickstart
name: capi-quickstart-md-0
spec:
clusterName: capi-quickstart
replicas: 3
selector:
matchLabels: {}
template:
metadata:
labels:
cluster.x-k8s.io/cluster-name: capi-quickstart
spec:
bootstrap:
configRef:
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
name: capi-quickstart-md-0
clusterName: capi-quickstart
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: NutanixMachineTemplate
name: capi-quickstart-worker
version: v1.27.8
---
apiVersion: addons.cluster.x-k8s.io/v1alpha1
kind: HelmChartProxy
metadata:
name: cilium
spec:
clusterSelector:
matchLabels:
cluster.x-k8s.io/cluster-name: capi-quickstart
releaseName: cilium
repoURL: https://helm.cilium.io/
chartName: cilium
namespace: kube-system
```
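
Once the workload cluster is provisioned, CAAPH matches the `clusterSelector` above and installs the chart through a generated `HelmReleaseProxy`; assuming the CAAPH resource names, the Cilium rollout can be checked from the management cluster with:

```
kubectl get helmchartproxies,helmreleaseproxies -A
```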
## Technical considerations

A Nutanix Kubernetes cluster requires a VIP for the Control Plane components.
To keep the same experience, you have to know the Kamaji Tenant Control Plane address and port in advance.
Regarding the address, the following values must match:
- `KamajiControlPlane.spec.network.serviceAddress`
- `NutanixCluster.spec.controlPlaneEndpoint.host`

The same applies to the Kubernetes API Server binding port (a combined sketch follows this list):

- `Cluster.spec.clusterNetwork.apiServerPort`
- `NutanixCluster.spec.controlPlaneEndpoint.port`
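
A minimal sketch of the static approach, reusing the names from the example manifests above (the `10.83.1.2` VIP is purely illustrative):

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1alpha1
kind: KamajiControlPlane
metadata:
  name: kamaji-nutanix-127
spec:
  network:
    serviceAddress: 10.83.1.2 # must match NutanixCluster.spec.controlPlaneEndpoint.host
    serviceType: LoadBalancer
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: NutanixCluster
metadata:
  name: capi-quickstart
spec:
  controlPlaneEndpoint:
    host: 10.83.1.2 # same VIP as above
    port: 6443 # must match Cluster.spec.clusterNetwork.apiServerPort
```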

If you install a Service Load Balancer solution (MetalLB, Kube-VIP, ...) in your management cluster, you can skip this kind of check:
a VIP will be automatically assigned, and the Kamaji Control Plane provider will take care of patching the `NutanixCluster` resource with the endpoint provided by Kamaji itself.

## Kubernetes Nutanix Cloud Provider customisation

WIP
