index.json
[{"body":"This document describes the network topology of OpenELB in BGP mode and how OpenELB functions in BGP mode.\nNOTE\nThe BGP mode is recommended because it allows you to create a high availability system free of failover interruptions and bandwidth bottlenecks. To use the BGP mode, your router must support BGP and Equal-Cost Multi-Path (ECMP) routing. If your router does not support BGP or ECMP, you can use the Layer 2 mode to achieve similar functionality. Network Topology The following figure shows the topology of the network between a Kubernetes cluster where OpenELB is installed and a peer BGP router.\nIP addresses and Autonomous System Numbers (ASNs) in the preceding figure are examples only. The topology is described as follows:\n A Service backed by two Pods is deployed in the Kubernetes cluster, and is assigned an IP address 172.22.0.2 for external access. OpenELB installed in the Kubernetes cluster establishes a BGP connection with the BGP router, and publishes routes destined for the Service to the BGP router. When an external client machine attempts to access the Service, the BGP router load balances the traffic among the master, worker 1, and worker 2 nodes based on the routes obtained from OpenELB. After the Service traffic reaches a node, kube-proxy can further forward the traffic to other nodes for load balancing (both Pod 1 and Pod 2 can be reached over kube-proxy). OpenELB uses GoBGP (integrated in OpenELB) to establish a BGP connection for route publishing. Two CustomResourceDefinitions (CRDs), BgpConf and BgpPeer, are provided for users to configure the local and peer BGP properties on OpenELB. BgpConf and BgpPeer are designed according to the GoBGP API. For details about how to use BgpConf and BgpPeer to configure OpenELB in BGP mode, see Configure OpenELB in BGP Mode.\n","excerpt":"This document describes the network topology of OpenELB in BGP mode and how OpenELB functions in BGP …","ref":"/docs/concepts/bgp-mode/","title":"BGP Mode"},{"body":"This document describes how to build the OpenELB project for testing.\nPrerequisites You need to prepare a Linux environment. You need to install Go 1.12 or later. You need to install Docker. You need to install Docker Buildx. Procedure Visit https://github.com/kubesphere/OpenELB and click Fork to fork the OpenELB repository to your own GitHub account.\n Log in to your environment, and run the following commands to clone the OpenELB repository and go to the openelb directory:\ngit clone \u0026lt;Address of your own OpenELB repository\u0026gt; cd openelb Run the following command to install Kustomize and Kubebuilder:\n./hack/install_tools.sh Run the following command to install controller-gen:\ngo get sigs.k8s.io/controller-tools/cmd/[email protected] Run the following command to configure the environment variable for controller-gen:\nexport PATH=/root/go/bin/:$PATH NOTE\nYou need to change /root/go/bin/ to the actual path of controller-gen. Run the following command to generate CRDs and webhooks:\nmake generate Customize the values of IMG_MANAGER and IMG_AGENT in Makefile and run the following command to generate a YAML release file in the deploy directory:\nmake release NOTE\n IMG_MANAGER specifies the repository and tag of the openelb-manager image.\n IMG_AGENT specifies the repository and tag of the openelb-agent image.\n Currently, OpenELB uses only the openelb-manager image. 
The openelb-agent image will be used in future versions.\n Run the following command to deploy OpenELB as a plugin:\nkubectl apply -f deploy/release.yaml ","excerpt":"This document describes how to build the OpenELB project for testing.\nPrerequisites You need to …","ref":"/docs/building-and-contributing/build-the-openelb-project/","title":"Build the OpenELB Project"},{"body":"This document describes how to configure an Eip object, which functions as an IP address pool for OpenELB both in BGP mode and in Layer 2 mode.\nOpenELB assigns IP addresses in Eip objects to LoadBalancer Services in the Kubernetes cluster. After that, OpenELB publishes routes destined for the Service IP addresses over BGP (in BGP mode), ARP (in Layer 2 mode for IPv4), or NDP (in Layer 2 mode for IPv6).\nNOTE\nCurrently, OpenELB supports only IPv4 and will soon support IPv6. Configure an Eip Object for OpenELB You can create an Eip object to provide an IP address pool for OpenELB. The following is an example of the Eip YAML configuration:\napiVersion:network.kubesphere.io/v1alpha2kind:Eipmetadata:name:eip-sample-poolspec:address:192.168.0.91-192.168.0.100protocol:layer2interface:eth0disable:falsestatus:occupied:falseusage:1poolSize:10used:\u0026#34;192.168.0.91\u0026#34;: \u0026#34;default/test-svc\u0026#34;firstIP:192.168.0.91lastIP:192.168.0.100ready:truev4:trueThe fields are described as follows:\nmetadata:\n name: Name of the Eip object. spec:\n address: One or more IP addresses, which will be used by OpenELB. The value format can be:\n IP address, for example, 192.168.0.100. IP address/Subnet mask, for example, 192.168.0.0/24. IP address 1-IP address 2, for example, 192.168.0.91-192.168.0.100. NOTE\nIP segments in different Eip objects cannot overlap. Otherwise, a resource creation error will occur. protocol: Specifies which mode of OpenELB the Eip object is used for. The value can be either layer2 or bgp. If this field is not specified, the default value bgp is used.\n interface: NIC on which OpenELB listens for ARP or NDP requests. This field is valid only when protocol is set to layer2.\nTIP\nIf the NIC names of the Kubernetes cluster nodes are different, you can set the value to can_reach:IP address (for example, can_reach:192.168.0.5) so that OpenELB automatically obtains the name of the NIC that can reach the IP address. In this case, you must ensure that the IP address is not used by Kubernetes cluster nodes but can be reached by the cluster nodes. disable: Specifies whether the Eip object is disabled. The value can be:\n false: OpenELB can assign IP addresses in the Eip object to new LoadBalancer Services. true: OpenELB stops assigning IP addresses in the Eip object to new LoadBalancer Services. Existing Services are not affected. status: Fields under status specify the status of the Eip object and are automatically configured. When creating an Eip object, you do not need to configure these fields.\n occupied: Specifies whether IP addresses in the Eip object have been used up.\n usage: Specifies how many IP addresses in the Eip object have been assigned to Services.\n used: Specifies the used IP addresses and the Services that use the IP addresses. The Services are displayed in the Namespace/Service name format (for example, default/test-svc).\n poolSize: Total number of IP addresses in the Eip object.\n firstIP: First IP address in the Eip object.\n lastIP: Last IP address in the Eip object.\n v4: Specifies whether the address family is IPv4. 
Currently, OpenELB supports only IPv4 and the value can only be true.\n ready: Specifies whether the Eip-associated program used for BGP/ARP/NDP routes publishing has been initialized. The program is integrated in OpenELB.\n ","excerpt":"This document describes how to configure an Eip object, which functions as an IP address pool for …","ref":"/docs/getting-started/configuration/configure-ip-address-pools-using-eip/","title":"Configure IP Address Pools Using Eip"},{"body":"This document describes how to use kubectl and Helm to install and delete OpenELB in a Kubernetes cluster.\nNOTE\n In a Kubernetes cluster, you only need to install OpenELB once. After the installation is complete, a openelb-manager Deployment that contains a openelb-manager Pod is installed in the cluster. The openelb-manager Pod implements the functionality of OpenELB for the entire Kubernetes cluster. After the installation is complete, you can scale the openelb-manager Deployment and assign multiple OpenELB replicas (openelb-manager Pods) to multiple cluster nodes to ensure high availability. For details, see Configure Multiple OpenELB Replicas. Prerequisites You need to prepare a Kubernetes cluster, and ensure that the Kubernetes version is 1.15 or later. OpenELB requires CustomResourceDefinition (CRD) v1, which is only supported by Kubernetes 1.15 or later. You can use the following methods to deploy a Kubernetes cluster:\n Use KubeKey (recommended). You can use KubeKey to deploy a Kubernetes cluster with or without KubeSphere. Follow official Kubernetes guides. OpenELB is designed to be used in bare-metal Kubernetes environments. However, you can also use a cloud-based Kubernetes cluster for learning and testing.\n If you use Helm to install OpenELB, ensure that the Helm version is Helm 3.\n Install OpenELB Using kubectl Log in to the Kubernetes cluster over SSH and run the following command:\nkubectl apply -f https://raw.githubusercontent.com/openelb/openelb/master/deploy/openelb.yaml Run the following command to check whether the status of openelb-manager is READY: 1/1 and STATUS: Running. If yes, OpenELB has been installed successfully.\nkubectl get po -n openelb-system It should return something like the following.\nNAME READY STATUS RESTARTS AGE openelb-admission-create-tjsqm 0/1 Completed 0 41s openelb-admission-patch-n247f 0/1 Completed 0 41s openelb-manager-74c5467674-bx2wg 1/1 Running 0 41s Delete OpenELB Using kubectl To delete OpenELB, log in to the Kubernetes cluster and run the following command:\nkubectl delete -f https://raw.githubusercontent.com/openelb/openelb/master/deploy/openelb.yaml NOTE\nBefore deleting OpenELB, you must first delete all Services that use OpenELB. Run the following command to check the result. If the openelb-system namespace does not exist, OpenELB has been deleted successfully.\nkubectl get ns Install OpenELB Using Helm Log in to the Kubernetes cluster over SSH and run the following commands:\nhelm repo add test https://charts.kubesphere.io/test helm repo update helm install openelb test/openelb Run the following command to check whether the status of openelb-manager is READY: 1/1 and STATUS: Running. If yes, OpenELB has been installed successfully.\nkubectl get po -A It should return something like the following.\nNAMESPACE NAME READY STATUS RESTARTS AGE openelb-system openelb-admission-create-m2p52 0/1 Completed 0 32s openelb-system openelb-admission-patch-qmvnq 0/1 Completed 0 31s openelb-system openelb-manager-74c5467674-pgtmh 1/1 Running 0 32s ... ... 
Delete OpenELB Using Helm To delete OpenELB, run the following command:\nhelm delete openelb NOTE\nBefore deleting OpenELB, you must first delete all Services that use OpenELB. Run the following command to check the result. If the OpenELB application does not exist, OpenELB has been deleted successfully.\nhelm ls ","excerpt":"This document describes how to use kubectl and Helm to install and delete OpenELB in a Kubernetes …","ref":"/docs/getting-started/installation/install-openelb-on-kubernetes/","title":"Install OpenELB on Kubernetes"},{"body":"This section describes how to install and delete OpenELB on Kubernetes, KubeSphere, and K3s.\n Install OpenELB on Kubernetes Describes how to install and delete OpenELB on Kubernetes using kubectl and Helm.\nInstall OpenELB on KubeSphere Describes how to install and delete OpenELB on the KubeSphere web console.\nInstall OpenELB on K3s Describes how to install and delete OpenELB on K3s using kubectl and Helm.\n","excerpt":"This section describes how to install and delete OpenELB on Kubernetes, KubeSphere, and K3s. …","ref":"/docs/getting-started/installation/","title":"Installation"},{"body":"This document describes the network topology of OpenELB in Layer 2 mode and how OpenELB functions in Layer 2 mode.\nNOTE\n Generally, you are advised to use the BGP mode because it allows you to create a high availability system free of failover interruptions and bandwidth bottlenecks. However, the BGP mode requires your router to support BGP and Equal-Cost Multi-Path (ECMP) routing, which may be unavailable in certain systems. In this case, you can use the Layer 2 mode to achieve similar functionality. The Layer 2 mode requires your infrastructure environment to allow anonymous ARP/NDP packets. If OpenELB is installed in a cloud-based Kubernetes cluster for testing, you need to confirm with your cloud vendor whether anonymous ARP/NDP packets are allowed. If not, the Layer 2 mode cannot be used. Network Topology The following figure shows the topology of the network between a Kubernetes cluster with OpenELB and a router.\nIP addresses and MAC addresses in the preceding figure are examples only. The topology is described as follows:\n A Service backed by two Pods is deployed in the Kubernetes cluster, and is assigned an IP address 192.168.0.91 for external access. The Service IP address is on the same network segment as the cluster node IP addresses. OpenELB installed in the Kubernetes cluster randomly selects a node (worker 1 in this example) to handle Service requests. After that, OpenELB sends an ARP/NDP packet to the router, which maps the Service IP address to the MAC address of worker 1. If multiple openelb-manager replicas have been deployed in the cluster, OpenELB uses the leader election feature of Kubernetes to ensure that only one replica responds to ARP/NDP requests. When an external client machine attempts to access the Service, the router forwards the Service traffic to worker 1 based on the mapping between the Service IP address and the MAC address of worker 1. After the Service traffic reaches worker 1, kube-proxy can further forward the traffic to other nodes for load balancing (both Pod 1 and Pod 2 can be reached over kube-proxy). If worker 1 fails, OpenELB re-sends an ARP/NDP packet to the router to map the Service IP address to the MAC address of worker 2, and the Service traffic switches to worker 2. NOTE\nThe Layer 2 mode has two limitations:\n Worker 1 and worker 2 work in active-standby mode in terms of traffic forwarding. 
When a failover occurs, Services in the Kubernetes cluster will be interrupted for a short while. All Service traffic is always sent to one node first and then forwarded to other nodes over kube-proxy in a second hop. Therefore, the Service bandwidth is limited to the bandwidth of a single node, which causes a bandwidth bottleneck. ","excerpt":"This document describes the network topology of OpenELB in Layer 2 mode and how OpenELB functions in …","ref":"/docs/concepts/layer-2-mode/","title":"Layer 2 Mode"},{"body":"OpenELB is an open-source load balancer implementation designed for bare-metal Kubernetes clusters.\nWhy OpenELB In cloud-based Kubernetes clusters, Services are usually exposed by using load balancers provided by cloud vendors. However, cloud-based load balancers are unavailable in bare-metal environments. OpenELB allows users to create LoadBalancer Services in bare-metal, edge, and virtualization environments for external access, and provides the same user experience as cloud-based load balancers.\nCore Features BGP mode and Layer 2 mode ECMP routing and load balancing IP address pool management BGP configuration using CRDs Installation using Helm Chart Support, Discussion and Contributing OpenELB is a sub-project of KubeSphere.\n Join us at the KubeSphere Slack Channel to get support or simply tell us that you are using OpenELB. You have code or documents for OpenELB? Contributions are always welcome! See Building and Contributing to obtain guidance. License OpenELB is licensed under the Apache License, Version 2.0. See LICENSE for the full license text.\n","excerpt":"OpenELB is an open-source load balancer implementation designed for bare-metal Kubernetes clusters. …","ref":"/docs/overview/","title":"Overview"},{"body":"This document demonstrates how to use OpenELB in BGP mode to expose a Service backed by two Pods. The BgpConf, BgpPeer, Eip, Deployment and Service described in this document are examples only and you need to customize the commands and YAML configurations based on your requirements.\nInstead of using a real router, this document uses a Linux server with BIRD to simulate a router so that users without a real router can also use OpenELB in BGP mode for tests.\nPrerequisites You need to prepare a Kubernetes cluster where OpenELB has been installed. You need to prepare a Linux server that communicates with the Kubernetes cluster properly. BIRD will be installed on the server to simulate a BGP router. If you use a real router instead of BIRD, the router must support BGP and Equal-Cost Multi-Path (ECMP) routing. In addition, the router must also support receiving multiple equivalent routes from the same neighbor. This document uses the following devices as an example:\n Device Name IP Address Description master1 192.168.0.2 Kubernetes cluster master, where OpenELB is installed. worker-p001 192.168.0.3 Kubernetes cluster worker 1 worker-p002 192.168.0.4 Kubernetes cluster worker 2 i-f3fozos0 192.168.0.5 BIRD machine, where BIRD will be installed to simulate a BGP router. Step 1: Install and Configure BIRD If you use a real router, you can skip this step and perform configuration on the router instead.\n Log in to the BIRD machine and run the following commands to install BIRD:\nsudo add-apt-repository ppa:cz.nic-labs/bird sudo apt-get update sudo apt-get install bird sudo systemctl enable bird NOTE\n BIRD 1.5 does not support ECMP. To use all features of OpenELB, you are advised to install BIRD 1.6 or later. 
The preceding commands apply only to Debian-based OSs such as Debian and Ubuntu. On Red Hat-based OSs such as RHEL and CentOS, use yum instead. You can also install BIRD according to the official BIRD documentation. Run the following command to edit the BIRD configuration file:\nvi /etc/bird/bird.conf Configure the BIRD configuration file as follows:\nrouter id 192.168.0.5; protocol kernel { scan time 60; import none; export all; merge paths on; } protocol bgp neighbor1 { local as 50001; neighbor 192.168.0.2 port 17900 as 50000; source address 192.168.0.5; import all; export all; enable route refresh off; add paths on; } NOTE\n For test usage, you only need to customize the following fields in the preceding configuration:\nrouter id: Router ID of the BIRD machine, which is usually set to the IP address of the BIRD machine.\nprotocol bgp neighbor1:\n local as: ASN of the BIRD machine, which must be different from the ASN of the Kubernetes cluster. neighbor: Master node IP address, BGP port number, and ASN of the Kubernetes cluster. Use port 17900 instead of the default BGP port 179 to avoid conflicts with other BGP components in the system. source address: IP address of the BIRD machine. If multiple nodes in the Kubernetes cluster are used as BGP neighbors, you need to configure multiple BGP neighbors in the BIRD configuration file.\n For details about the BIRD configuration file, see the official BIRD documentation.\n Run the following command to restart BIRD:\nsudo systemctl restart bird Run the following command to check whether the status of BIRD is active:\nsudo systemctl status bird NOTE\nIf the status of BIRD is not active, you can run the following command to check the error logs:\njournalctl -f -u bird Step 2: Create a BgpConf Object The BgpConf object is used to configure the local (Kubernetes cluster) BGP properties on OpenELB.\n Run the following command to create a YAML file for the BgpConf object:\nvi bgp-conf.yaml Add the following information to the YAML file:\napiVersion: network.kubesphere.io/v1alpha2\nkind: BgpConf\nmetadata:\n  name: default\nspec:\n  as: 50000\n  listenPort: 17900\n  routerId: 192.168.0.2\nNOTE\nFor details about the fields in the BgpConf YAML configuration, see Configure Local BGP Properties Using BgpConf. Run the following command to create the BgpConf object:\nkubectl apply -f bgp-conf.yaml Step 3: Create a BgpPeer Object The BgpPeer object is used to configure the peer (BIRD machine) BGP properties on OpenELB.\n Run the following command to create a YAML file for the BgpPeer object:\nvi bgp-peer.yaml Add the following information to the YAML file:\napiVersion: network.kubesphere.io/v1alpha2\nkind: BgpPeer\nmetadata:\n  name: bgp-peer\nspec:\n  conf:\n    peerAs: 50001\n    neighborAddress: 192.168.0.5\nNOTE\nFor details about the fields in the BgpPeer YAML configuration, see Configure Peer BGP Properties Using BgpPeer. Run the following command to create the BgpPeer object:\nkubectl apply -f bgp-peer.yaml Step 4: Create an Eip Object The Eip object functions as an IP address pool for OpenELB.\n Run the following command to create a YAML file for the Eip object:\nvi bgp-eip.yaml Add the following information to the YAML file:\napiVersion: network.kubesphere.io/v1alpha2\nkind: Eip\nmetadata:\n  name: bgp-eip\nspec:\n  address: 172.22.0.2-172.22.0.10\nNOTE\nFor details about the fields in the Eip YAML configuration, see Configure IP Address Pools Using Eip. Run the following command to create the Eip object:\nkubectl apply -f bgp-eip.yaml Step 5: Create a Deployment The following creates a Deployment of two Pods using the luksa/kubia image. 
Each Pod returns its own Pod name to external requests.\n Run the following command to create a YAML file for the Deployment:\nvi bgp-openelb.yaml Add the following information to the YAML file:\napiVersion:apps/v1kind:Deploymentmetadata:name:bgp-openelbspec:replicas:2selector:matchLabels:app:bgp-openelbtemplate:metadata:labels:app:bgp-openelbspec:containers:- image:luksa/kubianame:kubiaports:- containerPort:8080 Run the following command to create the Deployment:\nkubectl apply -f bgp-openelb.yaml Step 6: Create a Service Run the following command to create a YAML file for the Service:\nvi bgp-svc.yaml Add the following information to the YAML file:\nkind:ServiceapiVersion:v1metadata:name:bgp-svcannotations:lb.kubesphere.io/v1alpha1:openelbprotocol.openelb.kubesphere.io/v1alpha1:bgpeip.openelb.kubesphere.io/v1alpha2:bgp-eipspec:selector:app:bgp-openelbtype:LoadBalancerports:- name:httpport:80targetPort:8080externalTrafficPolicy:ClusterNOTE\n You must set spec:type to LoadBalancer. The lb.kubesphere.io/v1alpha1: openelb annotation specifies that the Service uses OpenELB. The protocol.openelb.kubesphere.io/v1alpha1: bgp annotation specifies that OpenELB is used in BGP mode. The eip.openelb.kubesphere.io/v1alpha2: bgp-eip annotation specifies the Eip object used by OpenELB. If this annotation is not configured, OpenELB automatically uses the first available Eip object that matches the protocol. You can also delete this annotation and add the spec:loadBalancerIP field (for example, spec:loadBalancerIP: 172.22.0.2) to assign a specific IP address to the Service. In the BGP mode, you can set spec:loadBalancerIP of multiple Services to the same value for IP address sharing (the Services are distinguished by different Service ports). In this case, you must set spec:ports:port to different values and spec:externalTrafficPolicy to Cluster for the Services. If spec:externalTrafficPolicy is set to Cluster (default value), OpenELB uses all Kubernetes cluster nodes as the next hops destined for the Service. If spec:externalTrafficPolicy is set to Local, OpenELB uses only Kubernetes cluster nodes that contain Pods as the next hops destined for the Service. Run the following command to create the Service:\nkubectl apply -f bgp-svc.yaml Step 7: Verify OpenELB in BGP Mode The following verifies whether OpenELB functions properly.\n In the Kubernetes cluster, run the following command to obtain the external IP address of the Service:\nroot@master1:~# kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.233.0.1 \u0026lt;none\u0026gt; 443/TCP 20h bgp-svc LoadBalancer 10.233.15.12 172.22.0.2 80:32278/TCP 6m9s In the Kubernetes cluster, run the following command to obtain the IP addresses of the cluster nodes:\nroot@master1:~# kubectl get nodes -o wide NAME STATUS ROLES\tAGE\tVERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME master1 Ready master\t20h\tv1.17.9 192.168.0.2 \u0026lt;none\u0026gt; Ubuntu 18.04.3 LTS 4.15.0-55-generic docker://19.3.11 worker-p001 Ready worker\t20h\tv1.17.9 192.168.0.3 \u0026lt;none\u0026gt; Ubuntu 18.04.3 LTS 4.15.0-55-generic docker://19.3.11 worker-p002 Ready worker\t20h\tv1.17.9 192.168.0.4 \u0026lt;none\u0026gt; Ubuntu 18.04.3 LTS 4.15.0-55-generic docker://19.3.11 On the BIRD machine, run the following command to check the routing table. 
If equivalent routes using the Kubernetes cluster nodes as next hops destined for the Service are displayed, OpenELB functions properly.\nip route If spec:externalTrafficPolicy in the Service YAML configuration is set to Cluster, all Kubernetes cluster nodes are used as the next hops.\ndefault via 192.168.0.1 dev eth0 172.22.0.2 proto bird metric 64 nexthop via 192.168.0.2 dev eth0 weight 1 nexthop via 192.168.0.3 dev eth0 weight 1 nexthop via 192.168.0.4 dev eth0 weight 1 192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.5 If spec:externalTrafficPolicy in the Service YAML configuration is set to Local, only Kubernetes cluster nodes that contain Pods are used as the next hops.\ndefault via 192.168.0.1 dev eth0 172.22.0.2 proto bird metric 64 nexthop via 192.168.0.3 dev eth0 weight 1 nexthop via 192.168.0.4 dev eth0 weight 1 192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.5 On the BIRD machine, run the curl command to access the Service:\nroot@i-f3fozos0:~# curl 172.22.0.2 You\u0026#39;ve hit bgp-openelb-648bcf8d7c-86l8k root@i-f3fozos0:~# curl 172.22.0.2 You\u0026#39;ve hit bgp-openelb-648bcf8d7c-pngxj ","excerpt":"This document demonstrates how to use OpenELB in BGP mode to expose a Service backed by two Pods. …","ref":"/docs/getting-started/usage/use-openelb-in-bgp-mode/","title":"Use OpenELB in BGP Mode"},{"body":"Symptom During the installation of OpenELB, the openelb-manager Pod is stuck in pending state.\nroot@node1:~/openelb# kubectl get pod -n openelb-system NAME READY STATUS RESTARTS AGE openelb-admission-create-m2p52 0/1 Completed 0 30s openelb-admission-patch-qmvnq 0/1 Completed 0 30s openelb-manager-74c5467674-pgtmh 0/1 Pending 0 30s Possible Cause OpenELB uses port 443 by default. If another component in the system has occupied port 443, the openelb-manager Pod will be stuck in pending state.\nSolution Perform the following steps to change port 443 of OpenELB. The namespace in the commands is only an example.\n Run the following command to edit the openelb-manager Deployment:\nkubectl edit deployment openelb-manager -n openelb-system Change port 443 to a different value (for example, 30443) to avoid the port conflict:\nspec:template:spec:containers:- args:- --webhook-port=443# Change the port number.ports:- containerPort:443# Change the port number.hostPort:443# Change the port number. Run the following command to check whether the status of openelb-manager is READY: 1/1 and STATUS: Running. If yes, OpenELB has been installed successfully.\nroot@node1:~/openelb# kubectl get pod -n openelb-system NAME READY STATUS RESTARTS AGE openelb-admission-create-m2p52 0/1 Completed 0 7m22s openelb-admission-patch-qmvnq 0/1 Completed 0 7m22s openelb-manager-74c5467674-pgtmh 1/1 Running 0 30s ","excerpt":"Symptom During the installation of OpenELB, the openelb-manager Pod is stuck in pending state. 
…","ref":"/docs/faqs/what-do-i-do-if-the-openelb-manager-pod-is-stuck-in-pending-state/","title":"What do I do if the openelb-manager Pod is stuck in pending state?"},{"body":"This section describes the basic concepts of OpenELB.\n BGP Mode Describes how OpenELB functions in BGP mode.\nLayer 2 Mode Describes how OpenELB functions in Layer 2 mode.\n","excerpt":"This section describes the basic concepts of OpenELB.\n BGP Mode Describes how OpenELB functions in …","ref":"/docs/concepts/","title":"Concepts"},{"body":"This section describes how to perform configuration before using OpenELB.\n Configure IP Address Pools Using Eip Describes how to configure IP address pools for OpenELB using Eip.\nConfigure OpenELB in BGP Mode Describes how to configure local and peer BGP properties on OpenELB using BgpConf and BgpPeer.\nConfigure OpenELB for Multi-Router Clusters Describes how to configure OpenELB for multi-router Kubernetes clusters.\nConfigure Multiple OpenELB Replicas Describes how to configure multiple OpenELB replicas to ensure high availability.\n","excerpt":"This section describes how to perform configuration before using OpenELB.\n Configure IP Address …","ref":"/docs/getting-started/configuration/","title":"Configuration"},{"body":"This document describes how to configure OpenELB in BGP mode. If OpenELB is used in Layer 2 mode, you do not need to configure OpenELB.\nConfigure Local BGP Properties Using BgpConf You can create a BgpConf object in the Kubernetes cluster to configure the local BGP properties on OpenELB. The following is an example of the BgpConf YAML configuration:\napiVersion:network.kubesphere.io/v1alpha2kind:BgpConfmetadata:name:defaultspec:as:50000listenPort:17900routerId:192.168.0.2The fields are described as follows:\nmetadata:\n name: BgpConf object name. OpenELB recognizes only the name default. BgpConf objects with other names will be ignored. spec:\n as: Local ASN, which must be different from the value of spec:conf:peerAS in the BgpPeer configuration. listenPort: Port on which OpenELB listens. The default value is 179 (default BGP port number). If other components (such as Calico) in the Kubernetes cluster also use BGP and port 179, you must set a different value to avoid the conflict. routerID: Local router ID, which is usually set to the IP address of the master NIC of the Kubernetes master node. If this field is not specified, the first IP address of the node where openelb-manager is located will be used. Configure Peer BGP Properties Using BgpPeer You can create a BgpPeer object in the Kubernetes cluster to configure the peer BGP properties on OpenELB. The following is an example of the BgpPeer YAML configuration:\napiVersion:network.kubesphere.io/v1alpha2kind:BgpPeermetadata:name:bgppeer-samplespec:conf:peerAs:50001neighborAddress:192.168.0.5afiSafis:- config:family:afi:AFI_IPsafi:SAFI_UNICASTenabled:trueaddPaths:config:sendMax:10nodeSelector:matchLabels:openelb.kubesphere.io/rack:leaf1The fields are described as follows:\nmetadata:\n name: Name of the BgpPeer object. If there are multiple peer BGP routers, you can create multiple BgpPeer objects with different names. spec:conf:\n peerAS: ASN of the peer BGP router, which must be different from the value of spec:as in the BgpConf configuration. neighborAddress: IP address of the peer BGP router. spec:afiSafis:addPaths:config:\n sendMax: Maximum number of equivalent routes that OpenELB can send to the peer BGP router for Equal-Cost Multi-Path (ECMP) routing. The default value is 10. 
spec:nodeSelector:matchLabels:\n openelb.kubesphere.io/rack: If the Kubernetes cluster nodes are deployed under different routers and each node has one OpenELB replica, you need to configure this field so that the OpenELB replica on the correct node establishes a BGP connection with the peer BGP router. By default, all openelb-manager replicas will respond to the BgpPeer configuration and attempt to establish a BGP connection with the peer BGP router. Other fields under spec:afiSafis specify the address family. Currently, OpenELB supports only IPv4 and you can directly use the values in the example configuration.\n","excerpt":"This document describes how to configure OpenELB in BGP mode. If OpenELB is used in Layer 2 mode, …","ref":"/docs/getting-started/configuration/configure-openelb-in-bgp-mode/","title":"Configure OpenELB in BGP Mode"},{"body":"OpenELB is an open-source project and a sub-project of KubeSphere. Contributions in all forms are always welcome!\nNew Feature Proposals If you have any ideas or suggestions, please send design proposals to the GitHub OpenELB Repository Issues.\nCode Contributions Obtain the developer guides at the KubeSphere Community, build the OpenELB project locally for testing, and send pull requests to the GitHub OpenELB Repository.\nDocument Contributions Both original documents and localization works are welcome!\nObtain the OpenELB Style Guides in the root directory of the GitHub OpenELB Website Repository. All documents of OpenELB are located in the content directory.\n","excerpt":"OpenELB is an open-source project and a sub-project of KubeSphere. Contributions in all forms are …","ref":"/docs/building-and-contributing/contribute-to-the-openelb-project/","title":"Contribute to the OpenELB Project"},{"body":"This document describes how to install and delete OpenELB on the KubeSphere web console.\nNOTE\n In a Kubernetes cluster, you only need to install OpenELB once. After the installation is complete, a openelb-manager Deployment that contains a openelb-manager Pod is installed in the cluster. The openelb-manager Pod implements the functionality of OpenELB for the entire Kubernetes cluster. After the installation is complete, you can scale the openelb-manager Deployment and assign multiple OpenELB replicas (openelb-manager Pods) to multiple cluster nodes to ensure high availability. For details, see Configure Multiple OpenELB Replicas. Prerequisites You need to prepare a Kubernetes cluster with KubeSphere, and ensure that the Kubernetes version is 1.15 or later. OpenELB requires CustomResourceDefinition (CRD) v1, which is only supported by Kubernetes 1.15 or later. You can use the following methods to install KubeSphere:\n Deploy a new Kubernetes cluster with KubeSphere. Install KubeSphere in an existing Kubernetes cluster. OpenELB is designed to be used in bare-metal Kubernetes environments. 
However, you can also use a cloud-based Kubernetes cluster for learning and testing.\nInstall OpenELB on the KubeSphere Web Console Log in to the KubeSphere console and go to your workspace.\n On the left navigation bar, choose Apps Management \u0026gt; App Repos, and click Add Repo on the right.\n In the displayed dialog box, set App Repository Name (for example, kubesphere-test), set URL to https://charts.kubesphere.io/test, click Validate to check the URL, and click OK.\n Go to your project, choose Application Workloads \u0026gt; Apps on the left navigation bar, and click Deploy New Application on the right.\n In the displayed dialog box, click From App Templates.\n Select kubesphere-test from the drop-down list and click openelb.\n Click Install and follow the wizard instructions to complete the installation. You can customize the chart configuration in the YAML file based on your requirements.\n Choose Application Workloads \u0026gt; Pods on the left navigation bar to check whether the status of openelb-manager is running. If yes, OpenELB has been installed successfully.\n Delete OpenELB on the KubeSphere Web Console To delete OpenELB on the KubeSphere web console, go to your project, choose Application Workloads \u0026gt; Apps on the left navigation bar, click on the right of the OpenELB application, and choose Delete from the drop-down list.\nNOTE\nBefore deleting OpenELB, you must first delete all Services that use OpenELB. ","excerpt":"This document describes how to install and delete OpenELB on the KubeSphere web console.\nNOTE\n In a …","ref":"/docs/getting-started/installation/install-openelb-on-kubesphere/","title":"Install OpenELB on KubeSphere"},{"body":"This document demonstrates how to use OpenELB in Layer 2 mode to expose a Service backed by two Pods. The Eip, Deployment and Service described in this document are examples only and you need to customize the commands and YAML configurations based on your requirements.\nPrerequisites You need to prepare a Kubernetes cluster where OpenELB has been installed. All Kubernetes cluster nodes must be on the same Layer 2 network (under the same router). You need to prepare a client machine, which is used to verify whether OpenELB functions properly in Layer 2 mode. The client machine needs to be on the same network as the Kubernetes cluster nodes. The Layer 2 mode requires your infrastructure environment to allow anonymous ARP/NDP packets. If OpenELB is installed in a cloud-based Kubernetes cluster for testing, you need to confirm with your cloud vendor whether anonymous ARP/NDP packets are allowed. If not, the Layer 2 mode cannot be used. 
This document uses the following devices as an example:\n Device Name IP Address MAC Address Description master1 192.168.0.2 52:54:22:a3:9a:d9 Kubernetes cluster master worker-p001 192.168.0.3 52:54:22:3a:e6:6e Kubernetes cluster worker 1 worker-p002 192.168.0.4 52:54:22:37:6c:7b Kubernetes cluster worker 2 i-f3fozos0 192.168.0.5 52:54:22:fa:b9:3b Client machine Step 1: Enable strictARP for kube-proxy In Layer 2 mode, you need to enable strictARP for kube-proxy so that all NICs in the Kubernetes cluster stop answering ARP requests from other NICs and OpenELB handles ARP requests instead.\n Log in to the Kubernetes cluster and run the following command to edit the kube-proxy ConfigMap:\nkubectl edit configmap kube-proxy -n kube-system In the kube-proxy ConfigMap YAML configuration, set data.config.conf.ipvs.strictARP to true.\nipvs:strictARP:true Run the following command to restart kube-proxy:\nkubectl rollout restart daemonset kube-proxy -n kube-system Step 2: Specify the NIC Used for OpenELB If the node where OpenELB is installed has multiple NICs, you need to specify the NIC used for OpenELB in Layer 2 mode. You can skip this step if the node has only one NIC.\nIn this example, the master1 node where OpenELB is installed has two NICs (eth0 192.168.0.2 and eth1 192.168.1.2), and eth0 192.168.0.2 will be used for OpenELB.\nRun the following command to annotate master1 to specify the NIC:\nkubectl annotate nodes master1 layer2.openelb.kubesphere.io/v1alpha1=\u0026#34;192.168.0.2\u0026#34; Step 3: Create an Eip Object The Eip object functions as an IP address pool for OpenELB.\n Run the following command to create a YAML file for the Eip object:\nvi layer2-eip.yaml Add the following information to the YAML file:\napiVersion:network.kubesphere.io/v1alpha2kind:Eipmetadata:name:layer2-eipspec:address:192.168.0.91-192.168.0.100interface:eth0protocol:layer2NOTE\n The IP addresses specified in spec:address must be on the same network segment as the Kubernetes cluster nodes.\n For details about the fields in the Eip YAML configuration, see Configure IP Address Pools Using Eip.\n Run the following command to create the Eip object:\nkubectl apply -f layer2-eip.yaml Step 4: Create a Deployment The following creates a Deployment of two Pods using the luksa/kubia image. Each Pod returns its own Pod name to external requests.\n Run the following command to create a YAML file for the Deployment:\nvi layer2-openelb.yaml Add the following information to the YAML file:\napiVersion:apps/v1kind:Deploymentmetadata:name:layer2-openelbspec:replicas:2selector:matchLabels:app:layer2-openelbtemplate:metadata:labels:app:layer2-openelbspec:containers:- image:luksa/kubianame:kubiaports:- containerPort:8080 Run the following command to create the Deployment:\nkubectl apply -f layer2-openelb.yaml Step 5: Create a Service Run the following command to create a YAML file for the Service:\nvi layer2-svc.yaml Add the following information to the YAML file:\nkind:ServiceapiVersion:v1metadata:name:layer2-svcannotations:lb.kubesphere.io/v1alpha1:openelbprotocol.openelb.kubesphere.io/v1alpha1:layer2eip.openelb.kubesphere.io/v1alpha2:layer2-eipspec:selector:app:layer2-openelbtype:LoadBalancerports:- name:httpport:80targetPort:8080externalTrafficPolicy:ClusterNOTE\n You must set spec:type to LoadBalancer. The lb.kubesphere.io/v1alpha1: openelb annotation specifies that the Service uses OpenELB. The protocol.openelb.kubesphere.io/v1alpha1: layer2 annotation specifies that OpenELB is used in Layer 2 mode. 
The eip.openelb.kubesphere.io/v1alpha2: layer2-eip annotation specifies the Eip object used by OpenELB. If this annotation is not configured, OpenELB automatically uses the first available Eip object that matches the protocol. You can also delete this annotation and add the spec:loadBalancerIP field (for example, spec:loadBalancerIP: 192.168.0.91) to assign a specific IP address to the Service. If spec:externalTrafficPolicy is set to Cluster (default value), OpenELB randomly selects a node from all Kubernetes cluster nodes to handle Service requests. Pods on other nodes can also be reached over kube-proxy. If spec:externalTrafficPolicy is set to Local, OpenELB randomly selects a node that contains a Pod in the Kubernetes cluster to handle Service requests. Only Pods on the selected node can be reached. Run the following command to create the Service:\nkubectl apply -f layer2-svc.yaml Step 6: Verify OpenELB in Layer 2 Mode The following verifies whether OpenELB functions properly.\n In the Kubernetes cluster, run the following command to obtain the external IP address of the Service:\nroot@master1:~# kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.233.0.1 \u0026lt;none\u0026gt; 443/TCP 20h layer2-svc LoadBalancer 10.233.13.139 192.168.0.91 80:32658/TCP 14s In the Kubernetes cluster, run the following command to obtain the IP addresses of the cluster nodes:\nroot@master1:~# kubectl get nodes -o wide NAME STATUS ROLES\tAGE\tVERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME master1 Ready master\t20h\tv1.17.9 192.168.0.2 \u0026lt;none\u0026gt; Ubuntu 18.04.3 LTS 4.15.0-55-generic docker://19.3.11 worker-p001 Ready worker\t20h\tv1.17.9 192.168.0.3 \u0026lt;none\u0026gt; Ubuntu 18.04.3 LTS 4.15.0-55-generic docker://19.3.11 worker-p002 Ready worker\t20h\tv1.17.9 192.168.0.4 \u0026lt;none\u0026gt; Ubuntu 18.04.3 LTS 4.15.0-55-generic docker://19.3.11 In the Kubernetes cluster, run the following command to check the nodes of the Pods:\nroot@master1:~# kubectl get pod -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES layer2-openelb-7b4fdf6f85-mnw5k 1/1 Running 0 3m27s 10.233.92.38 worker-p001 \u0026lt;none\u0026gt; \u0026lt;none\u0026gt; layer2-openelb-7b4fdf6f85-px4sm 1/1 Running 0 3m26s 10.233.90.31 worker-p002 \u0026lt;none\u0026gt; \u0026lt;none\u0026gt; NOTE\nIn this example, the Pods are automatically assigned to different nodes. You can manually assign Pods to different nodes. On the client machine, run the following commands to ping the Service IP address and check the IP neighbors:\nroot@i-f3fozos0:~# ping 192.168.0.91 -c 4 PING 192.168.0.91 (192.168.0.91) 56(84) bytes of data. 64 bytes from 192.168.0.91: icmp_seq=1 ttl=64 time=0.162 ms 64 bytes from 192.168.0.91: icmp_seq=2 ttl=64 time=0.119 ms 64 bytes from 192.168.0.91: icmp_seq=3 ttl=64 time=0.145 ms 64 bytes from 192.168.0.91: icmp_seq=4 ttl=64 time=0.123 ms --- 192.168.0.91 ping statistics --- 4 packets transmitted, 4 received, 0% packet loss, time 3076ms rtt min/avg/max/mdev = 0.119/0.137/0.162/0.019 ms root@i-f3fozos0:~# ip neigh 192.168.0.1 dev eth0 lladdr 02:54:22:99:ae:5d STALE 192.168.0.2 dev eth0 lladdr 52:54:22:a3:9a:d9 STALE 192.168.0.3 dev eth0 lladdr 52:54:22:3a:e6:6e STALE 192.168.0.4 dev eth0 lladdr 52:54:22:37:6c:7b STALE 192.168.0.91 dev eth0 lladdr 52:54:22:3a:e6:6e REACHABLE In the output of the ip neigh command, the MAC address of the Service IP address 192.168.0.91 is the same as that of worker-p001 192.168.0.3. 
Therefore, OpenELB has mapped the Service IP address to the MAC address of worker-p001.\n On the client machine, run the curl command to access the Service:\ncurl 192.168.0.91 If spec:externalTrafficPolicy in the Service YAML configuration is set to Cluster, both Pods can be reached.\nroot@i-f3fozos0:~# curl 192.168.0.91 You\u0026#39;ve hit layer2-openelb-7b4fdf6f85-px4sm root@i-f3fozos0:~# curl 192.168.0.91 You\u0026#39;ve hit layer2-openelb-7b4fdf6f85-mnw5k root@i-f3fozos0:~# curl 192.168.0.91 You\u0026#39;ve hit layer2-openelb-7b4fdf6f85-px4sm root@i-f3fozos0:~# curl 192.168.0.91 You\u0026#39;ve hit layer2-openelb-7b4fdf6f85-mnw5k If spec:externalTrafficPolicy in the Service YAML configuration is set to Local, only the Pod on the node selected by OpenELB can be reached.\nroot@i-f3fozos0:~# curl 192.168.0.91 You\u0026#39;ve hit layer2-openelb-7b4fdf6f85-mnw5k root@i-f3fozos0:~# curl 192.168.0.91 You\u0026#39;ve hit layer2-openelb-7b4fdf6f85-mnw5k root@i-f3fozos0:~# curl 192.168.0.91 You\u0026#39;ve hit layer2-openelb-7b4fdf6f85-mnw5k ","excerpt":"This document demonstrates how to use OpenELB in Layer 2 mode to expose a Service backed by two …","ref":"/docs/getting-started/usage/use-openelb-in-layer-2-mode/","title":"Use OpenELB in Layer 2 Mode"},{"body":"This document describes how to configure OpenELB in BGP mode for Kubernetes cluster nodes deployed under multiple routers. You can skip this document if all Kubernetes cluster nodes are deployed under the same router.\nNOTE\nThis document applies only to the BGP mode. The Layer 2 mode requires that all Kubernetes cluster nodes be on the same Layer 2 network (under the same router). Network Topology Before Configuration This section explains why you need to perform the configuration. The following figure shows the network topology of a Kubernetes cluster before the configuration.\nIP addresses in the preceding figure are examples only. The topology is described as follows:\n In the Kubernetes cluster, the master and worker 1 nodes are deployed under the leaf 1 BGP router, and the worker 2 node is deployed under the leaf 2 BGP router. OpenELB is only installed under leaf 1 (by default, only one OpenELB replica is installed). A Service backed by two Pods is deployed in the Kubernetes cluster, and is assigned an IP address 172.22.0.2 for external access. Pod 1 and Pod 2 are deployed on worker 1 and worker 2 respectively. OpenELB establishes a BGP connection with leaf 1 and publishes the IP addresses of the master node and worker 1 (192.168.0.3 and 192.168.0.4) to leaf 1 as the next hop destined for the Service IP address 172.22.0.2. Leaf 1 establishes a BGP connection with the spine BGP router and publishes its own IP address 192.168.0.2 to the spine router as the next hop destined for the Service IP address 172.22.0.2. When an external client machine attempts to access the Service, the spine router forwards the Service traffic to leaf 1, and leaf 1 load balances the traffic between the master node and worker 1. Although Pod 2 on worker 2 can also be reached over kube-proxy, router-level load balancing is implemented only between the master node and worker 1 and the Service bandwidth is limited to the bandwidth of the master node and worker 1. To resolve the problem, you need to label the Kubernetes cluster nodes and change the OpenELB Deployment configuration so that OpenELB is installed on nodes under all leaf routers. 
In addition, you need to specify the spec:nodeSelector:matchLabels field in the BgpPeer configuration so that the OpenELB replicas establish BGP connections with the correct BGP routers.\nNetwork Topology After Configuration This section describes the configuration result you need to achieve. The following figure shows the network topology of a Kubernetes cluster after the configuration.\nIP addresses in the preceding figure are examples only. The topology is described as follows:\n After the configuration, OpenELB is installed on nodes under all leaf routers. In addition to what happens before the configuration, the OpenELB replica installed under leaf 2 also establishes a BGP connection with leaf 2 and publishes the worker 2 IP address 192.168.1.2 to leaf 2 as the next hop destined for the Service IP address 172.22.0.2. Leaf 2 establishes a BGP connection with the spine router and publishes its own IP address 192.168.1.1 to the spine router as the next hop destined for the Service IP address 172.22.0.2. When an external client machine attempts to access the Service, the spine router load balances the Service traffic between leaf 1 and leaf 2. Leaf 1 load balances the traffic between the master node and worker 1. Leaf 2 forwards the traffic to worker 2. Therefore, the Service traffic is load balanced among all three Kubernetes cluster nodes, and the Service bandwidth of all three nodes can be utilized. Configuration Procedure Prerequisites You need to prepare a Kubernetes cluster where OpenELB has been installed.\nProcedure NOTE\nThe node names, leaf router names, and namespace in the following steps are examples only. You need to use the actual values in your environment. Log in to the Kubernetes cluster and run the following command to label the Kubernetes cluster nodes where OpenELB is to be installed:\nkubectl label --overwrite nodes master1 worker-p002 lb.kubesphere.io/v1alpha1=openelb NOTE\nOpenELB works properly if it is installed on only one node under each leaf router. In this example, OpenELB will be installed on master1 under leaf1 and worker-p002 under leaf2. However, to ensure high availability in a production environment, you are advised to install OpenELB on at least two nodes under each leaf router. 
Run the following command to scale the number of openelb-manager Pods to 0:\nkubectl scale deployment openelb-manager --replicas=0 -n openelb-system Run the following command to edit the openelb-manager Deployment:\nkubectl edit deployment openelb-manager -n openelb-system In the openelb-manager Deployment YAML configuration, add the following fields under spec:template:spec:\nnodeSelector:kubernetes.io/os:linuxlb.kubesphere.io/v1alpha1:openelb Run the following command to scale the number of openelb-manager Pods to the required number (change the number 2 to the actual value):\nkubectl scale deployment openelb-manager --replicas=2 -n openelb-system Run the following command to check whether OpenELB has been installed on the required nodes.\nkubectl get po -n openelb-system -o wide It should return something like the following.\nNAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES openelb-admission-create-m2p52 0/1 Completed 0 49m 10.233.92.34 worker-p001\t\u0026lt;none\u0026gt; \u0026lt;none\u0026gt; openelb-admission-patch-qmvnq 0/1 Completed 0 49m 10.233.96.15 worker-p002\t\u0026lt;none\u0026gt; \u0026lt;none\u0026gt; openelb-manager-74c5467674-pgtmh 1/1 Running 0 19m 192.168.0.2 master1 \u0026lt;none\u0026gt; \u0026lt;none\u0026gt; openelb-manager-74c5467674-wmh5t 1/1 Running 0 19m 192.168.0.4 worker-p002 \u0026lt;none\u0026gt; \u0026lt;none\u0026gt; Run the following commands to label the Kubernetes cluster nodes so that the OpenELB replicas establish BGP connections with the correct BGP routers.\nkubectl label --overwrite nodes master1 openelb.kubesphere.io/rack=leaf1 kubectl label --overwrite nodes worker-p002 openelb.kubesphere.io/rack=leaf2 When creating BgpPeer objects, configure the spec:nodeSelector:matchLabels field in the BgpPeer YAML configuration for each leaf router. The following YAML configurations specify that the OpenELB replica on master1 communicates with leaf1, and the OpenELB replica on worker-p002 communicates with leaf2.\n# BgpPeer YAML for master1 and leaf1nodeSelector:matchLabels:openelb.kubesphere.io/rack:leaf1# BgpPeer YAML for worker-p002 and leaf2nodeSelector:matchLabels:openelb.kubesphere.io/rack:leaf2 ","excerpt":"This document describes how to configure OpenELB in BGP mode for Kubernetes cluster nodes deployed …","ref":"/docs/getting-started/configuration/configure-openelb-for-multi-router-clusters/","title":"Configure OpenELB for Multi-router Clusters (BGP Mode)"},{"body":"This section describes how to install, configure, and use OpenELB.\n Installation Describes how to install OpenELB on Kubernetes, KubeSphere, and K3s.\nConfiguration Describes how to perform configuration before using OpenELB.\nUsage Describes how to use OpenELB.\n","excerpt":"This section describes how to install, configure, and use OpenELB.\n Installation Describes how to …","ref":"/docs/getting-started/","title":"Getting Started"},{"body":"This document describes how to use kubectl and Helm to install and delete OpenELB in a K3s Kubernetes cluster.\nNOTE\n In a Kubernetes cluster, you only need to install OpenELB once. After the installation is complete, a openelb-manager Deployment that contains a openelb-manager Pod is installed in the cluster. The openelb-manager Pod implements the functionality of OpenELB for the entire Kubernetes cluster. After the installation is complete, you can scale the openelb-manager Deployment and assign multiple OpenELB replicas (openelb-manager Pods) to multiple cluster nodes to ensure high availability. 
For details, see Configure Multiple OpenELB Replicas. Prerequisites You need to prepare a K3s Kubernetes cluster, and ensure that the Kubernetes version is 1.15 or later. OpenELB requires CustomResourceDefinition (CRD) v1, which is only supported by Kubernetes 1.15 or later. OpenELB is designed to be used in bare-metal Kubernetes environments. However, you can also use a cloud-based K3s Kubernetes cluster for learning and testing.\n If you use Helm to install OpenELB, ensure that the Helm version is Helm 3.\n Install OpenELB Using kubectl Log in to the master node of the K3s Kubernetes cluster over SSH and run the following command:\nkubectl apply -f https://raw.githubusercontent.com/openelb/openelb/master/deploy/openelb.yaml Run the following command to edit the openelb-manager Deployment:\nkubectl edit deployment openelb-manager -n openelb-system Change port 443 to a different value (for example, 30443) to avoid a port conflict with K3s:\nspec:template:spec:containers:- args:- --webhook-port=443# Change the port number.ports:- containerPort:443# Change the port number.hostPort:443# Change the port number.NOTE\nBy default, both OpenELB and K3s use port 443. Therefore, you need to change port 443 of OpenELB to avoid the port conflict. For details, see the official K3s document. Run the following command to check whether the status of openelb-manager is READY: 1/1 and STATUS: Running. If yes, OpenELB has been installed successfully.\nkubectl get po -n openelb-system It should return something like the following.\nNAME READY STATUS RESTARTS AGE openelb-admission-create-tjsqm 0/1 Completed 0 41s openelb-admission-patch-n247f 0/1 Completed 0 41s openelb-manager-74c5467674-bx2wg 1/1 Running 0 41s Delete OpenELB Using kubectl To delete OpenELB, log in to the master node of the K3s Kubernetes cluster and run the following command:\nkubectl delete -f https://raw.githubusercontent.com/openelb/openelb/master/deploy/openelb.yaml NOTE\nBefore deleting OpenELB, you must first delete all Services that use OpenELB. Run the following command to check the result. If the openelb-system namespace does not exist, OpenELB has been deleted successfully.\nkubectl get ns Install OpenELB Using Helm Log in to the master node of the K3s Kubernetes cluster over SSH and run the following command to configure the environment variable:\nexport KUBECONFIG=/etc/rancher/k3s/k3s.yaml NOTE\nYou need to configure this environment variable so that Helm can reach the K3s Kubernetes cluster. For details, see the official K3s document. Run the following commands to install OpenELB:\nhelm repo add test https://charts.kubesphere.io/test helm repo update helm install openelb test/openelb Run the following command to edit the openelb-manager Deployment:\nkubectl edit deployment openelb-manager Change port 443 to a different value (for example, 30443) to avoid a port conflict with K3s:\nspec: template: spec: containers: - args: - --webhook-port=443 # Change the port number. ports: - containerPort: 443 # Change the port number. hostPort: 443 # Change the port number. NOTE\nBy default, both OpenELB and K3s use port 443. Therefore, you need to change port 443 of OpenELB to avoid the port conflict. For details, see the official K3s document. Run the following command to check whether the status of openelb-manager is READY: 1/1 and STATUS: Running. 
If yes, OpenELB has been installed successfully.\nkubectl get po -A It should return something like the following.\nNAMESPACE NAME READY STATUS RESTARTS AGE openelb-system openelb-admission-create-m2p52 0/1 Completed 0 32s openelb-system openelb-admission-patch-qmvnq 0/1 Completed 0 31s openelb-system openelb-manager-74c5467674-pgtmh 1/1 Running 0 32s ... ... Delete OpenELB Using Helm To delete OpenELB, run the following command:\nhelm delete openelb NOTE\nBefore deleting OpenELB, you must first delete all Services that use OpenELB. Run the following command to check the result. If the OpenELB application does not exist, OpenELB has been deleted successfully.\nhelm ls ","excerpt":"This document describes how to use kubectl and Helm to install and delete OpenELB in a K3s …","ref":"/docs/getting-started/installation/install-openelb-on-k3s/","title":"Install OpenELB on K3s"},{"body":"This section describes how to use OpenELB in BGP mode and Layer 2 mode.\n Use OpenELB in BGP Mode Describes how to use OpenELB in BGP mode.\nUse OpenELB in Layer 2 Mode Describes how to use OpenELB in Layer 2 mode.\n","excerpt":"This section describes how to use OpenELB in BGP mode and Layer 2 mode.\n Use OpenELB in BGP Mode …","ref":"/docs/getting-started/usage/","title":"Usage"},{"body":"This section describes how to build the OpenELB project for testing and contribute to the OpenELB project.\nBuild the OpenELB Project Describes how to build the OpenELB project for testing.\nContribute to the OpenELB Project Describes how to contribute to the OpenELB project.\n","excerpt":"This section describes how to build the OpenELB project for testing and contribute to the OpenELB …","ref":"/docs/building-and-contributing/","title":"Building and Contributing"},{"body":"This document describes how to configure multiple OpenELB replicas to ensure high availability in a production environment. You can skip this document if OpenELB is used in a test environment. By default, only one OpenELB replica is installed in a Kubernetes cluster.\n If all Kubernetes cluster nodes are deployed under the same router (BGP mode or Layer 2 mode), you are advised to configure at least two OpenELB replicas, which are installed on two Kubernetes cluster nodes respectively. If the Kubernetes cluster nodes are deployed under different leaf routers (BGP mode only), you are advised to configure at least two OpenELB replicas (one replica for one node) under each leaf router. For details, see Configure OpenELB for Multi-router Clusters. Prerequisites You need to prepare a Kubernetes cluster where OpenELB has been installed.\nProcedure NOTE\nThe node names and namespace in the following steps are examples only. You need to use the actual values in your environment. Log in to the Kubernetes cluster and run the following command to label the Kubernetes cluster nodes where OpenELB is to be installed:\nkubectl label --overwrite nodes master1 worker-p002 lb.kubesphere.io/v1alpha1=openelb NOTE\nIn this example, OpenELB will be installed on master1 and worker-p002. 
Run the following command to scale the number of openelb-manager Pods to 0:\nkubectl scale deployment openelb-manager --replicas=0 -n openelb-system Run the following command to edit the openelb-manager Deployment:\nkubectl edit deployment openelb-manager -n openelb-system In the openelb-manager Deployment YAML configuration, add the following fields under spec:template:spec:\nnodeSelector:\n  kubernetes.io/os: linux\n  lb.kubesphere.io/v1alpha1: openelb Run the following command to scale the number of openelb-manager Pods to the required number (change the number 2 to the actual value):\nkubectl scale deployment openelb-manager --replicas=2 -n openelb-system Run the following command to check whether OpenELB has been installed on the required nodes.\nkubectl get po -n openelb-system -o wide It should return something like the following.\nNAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES\nopenelb-admission-create-m2p52 0/1 Completed 0 49m 10.233.92.34 worker-p001 \u0026lt;none\u0026gt; \u0026lt;none\u0026gt;\nopenelb-admission-patch-qmvnq 0/1 Completed 0 49m 10.233.96.15 worker-p002 \u0026lt;none\u0026gt; \u0026lt;none\u0026gt;\nopenelb-manager-74c5467674-pgtmh 1/1 Running 0 19m 192.168.0.2 master1 \u0026lt;none\u0026gt; \u0026lt;none\u0026gt;\nopenelb-manager-74c5467674-wmh5t 1/1 Running 0 19m 192.168.0.4 worker-p002 \u0026lt;none\u0026gt; \u0026lt;none\u0026gt; NOTE\n In Layer 2 mode, OpenELB uses the leader election feature of Kubernetes to ensure that only one replica responds to ARP/NDP requests. In BGP mode, all OpenELB replicas will respond to the BgpPeer configuration and attempt to establish a BGP connection with the peer BGP router by default. If the Kubernetes cluster nodes are deployed under different routers, you need to perform further configuration so that the OpenELB replicas establish BGP connections with the correct BGP routers. For details, see Configure OpenELB for Multi-router Clusters. ","excerpt":"This document describes how to configure multiple OpenELB replicas to ensure high availability in a …","ref":"/docs/getting-started/configuration/configure-multiple-openelb-replicas/","title":"Configure Multiple OpenELB Replicas"},{"body":"This section provides solutions to frequently asked questions.\n What do I do if the openelb-manager Pod is stuck in pending state? ","excerpt":"This section provides solutions to frequently asked questions.\n What do I do if the openelb-manager …","ref":"/docs/faqs/","title":"FAQs"},{"body":"Follow the documentation to learn how to use OpenELB.\n Overview Introduces OpenELB.\nConcepts Describes the basic concepts of OpenELB.\nGetting Started Describes how to install, configure, and use OpenELB.\nBuilding and Contributing Describes how to build the OpenELB project for testing and contribute to the OpenELB project.\nFAQs Provides solutions to frequently asked questions.\n","excerpt":"Follow the documentation to learn how to use OpenELB.\n Overview Introduces OpenELB.\nConcepts …","ref":"/docs/","title":"Documentation"},{"body":"PorterLB makes it possible for users to expose Services in bare-metal Kubernetes environments. Currently, PorterLB supports the BGP mode and the Layer 2 mode, which use BGP and ARP/NDP respectively to expose Services.\nGenerally, the BGP mode is recommended because it allows you to build a high availability system free of failover interruptions and bandwidth bottlenecks. However, BGP may be unavailable in certain systems because of security requirements or because the router does not support BGP. 
In this case, you can use PorterLB in Layer 2 mode to achieve similar functionality.\nThough the Layer 2 mode does not provide the same high availability as the BGP mode, it does implement certain mechanisms to ensure that PorterLB can still function as long as the Kubernetes cluster is not entirely down.\nIn this article, I am going to discuss the high availability mechanisms of PorterLB in Layer 2 mode. The following scenarios will be examined:\n Scenario 1: Only one PorterLB replica is deployed and the node selected by PorterLB is down. Scenario 2: Only one PorterLB replica is deployed and the node that hosts PorterLB is down. Scenario 3: Multiple PorterLB replicas are deployed and one of the nodes that contain PorterLB replicas is down. Scenario 1: The Next Hop Is Down In the Kubernetes cluster, only one PorterLB replica (porter-manager Pod) is deployed on node 3 and an application Pod is deployed on node 2. The application Pod is exposed by using a Service (192.168.0.91). PorterLB maps the IP address of the Service to the MAC address of node 2.\nIf the node (node 2 in this example) selected by PorterLB is down, PorterLB automatically maps the Service IP address to the MAC address of another node to rebuild the connection to the Service.\nTherefore, the network topology after node 2 is down probably looks like the following:\nOne thing you should be aware of is that although PorterLB automatically rebuilds the connection to the Service, there is a short period of failover interruption, which is one of the reasons why the BGP mode better suits scenarios where availability is vital.\nScenario 2: The Only PorterLB Node Is Down So what if the node that hosts the porter-manager Pod is down?\nWell, the porter-manager Pod is deployed under a ReplicaSet in a Deployment. Therefore, if the node that hosts the porter-manager Pod is down, the Kubernetes system automatically re-creates the porter-manager Pod on another node. The network topology changes to the following:\nThough existing Services that use PorterLB are not affected, the functionality of PorterLB is unavailable during the re-creation, which is why you are advised to deploy multiple PorterLB replicas (porter-manager Pods) in the cluster.\nScenario 3: One of Multiple PorterLB Nodes Is Down When multiple PorterLB replicas (porter-manager Pods) are deployed, PorterLB uses the leader election mechanism to ensure that only one replica (the leader) communicates with the router. If the node that hosts the leader is down, another PorterLB replica automatically takes over after a leader re-election. The network topology changes to the following:\nAlthough the functionality of PorterLB is still unavailable during the leader re-election, the downtime is much shorter than that in Scenario 2. Therefore, if you need to use the Layer 2 mode in a production environment, it is highly recommended that you deploy multiple PorterLB replicas to improve the availability.\n","excerpt":"PorterLB makes it possible for users to expose Services in bare-metal Kubernetes environments. …","ref":"/blog/2021/03/30/porterlb-high-availability-mechanisms-in-layer-2-mode/","title":"PorterLB: High Availability Mechanisms in Layer 2 Mode"},{"body":"PorterLB is a load balancer implementation designed for bare-metal Kubernetes clusters. As a sub-project of KubeSphere, PorterLB fits well into the KubeSphere ecosystem. 
You can seamlessly integrate PorterLB as a plugin with KubeSphere to utilize the abundant features of the KubeSphere ecosystem.\nWhen a new version of an application is released, the KubeSphere grayscale release feature allows users to freely distribute traffic between the stable version and the beta version, which both ensures service continuity and allows the beta version to be tested before it is formally rolled out.\nIn this article, I am going to introduce how to use KubeSphere and PorterLB to implement grayscale release for an application in a bare-metal Kubernetes cluster. To make you quickly understand how it works, I am going to directly use demonstration settings without digging too much into the details. You can obtain detailed guidance from the KubeSphere documentation and PorterLB documentation.\nArchitecture Grayscale release\nIn the preceding figure, an application Service backed by Pod v1 (stable version) is deployed in a Kubernetes cluster. After grayscale release is configured on KubeSphere, Pod v2 (beta version) is created and users can determine how much traffic is forwarded to Pod v1 and how much to Pod v2. After Pod v2 is fully tested, Pod v1 can be taken offline and Pod v2 will take over all traffic.\n KubeSphere gateway\nThe application Service is exposed to the outside by using the KubeSphere gateway, which is in effect a Service backed by a Pod that functions as a reverse proxy. An external client uses a path of a domain name to access the application Service. The reverse proxy Pod obtains the mapping between the path and the application Service from a Route object.\n PorterLB\nPorterLB installed in the Kubernetes cluster sends an ARP packet to the router, and tells the router to forward traffic destined for the gateway Service to node 1. If node 1 fails, the traffic will be forwarded to node 2.\n Procedure Prerequisites You need to prepare a Kubernetes cluster, and install KubeSphere and PorterLB in the Kubernetes cluster. On KubeSphere, you need to create a project and an account. The role of the account in the project must be project-admin. Operations Step 1: Set the KubeSphere gateway to use PorterLB and create an Eip object.\n Log in to KubeSphere and go to your project.\n Choose Project Settings \u0026gt; Advanced Settings on the left navigation bar and click Set Gateway on the right.\n Click LoadBalancer, set Application Governance to On, add the following annotations, and click Save. The annotations set the KubeSphere gateway Service to use PorterLB in Layer 2 mode.\nlb.kubesphere.io/v1alpha1: porter\nprotocol.porter.kubesphere.io/v1alpha1: layer2\neip.porter.kubesphere.io/v1alpha2: porter-layer2-eip # Name of the Eip object.\n Move the cursor to the toolbox icon in the lower-right corner and click Kubectl to open the CLI.\n Run the vi porter-layer2-eip.yaml command to create a YAML file for an Eip object and add the following information to the YAML file:\napiVersion: network.kubesphere.io/v1alpha2\nkind: Eip\nmetadata:\n  name: porter-layer2-eip\nspec:\n  # Use an unoccupied address on the same network segment as your K8s cluster.\n  address: 192.168.0.100\n  interface: eth0\n  protocol: layer2\n Run the kubectl apply -f porter-layer2-eip.yaml command to create the Eip object, which functions as an IP address pool for PorterLB.\n Step 2: Create an application.\n Choose Application Workloads on the left navigation bar and click Create Composing Application on the right. Set Application Name to demo-app, Application Version (Optional) to v1, Application Governance to On, and click Next. 
Click Add Service, click Stateless Service, set Name to demo-svc, and click Next. Click Add Container Image, set Image to luksa/kubia, Container Port to 8080, Service Port to 80, click √, and click Next. Click Next on the Mount Volumes tab, click Add on the Advanced Settings tab, and click Next. Click Add Route Rule, click Specify Domain, set HostName to demo.com, Paths to /path | demo-svc | 80, and click OK. Click Create. Step 3: Configure grayscale release.\n Choose Grayscale Release on the left navigation bar, move the cursor to Canary Release, and click Create Job. Set Release Job Name to demo-canary and click Next. Select demo-app from the drop-down list, click Select on the right of demo-svc, and click Next. Set Grayscale Release Version Number to v2 and click Next. Click Create on the Policy Config tab. Step 4: Test grayscale release.\n Choose Grayscale Release on the left navigation bar, click the Job Status tab, and click demo-canary on the right.\n In the Real-time traffic distribution area, move the slider so that 100% of the traffic is sent to v2.\n Log in to a client machine connected to the gateway Service IP address (configured in the Eip object) and add the domain name information to the /etc/hosts file:\n192.168.0.100 demo.com On the client machine, run the curl demo.com/path command multiple times to access the application Service.\nIf grayscale release functions properly, only v2 can be accessed.\n On KubeSphere, adjust the slider again so that 100% of the traffic is sent to v1.\n On the client machine, run the curl demo.com/path command multiple times to access the application Service.\nIf grayscale release functions properly, only v1 can be accessed.\n After v2 is fully tested, you can set v2 to take over all traffic and take the canary release job offline to formally release v2.\n ","excerpt":"PorterLB is a load balancer implementation designed for bare-metal Kubernetes clusters. As a …","ref":"/blog/2021/03/12/kubesphere-porterlb-implement-grayscale-release-on-bare-metal-kubernetes/","title":"KubeSphere+PorterLB: Implement Grayscale Release on Bare-Metal Kubernetes"},{"body":"Applications deployed on Kubernetes are usually exposed by using LoadBalancer Services, which rely heavily on load balancer implementations provided by cloud vendors. For applications deployed in bare-metal Kubernetes clusters, where cloud-based load balancer implementations are unavailable, Kubernetes has yet to provide a viable LoadBalancer solution.\nPorterLB addresses this problem well. As a sub-project of KubeSphere, PorterLB boosts application containerization in bare-metal Kubernetes environments, and complements the KubeSphere ecosystem in the bare-metal field.\nIn this article, I am going to introduce how PorterLB functions in Layer 2 mode and BGP mode, which are provided for users to expose applications in different scenarios, and the advantages of PorterLB compared with other bare-metal load balancer implementations such as MetalLB.\nBGP Mode In BGP mode, PorterLB publishes routes to a BGP router deployed outside the Kubernetes cluster, and the BGP router forwards Service traffic from external clients to the Kubernetes cluster nodes based on the routes obtained from PorterLB. In this process, PorterLB uses Equal-Cost Multi-Path (ECMP) routing to ensure that all Kubernetes nodes or nodes that contain Pods, depending on the user configuration, are used as next hops by the BGP router.\nThe process of using PorterLB in BGP mode in a Kubernetes cluster is simple:\n Install PorterLB. 
Configure an IP address pool by using Eip. Configure BGP properties by using BgpConf and BgpPeer. Create a Service and set the Service to use PorterLB, which is similar to what you do to use a load balancer plugin in a cloud-based Kubernetes cluster. PorterLB can be configured by using the Eip, BgpConf, and BgpPeer CRDs; no other configuration files are required. In addition, as BGP is decentralized, you can use the BGP mode to easily establish a high availability network free of failover interruptions and bandwidth bottlenecks.\nLayer 2 Mode Generally, you are advised to use the BGP mode to expose your Services in a high availability network. However, BGP may be unavailable in certain systems because of security requirements or because the router does not support BGP. In this case, you can use PorterLB in Layer 2 mode to achieve similar functionality.\nIn Layer 2 mode, PorterLB uses ARP packets (for IPv4) or NDP packets (for IPv6) to map the Service IP address to the MAC address of a Kubernetes node. The mechanism of the Layer 2 mode is similar to that of the BGP mode, except that BGP is replaced with ARP/NDP and the router obtains only one route destined for the Service.\nThough the Layer 2 mode does not provide the same high availability as the BGP mode does, it is easier to use (you don\u0026rsquo;t even need to configure BGP properties):\n Install PorterLB. Configure an IP address pool by using Eip. Create a Service and set the Service to use PorterLB. You can obtain detailed guidance on how to install, configure, and use PorterLB from the PorterLB documentation.\nAdvantages of PorterLB There are other load balancer implementations such as MetalLB designed for bare-metal Kubernetes clusters. Compared with them, PorterLB has the following advantages.\nCloud native To manage IP address pools and BGP properties for PorterLB, you only need to use the kubectl apply command provided by Kubernetes to create CRD objects. To obtain status information about IP address pools and BGP peers, you can simply run kubectl get to view the status of the CRD objects. No other configuration files are required. In addition, a PorterLB GUI will be available soon, which will further simplify the usage of PorterLB.\nElegant After PorterLB is installed in a Kubernetes cluster, a porter-manager Deployment that contains a porter-manager Pod is created. The porter-manager Pod implements the functionality of PorterLB for the entire Kubernetes cluster. For high availability, you can scale the porter-manager Deployment and assign multiple PorterLB replicas (porter-manager Pods) to multiple cluster nodes. This simple architecture ensures that PorterLB can be easily managed and integrated with other systems.\nFlexible PorterLB can be used in conventional Kubernetes clusters. As a sub-project of KubeSphere, PorterLB also fits well into the KubeSphere ecosystem. 
You can seamlessly integrate PorterLB as a plugin with KubeSphere to utilize the abundant features of the KubeSphere ecosystem, such as observability and troubleshooting, unified monitoring and logging, centralized storage and networking management, and easy-to-use CI/CD pipelines.\n","excerpt":"Applications deployed on Kubernetes are usually exposed by using LoadBalancer Services, which rely …","ref":"/blog/2021/02/25/porterlb-for-bare-metal-kubernetes-cloud-native-elegant-and-flexible/","title":"PorterLB for Bare-Metal Kubernetes: Cloud Native, Elegant, and Flexible"},{"body":"As we know, backend workloads can be exposed externally using a Service of type \u0026ldquo;LoadBalancer\u0026rdquo; in a Kubernetes cluster. Cloud vendors often provide cloud LB plugins for Kubernetes, which require the cluster to be deployed on a specific IaaS platform. However, many enterprise users deploy Kubernetes clusters on bare metal, especially for production use. For on-premises bare-metal clusters, Kubernetes does not provide a load balancer implementation. PorterLB, an open-source project, is the right solution to this issue.\nThis video focuses on the network technologies that help expose Services and manage EIPs for bare-metal Kubernetes clusters.\n ","excerpt":"As we know, backend workloads can be exposed externally using a Service of type …","ref":"/blog/2019/06/24/an-open-source-load-balancer-for-bare-metal-kubernetes/","title":"An Open-Source Load Balancer for Bare-Metal Kubernetes"},{"body":"On this livestream from KubeCon + CloudNativeCon China, Alex Williams sat down with Xuetao Song, Senior Software Engineer at Beijing Yunify Technology Co., Ltd. and Fang (Flora) Du, QingCloud Solution Architect at Beijing Yunify Technology Co., Ltd. to discuss open source load balancing on bare metal. PorterLB exists as an OSS solution to the issue of load balancing on bare metal in production on Kubernetes, which Song and Du are giving a presentation on at KCCNC + OSS Summit China 2019.\n Related Post Please refer to How a China-Based Bare Metal Service Provider Tackled Kubernetes Load Balancing for details.\n","excerpt":"On this livestream from KubeCon + CloudNativeCon China, Alex Williams sat down with Xuetao …","ref":"/blog/2019/06/24/the-interview-to-porterlb-from-the-new-stack/","title":"The Interview to PorterLB from The New Stack"},{"body":"","excerpt":"","ref":"/index.json","title":""},{"body":"","excerpt":"","ref":"/blog/","title":"Blog"},{"body":"","excerpt":"","ref":"/community/","title":"Community"},{"body":" #td-cover-block-0 { background-image: url(/featured-background_hu394e7663bf611d6303fc6d5151c4565c_47376_960x540_fill_q75_catmullrom_top.jpg); } @media only screen and (min-width: 1200px) { #td-cover-block-0 { background-image: url(/featured-background_hu394e7663bf611d6303fc6d5151c4565c_47376_1920x1080_fill_q75_catmullrom_top.jpg); } } OpenELB: Load Balancer Implementation for Kubernetes in Bare-Metal, Edge, and Virtualization Learn More Download Expose your LoadBalancer Services in bare-metal, edge, and virtualization environments\n\n Why OpenELB? In cloud-based Kubernetes clusters, Services are usually exposed by using load balancers provided by cloud vendors. However, cloud-based load balancers are unavailable in bare-metal environments. OpenELB allows users to create LoadBalancer Services in bare-metal, edge, and virtualization environments for external access, and provides the same user experience as cloud-based load balancers. 
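As a minimal sketch of how this is used (the annotation keys below follow the pattern in the OpenELB usage documentation and may vary between versions, and the selector and ports are illustrative assumptions), a LoadBalancer Service only needs a few annotations for OpenELB to assign it an address from an Eip object such as eip-sample-pool:\nkind: Service\napiVersion: v1\nmetadata:\n  name: test-svc\n  annotations:\n    lb.kubesphere.io/v1alpha1: openelb\n    protocol.openelb.kubesphere.io/v1alpha1: layer2\n    eip.openelb.kubesphere.io/v1alpha2: eip-sample-pool\nspec:\n  selector:\n    app: test-app\n  type: LoadBalancer\n  ports:\n  - name: http\n    port: 80\n    targetPort: 8080\nOnce such a Service is created, OpenELB assigns it an external IP address from the Eip pool and publishes the address over BGP or ARP/NDP, depending on the configured mode. 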
New Feature Request If you have any new ideas or suggestions, please submit a proposal.\n Contributions Welcome! Find something to work on from GitHub Issues. New contributors are always welcome!\nRead more …\n Follow Us on Twitter! Obtain up-to-date information about OpenELB.\nRead more …\n \u0026nbsp; OpenELB is a Cloud Native Computing Foundation sandbox project ","excerpt":"#td-cover-block-0 { background-image: …","ref":"/","title":"OpenELB"},{"body":"","excerpt":"","ref":"/search/","title":"Search Results"},{"body":" #td-cover-block-0 { background-image: url(/about/featured-background_hu394e7663bf611d6303fc6d5151c4565c_47376_960x540_fill_q75_catmullrom_bottom.jpg); } @media only screen and (min-width: 1200px) { #td-cover-block-0 { background-image: url(/about/featured-background_hu394e7663bf611d6303fc6d5151c4565c_47376_1920x1080_fill_q75_catmullrom_bottom.jpg); } } Who Uses OpenELB OpenELB is being used in production and testing. BENLAI.COM: Uses OpenELB to expose Services in bare-metal Kubernetes clusters. Architect Mr. Chen from BENLAI.COM: \"We have been using OpenELB to establish BGP connections with the cluster's border routers in production for several months. OpenELB is also very easy to integrate with KubeSphere seamlessly.\" QingCloud: Integrates OpenELB to provide LoadBalancer Services for customers. ","excerpt":"#td-cover-block-0 { background-image: …","ref":"/about/","title":"Who Uses OpenELB"}]