How to expose a lower port in Kubernetes using a LoadBalancer?

Libu Jacob
5 min read · Nov 8, 2020

In this guide, I’m exposing a MySQL service on its default port, 3306, from a local Kubernetes cluster (a few VMs spawned in OpenStack with k8s deployed on top).

There are multiple ways to expose an application port in Kubernetes, such as NodePort, Ingress, and Load-Balancer. Of these, NodePort is the simplest. Adding a NodePort service like the one below opens up the MySQL service on port 30306.

apiVersion: v1
kind: Service
metadata:
  name: mysql-np
  namespace: default
spec:
  ports:
  - nodePort: 30306 # change this according to your use case
    port: 3306
    protocol: TCP
    targetPort: 3306
  selector:
    app: mysql
  type: NodePort
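With that Service in place, clients reach MySQL through any node’s IP on the high node port rather than on 3306. For example (the IP below is just a placeholder for one of your node IPs):

$ mysql -h 172.16.104.48 -P 30306 -uroot -p   # note the non-standard port 30306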

But the trouble with NodePort is that, by default, Kubernetes will not allow you to select a node port lower than 30000. The default node-port range is 30000 to 32767. The range can be changed by modifying the kube-apiserver configuration, but that restarts the API server and may disturb your environment; a sketch of that change follows.
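For reference, on a kubeadm-based cluster the range is controlled by the kube-apiserver flag --service-node-port-range. The snippet below is only a sketch of what that change looks like (your setup may manage the apiserver differently); editing the static-pod manifest makes the kubelet restart the kube-apiserver.

# /etc/kubernetes/manifests/kube-apiserver.yaml on the control-plane node
spec:
  containers:
  - command:
    - kube-apiserver
    - --service-node-port-range=1024-32767   # example widened range; default is 30000-32767
    # keep all the other existing flags unchanged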

An Ingress controller is the next option, but it works only with HTTP(S), not with other protocols. Ingress routes requests based on HTTP fields (typically the Host header and the URL path). Hence, for a protocol such as MySQL or DNS, Ingress cannot provide the routing and so cannot be used; the small example below illustrates why.
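For contrast, this is roughly what HTTP routing with an Ingress looks like; the routing decision hinges on the Host header and URL path, for which a raw TCP protocol like MySQL has no equivalent. The hostname and backend service here are made up for illustration.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress        # illustrative only
  namespace: default
spec:
  rules:
  - host: app.example.com      # matched against the HTTP Host header
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-http-app  # hypothetical HTTP service
            port:
              number: 80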

Now we are left with only one option, a Load-Balancer. By using a load-balancer in the Kubernetes environment, you can expose lower ports for any kind of application. But this also comes with a problem: a local cluster does not ship with a load-balancer implementation by default. Managed Kubernetes clusters from public clouds come with load-balancer integration, so if you are using a cluster from a public cloud you can directly use the load-balancer provided there; you just need to pay a little extra.

In the sections below we will see how I deployed and configured a load-balancer in the local Kubernetes cluster.

Which load-balancer to use?

Very few open-source load-balancers are available. One of them is MetalLB, which I’m using in my deployment. MetalLB is aimed mainly at bare-metal Kubernetes deployments, but I’m using it in my local Kubernetes cluster spawned in a virtual environment.

Which IP to use?

A load-balancer needs an IP to provide the service. Ideally we would give MetalLB an unused pool of IPs and let it pick one from that pool, but making such an IP reachable inside an OpenStack cloud is a little complicated. Hence, in this example I have used the node IPs as the load-balancer IPs. If you use a non-OpenStack environment, you can provide an unused IP range; in a production scenario you can instead fix the OpenStack ARP-spoofing (port-security) issue and then use a separate pool.

Steps to deploy the MetalLB load-balancer

  1. First, we need to create a new namespace for MetalLB. We can use the manifest provided by MetalLB directly to deploy it.

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/namespace.yaml

$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/namespace.yaml
namespace/metallb-system created
$
$ kubectl get ns
NAME              STATUS   AGE
default           Active   47m
kube-node-lease   Active   47m
kube-public       Active   47m
kube-system       Active   47m
metallb-system    Active   8s
$

Note: you can change the version according to your use case. In this example, I’m using v0.9.5, the latest version at the time of writing.

2. Next, we deploy MetalLB itself.

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/metallb.yaml

$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/metallb.yaml
podsecuritypolicy.policy/controller created
podsecuritypolicy.policy/speaker created
serviceaccount/controller created
serviceaccount/speaker created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
role.rbac.authorization.k8s.io/config-watcher created
role.rbac.authorization.k8s.io/pod-lister created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/config-watcher created
rolebinding.rbac.authorization.k8s.io/pod-lister created
daemonset.apps/speaker created
deployment.apps/controller created
$

Note: use the same MetalLB version here as in the previous step.

3. After deploying MetalLB, we need to create a random secret that the MetalLB speakers use to secure their communication with each other.

kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"

$ kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
secret/memberlist created
$

We have now completed the deployment of the MetalLB load-balancer in our local Kubernetes cluster.
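Before moving on, it is worth checking that the MetalLB controller and the per-node speaker pods are running. The output below is only illustrative (pod-name hashes and ages will differ in your cluster); with one master and one worker you should see two speaker pods.

$ kubectl get pods -n metallb-system
NAME                          READY   STATUS    RESTARTS   AGE
controller-65db86ddc6-abcde   1/1     Running   0          1m
speaker-11111                 1/1     Running   0          1m
speaker-22222                 1/1     Running   0          1m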

Configure Load-Balancer IP

The IP range or list is provided to MetalLB through a ConfigMap.

I’m using a two node kubernetes cluster with one master and one worker node for this example.

master:   172.16.104.48
worker-1: 172.16.104.47
$ kubectl get nodes -o wide
NAME       STATUS   ROLES    AGE   VERSION   INTERNAL-IP
master     Ready    master   94m   v1.19.3   172.16.104.48
worker-1   Ready    <none>   92m   v1.19.3   172.16.104.47
$

Below is an example configuration based on my cluster’s node IPs.

$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: ip-range-1
      protocol: layer2
      addresses:
      - 172.16.104.48/32 # update this ip same as your node IP
      - 172.16.104.47/32 # update this ip same as your node IP
EOF
$

How do we provide the load-balancer IPs? They are provided as a list of IP ranges. A range is defined by the CIDR prefix length (subnet mask) specified immediately after the IP.

In the above example, I have provided only two IPs. You might have noticed the /32 after each IP; this suffix specifies the range. In CIDR notation, /32 corresponds to the subnet mask 255.255.255.255, which represents a single IP. So in this example I have provided two individual IPs as load-balancer IPs.
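For quick reference, a few common prefix lengths and how many addresses they represent:

/32 -> 255.255.255.255 -> 1 address (a single IP)
/30 -> 255.255.255.252 -> 4 addresses
/28 -> 255.255.255.240 -> 16 addresses
/24 -> 255.255.255.0   -> 256 addresses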

If you have a range of IP addresses instead of individual IPs, you can represent it like below.

data:
  config: |
    address-pools:
    - name: ip-range-1
      protocol: layer2
      addresses:
      - 138.18.31.163-138.18.31.172

Note: this range is for illustration purposes only.

You can configure a single IP, several individual IPs, or whole ranges in the ConfigMap, as shown in the sketch below.
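If I read the MetalLB configuration format correctly, the two notations can even be mixed inside a single pool; the addresses below are made up for illustration.

addresses:
- 172.16.104.48/32            # a single IP in CIDR notation
- 138.18.31.163-138.18.31.172 # a contiguous range
- 192.168.50.0/28             # a /28 block of 16 addresses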

Using the Load-Balancer in an application

Now we will see how to use this newly deployed load-balancer in our application. The YAML below exposes MySQL port 3306 on one of the load-balancer IPs.

apiVersion: v1
kind: Service
metadata:
  name: mysql-lb
  namespace: default
spec:
  ports:
  - port: 3306
    protocol: TCP
    targetPort: 3306
  selector:
    app: mysql
  type: LoadBalancer

Once it is deployed, we can use the below command to check the status of the service.

$ kubectl get svc
NAME       TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)          AGE
mysql-lb   LoadBalancer   10.105.37.7   172.16.104.48   3306:30663/TCP   2s

In the EXTERNAL-IP column you will find the IP over which the service is exposed. You can verify it by connecting to the load-balancer IP.

$ mysql -h172.16.104.48 -uroot -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 8.0.22 MySQL Community Server - GPL
Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>

Done! You have now exposed an application on a lower port number.

There are a few more options available in MetalLB, such as sharing the same IP between multiple applications on different ports, requesting a specific IP, and so on. You can find the details in the MetalLB documentation (see the references below); a short sketch of two of these options follows.
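As a rough sketch of those two options: a specific IP from the pool can be requested through spec.loadBalancerIP, and services can share one IP using MetalLB’s allow-shared-ip annotation, as long as their ports don’t clash. This is based on the v0.9 documentation; verify the exact annotation key against the MetalLB docs for your version.

apiVersion: v1
kind: Service
metadata:
  name: mysql-lb
  namespace: default
  annotations:
    metallb.universe.tf/allow-shared-ip: "mysql-share"  # services with the same key may share an IP
spec:
  loadBalancerIP: 172.16.104.47   # request a specific IP from the configured pool
  ports:
  - port: 3306
    protocol: TCP
    targetPort: 3306
  selector:
    app: mysql
  type: LoadBalancer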

References:

  1. https://metallb.universe.tf/
  2. https://github.com/metallb/metallb
  3. https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing


Libu Jacob

Cloud/Networking Architect and Developer working in the 5G/Telco area. Interested in Kubernetes, MEC, Cloud, Networking, Big Data, etc.