If you are using GitLab and you set up your cloud environment on GCP with GitLab's cluster-management project, you will sooner or later notice that a few things you may need are missing. Here we will focus on the missing pieces for setting up a Global HTTPS Load Balancer instead of a Regional HTTPS Load Balancer, in order to use Cloud Armor on the cloud project.
The cluster-management project from GitLab provides an easy way to quickly set up everything you need in and for your Kubernetes cluster, for example cert-manager or the ingress definition. I encountered a problem while trying to set up Google Cloud Armor in GCP: to use Cloud Armor you need a Global HTTP(S) Load Balancer. If you use the cluster-management project from GitLab, the created ingress automatically serves as a load balancer. Google provides three types of load balancers:
- Internal: For internal traffic
- Regional: For external traffic (backend endpoints live in a single region)
- Global: For external traffic (backend endpoints live in multiple regions)
The problem here is that, for whatever reason, Google only allows the use of Cloud Armor on a Global Load Balancer. The load balancer automatically created by GitLab is a regional one, and you are not able to switch it to a global one. Therefore we need to take matters into our own hands.
First of all, we need to change the values.yaml file, in which the ingress configuration is defined. We need to switch the configuration to the following:
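A minimal sketch of what these values could look like, assuming the GitLab-managed ingress is the ingress-nginx chart (the exact key structure and the exposed ports are assumptions that depend on your chart version):

```yaml
# values.yaml — hypothetical sketch for a GitLab-managed ingress-nginx chart
ingress:
  controller:
    service:
      # was LoadBalancer by default; ClusterIP stops GCP from
      # provisioning the regional load balancer automatically
      type: ClusterIP
      annotations:
        # tells GKE to create standalone Network Endpoint Groups
        # for these service ports
        cloud.google.com/neg: '{"exposed_ports": {"80":{},"443":{}}}'
```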
By applying these settings, we override the type of the ingress service to ClusterIP (previously it was LoadBalancer by default) and we also annotate that ingress with a NEG (Network Endpoint Group). The NEG will serve as the backend of the new Global Load Balancer which we need to create. Let the pipeline run so the changes are deployed to your cluster. The ingress should then be deployed with the type ClusterIP.
First you need to connect to your GCP project and make sure you are on the right context if you have multiple. You can check with kubectl config get-contexts (the current context is marked with an asterisk).
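For completeness, a sketch of connecting to the cluster; the project, zone, and cluster names below are placeholders:

```shell
# fetch kubeconfig credentials for the cluster (names are placeholders)
gcloud container clusters get-credentials my-cluster \
  --zone europe-west3-a \
  --project my-gcp-project

# verify the active context; the current one is marked with "*"
kubectl config get-contexts
```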
Next, you need to set some global variables. They can look like this:
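A sketch of the variables used in the commands below; all values are placeholders you have to adjust:

```shell
# placeholder values — adjust to your project and cluster
PROJECT_ID="my-gcp-project"
ZONE="europe-west3-a"
NEG_NAME="ingress-nginx-443-neg"        # the NEG that GKE created for the ingress
BACKEND_SERVICE="ingress-backend-service"
HEALTH_CHECK="ingress-health-check"
URL_MAP="global-lb"                     # this will show up as the load balancer
STATIC_IP_NAME="global-lb-ip"
```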
Please note that most (maybe all?) of the following configurations can also be done via the GCP frontend. However, I did not test that and stuck to the console way.
If you don't have your own IP address, you need to create a static IP address with the following command:
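A sketch of reserving the address, using the placeholder name from above; note the --global flag, since the frontend of a Global Load Balancer needs a global address:

```shell
# reserve a global static IPv4 address for the load balancer frontend
gcloud compute addresses create "$STATIC_IP_NAME" \
  --ip-version=IPV4 \
  --global
```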
You need to create a firewall rule to allow our new Global Load Balancer access to and communication with our cluster. The IP ranges which we are permitting are used for the health checks and are the IP ranges of the Google Front End (GFE) that connects to the backend. Without them, the health checks for the NEG (the backend of the load balancer) will not succeed. Use the following command to create the needed firewall rule:
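A sketch of the rule; the source ranges are Google's documented health-check/GFE ranges, while the network and the target tag (see the note on network tags at the end of this post) are placeholders:

```shell
# allow Google's health-check / GFE ranges to reach the NEG backends;
# the target tag is the network tag of the cluster's nodes (placeholder)
gcloud compute firewall-rules create allow-lb-health-checks \
  --network=default \
  --source-ranges=130.211.0.0/22,35.191.0.0/16 \
  --target-tags=gke-staging-456b4340-node \
  --allow=tcp:80,tcp:443
```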
We need to create a health check for the upcoming backend service, so it can verify that everything is still fine in our system. To create the health check, run the following command:
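A sketch of the health check, assuming the ingress serves HTTPS on port 443; the name and the interval/threshold values are placeholders:

```shell
# HTTPS health check probing the ingress on its serving port
gcloud compute health-checks create https "$HEALTH_CHECK" \
  --port=443 \
  --check-interval=10s \
  --healthy-threshold=2 \
  --unhealthy-threshold=3
```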
The ingress, which is now exposed as a NEG and no longer a LoadBalancer, will now be attached to the backend service of the upcoming load balancer. Use the following commands:
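A sketch of creating the backend service and attaching the NEG, using the placeholder names from above; the balancing mode and rate are assumptions:

```shell
# global backend service using the health check created above
gcloud compute backend-services create "$BACKEND_SERVICE" \
  --protocol=HTTPS \
  --health-checks="$HEALTH_CHECK" \
  --global

# attach the NEG that GKE created for the ingress service
gcloud compute backend-services add-backend "$BACKEND_SERVICE" \
  --network-endpoint-group="$NEG_NAME" \
  --network-endpoint-group-zone="$ZONE" \
  --balancing-mode=RATE \
  --max-rate-per-endpoint=100 \
  --global
```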
You can create the new load balancer with the following command. But be patient, since this may take a few minutes to complete or to become visible in the GCP frontend.
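A sketch of this step: on GCP, the URL map is what appears as the "load balancer" in the frontend, so creating one with our backend service as default is enough here:

```shell
# the URL map is what shows up as the load balancer in the GCP frontend
gcloud compute url-maps create "$URL_MAP" \
  --default-service="$BACKEND_SERVICE"
```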
I will dive deeper into the certificate topic on the load balancer in the next blog post. Since we use an HTTPS load balancer, we need some kind of certificate. I created a self-managed certificate via OpenSSL and uploaded the content of the files to GCP. However, you can also do this via console:
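A sketch of both steps on the console; the certificate name and subject are placeholders:

```shell
# self-signed certificate — NOT for production use
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout key.pem -out cert.pem \
  -subj "/CN=example.com"

# upload it to GCP as a self-managed SSL certificate
gcloud compute ssl-certificates create my-self-managed-cert \
  --certificate=cert.pem \
  --private-key=key.pem
```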
Please do not use this in production, since the certificate will not be valid and the connection will not be secured. Refer to the next blog post on how to handle certificates on the load balancer.
The load balancer should now be visible. Select it and click on edit. Everything should be configured except for the frontend configuration. Do the following:
- Select Add Frontend IP and Port
- Name it and select HTTPS
- If you reserved a static IP address, use it in the IP address field; otherwise use Ephemeral
- Select the created certificate (you can also use a Google-managed one)
- If you have a static IP address, you can enable the HTTP-to-HTTPS redirect. A new "load balancer" / "mapping" without a backend will be created; I'm pretty sure (not 100%) it's more or less a forwarding rule
- Save the load balancer
- Check that everything is healthy and works
- Congratulations, your load balancer is up and running 🙂
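If you prefer the console over the GCP frontend, the frontend configuration above roughly corresponds to a target HTTPS proxy plus a global forwarding rule. A sketch, reusing the placeholder names from before (I configured this part via the frontend, so treat it as an untested equivalent):

```shell
# HTTPS proxy tying the URL map to the uploaded certificate
gcloud compute target-https-proxies create global-lb-https-proxy \
  --url-map="$URL_MAP" \
  --ssl-certificates=my-self-managed-cert

# global forwarding rule: the actual frontend IP and port
gcloud compute forwarding-rules create global-lb-https-rule \
  --address="$STATIC_IP_NAME" \
  --target-https-proxy=global-lb-https-proxy \
  --global \
  --ports=443
```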
The load balancer / the NEG gets notified when the deployment behind the ingress changes its pods, so it is also possible to scale the ingress itself; the NEG updates itself when something happens. This is configured by default in the deployment of the ingress with --publish-service.
The firewall rule for the connection to the backend only applies to a specific tag, which looks like a node name, for example "gke-staging-456b4340-node". However, this is a network tag, which is present on every Compute Instance of the cluster. Therefore the health checks keep working even when new nodes are added or existing ones change.