Run WordPress over https in Kubernetes with ingress, all behind HAProxy

devtud
6 min read · Jun 11, 2019

What you need to know before starting to read:

  • I’m not a WordPress expert; I use it for its power without writing PHP code
  • I don’t claim that the configuration detailed in this post is production ready, nor that it is the best way to combine the technologies involved
  • My Kubernetes configuration consists of a single node (the master node)
  • The VPS I use runs CentOS 7

What you will learn in this article

  1. Deploy a WordPress website in a bare-metal Kubernetes cluster
  2. Deploy NGINX Ingress Controller in a bare-metal Kubernetes cluster
  3. Install HAProxy 1.9 on a CentOS 7 VPS
  4. Generate an SSL certificate with Let’s Encrypt

Issue I had to solve in order to make it work

  • an infinite redirect loop (too many redirects in the browser) caused by forcing SSL in the WordPress config

As my preferred way of deploying personal projects is Kubernetes on my VPS, I decided to run my new WordPress website the same way.

1. Deploy WordPress in Kubernetes

For a quick deployment of WordPress in Kubernetes I followed the documentation from the official site (Kubernetes v1.14). In short, after following the described steps, you will end up with a MySQL deployment and a WordPress deployment, each with a single pod and a service resource. Note that using NodePort is not recommended, so I configured both services to be ClusterIP instead. Access from outside the cluster is achieved through an ingress resource:

---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  rules:
  - host: my-domain.tld
    http:
      paths:
      - backend:
          serviceName: wordpress
          servicePort: 80
---
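
To make sure the ingress is picked up, apply it and list the resource. This is just a quick sanity check; the file name below is simply how I saved the manifest locally:

# the file name is just how I saved the manifest above
$ kubectl apply -f wordpress-ingress.yaml
$ kubectl get ingress wordpress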

The WordPress service definition is slightly changed:

---
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: wordpress
    tier: frontend
---

NOTE: If you read the deployment definitions for WordPress and MySQL, you will spot two PersistentVolumeClaim resources. On a bare-metal Kubernetes cluster each of these needs a PersistentVolume resource. I won’t cover this aspect in depth here (a minimal example is sketched below), but if you need help please let me know.
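
For reference, this is the kind of PersistentVolume I would pair with such a claim on a single-node cluster. It is only a sketch: the hostPath location and the 20Gi size are assumptions you should adapt to your own claim and disk layout:

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: wordpress-pv
  labels:
    app: wordpress
spec:
  capacity:
    storage: 20Gi
  accessModes:
  - ReadWriteOnce
  # hostPath is fine for a single-node cluster; use a real storage
  # backend for anything multi-node
  hostPath:
    path: /var/data/wordpress
---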

Let’s continue…

Because we will use a reverse proxy that handles the encryption, we have to make WordPress aware of it. More information can be found here: https://wordpress.org/support/article/administration-over-ssl/#using-a-reverse-proxy.

My deployment already had the necessary config in the wp-config.php file. The location of this file on your disk depends on how you defined the persistent volume, but it can be easily found:

$ updatedb
$ locate wp-config.php
/var/data/wordpress/wp-config.php

…and if I open it I can see:

...
// If we're behind a proxy server and using HTTPS, we need to alert Wordpress of that fact
// see also http://codex.wordpress.org/Administration_Over_SSL#Using_a_Reverse_Proxy
if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] === 'https') {
    $_SERVER['HTTPS'] = 'on';
}
...

2. Deploy NGINX Ingress Controller in Kubernetes

In order to access the services within the cluster we need an ingress controller, which, in my case, is the NGINX Ingress Controller (you can quickly install it following their docs). As I stated at the beginning, my Kubernetes cluster is deployed on bare metal, so I downloaded the service definition from https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal and edited it to set my preferred node ports, as follows:

---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
    nodePort: 32080
  - name: https
    port: 443
    targetPort: 443
    protocol: TCP
    nodePort: 32433
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---

After applying this, two node ports will be opened on your machine: 32080 for HTTP traffic and 32433 for HTTPS traffic. All requests sent to 127.0.0.1:32080 on your machine will reach the nginx service on port 80, and the same idea applies between 127.0.0.1:32433 (your machine) and nginx service port 443.
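
A quick sanity check I find useful at this point, run on the VPS itself (the Host header is needed so the ingress can route the request to the WordPress service):

# should return the WordPress response (or a redirect) through the ingress
$ curl -I -H 'Host: my-domain.tld' http://127.0.0.1:32080/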

Please take what’s left of this section as time travel from the future back into the present…

After I had finished everything (including what you will see in sections 3 and 4), I was still wondering why it didn’t work… Well, the most time-consuming issue I had in getting this project done was that, for a while, I forgot that the requests also pass through nginx, not only through HAProxy and WordPress.

Even though HAProxy was configured to properly set the X-Forwarded-Proto header (as you will see in the next section), nginx played a joke on me and reset it without me realizing it.

The next day I had the eureka moment and changed the config map of the NGINX Ingress Controller to pass the header downstream exactly as it receives it from upstream, unmodified. All I had to do was add a data section with the entry use-forwarded-headers: "true" under it:

$ kubectl -n ingress-nginx get configmaps nginx-configuration -o yaml > configmap.yaml
$ vim configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
...
data:
  use-forwarded-headers: "true"

$ kubectl apply -f configmap.yaml

I am not 100% sure that this is the best approach, but I got it working. The header set by HAProxy was passed unchanged to WordPress.
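
One way to check this behaviour without going through HAProxy at all is to fake the header directly against the node port; with use-forwarded-headers enabled, WordPress should answer instead of redirecting in a loop:

# simulate a request that was already terminated as https upstream
$ curl -I -H 'Host: my-domain.tld' -H 'X-Forwarded-Proto: https' http://127.0.0.1:32080/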

3. Install HAProxy 1.9 on CentOS 7

NOTE: An HAProxy version other than 1.9 may also be compatible with the configuration used in this article. However, the only version I used and tested is HAProxy 1.9.8.

For installing HAProxy 1.9.8 you can follow these steps:

# build dependencies (standard CentOS 7 package names)
$ sudo yum install -y gcc make pcre-devel openssl-devel zlib-devel
$ wget https://www.haproxy.org/download/1.9/src/haproxy-1.9.8.tar.gz
$ tar xzvf haproxy-1.9.8.tar.gz
$ cd haproxy-1.9.8
# Note the flag "USE_OPENSSL=1" below. It's crucial for https traffic support
$ make -j 4 TARGET=linux2628 USE_OPENSSL=1 USE_PCRE=1 USE_ZLIB=1
$ sudo make install
$ sudo ln -s /usr/local/sbin/haproxy /usr/sbin/haproxy
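
Building from source does not install a service file, so systemctl has nothing to manage yet. Below is a minimal unit I would drop at /etc/systemd/system/haproxy.service; it is only a sketch that assumes the install paths above and the daemon directive present in the config from the next step (run sudo systemctl daemon-reload after creating it):

[Unit]
Description=HAProxy Load Balancer
After=network.target

[Service]
Type=forking
PIDFile=/run/haproxy.pid
# validate the configuration before starting
ExecStartPre=/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -c -q
# the "daemon" directive in the config makes HAProxy fork into the background
ExecStart=/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid

[Install]
WantedBy=multi-user.target

The config in the next step also expects a haproxy user and group and the /var/lib/haproxy chroot directory, so create those too if your system does not already have them (sudo useradd -r -s /sbin/nologin haproxy; sudo mkdir -p /var/lib/haproxy /etc/haproxy/certs).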

We need to configure HAProxy to forward the incoming traffic to the Kubernetes cluster, more precisely to the NGINX service node ports. Open /etc/haproxy/haproxy.cfg (create the /etc/haproxy directory if it does not exist) and paste the snippet below:

global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats timeout 30s
    user haproxy
    group haproxy
    daemon
    tune.ssl.default-dh-param 2048

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000

frontend http_front
    bind *:80
    bind *:443 ssl crt /etc/haproxy/certs/my-domain.tld.pem
    option http-server-close
    redirect scheme https if !{ ssl_fc }
    http-request set-header X-Forwarded-Proto https if { ssl_fc }
    stats uri /haproxy?stats
    default_backend kubernetes_backend

backend kubernetes_backend
    server web_kubernetes 127.0.0.1:32080

This line is very important…

    http-request set-header X-Forwarded-Proto https if { ssl_fc }

…because it lets WordPress know that the original request came in over HTTPS, even though it sits behind a reverse proxy. Do you remember wp-config.php? Well, here is the match…

...
// If we're behind a proxy server and using HTTPS, we need to alert Wordpress of that fact
// see also http://codex.wordpress.org/Administration_Over_SSL#Using_a_Reverse_Proxy
if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] === 'https') {
    $_SERVER['HTTPS'] = 'on';
}
...

There is another line which is important in our HAProxy config:

    bind *:443 ssl crt /etc/haproxy/certs/my-domain.tld.pem

This line specifies what SSL certificate should be used by HAProxy.

So don’t start HAProxy yet, unless you already have the certificate /etc/haproxy/certs/my-domain.tld.pem in place. If you don’t have it, let’s generate it with Let’s Encrypt.

4. Generate an SSL certificate with Let’s Encrypt

Let’s say that your domain is my-domain.tld.

In order to create a free SSL certificate for your domain with Let’s Encrypt, we need to install certbot and run it. (If you want to understand better what will happen next, please refer to https://poweruphosting.com/blog/secure-certbot-haproxy/. I will be quick about it.)

# on CentOS 7, certbot is distributed through the EPEL repository
$ yum install epel-release
$ yum install certbot
$ certbot certonly

There are multiple methods of creating a certificate and certbot will ask you to pick one. I chose “Spin up a temporary webserver (standalone)”. This spins up a temporary server on port 80, so nothing else should be listening on that port.
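
The same choice can also be made directly from the command line instead of the interactive menu; my-domain.tld is, of course, a placeholder for your own domain:

# stop anything listening on port 80 first, then:
$ certbot certonly --standalone -d my-domain.tld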

After certbot does its job, you can see the generated files here:

$ ls /etc/letsencrypt/live/my-domain.tld
cert.pem chain.pem fullchain.pem privkey.pem README

Create /etc/haproxy/certs :

$ mkdir -p /etc/haproxy/certs

Combine fullchain.pem and privkey.pem and put the result in /etc/haproxy/certs/my-domain.tld.pem:

$ DOMAIN='my-domain.tld' sudo -E bash -c 'cat /etc/letsencrypt/live/$DOMAIN/fullchain.pem /etc/letsencrypt/live/$DOMAIN/privkey.pem > /etc/haproxy/certs/$DOMAIN.pem'
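
Before going further it is worth asking HAProxy to validate the configuration, now that the certificate it references exists:

# -c only checks the configuration, it does not start the proxy
$ sudo haproxy -c -f /etc/haproxy/haproxy.cfg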

The last thing to do is to restrict the permissions on the newly combined file and start HAProxy:

$ sudo chmod -R go-rwx /etc/haproxy/certs
$ sudo systemctl start haproxy

Now you should be able to access your website, with your browser happily showing that the connection is secure.
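
A quick check from the command line, assuming DNS for my-domain.tld already points at the VPS: the plain HTTP request should be redirected by HAProxy and the HTTPS one should be served with the new certificate:

# expect a redirect to https://
$ curl -sI http://my-domain.tld/ | head -n 3
# expect the WordPress response over TLS
$ curl -sI https://my-domain.tld/ | head -n 3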
