Why Ditching NGINX in K8S is a Traefik Choice

Liran Haimovitch | Co-Founder & CTO

A Kubernetes Ingress is a collection of rules that allow inbound connections to reach cluster services. It can be configured to give services externally reachable URLs, load balance traffic, terminate SSL/TLS, offer name-based virtual hosting, and more.
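
As a minimal illustration (the hostname and resource names here are hypothetical), an Ingress that routes a host to a backend service looks like this:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress            # hypothetical name
spec:
  rules:
    - host: app.example.com        # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service   # hypothetical backend Service
                port:
                  number: 80
```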

NGINX ingresses are pretty much the default choice for cloud-agnostic ingress controllers, and they were our choice as well. That is, until we decided to move to Traefik to terminate our HTTP(S) traffic.

As you probably know, replacing ingresses is a tricky and time-consuming process. So what drove us to do that? What was our motivation to replace NGINX with Traefik? Stay tuned, because that’s exactly what I’m going to discuss in this post.

1. Defaults

The NGINX default configuration is not suited for modern REST and WebSocket APIs. After installing NGINX with Helm, our site-reliability engineers had to tweak the configuration further, wasting precious time and resources.

For example, let’s look at configuring NGINX as a proxy. This requires the following additional settings:

proxy-read-timeout: "900"
proxy-body-size: "100m"
proxy-buffering: "on"
proxy-buffer-size: "16k"
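
For context, these keys go into the ingress-nginx controller's ConfigMap; a sketch (the ConfigMap name and namespace below assume a typical Helm install):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # name assumed from a typical Helm release
  namespace: ingress-nginx
data:
  proxy-read-timeout: "900"
  proxy-body-size: "100m"
  proxy-buffering: "on"
  proxy-buffer-size: "16k"
```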

2. Configuration

When you have to configure your ingress for more advanced stuff, doing it with NGINX can become a nightmare. NGINX lacks proper documentation, so you usually end up relying on Google and StackOverflow. Minutes turn to hours as you scroll through obscure and often outdated answers to your issues.

Note: NGINX configuration files, like nginx.conf, use a domain-specific language unique to NGINX, which isn't particularly intuitive.

Traefik, on the other hand, is much easier to use and you can find extensive documentation on its website. Activating simple features with Traefik does not require multiple complex settings as it does with NGINX, and the configuration itself tends to be a lot quicker and more concise as well.

While NGINX settings end up in huge ConfigMaps that are hard to read and manage, this isn't an issue with Traefik, which allows most configuration to be set using Helm values or Kubernetes Ingress annotations.
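
As a sketch of the annotation approach (resource names are hypothetical, and the redirect annotation assumes Traefik 1.7), enabling an HTTPS redirect for a single Ingress is one line:

```yaml
apiVersion: extensions/v1beta1          # Ingress API version common in the Traefik 1.x era
kind: Ingress
metadata:
  name: example-ingress                 # hypothetical name
  annotations:
    kubernetes.io/ingress.class: traefik
    # Redirect plain HTTP to the "https" entry point (Traefik 1.7 annotation)
    traefik.ingress.kubernetes.io/redirect-entry-point: https
spec:
  rules:
    - host: app.example.com             # hypothetical hostname
      http:
        paths:
          - backend:
              serviceName: example-service
              servicePort: 80
```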

Let’s compare, for example, the configurations for turning on gzip compression in NGINX vs. Traefik.

NGINX gzip Configuration

gzip on;
gzip_disable "msie6";
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_min_length 256;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/vnd.ms-fontobject application/x-font-ttf font/opentype image/svg+xml image/x-icon;

Traefik gzip Configuration

compress = true

Here’s another NGINX vs. Traefik example. Configuring the web servers to emit JSON logs requires the following configuration:

NGINX – JSON Logs Config

log-format-escape-json: 'true'
log-format-upstream: '{
    "proxy_protocol_addr": "$proxy_protocol_addr",
    "remote_addr": "$remote_addr", 
    "proxy_add_x_forwarded_for": "$proxy_add_x_forwarded_for",
    "remote_user": "$remote_user", 
    "time_local": "$time_local", 
    "request" : "$request",
    "status": "$status", 
    "body_bytes_sent": "$body_bytes_sent", 
    "http_referer":  "$http_referer",
    "http_user_agent": "$http_user_agent", 
    "request_length" : "$request_length",
    "request_time" : "$request_time", 
    "proxy_upstream_name": "$proxy_upstream_name",
    "upstream_addr": "$upstream_addr",  
    "upstream_response_length": "$upstream_response_length",
    "upstream_response_time": "$upstream_response_time", 
    "upstream_status": "$upstream_status"
  }'

Traefik – JSON Logs Config

format: json
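
For context, that line sits under the accessLog key of Traefik's static configuration; a minimal sketch in Traefik v2 YAML:

```yaml
# traefik.yml (static configuration)
accessLog:
  format: json   # emit one JSON object per request
```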

3. Protocol Support

Traefik has the best HTTP/2 and gRPC support we have tested. Our requirements include TLS termination, header-based routing, high performance, and stability at a scale of over 10k concurrent connections. Traefik has performed much better than NGINX and Istio for this use case.
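
As a sketch of what that support looks like in practice (the service name and port are hypothetical; the annotation assumes Traefik 1.7), telling Traefik to speak cleartext HTTP/2 (h2c) to a gRPC backend is a single Service annotation:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: grpc-backend                                # hypothetical gRPC service
  annotations:
    traefik.ingress.kubernetes.io/protocol: h2c    # use HTTP/2 cleartext toward the pods
spec:
  selector:
    app: grpc-backend
  ports:
    - port: 50051                                   # conventional gRPC port
```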

4. Monitoring

The importance of monitoring your ingresses cannot be overstressed. They are the face of your application as seen by the world, and they are the main, and possibly only, place where you can discern your app’s health.

The free open-source NGINX version does not support proper monitoring, and this is a huge disadvantage. To be fair, NGINX Plus offers much better monitoring features. Its price tag, however, simply could not be justified by our needs.
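
Traefik, by contrast, ships with built-in metrics support. As a sketch, exposing a Prometheus endpoint in Traefik v2 takes only a few lines of static configuration (the entry point name and port here are illustrative choices):

```yaml
# traefik.yml (static configuration)
entryPoints:
  metrics:
    address: ":8082"        # port chosen for illustration
metrics:
  prometheus:
    entryPoint: metrics     # serve /metrics on the entry point above
```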

NGINX Ingress vs. Traefik in Summary

People are creatures of habit, and as it happens, the startups we create inherit that quality from us as well. As a startup, you often find yourself setting up your infrastructure with the good old tools you’ve been using in a former life. However, it’s important to question your choices and see if better options are available.

We arrived at the conclusion that NGINX hasn’t aged well: it couldn’t meet our needs for monitoring and observability, protocol support, and ease of use. We decided that putting time and effort into moving to Traefik would be worth it in the long run, and so we did it. If you reach a similar conclusion, making this move should be a worthwhile investment for you as well.
