In this post I describe a problem I had running IdentityServer 4 behind an Nginx reverse proxy. In my case, I was running Nginx as an ingress controller for a Kubernetes cluster, but the issue isn't actually specific to Kubernetes or IdentityServer - it's an Nginx configuration issue.
Initially, the Nginx ingress controller appeared to be configured correctly. I could view the IdentityServer home page, and could click login, but when I was redirected to the authorize endpoint (as part of the standard IdentityServer flow), I would get a `502 Bad Gateway` error and a blank page.
Looking through the logs, IdentityServer showed no errors - as far as it was concerned there were no problems with the authorize request. However, looking through the Nginx logs revealed this gem (formatted slightly for legibility):
```
2018/02/05 04:55:21 [error] 193#193: *25 upstream sent too big header while reading response header from upstream, client: 192.168.1.121, server: example.com, request: "GET /idsrv/connect/authorize/callback?state=14379610753351226&nonce=9227284121831921&client_id=test.client&redirect_uri=https%3A%2F%2Fexample.com%2Fclient%2F%23%2Fcallback%3F&response_type=id_token%20token&scope=profile%20openid%20email&acr_values=tenant%3Atenant1 HTTP/1.1", upstream: "http://10.32.0.9:80/idsrv/connect/authorize/callback?state=14379610753351226&nonce=9227284121831921&client_id=test.client&redirect_uri=https%3A%2F%2Fexample.com%2F.client%2F%23%
```
Apparently, this is a common problem with Nginx, and the error means essentially exactly what it says. Nginx sometimes chokes on responses with large headers, because its default buffer size is smaller than that of some other web servers. When it receives a response with large headers, as was the case for my IdentityServer OpenID Connect callback, it gives up and sends the client a `502 Bad Gateway` error.
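If you want to confirm that oversized headers are the culprit, you can hit the upstream service directly and measure the response headers. This is just a diagnostic sketch - the upstream address is taken from the log above, and your endpoint and query string will differ:

```bash
# Dump only the response headers (discard the body), then count the bytes.
# Nginx's default proxy buffer size is one memory page (4k or 8k depending
# on the platform), so a header total approaching that will trigger a 502.
curl -s -D - -o /dev/null "http://10.32.0.9/idsrv/connect/authorize/callback" | wc -c
```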
The solution is simply to increase Nginx's buffer size. If you're running Nginx on bare metal you could do this by setting the buffer size in the config file, something like:
```nginx
proxy_buffers 8 16k;    # Buffer pool = 8 buffers of 16k
proxy_buffer_size 16k;  # 16k of buffers from pool used for headers
```
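For context, here's roughly where those directives sit in a typical reverse-proxy configuration. This is a minimal sketch, with the server name and upstream address borrowed from the log above rather than taken from a real config:

```nginx
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://10.32.0.9:80;

        # Allow larger response headers than the Nginx defaults
        proxy_buffers 8 16k;    # pool of 8 x 16k buffers per connection
        proxy_buffer_size 16k;  # buffer used to read the response headers
    }
}
```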
However, in this case, I was working with Nginx as an ingress controller for a Kubernetes cluster. The question was: how do you configure Nginx when it's running in a container?
Luckily, the Nginx ingress controller is designed for exactly this situation. It uses a `ConfigMap` of values that are mapped to internal Nginx configuration values. By changing the `ConfigMap`, you can configure the underlying Nginx instance.
The Nginx ingress controller only supports changing a subset of options via the `ConfigMap` approach, but luckily `proxy-buffer-size` is one such option! There are two things you need to do to customise the ingress:
- Deploy the `ConfigMap` containing your customisations
- Point the Nginx ingress controller at your `ConfigMap`, by adding an argument to its `Deployment`
I'm just going to show the template changes in this post, assuming you already have a cluster with the Nginx ingress controller deployed.
A `ConfigMap` is one of the simplest resources in Kubernetes; it's essentially just a collection of key-value pairs. The following manifest creates a `ConfigMap` called `nginx-configuration` and sets the `proxy-buffer-size` key to `"16k"`, to solve the `502` errors I was seeing previously.
```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: kube-system
  labels:
    k8s-app: nginx-ingress-controller
data:
  proxy-buffer-size: "16k"
```
If you save this to a file called nginx-configuration.yaml, then you can apply it to your cluster using:
```bash
kubectl apply -f nginx-configuration.yaml
```
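To confirm the `ConfigMap` was created with the expected values, you can inspect it with `kubectl` (assuming the `kube-system` namespace from the manifest above):

```bash
# Show the ConfigMap's data section, including the proxy-buffer-size value
kubectl describe configmap nginx-configuration --namespace kube-system
```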
However, you can't just apply the `ConfigMap` and have the ingress controller pick it up automatically - you have to update your Nginx `Deployment` so it knows which `ConfigMap` to use. In order for the ingress controller to use your `ConfigMap`, you must pass the `ConfigMap` name (`nginx-configuration`) as an argument in your deployment. For example:
```yaml
args:
  - /nginx-ingress-controller
  - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
  - --configmap=$(POD_NAMESPACE)/nginx-configuration
```
Without this argument, the ingress controller will ignore your `ConfigMap`. Note that the `--configmap` argument is prefixed with `$(POD_NAMESPACE)`, so the `ConfigMap` must live in the same namespace as the ingress controller pod itself - if your controller runs in `ingress-nginx` (as in the manifest below) rather than `kube-system`, adjust the `ConfigMap`'s namespace to match. The complete deployment manifest will look something like the following (adapted from the Nginx ingress controller repo):
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      initContainers:
      - command:
        - sh
        - -c
        - sysctl -w net.core.somaxconn=32768; sysctl -w net.ipv4.ip_local_port_range="1024 65535"
        image: alpine:3.6
        imagePullPolicy: IfNotPresent
        name: sysctl
        securityContext:
          privileged: true
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.10.2
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
```
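After applying the updated `Deployment`, it's worth verifying that the rollout succeeded and that the rendered Nginx configuration actually picked up the new buffer size. A sketch of that check is below - the manifest file name is hypothetical, and I'm assuming the controller renders its config to `/etc/nginx/nginx.conf`:

```bash
# Apply the updated deployment (file name is hypothetical)
kubectl apply -f nginx-ingress-deployment.yaml

# Wait for the new controller pod to roll out
kubectl rollout status deployment/nginx-ingress-controller --namespace ingress-nginx

# Check the rendered Nginx config inside the controller pod
POD=$(kubectl get pods --namespace ingress-nginx -l app=ingress-nginx \
  -o jsonpath='{.items[0].metadata.name}')
kubectl exec --namespace ingress-nginx "$POD" -- grep proxy_buffer_size /etc/nginx/nginx.conf
```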
While deploying a Kubernetes cluster locally, the Nginx ingress controller was returning `502 Bad Gateway` errors for some requests. This was due to the response headers being too large for Nginx to handle. Increasing the `proxy_buffer_size` configuration parameter solved the problem. To achieve this with the ingress controller, you must provide a `ConfigMap` and point your ingress controller to it by passing an additional `arg` in its `Deployment`.