Limiting available memory for a Docker container using docker-compose

Today I had to reproduce an issue where Kubernetes was killing a PHP job with the message:

Reason:OOMKilling Message:Memory cgroup out of memory: Killed process 217006 (php)

I generated what I believed to be enough data to trigger the issue, ran the job and it wasn’t killed. I checked docker stats:

CONTAINER ID   NAME                                 CPU %     MEM USAGE / LIMIT     MEM %     NET I/O           BLOCK I/O   PIDS
4d6032f0de92   notifications-queue-worker-primary   0.00%     32.97MiB / 15.64GiB   0.21%     2.14kB / 6.43kB   0B / 0B     2

Ah yes, need to limit the memory as in production, so I added the following to the appropriate service in docker-compose.yaml:

deploy:
  resources:
    limits:
      memory: 60M
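
For context, this block nests under the service definition roughly like this (the service name is borrowed from the container name shown in docker stats, and the image is purely illustrative):

services:
  notifications-queue-worker-primary:
    image: php:8.2-cli
    deploy:
      resources:
        limits:
          memory: 60M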

docker stats showed it was applied correctly:

CONTAINER ID   NAME                                 CPU %     MEM USAGE / LIMIT   MEM %     NET I/O           BLOCK I/O   PIDS
40fb4051ba62   notifications-queue-worker-primary   0.16%     32.98MiB / 60MiB    54.96%    2.13kB / 6.43kB   0B / 0B     2

I ran the job again and it wasn’t killed this time either. Looking at the stats I noticed that usage quickly went up to the limit and then hovered between 99.9% and 100%. So the memory was full, but the container wasn’t killed? It looked like swap was interfering. After some research I found out that if you don’t set --memory-swap, Docker allows the container as much swap as the memory limit you set. So my 60 MiB limit was actually 120 MiB including swap. It’s currently impossible to set the swap limit in docker-compose.yaml, so I set memory: 30M and the job got killed as expected:

Killed
Exited with code #137
Terminating ...
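
The same swap default can be reproduced, and worked around, with plain docker run, which does expose --memory-swap. This is a sketch with an illustrative image name; --memory-swap sets the combined memory + swap cap, so making it equal to --memory disables swap on top of the memory limit:

# Swap defaults to the same size as the memory limit, so the effective cap here is 120 MiB:
docker run --memory=60m my-php-worker

# Memory + swap capped at 60 MiB total, i.e. no swap on top of the memory limit:
docker run --memory=60m --memory-swap=60m my-php-worker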

TLDR

To limit available memory for a Docker container using docker-compose, add the following to the appropriate service:

deploy:
  resources:
    limits:
      memory: 60M

Make sure the value is actually half of the limit you want, to account for swap.
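
To double-check what was actually applied, you can inspect the container’s HostConfig; Memory and MemorySwap are reported in bytes, and MemorySwap is the combined memory + swap cap (the container name below is the one from docker stats):

docker inspect --format 'mem={{.HostConfig.Memory}} mem+swap={{.HostConfig.MemorySwap}}' notifications-queue-worker-primary
# With no explicit swap limit, MemorySwap should come out roughly double Memory,
# which is why memory: 30M gives the 60 MiB total I was aiming for.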