Limiting available memory for a Docker container using docker-compose
Today I had to reproduce an issue where Kubernetes was killing a PHP job with the message:
```
Reason:OOMKilling Message:Memory cgroup out of memory: Killed process 217006 (php)
```
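(On the Kubernetes side the kill is also recorded on the pod itself; a quick check, with a hypothetical pod name:)

```sh
# Hypothetical pod name; the container's last termination reason should be OOMKilled.
kubectl get pod my-php-job-pod \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
```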
I generated what I believed to be enough data to trigger the issue, ran the job, and it wasn't killed. I checked `docker stats`:
```
CONTAINER ID   NAME   CPU %   MEM USAGE / LIMIT   MEM %   NET I/O   BLOCK I/O   PIDS
```
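(For a one-shot snapshot rather than the live view, `--no-stream` is handy; with no limit configured, the LIMIT column simply shows the host's total memory:)

```sh
# One-shot snapshot instead of the continuously refreshing view.
docker stats --no-stream
```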
Ah yes, I needed to limit the memory as in production, so I added the following to the appropriate service in `docker-compose.yaml`:

```yaml
deploy:
  resources:
    limits:
      memory: 60M
```
`docker stats` showed it was applied correctly:

```
CONTAINER ID   NAME   CPU %   MEM USAGE / LIMIT   MEM %   NET I/O   BLOCK I/O   PIDS
```
I ran the job again and it wasn't killed this time either. Looking at the stats, I noticed that usage quickly climbed to the limit and hovered between 99.9% and 100%. So the memory was full, but the container wasn't killed? It looked like swap was interfering. After some research I found out that if you don't set `--memory-swap`, Docker allows the container the same amount of swap again on top of the memory limit. So my 60 MiB was actually 120 MiB including swap. It's currently impossible to set a swap limit in `docker-compose.yaml`.
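For comparison, the plain docker CLI can set both limits; a minimal sketch, where the image and workload are placeholders rather than my actual job:

```sh
# Only --memory set: Docker allows the same amount of swap again on top,
# so a 60 MiB limit really means up to ~120 MiB before the kernel OOM-kills the process.
docker run --rm -m 60m some-image some-memory-hungry-command

# --memory-swap set to the same value as --memory: no extra swap,
# the process is killed once it exceeds 60 MiB.
docker run --rm -m 60m --memory-swap 60m some-image some-memory-hungry-command
```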
So I set `memory: 30M` instead, and the job got killed as expected:
```
Killed
```
TLDR
To limit available memory for a Docker container using docker-compose, add the following to the appropriate service:
```yaml
deploy:
  resources:
    limits:
      memory: 30M
```
Make sure it’s actually a half of the target value to account for swap.