Running pip-tools in docker
There are a number of great CLI tools that help us manage our packages, e.g. pip-tools or pipenv for Python and npm for Node.js. They provide some useful functionality, including the ability to snapshot the exact versions of the packages installed, with versions and hashes of the code, from a high-level specification of the requirements.
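As an illustrative sketch (the package names, versions, and hashes below are placeholders, not real pip-compile output), a high-level requirements.in and the kind of pinned file pip-compile generates from it might look like:

```text
# requirements.in -- the high-level specification
requests

# requirements.txt -- generated by pip-compile --generate-hashes (illustrative)
requests==2.27.1 \
    --hash=sha256:...
certifi==2021.10.8 \
    --hash=sha256:...
```

Only the .in file is edited by hand; the pinned file is regenerated from it.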
Typically, one wants to save the file that stores these versions in a version control system (e.g. git) with the code, so that the runtime environment of the code is reproducible.
When running your application in docker, this becomes non-trivial, because one cannot export files while building a docker image (i.e. there is no inverse of the COPY or ADD instructions). The file has to be extracted after the image is built and run as a container, which can be a little complicated to work into a development or CI/CD workflow.
pip-tools in docker
The following illustrates a minimal example setup for running pip-tools (pip-compile, pip-sync) within docker, while writing the file back to the host system, where it could be committed to git
Dockerfile:
FROM python:3.10-slim-bullseye AS build_requirements
ENV PYTHONUNBUFFERED=1
WORKDIR /opt/requirements
# TODO: remove when fixed https://github.com/jazzband/pip-tools/issues/1558
RUN pip install --upgrade --no-cache-dir \
    'pip<22' \
    setuptools \
    wheel \
    pip-tools
ENV CUSTOM_COMPILE_COMMAND='docker run <>'
ENTRYPOINT ["pip-compile", "--generate-hashes"]

# importantly, the pip-compile environment (OS, python version, etc.) should be the same
# as the container you are trying to build, so the final image extends the build stage
FROM build_requirements
ARG REQUIREMENTS_FILE='requirements.txt'
WORKDIR /opt/work
COPY ${REQUIREMENTS_FILE} .
RUN pip-sync ${REQUIREMENTS_FILE} --pip-args '--no-cache-dir --no-deps'
# make sure to overwrite the entrypoint
ENTRYPOINT []
Makefile:
MAKEFILE_PATH := $(abspath $(lastword $(MAKEFILE_LIST)))
MAKEFILE_DIR := $(patsubst %/,%,$(dir $(MAKEFILE_PATH)))

.PHONY: default
default:
	@echo "This make file won't do anything by default, run a specific target"

.PHONY: docker-pip-compile-build
docker-pip-compile-build:
	docker build \
		--target build_requirements \
		--tag build-requirements \
		.

.PHONY: requirements.txt
requirements.txt: docker-pip-compile-build
	docker run --rm \
		--mount source=$(MAKEFILE_DIR),target=/opt/requirements,type=bind \
		--env CUSTOM_COMPILE_COMMAND='make requirements.txt' \
		build-requirements \
		--upgrade --verbose requirements.in

.PHONY: docker-build
docker-build:
	docker build \
		--tag actual \
		.
With this setup one can write a requirements.in file, create (or upgrade) the requirements.txt file with make requirements.txt, and then build the docker image with it via make docker-build.
Alternative implementation
One could have used docker cp to copy the requirements.txt file out of a container after the image has been created from a dockerfile that COPYs in the requirements.in file and runs pip-compile and pip-sync. This approach, though, would have given less flexibility for manual adjustment of the pip-compile command run.
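A sketch of this alternative might look like the following (the image name, container name, and paths here are assumptions for illustration, not part of the setup above):

```dockerfile
# hypothetical dockerfile for the docker-cp alternative
FROM python:3.10-slim-bullseye
RUN pip install --no-cache-dir pip-tools
WORKDIR /opt/work
# bake the high-level spec into the image and compile it there
COPY requirements.in .
RUN pip-compile --generate-hashes requirements.in && \
    pip-sync requirements.txt --pip-args '--no-cache-dir --no-deps'
# then, on the host, extract the pinned file:
#   docker build --tag alt-build .
#   docker create --name alt-tmp alt-build
#   docker cp alt-tmp:/opt/work/requirements.txt .
#   docker rm alt-tmp
```

Note that the pip-compile flags are now fixed inside the dockerfile, which is exactly the loss of flexibility described above.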
Controlling the flags and config of pip-compile allows for more modular modification of the snapshot requirements, which is a desirable property, as it can help manage the amount of change that needs to be tested, or code that needs to be adapted to breaking changes.