Activity 5: Running GPU-accelerated Linux containers with the NVIDIA Container Toolkit
- What you’ll need
- Running containers using the NVIDIA Container Toolkit
- Displaying graphical output on the host system
What you’ll need
To build and run GPU-accelerated Linux containers, you’ll need the following system configuration:
- A 64-bit version of one of Docker’s supported Linux distributions (CentOS 7+, Debian 7.7+, Fedora 26+, Ubuntu 14.04+)
- Docker Community Edition (CE) and the NVIDIA Container Toolkit installed
- Non-root access enabled, so commands don’t need to be prefixed with sudo
- An X11-based graphical session
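Before starting, you can sanity-check these prerequisites from a terminal. The sketch below simply reports whether each relevant piece is present rather than failing; the tool names and the DISPLAY check are the only assumptions it makes:

```shell
# Report whether each prerequisite is available; prints found/missing
# rather than erroring, so it is safe to run on any machine.
for tool in docker nvidia-smi; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing"
  fi
done

# An X11 graphical session normally sets the DISPLAY variable (e.g. ":0")
if [ -n "$DISPLAY" ]; then
  echo "DISPLAY: set ($DISPLAY)"
else
  echo "DISPLAY: not set"
fi
```

If any line reports `missing` or `not set`, install or configure that component before continuing.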
Running containers using the NVIDIA Container Toolkit
Pull the NVIDIA base image for OpenGL (there are also CUDA and CUDA+OpenGL base images available):
# Pull the OpenGL runtime base image
docker pull "nvidia/opengl:1.0-glvnd-runtime-ubuntu18.04"
The base image contains the runtime libraries required to run OpenGL applications, but not the NVIDIA drivers or command-line tools. If you start a container with the image using the default Docker options then the container will not be able to see any GPU devices from the host:
# Start a container using the default runtime options
docker run --rm -ti "nvidia/opengl:1.0-glvnd-runtime-ubuntu18.04" bash
You can confirm the lack of GPU access by attempting to run the nvidia-smi command inside the container:
# nvidia-smi
bash: nvidia-smi: command not found
Stop the container and start a new one, this time passing the flags to activate the hooks for the NVIDIA Container Toolkit:
# Start a container using the NVIDIA Container Toolkit
docker run --rm -ti --gpus=all "nvidia/opengl:1.0-glvnd-runtime-ubuntu18.04" bash
If you run the nvidia-smi command inside the new container, you should see it successfully print the details of any NVIDIA GPUs that the host system has access to. This works because the NVIDIA Container Toolkit injects the NVIDIA drivers and command-line tools into the container when it starts, and also exposes the GPU devices and driver capabilities that the container image is configured to support.
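The `--gpus=all` flag exposes every GPU on the host, but Docker’s `--gpus` flag (Docker 19.03+) also accepts specific device selections. The sketch below illustrates the common variants using the image tag from above; it is guarded so it completes even on a machine without Docker, an NVIDIA GPU, or the image pulled locally:

```shell
# Sketch of --gpus flag variants (Docker 19.03+); the commands only
# succeed on a host with the NVIDIA Container Toolkit installed and the
# image already pulled, so everything else is skipped gracefully.
IMAGE="nvidia/opengl:1.0-glvnd-runtime-ubuntu18.04"

if command -v docker >/dev/null 2>&1 && docker image inspect "$IMAGE" >/dev/null 2>&1; then
  # Expose all GPUs (equivalent to the --gpus=all used above)
  docker run --rm --gpus all "$IMAGE" nvidia-smi -L || true
  # Expose only the first GPU by index (note the quoting required by the shell)
  docker run --rm --gpus '"device=0"' "$IMAGE" nvidia-smi -L || true
  STATUS=ran
else
  echo "docker or image unavailable: skipping container commands"
  STATUS=skipped
fi
echo "gpu-flag sketch: $STATUS"
```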
Displaying graphical output on the host system
Create a file called Dockerfile with the following contents:
FROM nvidia/opengl:1.0-glvnd-runtime-ubuntu18.04

# Install the glxgears demo application
RUN apt-get update && apt-get install -y --no-install-recommends mesa-utils && \
	rm -rf /var/lib/apt/lists/*

# Create a non-root user so we can run X11 applications without needing to authenticate with Xauthority
RUN useradd --create-home --home /home/nonroot --shell /bin/bash --uid 1000 nonroot && \
	usermod -a -G audio,video nonroot

USER nonroot
Run the docker build command to build a container image from the Dockerfile:
docker build -t "opengl-test" .
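If you want to confirm the build succeeded, you can list the image by the tag used above (`opengl-test`); the check below skips gracefully when Docker is unavailable:

```shell
# Verify the freshly built image exists; skip when docker is unavailable.
if command -v docker >/dev/null 2>&1; then
  docker image ls "opengl-test"
  BUILD_CHECK=ran
else
  echo "docker not found: skipping image listing"
  BUILD_CHECK=skipped
fi
echo "build check: $BUILD_CHECK"
```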
It is important to note that Docker uses the default runtime options when building container images, so the RUN directives in the Dockerfile cannot rely on GPU access. Although older versions of the NVIDIA Container Toolkit (known at the time as “NVIDIA Docker”) made it possible to enable GPU access at build time, doing so could render the resulting container images non-portable. For this reason, it was strongly recommended to use the default runtime when building container images to ensure maximum portability, and this option is not present in newer versions of the NVIDIA Container Toolkit.
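For context, the legacy build-time GPU behaviour was typically achieved by registering nvidia as Docker’s default runtime in /etc/docker/daemon.json. As noted above, this ties the resulting images to NVIDIA hosts, so the sketch below is shown only for illustration and writes an example to a temporary path rather than the real config file:

```shell
# Illustrative only: making nvidia the default runtime caused docker build
# to run with GPU access, at the cost of image portability. Written to a
# temporary example file, NOT to /etc/docker/daemon.json.
cat > /tmp/daemon.json.example <<'EOF'
{
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  },
  "default-runtime": "nvidia"
}
EOF
echo "wrote example config to /tmp/daemon.json.example"
```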
Once the image has been built, start a container with the flags for bind-mounting the host system’s X11 socket and propagating the value of the DISPLAY environment variable from the host:
# Start a container and bind-mount the X11 socket from the host system
docker run --rm -ti --gpus=all -v /tmp/.X11-unix:/tmp/.X11-unix:rw -e DISPLAY "opengl-test" bash
This will allow X11-based graphical applications running inside the container to display their windows on the host system’s display. Once the container has started, run the glxgears demo application:
glxgears
If everything is working correctly, you should see a window appear on the host system and display three spinning gears rendered with OpenGL.
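If no window appears, the X server may be rejecting connections from the container despite the non-root user created in the Dockerfile. A common (if permissive) workaround is to allow local connections with xhost; the sketch below only attempts this when an X session is actually available:

```shell
# Allow local (non-network) clients to connect to the X server. This is a
# broad grant, so prefer it only for local experimentation. Skips when
# xhost is unavailable or no X session is running.
if command -v xhost >/dev/null 2>&1 && [ -n "$DISPLAY" ]; then
  xhost +local: || true
  X11_CHECK=attempted
else
  echo "xhost unavailable or DISPLAY unset: skipping"
  X11_CHECK=skipped
fi
echo "x11 access check: $X11_CHECK"
```

After granting access, re-run the container and glxgears command from the previous step.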