
CN001 - Containers and container orchestration

Activity 2: Exploring namespace sharing


What you’ll need

To build and run Linux containers, you’ll need a system with Docker installed and configured to run Linux containers.

Creating containers with shared namespaces

Create a Dockerfile with the following contents:

# Use the filesystem layers and default configuration from the Ubuntu 18.04 base image
FROM ubuntu:18.04

# Install compilers, utility tools, and Python
RUN apt-get update && \
	apt-get install -y --no-install-recommends \
		build-essential \
		curl \
		g++ \
		python3

# Build an example application that creates and accesses shared memory
COPY shared-memory.cpp /tmp/shared-memory.cpp
RUN g++ -std=c++11 -o /bin/shared-memory /tmp/shared-memory.cpp -lrt

Create a file called shared-memory.cpp in the same directory as the Dockerfile, with the following contents:

//POSIX headers
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

//Standard C++ headers
#include <cstring>
#include <cerrno>
#include <iostream>
#include <string>
#include <stdexcept>

//The name of our shared memory segment
#define SHARED_MEMORY_NAME "/shared-memory"

//The size (in bytes) of our shared memory segment
#define SHARED_MEMORY_SIZE 1024

//The file mode of our shared memory segment
#define SHARED_MEMORY_MODE 0666


//Reports a fatal error
void reportError(const std::string& prefix) {
	throw std::runtime_error(prefix + " (" + std::string(strerror(errno)) + ")");
}

//Reports a non-fatal warning
void reportWarning(const std::string& prefix) {
	std::clog << "Warning: " << prefix << " (" << strerror(errno) << ")" << std::endl;
}


int main(int argc, char* argv[])
{
	try
	{
		//Determine if we are the producer process or the consumer process
		bool isProducer = (argc > 1 && std::string(argv[1]) == std::string("--producer"));
		std::cout << "We are the " << (isProducer == true ? "producer" : "consumer") << " process." << std::endl;
		
		//If we are the producer, create and configure the shared memory segment
		int descriptor = -1;
		if (isProducer == true)
		{
			//Attempt to create and open the shared memory segment
			descriptor = shm_open(SHARED_MEMORY_NAME, O_CREAT | O_RDWR, SHARED_MEMORY_MODE);
			if (descriptor == -1) {
				reportError("error creating shared memory");
			}
			
			//Set the size of the shared memory segment
			if (ftruncate(descriptor, SHARED_MEMORY_SIZE) == -1) {
				reportWarning("error resizing shared memory");
			}
		}
		else
		{
			//Attempt to open the shared memory segment created by the producer
			descriptor = shm_open(SHARED_MEMORY_NAME, O_RDONLY, SHARED_MEMORY_MODE);
			if (descriptor == -1) {
				reportError("error opening shared memory");
			}
		}
		
		//If we reach this stage then we've opened the file descriptor for the shared memory segment successfully
		std::cout << "Shared memory segment opened successfully." << std::endl;
		
		//Map the shared memory segment into our address space (read-write for the producer, read-only for the consumer)
		int protection = (isProducer == true ? PROT_READ | PROT_WRITE : PROT_READ);
		void* memory = mmap(nullptr, SHARED_MEMORY_SIZE, protection, MAP_SHARED, descriptor, 0);
		if (memory == MAP_FAILED) {
			reportWarning("error mapping shared memory");
		}
		
		//Pause for user input
		std::cout << "Enter any character to close the shared memory segment and exit..." << std::endl;
		char input = 0;
		std::cin >> input;
		
		//Unmap the shared memory segment from our address space (only if the mapping succeeded)
		if (memory != MAP_FAILED)
		{
			if (munmap(memory, SHARED_MEMORY_SIZE) == -1) {
				reportWarning("error unmapping shared memory");
			}
		}
		
		//Close the file descriptor for the shared memory segment
		close(descriptor);
		
		//If we are the producer, delete the shared memory segment
		if (isProducer == true)
		{
			if (shm_unlink(SHARED_MEMORY_NAME) == -1) {
				reportError("error deleting shared memory");
			}
		}
		
		return 0;
	}
	catch (std::runtime_error& err)
	{
		std::clog << "Error: " << err.what() << std::endl;
		return 1;
	}
}
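
If you’d like to experiment with the program outside of a container first, it can also be compiled directly on a Linux host using the same command as the Dockerfile (this assumes g++ is installed locally; both processes must run on the same host for the shared memory segment to be visible):

# Compile the example locally, using the same command as the Dockerfile
g++ -std=c++11 -o shared-memory shared-memory.cpp -lrt

# In one terminal, create the shared memory segment and wait for input
./shared-memory --producer

# In a second terminal, open the segment created by the producer
./shared-memory --consumer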

Build the container image using the docker build command:

# Build an image using the Dockerfile in the current directory and tag it as "sharing-is-caring:latest"
docker build -t "sharing-is-caring" .
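
Once the build completes, you can optionally confirm that the image is available locally by listing it:

# List the newly-built image (the tag defaults to "latest")
docker images "sharing-is-caring"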

Now start three different containers, each in a separate command prompt or terminal window:

# Start a container with a shareable IPC namespace
docker run --name="container-1" --ipc="shareable" --rm -ti "sharing-is-caring" bash

# Start a container that shares the first container's IPC namespace
docker run --name="container-2" --ipc="container:container-1" --rm -ti "sharing-is-caring" bash

# Start a container that shares the second container's networking namespace
docker run --name="container-3" --network="container:container-2" --rm -ti "sharing-is-caring" bash

These three containers will be used in the subsequent sections for testing each type of shared namespace.
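
Before moving on, you can optionally verify which namespaces are shared by comparing the namespace identifiers that the kernel exposes under /proc. Run the following command inside each of the three containers:

# Print the identifiers of the current IPC and network namespaces
readlink /proc/self/ns/ipc /proc/self/ns/net

The first and second containers should report the same ipc:[...] identifier, and the second and third containers should report the same net:[...] identifier.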

Using the shared IPC namespace

In the first container, run the producer process that creates a POSIX shared memory segment:

shared-memory --producer
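
If everything is working, the producer should report that it opened the segment and then wait for input, along these lines:

We are the producer process.
Shared memory segment opened successfully.
Enter any character to close the shared memory segment and exit...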

Once the shared memory segment has been created, switch to the second container and run the consumer process that accesses the POSIX shared memory segment created by the producer process:

shared-memory --consumer
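
As an aside, POSIX shared memory objects are typically exposed as files under the tmpfs mounted at /dev/shm, so while the producer is running you can also open an extra shell in either of the first two containers (for example with docker exec from the host) and list that directory to see the segment; the file name corresponds to the SHARED_MEMORY_NAME value in shared-memory.cpp:

# Run from the host while the producer is still running; container-2 works equally well
docker exec container-1 ls -l /dev/shm

An entry named shared-memory with a size of 1024 bytes should be listed in both the first and second containers.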

You should see that the consumer process is able to access the shared memory successfully, and is now waiting for user input before it exits. However, if you attempt to run the consumer process in the third container, you will see that it cannot access the shared memory because that container uses a different IPC namespace:

# shared-memory --consumer

We are the consumer process.
Error: error opening shared memory (No such file or directory)

You can close the producer and consumer processes in the first and second containers by entering any input and pressing the enter/return key.

Using the shared network namespace

In the second container, start a Python webserver:

python3 -m http.server 80
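
If you’d like to confirm that the server is listening before switching windows, one option is to issue the request from inside the second container itself by opening an extra shell with docker exec from the host (this is optional, since the next step performs the same request from the third container):

# Run from the host: request the index page from inside container-2
docker exec container-2 curl http://127.0.0.1/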

Once the server has started, switch to the third container and perform an HTTP request to retrieve the index page from the server:

curl http://127.0.0.1/

You should see the HTML for the index page displayed in the third container’s output, and a server log message for the request displayed in the second container’s output. However, if you attempt to perform the same HTTP request in the first container, you will see that it cannot reach the server via the loopback address 127.0.0.1, because that container uses a different network namespace:

# curl http://127.0.0.1/

curl: (7) Failed to connect to 127.0.0.1 port 80: Connection refused

You can stop the Python webserver in the second container by pressing Ctrl-C.
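
When you have finished experimenting, exit the bash shell in each of the three containers. Because the containers were started with the --rm flag, Docker will remove them automatically once they stop:

# Run in each container; the --rm flag removes the container automatically when its shell exits
exit

# Run on the host to list any remaining containers whose names contain "container-"
docker ps --all --filter "name=container-"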