Activity 6: Deploying a simple Kubernetes service
Contents
- Acknowledgements
- What you’ll need
- Deploying and exposing a simple service
- Triggering service self-healing
- Scaling the number of replicas
- Performing a rolling upgrade
- Reverting an upgrade
Acknowledgements
Several sections of this activity are based on materials from the Kubernetes documentation.
What you’ll need
To deploy services to a local Kubernetes cluster, you’ll need one of the following system configurations:
- A 64-bit version of one of Docker’s supported Linux distributions (CentOS 7+, Debian 7.7+, Fedora 26+, Ubuntu 14.04+) with Docker Community Edition (CE) installed, non-root access enabled so commands don’t need to be prefixed with sudo, and Minikube installed and running
- A 64-bit version of Windows 10 Pro/Enterprise/Education (Version 1607 or newer) with Docker Desktop for Windows installed, configured to use Linux containers, and Kubernetes enabled in Docker Desktop
- A 64-bit version of Windows 10 Home (Version 2004 or newer) with the Windows Subsystem for Linux (WSL2) and Docker Desktop for Windows installed, Docker Desktop configured to use the WSL2 backend for Linux containers, and Kubernetes enabled in Docker Desktop
- macOS 10.14.0 Mojave or newer with Docker Desktop for Mac installed and Kubernetes enabled in Docker Desktop
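Whichever of these configurations you use, it is worth confirming that kubectl can reach your local cluster before starting. A quick sanity check (standard kubectl commands; the exact output will depend on your setup) is:
kubectl cluster-info
kubectl get nodes
Both commands should complete without errors, and the node list should show a single node in the Ready state.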
Deploying and exposing a simple service
Create a file called deployment.yml with the following contents:
# Our Deployment, running 3 nginx web server replicas
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.18
        ports:
        - containerPort: 80
---
# The Service exposing our Deployment on port 30080 of each worker node
# (This will be http://127.0.0.1:30080 when running Kubernetes locally)
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30080
    protocol: TCP
    name: http
  selector:
    app: nginx
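If you would like to validate the file before submitting it, recent versions of kubectl (1.18 or newer) support a client-side dry run; this step is optional:
kubectl apply --dry-run=client -f deployment.yml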
Use the kubectl command to submit the deployment configuration to the Kubernetes API server:
kubectl apply -f deployment.yml
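If the configuration is accepted, kubectl should confirm that both objects were created, with output along these lines (the exact wording can vary slightly between kubectl versions):
deployment.apps/nginx-deployment created
service/nginx-service created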
You can monitor the progress of the deployment by watching the status of its pods:
kubectl get pods -l app=nginx --watch
Wait until all three replicas list 1/1 containers as being ready, then press Ctrl-C to stop monitoring the pod status list. If everything is working correctly, you should see the nginx welcome page displayed when you make an HTTP request to port 30080 of the worker node’s IP address. The exact URL depends on your system configuration:
- If you are using Minikube, run the command minikube ip to determine the IP address of the Minikube node and open the URL http://IP:30080 in a web browser (where IP is the Minikube node IP address).
- If you are using Kubernetes with Docker Desktop for Windows or Docker Desktop for Mac, open the URL http://127.0.0.1:30080 in a web browser.
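If you prefer to test from the command line rather than a browser, a curl request to the same URL should return the welcome page HTML (shown here for the Docker Desktop case; substitute the Minikube node IP address where applicable):
curl http://127.0.0.1:30080
The response should include the text “Welcome to nginx!”.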
Triggering service self-healing
You can observe the self-healing functionality provided by Kubernetes controllers by artificially terminating a pod to simulate a scenario where that particular replica crashed. First, retrieve the list of replica pods for the nginx service:
kubectl get pods -l app=nginx
Attach an interactive shell to one of the pods (replace nginx-deployment-ABC-DEF with the name of one of the pods listed in the output of the previous step):
kubectl exec -ti "nginx-deployment-ABC-DEF" -- /bin/bash
Kill the nginx server process to simulate a crash:
kill `pidof nginx`
The interactive shell should close automatically, as the container to which it was attached will be killed and restarted by Kubernetes in response to the nginx server “crash”. You can verify that this has worked by listing the pods for the service again:
kubectl get pods -l app=nginx
You should now see a value of 1 listed under the RESTARTS column for the pod that was terminated and restarted.
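If you want to query the restart count directly rather than scanning the table, one option is kubectl’s jsonpath output format (a small sketch; replace nginx-deployment-ABC-DEF with the name of the pod you terminated):
kubectl get pod "nginx-deployment-ABC-DEF" -o jsonpath='{.status.containerStatuses[0].restartCount}'
This prints the restart count of the pod’s first (and in this case only) container.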
Scaling the number of replicas
Modify the first section of the deployment.yml file to have the following updated contents:
# Our Deployment, now running 6 nginx web server replicas
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 6  # Update the replicas from 3 to 6
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.18
        ports:
        - containerPort: 80
# The Service object is unmodified from the previous code
Use the kubectl command to submit the updated deployment configuration to the Kubernetes API server:
kubectl apply -f deployment.yml
You can observe Kubernetes starting the additional replicas by watching the status of the deployment’s pods:
kubectl get pods -l app=nginx --watch
You should see that a total of six pods are now displayed, instead of the previous three. Once all of the replicas are up and running, press Ctrl-C to stop monitoring the pod status list.
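As an aside, the same change can also be made imperatively with kubectl scale rather than by editing the file (keeping deployment.yml as the single source of truth is usually preferable, so treat this as an alternative rather than a replacement):
kubectl scale deployment "nginx-deployment" --replicas=6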
Performing a rolling upgrade
Modify the first section of the deployment.yml file to have the following updated contents:
# Our Deployment, running 6 nginx web server replicas
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 6
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.19  # Update the version of nginx from 1.18 to 1.19
        ports:
        - containerPort: 80
# The Service object is unmodified from the previous code
Use the kubectl command to submit the updated deployment configuration to the Kubernetes API server:
kubectl apply -f deployment.yml
You can observe Kubernetes terminating the old pods and starting new ones by watching the pod status:
kubectl get pods -l app=nginx --watch
You should see that new replicas are created with the updated configuration and the old replicas are terminated once their replacements are running. It is worth noting that Kubernetes only terminates the old replicas once a sufficient number of replacements are started, to ensure zero downtime during the upgrade. Once all of the new replicas are up and running, press Ctrl-C to stop monitoring the pod status list.
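You can also ask kubectl to block until the rollout has completed, which is convenient when scripting or when you simply want a definitive completion message:
kubectl rollout status "deployment.v1.apps/nginx-deployment"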
Reverting an upgrade
Kubernetes maintains a history of revisions for every deployment to facilitate rolling back to previous revisions in the event of problematic upgrades. You can use the kubectl rollout command to list the revision history for a deployment:
kubectl rollout history "deployment.v1.apps/nginx-deployment"
It is worth noting that modifications to the number of replicas do not count as revisions because these are simply scaling operations rather than upgrades. If you want to undo an upgrade, you can revert to any previous revision of the deployment by specifying the revision number:
# Revert our deployment to revision 1, which used nginx version 1.18
kubectl rollout undo "deployment.v1.apps/nginx-deployment" --to-revision=1
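If you want to inspect exactly what a particular revision contained before (or after) rolling back, the rollout history command accepts a --revision flag that prints the pod template for that revision, for example:
kubectl rollout history "deployment.v1.apps/nginx-deployment" --revision=1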
You can verify that the original revision has been restored by inspecting the details of the current deployment:
kubectl describe deployment "nginx-deployment"
You should see that the container image field now lists nginx:1.18 once again. Note that the revision number has also increased (deployment.kubernetes.io/revision=3), since reverting to a previous revision counts as a new revision in the overall history of the deployment.
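When you have finished experimenting, you can remove the Deployment and Service created during this activity (assuming deployment.yml is still present in your current directory):
kubectl delete -f deployment.yml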