
Get started with kubernetes | Part 3 – A simple application

In the third part of our Kubernetes guide, we will deploy an application and make it accessible from the web.

In the previous guides, part 1 – Cluster and part 2 – Access, we created a cluster with Magnum, configured kubectl so that we can manage the cluster, and configured the ingress nodes to run Traefik. In this part, we will deploy a simple application in our cluster and make it accessible from the internet.

Introduction

There are several ways to run an application in Kubernetes; the easiest is to start a deployment directly from the command line with kubectl.

A deployment is an abstraction layer that makes it easy to describe a desired state for our application in the Kubernetes cluster. You can read more about deployments here; in short, a deployment creates a replica set that ensures that the number of pods (see below) stays as configured.
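A deployment can also be expressed declaratively in a manifest. A minimal sketch for the nginx-demo deployment used in this guide (the file name deployment.yml is our own choice) could look like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo
  labels:
    app: nginx-demo
spec:
  replicas: 1                  # keep exactly one pod running
  selector:
    matchLabels:
      app: nginx-demo          # manage pods carrying this label
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: hello
        image: nginxdemos/hello
```

Applying it with kubectl apply -f deployment.yml gives the same result as the kubectl create deployment command below, with the advantage that the manifest can be kept in version control.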

Deployment


Here we start a deployment called nginx-demo with the Docker image nginxdemos/hello. Since Kubernetes uses Docker as its container engine, the image is downloaded from https://hub.docker.com by default. Information on how to pull an image from a private registry can be found here
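As a rough sketch of the private-registry case (the registry URL and the secret name regcred are placeholders, not values from this guide), you would first create a docker-registry secret and then reference it from the pod template:

```yaml
# Secret created beforehand with, for example:
# kubectl create secret docker-registry regcred \
#   --docker-server=registry.example.com \
#   --docker-username=USER --docker-password=PASS
spec:
  imagePullSecrets:
  - name: regcred              # lets the kubelet authenticate the pull
  containers:
  - name: app
    image: registry.example.com/myapp:latest
```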

$ kubectl create deployment nginx-demo --image=nginxdemos/hello
deployment.apps/nginx-demo created


We can now check that our deployment was created by running

$ kubectl get deployments
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
nginx-demo   1/1     1            1           20s

We can also check that kubernetes has started a pod:

$ kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
nginx-demo-69689f78d-xvm6p   1/1     Running   0          40s

A pod is the smallest deployable unit that can be created in Kubernetes; a pod consists of one or more containers with shared network and storage.
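To illustrate "one or more containers with shared network and storage", here is a hypothetical pod sketch where two containers share an emptyDir volume (all names here are examples, not part of this guide):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-demo
spec:
  volumes:
  - name: shared-data
    emptyDir: {}               # scratch volume that lives as long as the pod
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo hello > /data/index.html && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
```

The writer container writes a file that the web container serves; the two containers also share the same network namespace, so they can reach each other on localhost.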

We can also try deleting our newly created pod:

$ kubectl delete pod nginx-demo-69689f78d-xvm6p


If we check our pods again immediately afterwards, we see that a new pod has been created; this is because our deployment expects that there will always be 1 pod

$ kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
nginx-demo-69689f78d-j2lmk   1/1     Running   0          3s


We can also easily scale the number of pods by running

kubectl scale deployment nginx-demo --replicas=NUM_PODS

where NUM_PODS is the number of pods we want kubernetes to run.

Now we have taken the first step toward a working application: our image runs in Kubernetes, but we still can't access it. To make it reachable via the web, we need to create a service, an abstraction that exposes one or more pods as a network service.

Service


To create a service, we create a file named service.yml and insert the following.

apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
spec:
  ports:
  - name: http
    targetPort: 80
    port: 80
  selector:
    app: nginx-demo

And apply it with

kubectl apply -f service.yml

A service is of type ClusterIP by default, which means that it is only accessible from within the cluster.
The selector app=nginx-demo tells the service which pods to send traffic to; the ingress we create next will then route external traffic to the service by name.
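A quick way to verify the service from inside the cluster, before any ingress exists, is a temporary pod. A sketch, assuming the service was created in the default namespace (the image curlimages/curl is just one convenient choice):

```shell
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl \
  -- curl -s http://nginx-demo.default.svc.cluster.local
```

Thanks to --rm, the pod is deleted automatically when the command finishes.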

Ingress

To make the application accessible from the internet, we also need to create an ingress.
An ingress in Kubernetes routes traffic that comes from outside the cluster to services inside it.
In the previous guides we chose to use Traefik for this.
To configure our ingress, we create a YAML file, ingress.yml, which in this example looks like this:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-demo
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: nginx-demo.binero.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-demo
          servicePort: http

Here we define a Traefik ingress named nginx-demo and configure it to send traffic that matches the hostname nginx-demo.binero.com to our service, which is also called nginx-demo.
To apply this configuration we run

kubectl apply -f ingress.yml
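Before the load balancer is in place, the routing can be spot-checked by sending a request directly to one of the ingress nodes with the expected Host header (INGRESS_NODE_IP below is a placeholder for the address of one of your ingress nodes):

```shell
curl -H "Host: nginx-demo.binero.com" http://INGRESS_NODE_IP/
```

If the ingress and service are wired up correctly, this should return the demo page from nginx.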

The only thing left to do now is to configure a load balancer in front of the two ingress nodes so that external traffic can reach our application; we will describe this in a future guide. Until then, the OpenStack documentation is available at https://docs.openstack.org/octavia/train/user/guides/basic-cookbook.html

Updated on 2020-11-12