How To Run Applications on Top Of Kubernetes

In our series on Kubernetes so far, we have covered a wide range of topics. We started with a comprehensive guide to help you get started on Kubernetes. From there, we got into the details of Kubernetes architecture and why Kubernetes remains a pivotal tool in any cloud infrastructure setup.

We also covered other useful Kubernetes concepts, namely namespaces, workloads, and deployments, and discussed the Kubernetes command-line tool, kubectl, in detail.

We also looked at how Kubernetes differs from another popular container orchestration tool, Docker Swarm.

In this blog, we will consolidate all these learnings and discuss how to run applications on Kubernetes. Although we have touched on this topic in bits, we feel it deserves a blog of its own.

We will cover two kinds of applications that are commonly run on Kubernetes-based cloud infrastructure:

  1. Stateless application, and 
  2. Stateful application.

In both cases, we will use an Nginx application. Let’s get started.

Running a Stateless application

A stateless application stores no past data or state when new pods or containers are created. This makes it a good fit for workloads that do not rely on persistent storage of any kind. Many microservices, printing services, and CDNs fall into this category.

Since stateless applications do not require persistent storage, running them on Kubernetes is much more straightforward. Let's walk through an example.

Kubernetes infrastructure

To run this example, you will need a Kubernetes setup to run the application on. If you do not have one, you can build one on Taikun for free. Taikun is a complete management and monitoring solution for Kubernetes clusters of any size across any cloud infrastructure.


Alternatively, you can set one up locally using minikube or use Killercoda to run one in the browser itself.

Nginx deployment on K8s

To deploy Nginx on Kubernetes, we first need to write a YAML file that describes the desired state as a Deployment object. We can use a sample provided by the Kubernetes documentation itself:

[Code: the sample nginx Deployment manifest from the Kubernetes documentation]
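A sketch of that manifest, following the example in the Kubernetes documentation (nginx 1.14.2 with two replicas; the name nginx-deployment is the documentation's choice):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2          # run two pods of this template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
```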

Next, apply the file using the kubectl apply command. This tells the cluster about the desired state of the system. You can read more about what kubectl does in the background in our kubectl blog.
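Assuming the manifest is saved as deployment.yaml, the command would be:

```shell
# Submit the desired state to the cluster
kubectl apply -f deployment.yaml
```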

You will see an output similar to the one below:

[Screenshot: output of kubectl apply]

You can check if the deployment was successful with a describe sub-command. You should see an output similar to the one below:

[Screenshot: output of kubectl describe deployment]

You can also list the pods in the deployment with a “get pod” sub-command.

[Screenshot: output of kubectl get pods]
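With the names used in the documentation sample (Deployment nginx-deployment, label app=nginx), the two checks above would look like this:

```shell
# Check whether the deployment rolled out successfully
kubectl describe deployment nginx-deployment

# List the pods belonging to the deployment
kubectl get pods -l app=nginx
```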

You can even get more information on each of the pods. Run the command: 

$ kubectl describe pod <name-of-a-pod-from-last-command>

[Screenshot: output of kubectl describe pod]

Note that this is a stateless application. This means that once the deployment is stopped, none of the state in the pods persists.

Updating a deployed application 

To update the application (in this case, Nginx), we just need to edit the YAML file. In the sample below, we update the version of Nginx from 1.14.2 to 1.16.1.

[Code: Deployment manifest updated to nginx:1.16.1]
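The only change needed is the container image tag; the rest of the manifest stays the same. Assuming the documentation's sample manifest, the relevant fragment becomes:

```yaml
    spec:
      containers:
      - name: nginx
        image: nginx:1.16.1   # was nginx:1.14.2
        ports:
        - containerPort: 80
```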

To apply the update, run the kubectl apply command again.

[Screenshot: output of kubectl apply for the updated manifest]

This time, you will see new pods being created and the old pods being terminated.

[Screenshot: new pods replacing the old pods]

If we run the get pods command immediately after updating the configuration, we can see the old pods in the "Terminating" status while the replacement pods come up.

[Screenshot: kubectl get pods showing old pods in the Terminating status]

Scaling the application

To increase the number of pods working for a particular application, you can edit the configuration and increase the replica count. Here is a sample of that:

[Code: Deployment manifest with an increased replica count]
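For example, raising spec.replicas from 2 to 4 (the exact count is up to you) requires only this change in the manifest:

```yaml
spec:
  replicas: 4   # previously 2
```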

Applying the new configuration works the same as in the earlier examples: kubectl apply. You can then check the running pods with get pods, and you should see the new pod count as specified in the configuration file.

[Screenshot: kubectl get pods showing the increased pod count]

Delete the application

To delete the application, just run the “delete deployment” sub-command. 
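Assuming the Deployment name nginx-deployment from the earlier sample:

```shell
# Remove the deployment and all pods it manages
kubectl delete deployment nginx-deployment
```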

[Screenshot: output of kubectl delete deployment]

Running a Stateful application

Stateful applications have persistent storage. This helps maintain the state of pods and persist data across the lifecycle of pods.

The basic storage building blocks in Kubernetes are called volumes. A volume is attached to a pod and behaves like local storage, but it has no persistence on its own: when a pod is destroyed, the volume (i.e. the storage) is released along with it.

But there are ways to make storage persistent. Many applications, like database applications, need this kind of persistent storage. 

Persistence in Kubernetes

Kubernetes has the concepts of PersistentVolume (PV) and PersistentVolumeClaim (PVC) to implement persistent storage.

A PersistentVolume (PV) is a piece of storage in the cluster. It is a cluster resource like any node in a cluster. A PersistentVolumeClaim (PVC) is a request for storage by a user. 

Just like a Pod consumes node resources, PVCs consume PV resources. Pods request compute resources like CPU and memory; PVCs request a volume size and access modes (like ReadWriteOnce, ReadOnlyMany, or ReadWriteMany).

Let’s see an example of a pod using a PersistentVolume as storage. 

Example: Pod using PersistentVolume 

In this example, we will deploy Nginx much like in our previous example, but this time Nginx will serve an index.html file that is stored on the node.

Thus, irrespective of the state of the pod, the file index.html will persist. The directory on the node will be mounted on a directory in the pod. Let’s see how that happens.

Step 1: Create an index.html 

Log in to your node and create a directory /mnt/data with a simple index.html file inside it.

Here is one way you would do it:

[Screenshot: creating /mnt/data/index.html on the node]
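For instance, on the node (the greeting text is arbitrary):

```shell
# Create the directory and a minimal index.html in it
sudo mkdir -p /mnt/data
sudo sh -c "echo 'Hello from Kubernetes storage' > /mnt/data/index.html"
```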

Step 2: Create a PersistentVolume

We will now use this /mnt/data as the hostPath PersistentVolume. The configuration file for the hostPath /mnt/data would look as follows: 

[Code: PersistentVolume manifest using hostPath /mnt/data]
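A sketch of such a manifest, modeled on the Kubernetes documentation tutorial (the name task-pv-volume and the storageClassName manual are illustrative choices):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume      # illustrative name
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi           # size of the volume
  accessModes:
    - ReadWriteOnce         # read-write by a single node
  hostPath:
    path: "/mnt/data"       # the directory we created on the node
```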

The above config file describes a PersistentVolume backed by /mnt/data on the node. It specifies a size of 10 Gi and an access mode of ReadWriteOnce, which means the volume can be mounted read-write by a single node.

You can apply this sample with kubectl apply command, and that will create the PersistentVolume.

[Screenshot: output of kubectl apply creating the PersistentVolume]

You can check the status of the PV with the below command. The name of the PV is what you gave to metadata.name in the config file.

[Screenshot: output of kubectl get pv]

Step 3: Create a PersistentVolumeClaim

Next, we need a PersistentVolumeClaim. Pods use PVCs to request physical storage from the PV. A sample config file to create a PersistentVolumeClaim will look as below:

[Code: PersistentVolumeClaim manifest requesting 3 Gi]
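A sketch of such a claim (the name task-pv-claim is illustrative; the storageClassName must match the PV's):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim       # illustrative name
spec:
  storageClassName: manual  # matches the PV above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi          # requested volume size
```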

The above file requests a volume of size 3 Gi with ReadWriteOnce access, i.e. read-write access for a single node.

To create the PVC, run the kubectl apply command on the config file. The output will look like this:

[Screenshot: output of kubectl apply creating the PersistentVolumeClaim]

You can check the PV and PVC status with a kubectl get command.

[Screenshot: kubectl get pv and kubectl get pvc showing the claim bound]

Step 4: Create a Pod with PersistentVolumeClaim

Now we are ready to create a pod with the PVC attached to it. Here is a sample config file for such a pod:

[Code: Pod manifest mounting the PVC at /usr/share/nginx/html]
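A sketch of such a pod, again with illustrative names; the claimName must match the PVC created earlier:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod         # illustrative name
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim   # the PVC from the previous step
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"   # Nginx's default web root
          name: task-pv-storage
```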

To create the pod, apply the configuration file using the kubectl apply command as follows:

[Screenshot: output of kubectl apply creating the pod]

You should see the pod as "Running" when you check its status using the kubectl get command.

[Screenshot: kubectl get pod showing the pod in the Running status]

Step 5: Run Nginx with a file from PersistentVolume

Everything is now set. Nginx in the container inside the pod should serve the index.html from /mnt/data on the node, since that volume was mounted to /usr/share/nginx/html in the container.

To check this, let’s log in to the container and do a curl http://localhost.

We can run a kubectl exec command on the pod to get a shell. Once you are in the shell on the container, you can run the apt command to install curl.

Here’s a sample of how it would look:

Running an apt install command to set up curl.

[Screenshot: apt install curl running inside the container]
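Assuming the pod is named task-pv-pod, the sequence would be roughly:

```shell
# Open a shell inside the pod's container
kubectl exec -it task-pv-pod -- /bin/bash

# Inside the container: install curl and query Nginx
apt update && apt install -y curl
curl http://localhost
```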

Once curl is installed successfully, the curl http://localhost command should show the contents of index.html that we created on /mnt/data on our node. 

A successful output would look as follows: 

[Screenshot: curl http://localhost returning the contents of index.html]

Step 6: Delete the PV and PVC

If you wish to delete the volumes and pods created, you can do so in reverse order: delete the pod first, then the PVC. Once both are deleted, you can go about deleting the PV.
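With the illustrative names used above, that teardown would be:

```shell
# Delete in reverse order of creation
kubectl delete pod task-pv-pod
kubectl delete pvc task-pv-claim
kubectl delete pv task-pv-volume
```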

Here is how the command output would look for our example:

[Screenshot: deleting the pod, the PVC, and the PV]

And that is how you run a Stateful application on Kubernetes. 

Taikun – Run and manage applications on Kubernetes easily

We have seen how an application can be run on Kubernetes. But it is not always easy for teams to manage Kubernetes on the command line and then monitor it separately for each cluster. This is where Taikun can help. 

Taikun is a cloud-based application that helps you manage and monitor your Kubernetes infrastructure. Taikun works with all major cloud infrastructures and provides a simple, seamless interface for dealing with your Kubernetes setup.

[Screenshot: the Taikun dashboard]

Taikun abstracts underlying complexities for the user

Taikun is developed by Itera.io, which is a silver member of CNCF. This means all the cloud deployments are CNCF-certified. This enables interoperability and consistency from one Kubernetes installation to the next and helps the team to stay largely cloud-agnostic.

CNCF-certified Kubernetes installations also ensure that the latest stable release of Kubernetes will be available for use in the clusters. This makes the entire setup very stable and predictable for all cloud use cases.

If you wish to try Taikun, you can try our cloud-based solution for free or contact us to explore a custom in-house solution. 

Try it now for FREE         Schedule a call with us