Demystifying WASM on Kubernetes: A Deep Dive into Container Runtimes and Shims

To understand how to set up and work with WASM on Kubernetes, we must first understand how a container works within the Kubernetes ecosystem.

In this blog, we will explore container runtimes and how they help run a container in Kubernetes. We will also explore the concept of a shim in containers. Finally, we will look at how all this helps us run WebAssembly apps in a Kubernetes ecosystem.

Kubernetes and Container Runtimes

As you know, Kubernetes is a container orchestration tool: it manages anywhere from a handful to hundreds of thousands of containers so they run efficiently across a system.

Containers help run applications independently of the platform underneath, and container runtimes are what actually run these containers. Here is how the layers stack up for a popular containerization tool, Docker:
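
A simplified sketch of that stack (the exact components vary by Docker version):

```
Docker CLI / Docker daemon
        |
   containerd        <- high-level container runtime
        |
      runc           <- low-level container runtime
        |
   Host OS (Linux kernel: namespaces, cgroups)
```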

The “container runtime” acts as an interface between the Container and the OS. Let us now delve a little deeper into container runtimes.

Low-level and High-level Container Runtimes

As the heading indicates, there are two types of container runtimes: low-level and high-level.

Low-level runtimes are responsible for interacting with the OS and running the containers. 

High-level container runtimes provide additional features and abstract away the low-level runtime. For example, they expose APIs that remote applications use to interact with the low-level container runtimes. High-level runtimes also manage container images, for example, pulling, transporting, and unpacking them.

High-level runtimes run as daemon applications. The Open Container Initiative (OCI) has standardized the specification for low-level container runtimes, and there are many implementations of it.

Containerd is a popular high-level container runtime, while runc is an often-used low-level container runtime. 
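
To make the split concrete, here is a minimal sketch, assuming containerd (with its ctr CLI) and runc are installed on a Linux host:

```bash
# High-level runtime: containerd pulls, unpacks, and manages the image,
# then delegates actual execution to runc under the hood.
sudo ctr image pull docker.io/library/alpine:latest
sudo ctr run --rm docker.io/library/alpine:latest demo echo "hello from containerd"

# Low-level runtime: runc starts a container directly from an OCI bundle
# (a rootfs plus a config.json) that you have prepared yourself.
sudo runc run --bundle ./mybundle demo2
```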

Container Workflow

Now that we understand container runtimes, let’s consider them in the context of a Kubernetes system. 

As you know, the kubelet is the primary node agent; one runs on every node in the Kubernetes cluster.

Container Runtime Interface (CRI)

Every kubelet works from a PodSpec, which is simply a description of a pod in YAML or JSON format. Kubelets use an interface called the Container Runtime Interface (CRI) to interact with the container runtimes on the node.

Here is how container runtimes, the CRI, and kubelets fit together in the Kubernetes ecosystem: the kubelet calls the CRI (a gRPC API), the CRI is served by a high-level runtime such as containerd, and that runtime drives a low-level runtime such as runc.
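
You can talk to the CRI endpoint directly with crictl, the CRI command-line client. A quick sketch, assuming crictl is installed and pointed at your runtime’s socket (the socket path may differ on your distribution):

```bash
# Point crictl at containerd's CRI socket
export CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock

crictl pods     # list pod sandboxes the runtime knows about
crictl ps       # list running containers
crictl images   # list images the runtime has pulled
```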

RuntimeClass in Kubernetes

A RuntimeClass resource helps select a specific runtime configuration for containers in Kubernetes and schedule Pods to specific nodes. The runtime is set by the property named “handler”, which names the configuration in the CRI implementation to be used for this RuntimeClass.

Here is an example of defining a RuntimeClass resource (a minimal sketch using the standard node.k8s.io/v1 schema):
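
```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: myclass              # the name Pods will reference
handler: myconfiguration     # the corresponding CRI (runtime) configuration
```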

In this example, a new RuntimeClass named “myclass” is defined and tied to a CRI configuration named “myconfiguration” (a specific runtime is linked to this CRI config).

To use this specific runtime for a workload, we reference the RuntimeClass in the Pod spec.

For example, to attach the RuntimeClass we defined above (named “myclass”), the Pod spec would look like the sketch below (the Pod and container names are hypothetical):
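
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod                   # hypothetical Pod name
spec:
  runtimeClassName: myclass     # references the RuntimeClass defined above
  containers:
    - name: mycontainer         # hypothetical container
      image: nginx
```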

Notice the value for the property “spec.runtimeClassName”.

Container Manager

In a Kubernetes setup, the CRI is part of a larger Container Manager module. Here is how a node in a Kubernetes system looks with the CRI sitting inside the Container Manager module.

Container runtime shims

Over time, as different pods and containers are created and deleted, the container manager may malfunction or degrade in performance, and it may have to be restarted.

In such cases, we do not want the containers to fail. So, there must be a way to keep the container process running and keep their I/O and error streams intact while the container manager restarts. This is where a container shim plays a key role. 

A container runtime shim preserves the STDIN, STDOUT, and STDERR channels regardless of the status of the container manager.

Here is how the communication channels are maintained between the container and the Kubernetes control plane.

A shim is a separate, lightweight daemon bound to a running container’s process. The shim process is completely detached from the container manager’s process, and it is responsible for all communication between the container and the manager.

An example of a popular shim is containerd’s runc shim, containerd-shim-runc-v2. Here is how the setup would look in a Kubernetes cluster node:

Summarizing Container workflow in Kubernetes

Let’s now bring it all together and look at a typical container workflow of deploying an application in Kubernetes:

  1. Kubernetes schedules work to deploy the application on a set of cluster nodes. Kubernetes is responsible for node management and for scheduling work on clusters.
  2. Every node has a kubelet, which hands the work over to the high-level container runtime, like containerd. Containerd manages container lifecycles, i.e., creating, running, and deleting containers.
  3. Containerd asks the low-level container runtime, runc, to create a container and start the application. Runc is responsible for dealing with the underlying operating system.
  4. Runtime shims act as a channel of communication between runc and containerd. They ensure no communication is lost in case containerd fails and has to restart.

Let’s now take this understanding to demystify how WASM is implemented in Kubernetes. 

WASM in Kubernetes

For WASM workloads to run in Kubernetes, we need WASM runtimes and a WASM-specific shim to manage the interface between containerd and the WASM runtime.

One popular WASM-specific containerd library is called runwasi. Runwasi is a containerd project under the CNCF. It provides the interface through which containerd sends instructions and helps the WASM runtime execute the WebAssembly application.

WebAssembly Containerd shims

So, a WebAssembly shim bundles the runwasi library (for the containerd-facing side) with a WASM runtime that executes the WASM application on the host.

There are many popular WASM shims that you can explore for your Kubernetes setup. Here are a few: 

  1. Fermyon Spin
  2. WasmEdge
  3. Wasmtime

In such a cluster, WASM apps and regular apps run side by side: nodes equipped with a WASM shim and runtime handle the WASM Pods, while the remaining nodes run regular containers via runc.

Attaching WASM runtimes to containerd

To let containerd access the WASM runtimes on worker nodes, you need to install the WASM runtime (shim) binaries and register them with containerd.

The binary executables must be installed in a folder that is visible to containerd, like /bin/. The binary name also has to follow the containerd runtime naming convention.

For example:

If you are installing the wasmtime runtime, the binary must be named containerd-shim-wasmtime-v1.

If you are installing the spin runtime, the binary must be named containerd-shim-spin-v1.

Once installed, the shims need to be registered with containerd by configuring them in the config.toml file (/etc/containerd/config.toml). Here is a sketch of how the relevant section of the configuration file would look:
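
```toml
# Sketch: registers the wasmtime and spin shims with containerd's CRI plugin.
# The exact plugin section name can vary with the containerd version.
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.wasmtime]
  runtime_type = "io.containerd.wasmtime.v1"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.spin]
  runtime_type = "io.containerd.spin.v1"
```

The runtime_type follows the naming convention above: containerd resolves “io.containerd.wasmtime.v1” to a binary named containerd-shim-wasmtime-v1 on its path.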

Labeling WASM nodes

Lastly, the nodes that will run WASM workloads need to be identified. This can be done using labels: you label nodes with a kubectl command, as shown below, and Pod or RuntimeClass specs can then select those labels.
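
```bash
kubectl label nodes wrkr3 wasmtime-enabled=yes
```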

With the above command, a node named wrkr3 gets the label “wasmtime-enabled=yes”. You can then view the labels on all nodes with kubectl as well:
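
```bash
kubectl get nodes --show-labels
```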

Scheduling WASM runtime setup with RuntimeClass

You can use a RuntimeClass definition similar to the one earlier in the blog to schedule WASM workloads onto the nodes that carry a certain label (and therefore have the WASM runtime installed).

The YAML file for such a RuntimeClass definition would look something like this:
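
```yaml
# Sketch reconstructed from the description below; the field values match it.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasm1                 # referenced by Pods via runtimeClassName
handler: wasmtime             # the shim registered in containerd's config.toml
scheduling:
  nodeSelector:
    wasmtime-enabled: "yes"   # Pods using this class land only on labeled nodes
```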

Let’s understand this YAML definition:

A RuntimeClass called wasm1 (the property “metadata.name”) is defined, which uses the wasmtime runtime (the property “handler”). Pods that use this class are scheduled only onto nodes carrying the label “wasmtime-enabled=yes” (the property “scheduling.nodeSelector”), which are the nodes where the runtime is installed.

Let’s extend this example further and see how we can set up nodes for WASM workloads in a Kubernetes cluster.

An example

Let’s start from the bottom of the stack: the nodes. We have three worker nodes in a Kubernetes cluster. worker1 and worker3 run the runc runtime, while the worker2 node has the WASM runtimes wasmtime and spin. So, all WASM workloads are to be executed on worker2.

Note that the worker2 node also has two labels on it: “wasmtime-enabled=yes” and “spin-enabled=yes”.

Now, let us see how this is defined in the Kubernetes cluster specifications, starting at the top with the Deployment spec. The property “spec.runtimeClassName” in the Pod template references the RuntimeClass named “wasm1”. A sketch of such a Deployment (the app name and image are hypothetical):
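
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wasm-app                    # hypothetical application name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wasm-app
  template:
    metadata:
      labels:
        app: wasm-app
    spec:
      runtimeClassName: wasm1       # ties these Pods to the wasmtime RuntimeClass
      containers:
        - name: app
          image: myregistry/wasm-app:latest   # hypothetical WASM (OCI) image
```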

Next, we need to define the RuntimeClass named “wasm1”; that is the definition we sketched in the previous section.

The property “metadata.name” of that RuntimeClass carries the name “wasm1”. The property “handler” names the specific runtime tied to this RuntimeClass, here wasmtime.

Further, the property “scheduling.nodeSelector” tells the scheduler which nodes have the runtime available. In this example, it selects the label “wasmtime-enabled=yes”.

So, nodes with that label, i.e., worker2 in this example, will receive the Pods that use this RuntimeClass.

In the same way, a second RuntimeClass pairs the spin runtime on the same worker2 node with WASM workloads, as sketched below.
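
A minimal sketch (the class name “wasm2” is hypothetical):

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasm2                 # hypothetical name for the spin class
handler: spin
scheduling:
  nodeSelector:
    spin-enabled: "yes"       # matches the second label on worker2
```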

Taikun CloudWorks & WASM Support

Since January 2024, Taikun CloudWorks supports WASM workloads natively. This means you now get the simplified UI of Taikun while enjoying the speed and efficiency of WASM applications.

If you’re looking to simplify Kubernetes management and easily enable WebAssembly (Wasm), Taikun CloudWorks provides an integrated solution. 

With Taikun CloudWorks, you can effortlessly manage your Kubernetes clusters and take advantage of WASM capabilities, saving both time and effort in your operations. 

Request a free demo today!