Batcycle - An intro to implementing the Sidecar Pattern in K8s

A Pod is the smallest and simplest deployable object in Kubernetes. I’ve been building and pushing out applications in my Kubernetes cluster with the “one-container-one-Pod” model, but you can also deploy multiple containers in a single Pod.

There are three common design patterns for running multiple containers in a Pod: the Sidecar pattern, the Adapter pattern, and the Ambassador pattern.

In this blog post, I will be focusing on the Sidecar Pattern.

In the Sidecar pattern, you have your main application and a helper container running in a single Pod. The function of the helper container is essential to the main application, but it is not necessarily part of the application itself. The most common example is a web application running in one container with a helper monitoring/logging application running in a separate container.

The main application and the sidecar can be written independently, even in different languages. The sidecar can access the same resources as the primary application, and because both run in the same Pod, latency between them is low. Code and dependencies of the two applications can also be managed independently.
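For example, because containers in a Pod can mount the same volumes, a logging sidecar can tail a file the main application writes to. Here is a sketch of what that could look like (image names, the log path, and the `busybox` tailer are illustrative, not part of the example deployed below):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-with-logger
spec:
  volumes:
  - name: shared-logs        # emptyDir lives as long as the Pod does
    emptyDir: {}
  containers:
  - name: myapp
    image: myapp:latest      # assumed to write its log to /var/log/app/app.log
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/app
  - name: log-tailer         # sidecar: streams the same file to stdout
    image: busybox
    command: ["sh", "-c", "tail -F /var/log/app/app.log"]
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/app
```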

Using the simple application in my example, I deployed the containers with the following manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      # Main application container
      - name: myapp
        image: myapp:latest
        ports:
        - containerPort: 8080
      # Sidecar application container
      - name: mysidecar
        image: mysidecar:latest
[me@devops resources]# kubectl create -f deploy.yml -n fusion
deployment.apps/myapp created
[me@devops resources]#

Let’s check the newly created Pod:

[me@devops resources]# kubectl get pods -n fusion
NAME                  READY STATUS  RESTARTS AGE
myapp-fb6b9f85d-f89md 2/2   Running 0        19s
[me@devops resources]#

Listing the containers in this Pod, we can see both images:

[me@devops resources]# kubectl get pods --all-namespaces -o=jsonpath='{range .items[*]}{"\n"}{.metadata.name}{":\t"}{range .spec.containers[*]}{.image}{", "}{end}{end}' | sort
myapp-fb6b9f85d-f89md: gcr.io/kube-cluster-234414/myapp:latest, gcr.io/kube-cluster-234414/mysidecar:latest,
[me@devops resources]#

Let’s go inside both containers. First, myapp:

[me@devops resources]# kubectl exec -ti myapp-fb6b9f85d-f89md -c myapp -n fusion /bin/bash
root@myapp-fb6b9f85d-f89md:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
 inet 127.0.0.1/8 scope host lo
 valid_lft forever preferred_lft forever
 inet6 ::1/128 scope host
 valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
 link/ipip 0.0.0.0 brd 0.0.0.0
4: eth0@if18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
 link/ether 26:96:93:cd:da:df brd ff:ff:ff:ff:ff:ff
 inet 10.244.0.105/32 scope global eth0
 valid_lft forever preferred_lft forever
 inet6 fe80::2496:93ff:fecd:dadf/64 scope link
 valid_lft forever preferred_lft forever
root@myapp-fb6b9f85d-f89md:/#

Then mysidecar:

[me@devops resources]# kubectl exec -ti myapp-fb6b9f85d-f89md -c mysidecar -n fusion /bin/bash
root@myapp-fb6b9f85d-f89md:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
 inet 127.0.0.1/8 scope host lo
 valid_lft forever preferred_lft forever
 inet6 ::1/128 scope host
 valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
 link/ipip 0.0.0.0 brd 0.0.0.0
4: eth0@if18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
 link/ether 26:96:93:cd:da:df brd ff:ff:ff:ff:ff:ff
 inet 10.244.0.105/32 scope global eth0
 valid_lft forever preferred_lft forever
 inet6 fe80::2496:93ff:fecd:dadf/64 scope link
 valid_lft forever preferred_lft forever
root@myapp-fb6b9f85d-f89md:/#

As you can see from the above, both containers are using the same IP address (10.244.0.105). This is because all containers in a Pod share the same network namespace, so they can reach each other over localhost.

Let’s try the application. My main application is just a simple web application that runs on port 8080. When the index resource of the main application is hit, it sends an HTTP POST request to the sidecar application, and the sidecar receives it. The sidecar could then further process what it receives (e.g., forward it to a log aggregator, write it to a file, and so on).
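The interaction between the two containers can be sketched in a few lines of Python. This is not the actual code behind myapp and mysidecar, just a self-contained illustration of the idea: the "main app" handles a GET and notifies the "sidecar" with a POST over 127.0.0.1, which works in a Pod precisely because both containers share a network namespace. Ports are bound ephemerally here so the demo runs anywhere; the payload text and handler names are made up.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

received = []  # what the sidecar saw; a real sidecar might write this to a file


class SidecarHandler(BaseHTTPRequestHandler):
    """Stands in for the sidecar container: accept POSTs and record them."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        received.append(self.rfile.read(length).decode())
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass


class MainAppHandler(BaseHTTPRequestHandler):
    """Stands in for the main app: on GET /, POST a notification to the sidecar."""

    def do_GET(self):
        # Containers in one Pod share a network namespace, so the sidecar
        # is reachable on 127.0.0.1 -- no Service or DNS lookup needed.
        req = urllib.request.Request(
            "http://127.0.0.1:%d/" % SIDECAR_PORT,
            data=b"index was hit", method="POST")
        urllib.request.urlopen(req).close()
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"hello from myapp")

    def log_message(self, *args):
        pass


def serve(handler):
    # Bind to an ephemeral port and serve in a background thread.
    server = HTTPServer(("127.0.0.1", 0), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server


sidecar = serve(SidecarHandler)
SIDECAR_PORT = sidecar.server_address[1]
app = serve(MainAppHandler)

# Hitting the main app's index triggers the POST to the sidecar.
page = urllib.request.urlopen(
    "http://127.0.0.1:%d/" % app.server_address[1]).read()
```

In the real deployment, the sidecar would listen on a fixed port and the main application would be configured with that port, but the localhost communication path is the same.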

Summary

This was just a simple attempt to deploy a multi-container application. Before shipping functionality as a sidecar, consider carefully whether the process would work better as a separate service, or whether it could be implemented as a DaemonSet. Also consider the inter-process communication mechanism you will use between the main application and the sidecar, and prefer language- and framework-agnostic technologies as much as possible.