Kubernetes - App Development

by Matthew Palmer

A cluster is a collection of computers coordinated to work as a single unit. In Kubernetes this consists of a master node and worker nodes.

etcd A distributed key-value store used in Kubernetes to store configuration data for the cluster.

Controller Controllers are responsible for updating resources in Kubernetes based on changes to data in etcd.

scheduler A module in the Kubernetes master that selects which worker node a pod should run on, based on resource requirements.

node A worker machine in the Kubernetes cluster, responsible for actually running pods.

kubelet A process that runs on each worker node, and takes responsibility for containers that run on that node.

Pod The smallest object that Kubernetes interacts with. It is a layer of abstraction around the container, allowing containers to be colocated and to share filesystem and network resources.
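
As a sketch, a minimal pod manifest might look like the following; the pod name, label, and image are placeholders, not from the book:

apiVersion: v1
kind: Pod
metadata:
  name: web                  # hypothetical pod name
  labels:
    app: web                 # labels used later by selectors
spec:
  containers:
  - name: web
    image: nginx:1.25        # placeholder container image
    ports:
    - containerPort: 80      # port the container listens on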

Service A Kubernetes object used to expose a dynamic set of pods to the network behind a single interface.
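
For illustration, a service that exposes the hypothetical pods labelled app: web from the sketch above could be declared like this (names are assumed):

apiVersion: v1
kind: Service
metadata:
  name: web-service          # hypothetical service name
spec:
  selector:
    app: web                 # any pod with this label receives traffic
  ports:
  - port: 80                 # port exposed by the service
    targetPort: 80           # port the container listens on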

Deployment A Kubernetes object that manages a set of pods as a single unit, controlling their replication, update, and rollback.
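
As a rough sketch, a deployment that keeps three replicas of the hypothetical web pod running might be declared like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment       # hypothetical deployment name
spec:
  replicas: 3                # number of identical pods to maintain
  selector:
    matchLabels:
      app: web               # pods managed by this deployment
  template:                  # pod template the deployment stamps out
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25    # placeholder container image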

Liveness Probe A process that checks if a container is still alive or if it needs to be restarted.

Readiness Probe A process that checks if a container has started successfully and is ready to start receiving requests.
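
A container spec (inside a pod's spec) carrying both probes might look roughly like this, assuming the application happens to serve /healthz and /ready endpoints:

containers:
- name: web
  image: nginx:1.25          # placeholder container image
  livenessProbe:             # restart the container if this check fails
    httpGet:
      path: /healthz         # assumed health endpoint
      port: 80
    initialDelaySeconds: 10
    periodSeconds: 5
  readinessProbe:            # only send traffic once this check succeeds
    httpGet:
      path: /ready           # assumed readiness endpoint
      port: 80
    periodSeconds: 5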

Sidecar pattern A multi-container design pattern where another container runs alongside your main application and performs some task non-essential to the application container.
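
A hypothetical sketch of the sidecar pattern: a log-shipping container runs alongside the main application and reads its logs from a shared volume. All names, images, and paths here are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar     # hypothetical example
spec:
  volumes:
  - name: logs
    emptyDir: {}             # volume shared by both containers
  containers:
  - name: web                # main application container
    image: nginx:1.25
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  - name: log-shipper        # sidecar, not essential to the app itself
    image: busybox:1.36
    command: ["sh", "-c", "tail -F /logs/access.log"]
    volumeMounts:
    - name: logs
      mountPath: /logs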

Adapter pattern A multi-container design pattern where an adapter container massages the output or formatting of your main application so that it can be consumed by another party.

Ambassador pattern A multi-container design pattern where the ambassador container proxies network requests to a third party. The main application makes requests to localhost, and the ambassador is responsible for forwarding those requests to the external service.
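
A rough sketch of the ambassador pattern, under the assumption of a placeholder application image and a placeholder proxy image: the main container talks only to localhost, and the ambassador forwards that traffic to the external service.

apiVersion: v1
kind: Pod
metadata:
  name: app-with-ambassador  # hypothetical example
spec:
  containers:
  - name: app
    image: my-app:1.0        # placeholder application image
    env:
    - name: DATABASE_HOST
      value: "localhost"     # the app always connects to localhost
    - name: DATABASE_PORT
      value: "5432"
  - name: ambassador
    image: my-db-proxy:1.0   # placeholder proxy that forwards
                             # localhost:5432 to the real database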

Volume mount The mechanism by which a container gains access to a volume. The container declares a volume mount, and then it can read or write to that path as though it were a symbolic link to the volume.

Volume A piece of storage in a Kubernetes pod that lives for (at least) as long as the pod is alive. Analogous to a directory in your filesystem.
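
As a sketch, an emptyDir volume accessed through a volume mount might be declared like this (names and paths are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: volume-demo          # hypothetical example
spec:
  volumes:
  - name: scratch
    emptyDir: {}             # lives for as long as the pod does
  containers:
  - name: app
    image: busybox:1.36      # placeholder container image
    command: ["sh", "-c", "echo hello > /data/out.txt && sleep 3600"]
    volumeMounts:
    - name: scratch          # refers to the volume declared above
      mountPath: /data       # path inside the container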

Second, how do I get to the ideal state? Kubernetes has a group of controllers whose job it is to make the actual cluster state match the ideal state.

Kubernetes is told to run a new pod, scale a deployment, or add more storage. The request is made through the API server (kube-apiserver), a process inside the master that gives access to the Kubernetes API via an HTTP REST API.

kube-scheduler determines which node should run a pod. It finds new pods that don't have a node assigned, looks at the cluster's overall resource utilisation, hardware and software policies, node affinity, and deadlines, and then decides which node should run that pod.
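
For example, a pod can declare resource requests that the scheduler takes into account when choosing a node; the names and values below are illustrative only:

apiVersion: v1
kind: Pod
metadata:
  name: resource-demo        # hypothetical example
spec:
  containers:
  - name: app
    image: nginx:1.25        # placeholder container image
    resources:
      requests:              # the scheduler only places the pod on a
        cpu: "250m"          # node with this much spare capacity
        memory: "128Mi"
      limits:                # hard caps enforced at runtime
        cpu: "500m"
        memory: "256Mi"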

The master maintains the actual and desired state of the cluster using etcd, lets users and nodes change the desired state via the kube-apiserver, runs controllers that reconcile these states, and uses the kube-scheduler to assign pods to nodes.

Every node in a Kubernetes cluster has a container runtime, a Kubernetes node agent, a networking proxy, and a resource monitoring service.

The container runtime is responsible for actually running the containers that you specify.

kubelet is a process that runs on each node and takes responsibility for the state of that node. It starts and stops containers as directed by the master, and ensures that its containers remain healthy. It also tracks the state of each of its pods; if a pod is not in its desired state, the kubelet redeploys it. kubelet must also relay its health to the master every few seconds. If the master sees that a node has failed (i.e. kubelet has failed to report that the node is healthy), controllers will see this change and relaunch that node's pods on healthy nodes.

Every Kubernetes node also requires a networking proxy (kube-proxy) so that it can communicate with other services in the cluster and the master node. This process is responsible for routing networking traffic to the relevant container and other networking operations.

Every Kubernetes node also runs cAdvisor, a simple agent that monitors the performance and resource usage of containers on the node.

When should you combine multiple containers into a single pod? When the containers have the exact same lifecycle, when the containers share filesystem resources, or when the containers must run on the same node.

For example, you can configure your application to always connect to localhost, and let the responsibility of mapping that connection to the right database fall to an ambassador container.

You can think of selectors as the WHERE part of a SELECT * FROM pods WHERE <labels> = <values>. You use label selectors from the command line or in an object's YAML when it needs to select other objects.
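
For instance, assuming pods labelled app: web as in the earlier sketches, kubectl get pods -l app=web lists only the matching pods, and the same key=value selector syntax works with other kubectl commands such as kubectl delete.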

One of the most common uses of labels and selectors is to group pods into a service. The selector field in a service's spec defines which pods receive requests sent to that service.

Deployments in Kubernetes let you manage a set of identical pods.

RollingUpdate The preferred and more commonly used strategy is RollingUpdate. This gracefully updates pods one at a time to prevent your application from going down. The strategy gradually brings pods with the new configuration online, while killing old pods as the new configuration scales up.

When updating your deployment with RollingUpdate, there are two useful fields you can configure.

maxUnavailable effectively determines the minimum number of pods you want running in your deployment as it updates. For example, suppose we have a deployment currently running ten pods and a maxUnavailable value of 4. When an update is triggered, Kubernetes will immediately kill four pods from the old configuration, bringing our total to six. Kubernetes then starts to bring up the new pods, and kills old pods as they come alive. Eventually the deployment will have ten replicas of the new pod, but at no point during the update were there fewer than six pods available.

maxSurge determines the maximum number of pods you want running in your deployment as it updates. In the previous example, if we specified a maxSurge of 3, Kubernetes could immediately create three copies of the new pod, bringing the total to 13, and then begin killing off the old versions.
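
In manifest form, these fields live under the deployment's update strategy; a sketch matching the example above, with placeholder names and image:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment       # hypothetical deployment name
spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 4      # never fewer than 6 of the 10 pods available
      maxSurge: 3            # never more than 13 pods in total
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25    # placeholder container image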

A deployment's entire rollout and configuration history is tracked in Kubernetes, allowing for powerful undo and redo functionality. You can easily roll back to a previous version of your deployment at any time.
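
For instance, kubectl rollout history deployment/<name> lists the recorded revisions of a deployment, and kubectl rollout undo deployment/<name> reverts to the previous one (or to a specific revision with --to-revision).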

Services let you define networking rules for pods based on their labels. Whenever a pod with a matching label appears in the Kubernetes cluster, the service will detect it and start using it to handle network requests made to the service.
