k8s learning notes

  1. Components
    1. Scheduler
    2. Node
      1. Worker Node
      2. Master Node
    3. ReplicaSets
    4. Controller
    5. Container Runtime
    6. Cluster
    7. API Server
    8. etcd
    9. kubelet
  2. Kubectl
    1. Imperative Commands
    2. Tips
    3. Edit Pod
    4. Edit Deployment
  3. Pod
  4. Deployment
  5. Namespaces
  6. Services
  7. Commands and Arguments
  8. Environment Variables
    1. ConfigMap
    2. Secret Keys
  9. Security Contexts
    1. Service Accounts
  10. Resource Requirements and Limits
  11. Taints and Tolerations
  12. Node Selectors
  13. Node Affinity
  14. Observability
    1. Monitoring and Debugging
  15. Pod Design
  16. Updates and Rollbacks
  17. Jobs
  18. Networking
  19. Stateful Set

Components

Scheduler

Distributes work across nodes

Node

Worker node

  • Hosts containers

  • kubelet

  • pod

    • a single instance of an application

    • the smallest unit in the cluster

Master node

  • kube-apiserver

  • etcd

  • controller

  • scheduler

ReplicaSets

  • Create multiple instances of pods

  • Can monitor existing pods

Replication Controller (older technology, replaced by ReplicaSets)

Controller

Brain behind orchestration

Container Runtime

The underlying software used to run containers, e.g. Docker or containerd

Cluster

A group of nodes

API Server

Front end of the k8s cluster; users and management tools interact with the cluster through it

etcd

Distributed key-value store, replicated across nodes, that stores the cluster's data

kubelet

Agent that runs on each node and manages the containers on it


kubectl

Run a command in a container interactively: kubectl exec -it <pod-name> -- <command>

run <name> --image <image>

get nodes

cluster-info

get pods -o wide

create -f <definition.yaml>

describe pods/<name>

delete pods/<name>

get pod <name> -o yaml > pod.yaml

edit pod <name>

apply -f

get all

Tip: Imperative Commands

While you will mostly work the declarative way, using definition files, imperative commands can help get one-time tasks done quickly and generate definition templates easily. This can save a considerable amount of time during your exams.

Before we begin, familiarize yourself with two options that come in handy when working with the commands below:

--dry-run: by default, the resource is created as soon as the command is run. If you simply want to test your command, use the --dry-run=client option. This will not create the resource; instead, it tells you whether the resource can be created and whether your command is correct.

-o yaml: outputs the resource definition in YAML format on screen.

Use the two options in combination to quickly generate a resource definition file that you can then modify and use to create resources, instead of writing the files from scratch.
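For example, the typical workflow (the pod and file names here are arbitrary):

kubectl run redis --image=redis --dry-run=client -o yaml > redis-pod.yaml
vi redis-pod.yaml        # adjust labels, namespace, resources, etc.
kubectl create -f redis-pod.yaml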

Tip: Formatting Output with kubectl

The default output format for all kubectl commands is the human-readable plain-text format.

The -o flag allows us to output the details in several different formats.

kubectl [command] [TYPE] [NAME] -o <output_format>

Here are some of the commonly used formats:

-o json Output a JSON formatted API object.

-o name Print only the resource name and nothing else.

-o wide Output in the plain-text format with any additional information.

-o yaml Output a YAML formatted API object.

Here are some useful examples:

Output with JSON format:

master $ kubectl create namespace test-123 --dry-run=client -o json
{
  "kind": "Namespace",
  "apiVersion": "v1",
  "metadata": {
    "name": "test-123",
      "creationTimestamp": null
  },
  "spec": {},
  "status": {}
}
master $

Output with YAML format:

master $ kubectl create namespace test-123 --dry-run=client -o yaml
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: null
  name: test-123
spec: {}
status: {}

Output with wide (additional details):

Probably the most common format used to print additional details about the object:

master $ kubectl get pods -o wide
NAME      READY   STATUS    RESTARTS   AGE     IP          NODE     NOMINATED NODE   READINESS GATES
busybox   1/1     Running   0          3m39s   10.36.0.2   node01   <none>           <none>
nginx     1/1     Running   0          7m32s   10.44.0.1   node03   <none>           <none>
redis     1/1     Running   0          3m59s   10.36.0.1   node01   <none>           <none>
master $

Edit a POD

Remember, you CANNOT edit specifications of an existing POD other than the below.

  • spec.containers[*].image

  • spec.initContainers[*].image

  • spec.activeDeadlineSeconds

  • spec.tolerations

For example you cannot edit the environment variables, service accounts, resource limits (all of which we will discuss later) of a running pod. But if you really want to, you have 2 options:

  1. Run the kubectl edit pod <pod-name> command. This will open the pod specification in an editor (the vi editor). Edit the required properties. When you try to save, you will be denied, because you are attempting to edit a field on the pod that is not editable.

A copy of the file with your changes is saved to a temporary location (the path is shown in the error message when the save is denied).

You can then delete the existing pod by running the command:

kubectl delete pod webapp

Then create a new pod with your changes using the temporary file

kubectl create -f /tmp/kubectl-edit-ccvrq.yaml

  2. The second option is to extract the pod definition in YAML format to a file using the command

kubectl get pod webapp -o yaml > my-new-pod.yaml

Then make the changes to the exported file using an editor (vi editor). Save the changes

vi my-new-pod.yaml

Then delete the existing pod

kubectl delete pod webapp

Then create a new pod with the edited file

kubectl create -f my-new-pod.yaml

Edit Deployments

With Deployments you can easily edit any field/property of the POD template. Since the pod template is a child of the deployment specification, with every change the deployment will automatically delete existing pods and create new ones with the changes. So if you are asked to edit a property of a POD that is part of a deployment, you can do that simply by running the command

kubectl edit deployment my-deployment


Pod

Create an NGINX Pod

kubectl run nginx --image=nginx

Generate POD manifest YAML file (-o yaml) without creating it (--dry-run=client)

kubectl run nginx --image=nginx --dry-run=client -o yaml


Deployment

Create a deployment

kubectl create deployment --image=nginx nginx

Generate Deployment YAML file (-o yaml) without creating it (--dry-run=client)

kubectl create deployment --image=nginx nginx --dry-run=client -o yaml

IMPORTANT:

Older versions of kubectl create deployment do not have a --replicas option (newer versions do). If it is unavailable, create the deployment first and then scale it using the kubectl scale command.
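For example:

kubectl create deployment nginx --image=nginx
kubectl scale deployment nginx --replicas=4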

Save it to a file - (If you need to modify or add some other details)

kubectl create deployment --image=nginx nginx --dry-run=client -o yaml > nginx-deployment.yaml

You can then update the YAML file with the replicas or any other field before creating the deployment.


Namespaces

Pods, deployments and services live in a namespace.

K8s includes:

  • a default namespace
  • a kube-system namespace for internal components
  • a kube-public namespace, whose resources are made available to all users

Resources within the same namespace can refer to each other simply by using their names.

To refer to resources in other namespaces use the full DNS name in the format [service-name].[namespace].svc.[cluster-domain], for example db-service.dev.svc.cluster.local

To list pods in a namespace: kubectl get pods -n <namespace> (or --namespace=<namespace>)

To create an object with a custom namespace, include the namespace in the metadata:

apiVersion: v1
kind: Pod
metadata:
  name: mypod
  namespace: dev
  labels:
    app: myapp
spec:
  containers:
    - name: my-app-container
      image: nginx

Switch to namespace

kubectl config set-context $(kubectl config current-context) --namespace=dev

List all pods in all namespaces

kubectl get pods --all-namespaces
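To create a namespace itself, use the imperative command or a minimal manifest (the name dev is just an example):

kubectl create namespace dev

apiVersion: v1
kind: Namespace
metadata:
  name: dev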


Services

Enable communication between components within and outside of the application. Enable loose coupling between microservices.

  • NodePort: listens on a port on the node and forwards requests to a port on the pod running the application
  • ClusterIP: virtual IP for communication inside the cluster
  • LoadBalancer: provisions an external load balancer (on supported cloud platforms) to distribute load
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  ports:
    - nodePort: 30008 # valid range 30000-32767
      targetPort: 80
      port: 80
  selector:
    app: myapp
    type: front-end

kubectl create -f service.yaml

NodePort

Maps a port on the node to a port on the pod.

3 ports involved:

  • targetPort: the port on the pod where the application listens
  • port: the port of the Service itself
  • nodePort: the port exposed on the node (default range 30000-32767)

When multiple pods have the same labels, the Service selects all of them as endpoints to receive requests, and it uses a random algorithm to distribute load across the pods.

When pods are distributed across multiple nodes, the Service automatically spans all the nodes and exposes the same nodePort on every node in the cluster.

ClusterIP

Enables communication inside the cluster, enables loose coupling between components.

apiVersion: v1
kind: Service
metadata:
  name: back-end-service
spec:
  type: ClusterIP
  ports:
    - targetPort: 80
      port: 80
  selector:
    app: my-app
    type: back-end

kubectl create -f service.yaml

Create a Service named redis-service of type ClusterIP to expose pod redis on port 6379

kubectl expose pod redis --port=6379 --name redis-service --dry-run=client -o yaml

(This will automatically use the pod's labels as selectors)

Or

kubectl create service clusterip redis --tcp=6379:6379 --dry-run=client -o yaml

(This will not use the pod's labels as selectors; instead it will assume the selector app=redis. You cannot pass in selectors as an option, so it does not work well if your pod has a different label set. Generate the file and modify the selectors before creating the service.)

Create a Service named nginx of type NodePort to expose pod nginx's port 80 on port 30080 on the nodes:

kubectl expose pod nginx --port=80 --name nginx-service --type=NodePort --dry-run=client -o yaml

(This will automatically use the pod's labels as selectors, but you cannot specify the node port. You have to generate a definition file and then add the node port in manually before creating the service with the pod.)

Or

kubectl create service nodeport nginx --tcp=80:80 --node-port=30080 --dry-run=client -o yaml

(This will not use the pod's labels as selectors)

Both of the above commands have their own challenges: one cannot accept a selector, the other cannot accept a node port. I would recommend going with the kubectl expose command. If you need to specify a node port, generate a definition file using the same command and manually add the nodePort before creating the service.


Commands and Arguments

apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-sleeper-pod
spec:
  containers:
    - name: ubuntu-sleeper
      image: ubuntu-sleeper
      command: ["sleep"] # -> ENTRYPOIND in Dockerfile
      args: ["10"] # -> CMD in Dockerfile

Environment Variable

Key-value pairs

apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-sleeper-pod
spec:
  containers:
    - name: ubuntu-sleeper
      image: ubuntu-sleeper
      env:
        - name: APP_COLOR
          value: pink

ConfigMap

  1. Create ConfigMap
    1. Imperative: kubectl create configmap <config-name> --from-literal=<key>=<value> or --from-file=<path-to-file>
    2. Declarative: kubectl create -f config.yaml
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: app-config
      data:
        APP_COLOR: blue
        APP_MODE: prod
  2. Inject into the pod
# 1. Using configMapRef
envFrom:
  - configMapRef:
      name: app-config # name of the ConfigMap
# 2. Inject single value using configMapKeyRef
env:
  - name: APP_COLOR
    valueFrom:
      configMapKeyRef:
        name: app-config
        key: APP_COLOR
# 3. Inject as file in the volume
volumes:
  - name: app-config-volume
    configMap:
      name: app-config
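Note that defining the volume alone is not enough; the container must also mount it. A minimal sketch (the mountPath is an assumption), where each key in the ConfigMap becomes a file under the mount path:

containers:
  - name: my-app
    image: nginx
    volumeMounts:
      - name: app-config-volume
        mountPath: /etc/app-config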

View ConfigMaps

kubectl get configmaps

kubectl describe configmaps

Secret Keys

For storing sensitive data

  1. Create Secret
    1. Imperative: kubectl create secret generic <secret-name> --from-literal=<key>=<value> or --from-file=<path-to-file>
    2. Declarative: kubectl create -f secret.yaml
    apiVersion: v1
    kind: Secret
    metadata:
      name: app-secret
    data:
      DB_Host: bXlzcWw=
      DB_User: cm9vdA==
      DB_Password: cGFzd3Jk
    (Values must be base64-encoded, e.g. echo -n 'paswrd' | base64 gives cGFzd3Jk)
  2. Inject into pod
# 1. Inject as environment variables using secretRef
envFrom:
  - secretRef:
      name: app-secret
# 2. Inject a single value using secretKeyRef
env:
  - name: DB_Password
    valueFrom:
      secretKeyRef:
        name: app-secret
        key: DB_Password
# 3. Inject as a volume
volumes:
  - name: app-secret-volume
    secret:
      secretName: app-secret
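As with ConfigMaps, the secret volume must be mounted into the container; each key in the secret then appears as a file under the mount path (the mountPath is an assumption):

containers:
  - name: my-app
    image: nginx
    volumeMounts:
      - name: app-secret-volume
        mountPath: /opt/app-secret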

View Secrets

kubectl get secrets

kubectl describe secrets


Security Contexts

Docker Security

Processes running in a container can be seen on the host too, since containers share the host's kernel

Users

Docker runs processes within containers as the root user by default. The root user within the container, however, runs with a limited set of capabilities.

You can run a docker container as a different user (docker run --user <uid>), or set the USER instruction in the Dockerfile

K8s Security Contexts

# Pod level
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  securityContext:
    runAsUser: 1000
  containers:
    ...
# Container level
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  containers:
    - name: ubuntu
      image: ubuntu
      command: ["sleep", "5000"]
      securityContext:
        runAsUser: 1000
# Add capabilities
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo-4
spec:
  containers:
    - name: sec-ctx-4
      image: gcr.io/google-samples/node-hello:1.0
      securityContext:
        capabilities:
          add: ["NET_ADMIN", "SYS_TIME"]

Service Account

kubectl create serviceaccount <service-account-name>

kubectl get serviceaccount

kubectl describe serviceaccount <service-account-name>

Service accounts are generated with a token, which is used as an authentication bearer token when interacting with the k8s API.

To prevent the service account token from being auto-mounted into a pod, set in the pod spec:

automountServiceAccountToken: false
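A minimal sketch of using a custom service account in a pod spec (the account name is an assumption):

# pod spec
spec:
  serviceAccountName: my-service-account # use this account's token instead of the default one
  containers:
    - name: my-app
      image: nginx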

Resource Requirements & Limits

  • CPU
  • MEM
  • Disk
# pod spec
spec:
  containers:
    - name: my-pod
      resources:
        requests:
          memory: "1Gi" # 256Mi, 1G(Gigabyte), 1Gi(Gibibyte)
          cpu: 1 # 1 count of CPU = 1 AWS vCPU, 1 GCP Core, 1 Azure Core, 1 Hyperthread
        limits: # set for each container in the pod
          memory: "2Gi"
          cpu: 2

A container cannot use more CPU than its limit (it is throttled). A container can temporarily use more memory than its limit, but it will be terminated (OOMKilled) if it constantly exceeds it.


Taints and Tolerations

Taints

Restrict nodes to accepting only certain pods

kubectl taint nodes <node-name> <key>=<value>:<taint-effect>

Taint Effect

Defines what happens to pods that do not tolerate the taint

  • NoSchedule: the pod will not be scheduled on the node
  • PreferNoSchedule: the system will try to avoid placing the pod on the node
  • NoExecute: new pods will not be scheduled on the node, and existing pods will be evicted if they do not tolerate the taint

kubectl taint nodes node1 app=blue:NoSchedule
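To remove a taint, repeat the command with a trailing hyphen:

kubectl taint nodes node1 app=blue:NoSchedule-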

Tolerations

Set on pods to allow them to be scheduled onto nodes with a matching taint

# pod spec
spec:
  containers:
    - name: my-pod
  tolerations:
    - key: "app"
      operator: "Equal"
      value: "blue"
      effect: "NoSchedule"

Note: the master node has a taint applied by default, which is why the scheduler does not place any pods on it


Node Selectors

Label nodes, then constrain pods to them via the nodeSelector field in the pod spec:

nodeSelector:
  size: Large

kubectl label nodes <node-name> <key>=<value>
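A full pod sketch putting the two together (the size=Large label is assumed to have been applied to a node first):

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: my-app
      image: nginx
  nodeSelector:
    size: Large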

Node Affinity

Ensures that pods are hosted on particular nodes

Provides advanced matching capabilities (operators such as In, NotIn, Exists)

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
          - key: size # label key
            operator: In # or NotIn, Exists
            values:
            - Large # label value

Node Affinity Types

Available:

  • requiredDuringSchedulingIgnoredDuringExecution: mandates that the pod be placed on a node matching the given affinity rules; if no such node is found, the pod is not scheduled. IgnoredDuringExecution: once scheduled, changes to affinity rules do not affect the pod.
  • preferredDuringSchedulingIgnoredDuringExecution: the scheduler tries to match the rules, but if no matching node is found it places the pod on any available node

Planned:

  • requiredDuringSchedulingRequiredDuringExecution: changes to affinity rules will affect pods that have been scheduled.

Taints & Tolerations vs Node Affinity

  • Use taints & tolerations to prevent unwanted pods from being placed on certain nodes.
  • Use node affinity to ensure certain pods are placed on certain nodes.
  • Combine both to dedicate nodes to specific pods.

Observability

Readiness and Liveness Probes

Readiness Probes

To tie the application's readiness to the container's reported readiness, use a readinessProbe. Three probe types are available; a container defines one of them:

containers:
  - name: simple-webapp
    readinessProbe:
      httpGet: # option 1: HTTP check
        path: /api/ready
        port: 8080
      initialDelaySeconds: 10 # wait before the first probe
      periodSeconds: 5 # probe frequency
      failureThreshold: 8 # failures tolerated before marking unready
    # option 2: TCP check
    # readinessProbe:
    #   tcpSocket:
    #     port: 3306
    # option 3: custom command
    # readinessProbe:
    #   exec:
    #     command:
    #       - cat
    #       - /app/is_ready

Liveness Probes

Periodically test whether the application within the container is healthy.

containers:
  - name: simple-webapp
    livenessProbe:
      httpGet: # option 1: HTTP check
        path: /api/ready
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 5 # frequency
      failureThreshold: 8
    # option 2: TCP check
    # livenessProbe:
    #   tcpSocket:
    #     port: 3306
    # option 3: custom command
    # livenessProbe:
    #   exec:
    #     command:
    #       - cat
    #       - /app/is_ready

Container Logging

kubectl logs -f <pod-name> <container-name> (the container name is required only for multi-container pods)

Monitoring and Debugging

Metrics Server

  • minikube: minikube addons enable metrics-server
  • others: git clone https://github.com/kubernetes-incubator/metrics-server.git && kubectl create -f deployment/1.8+

kubectl top node - performance metrics of nodes

kubectl top pod - performance metrics of pods


Pod Design

Labels, Selectors and Annotations

Attach labels to objects as per your needs, then specify selector conditions to filter specific objects

Labels

Group things together

metadata:
  name: simple-webapp
  labels:
    app: App1
    function: front-end

Selector

Filter things

kubectl get pods --selector <key>=<value>
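Multiple labels can be combined with commas (all must match):

kubectl get pods --selector app=App1,function=front-end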

spec:
  selector: # labels of pod to match
      matchLabels:
        app: App1
        function: front-end
  template: # pod templates
    metadata:
      labels:
        app: App1
        function: front-end
# service config
spec:
  selector:
    app: App1

Annotations

Record other details for informational purposes; tools and integrations may read them

metadata:
  name: test
  annotations:
    buildversion: 1.34

Rolling Updates & Rollbacks in Deployments

Rollout and Versioning

When you first create a deployment, it triggers a rollout, which creates a deployment revision. Future updates trigger new rollouts, which create new deployment revisions.

Deployment Strategy

  1. RollingUpdate strategy (default)
  • Takes down the old version and brings up the new version one pod at a time, avoiding downtime
  2. Recreate strategy
  • Takes down all old pods before bringing up new ones, causing application downtime

Commands

  • Create: kubectl create -f deployment.yaml
  • Get: kubectl get deployments
  • Update: kubectl apply -f deployment.yaml, kubectl set image deployment/myapp-deployment nginx=nginx:1.9.1
  • Status: kubectl rollout status deployment/myapp-deployment, kubectl rollout history deployment/myapp-deployment
  • Rollback: kubectl rollout undo deployment/myapp-deployment

Updating a Deployment

Some handy examples related to updating a Kubernetes Deployment:

Creating a deployment, checking the rollout status and history:

In the example below, we will first create a simple deployment and inspect the rollout status and the rollout history:

master $ kubectl create deployment nginx --image=nginx:1.16
deployment.apps/nginx created
 
master $ kubectl rollout status deployment nginx
Waiting for deployment "nginx" rollout to finish: 0 of 1 updated replicas are available...
deployment "nginx" successfully rolled out 
 
master $ kubectl rollout history deployment nginx
deployment.extensions/nginx
REVISION CHANGE-CAUSE
1     <none>

Using the --revision flag:

Here the revision 1 is the first version where the deployment was created.

You can check the status of each revision individually by using the --revision flag:

master $ kubectl rollout history deployment nginx --revision=1
deployment.extensions/nginx with revision #1
 
Pod Template:
 Labels:      app=nginx
              pod-template-hash=6454457cdb
 Containers:
  nginx:
   Image:       nginx:1.16
   Port:        <none>
   Host Port:   <none>
   Environment: <none>
   Mounts:      <none>
 Volumes:      <none>

Using the --record flag:

You would have noticed that the "change-cause" field is empty in the rollout history output. We can use the --record flag to save the command used to create/update a deployment against the revision number.

master $ kubectl set image deployment nginx nginx=nginx:1.17 --record
deployment.extensions/nginx image updated

master $ kubectl rollout history deployment nginx
deployment.extensions/nginx
 
REVISION CHANGE-CAUSE
1     <none>
2     kubectl set image deployment nginx nginx=nginx:1.17 --record=true

You can now see that the change-cause is recorded for the revision 2 of this deployment.

Let's make some more changes. In the example below, we are editing the deployment and changing the image from nginx:1.17 to nginx:latest while making use of the --record flag.

master $ kubectl edit deployments. nginx --record
deployment.extensions/nginx edited
 
master $ kubectl rollout history deployment nginx
REVISION CHANGE-CAUSE
1     <none>
2     kubectl set image deployment nginx nginx=nginx:1.17 --record=true
3     kubectl edit deployments. nginx --record=true

master $ kubectl rollout history deployment nginx --revision=3
deployment.extensions/nginx with revision #3

Pod Template:
 Labels:      app=nginx
              pod-template-hash=df6487dc
 Annotations: kubernetes.io/change-cause: kubectl edit deployments. nginx --record=true
 Containers:
  nginx:
   Image:       nginx:latest
   Port:        <none>
   Host Port:   <none>
   Environment: <none>
   Mounts:      <none>
 Volumes:      <none>

Undo a change:

Let's now roll back to the previous revision:

master $ kubectl rollout undo deployment nginx
deployment.extensions/nginx rolled back
 
master $ kubectl rollout history deployment nginx
deployment.extensions/nginx
REVISION CHANGE-CAUSE
1     <none>
3     kubectl edit deployments. nginx --record=true
4     kubectl set image deployment nginx nginx=nginx:1.17 --record=true

master $ kubectl rollout history deployment nginx --revision=4
deployment.extensions/nginx with revision #4
Pod Template:
 Labels:      app=nginx
              pod-template-hash=b99b98f9
 Annotations: kubernetes.io/change-cause: kubectl set image deployment nginx nginx=nginx:1.17 --record=true
 Containers:
  nginx:
   Image:       nginx:1.17
   Port:        <none>
   Host Port:   <none>
   Environment: <none>
   Mounts:      <none>
 Volumes:      <none>
 
master $ kubectl describe deployments. nginx | grep -i image:
  Image:    nginx:1.17

With this, we have rolled back to the previous version of the deployment, with the image nginx:1.17.


Jobs

Pods are meant to run forever. By default in k8s, the restartPolicy for pods is "Always".

A job is used to run a set of pods to perform a given task to completion.

Job Definition

apiVersion: batch/v1
kind: Job
metadata:
  name: math-add-job
spec:
  completions: 3 # desired successful executions
  parallelism: 3 # pods to run in parallel
  template:
    spec:
      containers:
        - name: math-add
          image: ubuntu
          command: ['expr', '3', '+', '2']
      restartPolicy: Never
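Create the job and inspect the result (the pod name suffix is generated at runtime, shown here as a placeholder):

kubectl create -f job.yaml
kubectl get jobs
kubectl logs math-add-job-<pod-suffix> # prints 5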

CronJobs

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-cron-job
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      completions: 3 # desired successful executions
      parallelism: 3 # pods to run in parallel
      template:
        spec:
          containers:
            - name: math-add
              image: ubuntu
          restartPolicy: Never

Ingress Networking

Ingress helps your users access your application using a single externally accessible URL that you can configure to route to different services within your cluster based on the URL path, while also implementing SSL security.

Routes traffic to different services based on URL path and implements SSL security. A layer 7 load balancer built into the k8s cluster.

Ingress Controller

K8s does not come with an ingress controller by default; you deploy one yourself, such as the NGINX Ingress Controller. To set one up you need:

  • a NodePort Service to expose it
  • a ConfigMap to feed the NGINX configuration data
  • a ServiceAccount with the right roles and permissions to access these objects

Ingress Resources

A set of rules and configurations applied to the ingress controller, created using definition files. (The examples below use the older extensions/v1beta1 API; from Kubernetes v1.19, Ingress is served from networking.k8s.io/v1 with a slightly different schema.)

kubectl get ingress --all-namespaces

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-a
spec:
  backend:
    serviceName: my-service
    servicePort: 80

Example of rules based on path:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-b
spec:
  rules:
    - http:
        paths:
          - path: /clothing
            backend:
              serviceName: clothing-service
              servicePort: 80
          - path: /watch
            backend:
              serviceName: watch-service
              servicePort: 80

Example of rules based on domain name:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-b
spec:
  rules:
    - host: clothing.my-store.com
      http:
        paths:
          - path: /clothing
            backend:
              serviceName: clothing-service
              servicePort: 80
    - host: watch.my-store.com
      http:
        paths:
        - path: /watch
          backend:
            serviceName: watch-service
            servicePort: 80

kubectl get ingress

Network Policy

Ingress & Egress rules

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-policy
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
    - Ingress
  ingress:
    - from:
      - podSelector:
          matchLabels:
            name: api-pod
      ports:
        - protocol: TCP
          port: 3306
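Apply and verify the policy (the file name is an assumption):

kubectl create -f db-policy.yaml
kubectl get networkpolicy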

Stateful Set

Similar to Deployments, but pods are created in sequential order: after the first pod is deployed, it must be ready and in a running state before the next pod is deployed.

In a main-replica set up, we always want the main instance to be set up first, then set up replica 1, then replica 2, etc.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql
  replicas: 3
  selector:
    matchLabels:
      app: mysql
  serviceName: mysql-h # name of a headless service

The default behaviour (podManagementPolicy: OrderedReady) is sequential; set podManagementPolicy: Parallel to change it.

Headless Services

Scenario: write only to the main database instance, not to the replicas.

A service that does not load-balance requests but gives us a DNS entry to reach each pod. It is created like a normal service but has no IP of its own; instead it creates a DNS entry per pod: podname.headless-servicename.namespace.svc.cluster-domain.example

apiVersion: v1
kind: Service
metadata:
  name: mysql-h
spec:
  ports:
    - port: 3306
  selector:
    app: mysql
  clusterIP: None # headless service

Within a Deployment

Both subdomain and hostname must be set on the pod

pod:

...
spec:
  containers:
    - name: mysql
      image: mysql
  subdomain: mysql-h # must be present for headless service to create DNS entry
  hostname: mysql-pod # must be present for headless service to create DNS entry

Within a StatefulSet

When creating a StatefulSet you do not need to specify the subdomain or hostname; the StatefulSet automatically assigns the right hostname to each pod based on the pod name, and the right subdomain based on the headless service name.

You must specify the serviceName in the definition.
