Contents

Kubernetes: TechWorld with Nana

What is K8s

Official definition

  • Open source container orchestration tool

  • Developed by Google

  • Helps you manage containerized applications in different deployment environments

    e.g. physical, virtual, cloud, hybrid deployment

What problems does Kubernetes solve?

  • The need for container orchestration tools
    • The trend from monolithic architecture toward microservices
    • Increased usage of containers
    • A proper way of managing those hundreds of containers

What do orchestration tools offer?

  1. High availability (HA), or no downtime
  2. Scalability, or high performance
  3. Disaster recovery: backup and restore

Main K8s Components

  1. Node: a simple server, a physical or virtual machine

  2. Pod:

    • Basic component or the smallest unit of kubernetes
    • Abstraction over container
    • Pod is usually meant to run one application container inside of it (usually 1 application per Pod)
      • You can run multiple containers inside one pod, but it’s usually if you have one main application container and a helper container or some side service that has to run inside of that pod.
    • Each Pod gets its own IP address
      • e.g. the my-app container can talk to the db container via its IP address
      • Pod components in K8s are ephemeral: a pod dies easily, e.g. when the database container crashes, or the server they run on runs out of resources.
    • New IP address on re-creation
  3. Service

    • A static or permanent IP address that can be attached to each pod
    • Lifecycle of Pod and Service NOT connected
    • Service has two functionalities:
      1. Permanent IP
      2. Load balancer: The service will catch the request and forward it to whichever pod is least busy.
  4. Ingress

    • External Services - let your app be accessible through browser
  5. Volumes

    • If the db container or pod restarts, losing the existing data would cause big trouble

    • With Volumes, log data or database data can be stored persistently and reliably, long term

    • Attaches physical storage on a hard drive to your pod

    • Storage could be either on local machine, or remote (outside of the K8s cluster)

    • Regardless of whether it’s local or remote storage, think of the storage as an external hard drive plugged into the Kubernetes cluster.

    • K8s cluster explicitly doesn’t manage data persistence.

    • Users or administrators are responsible for backing up the data, replicating and managing it, and making sure it’s kept on proper hardware.
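
    • A minimal sketch of attaching storage to a pod (my own illustration, not from the course; the pod name and the hostPath path are made up):

      # volume-sketch.yaml (hypothetical example)
      apiVersion: v1
      kind: Pod
      metadata:
        name: mongodb-with-volume
      spec:
        containers:
        - name: mongodb
          image: mongo
          volumeMounts:
          - name: mongo-data         # must match a volume name defined below
            mountPath: /data/db      # where mongo keeps its data inside the container
        volumes:
        - name: mongo-data
          hostPath:
            path: /mnt/data          # a directory on the node (local storage)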

  6. Secrets

    • It’s also external configuration component (similar to ConfigMap), but Secret is used to store secret data
    • Base64 encoded
    • Note that the built-in security mechanism is NOT enabled by default
    • Use it as an environment variable or as a properties file
  7. ConfigMap

    • Suppose the database URL needs to change. This URL is usually baked into the built application, so if the service endpoint or service name changes to mongo-db, developers would have to modify the URL in the built application, re-build it (with a new version), push it to the repository, pull the new image in your pod, and restart the whole thing.
    • Such a tedious chain of steps for a simple change is too cumbersome, which is why the ConfigMap component exists
      • external configuration of your application
      • just connect the ConfigMap to the pod
    • Note: never put credentials in plain text format in a ConfigMap
  8. Deployment

    • Scenario: in a normal production environment, one of the application pods crashed, or needs a restart because a new container image was built. Users would then be unable to reach the application during the downtime, which is a very bad situation in a production environment
    • Instead of relying on just one application pod and one database pod, everything is replicated across multiple servers: cloned node / replica node
    • The replica is connected to the same Service.
    • Creating a replica is not just creating a second pod; you define blueprints for pods
    • The defined blueprint is called a Deployment:
      • blueprint for my-app pods
      • you create Deployments
      • Pod is a layer of abstraction on top of containers, and Deployment is another abstraction on top of pods
    • But if the database pod died, wouldn’t we need a db replica too?
      • NO ❌. A DB can’t be replicated via Deployment, because a database has state, which is its data.
      • If we have clones of the database, they will all need to access the same shared data storage.
      • There you would need some kind of mechanism that manages which pods are currently writing to that storage, or which pods are reading from it, in order to avoid data inconsistencies.
      • Such a mechanism is provided by StatefulSet
  9. StatefulSet

    • This component is meant for STATEFUL apps, such as Elastic, MongoDB, MySQL.
    • Those stateful apps or databases should be created using StatefulSet, not Deployment.
    • However, deploying a database application using StatefulSet in a Kubernetes cluster can be somewhat tedious.
    • Therefore, it’s also a common practice to host database applications OUTSIDE of the Kubernetes cluster, and keep only the deployments or stateless applications, which replicate and scale with no problem, inside the Kubernetes cluster, communicating with the external databases.
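
    • As a rough illustration (my own sketch, not from the course; the names are made up), a StatefulSet config looks much like a Deployment, plus a serviceName and per-replica volume claims:

      apiVersion: apps/v1
      kind: StatefulSet
      metadata:
        name: mongodb
      spec:
        serviceName: mongodb-service   # headless service that gives each pod a stable DNS name
        replicas: 3
        selector:
          matchLabels:
            app: mongodb
        template:
          metadata:
            labels:
              app: mongodb
          spec:
            containers:
            - name: mongodb
              image: mongo
        volumeClaimTemplates:          # each replica gets its own persistent volume
        - metadata:
            name: mongo-data
          spec:
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 1Gi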

K8s architecture

  • Two types of nodes that Kubernetes operates on: Master and Worker (slave)
  • What are the differences between them, and which role does each one have inside the cluster?

Worker machine in K8s cluster

  • Each node has multiple Pods on it

  • Three processes must be installed on every node

  • Worker Nodes do the actual work

  • Kubelet interacts with both - the container and node

  • Kubelet starts the pod with a container inside

  • Usually a Kubernetes cluster is made up of multiple nodes, each of which must have the container runtime and kubelet service installed

  • Kube Proxy forwards the requests

  • Three Node Processes:

    1. Kubelet
    2. Kube Proxy
    3. Container runtime
  • How do you interact with this cluster?

  • How to schedule pod? monitor? re-schedule/re-start pod? join a new Node?

    All these managements are done by master processes.

Master processes

  • 4 processes run on every master node:

    1. API server

      • interact with the API Server using some client (such as a UI, a command-line tool, or the Kubernetes API)

      • It’s like a cluster gateway, which receives every request, update, or query going into the cluster.

      • It also acts as a gatekeeper for authentication:

        [some request]➡️[API Server]➡️[validates request]➡️[other processes]➡️[Pod]

    2. Scheduler

      • [Schedule new Pod]➡️[API Server]➡️[Scheduler]➡️[Where to put the Pod?]➡️[Kubelet]
      • Scheduler just decides on which node new Pod should be scheduled.
    3. Controller Manager

      • Detects cluster state changes, e.g. that a node has died, and reschedules those pods as soon as possible
      • [Controller Manager]➡️[Scheduler]➡️[Kubelet]➡️[Pod]
    4. etcd

      • A key-value store of a cluster state

      • You can think of it as the cluster brain

      • Cluster changes get stored (changed or updated) in the key value store

      • All these mechanisms with the scheduler and the controller manager work because of etcd’s data.

        • e.g. How does a scheduler know what resources are available?

        • e.g. How does a controller manager know that the cluster state changed in some way? (A pod died, or the kubelet restarted new pods upon the request of the scheduler.)

        • e.g. Is the cluster healthy?

Example Cluster Set-Up

  • 2 master nodes
  • 3 worker nodes
  • Note that the hardware resources for master and worker node servers actually differ.
    • Master has less workload: it needs fewer resources like CPU, RAM or storage.
    • Worker nodes do the actual job of running those pods with containers inside and need more resources.
  • You can add new master/node server pretty easily:
  1. Get new bare server
  2. Install all the master/worker node processes (such as container runtime, kubelet, kube proxy)
  3. Add it to the Kubernetes cluster
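
  • For example, a hypothetical sketch assuming a kubeadm-based cluster (kubeadm itself is not covered in these notes):

    # on the new server, after installing the container runtime, kubelet and kubeadm:
    kubeadm join <control-plane-host>:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>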

Minikube and kubectl - Local Setup

What is Minikube

Usually in K8s world, when you are setting up a production cluster -

  • multiple Masters: at least two in production setting

  • multiple Worker nodes

  • Master and Worker have separate responsibilities:

    • separate virtual or physical machines that each represent a node

If you want to test on a local machine, or try something out very quickly, setting up a production-level cluster would be pretty difficult or even impossible if you don’t have enough resources like memory and CPU.

Minikube - open source tool
  • One node cluster where the master processes and the worker processes both run on one node.

  • This node will have a Docker container runtime pre-installed.

  • You will be able to run the containers or the pods with containers on this node.

  • The way it runs on your laptop is through VirtualBox or some other hypervisor.

  • Minikube will create a virtual machine on your laptop, and the node will run inside that virtual machine.

  • Summary: Minikube is a one-node K8s cluster that runs in a virtual machine on your laptop, which you can use for testing Kubernetes in your local setup.

What is kubectl

  • A way to interact with the minikube (i.e. create pods and other K8s components on the node)
  • A command line tool for K8s cluster
  • Minikube runs both master and worker processes, so one of the master processes called API Server is actually the main entry point into the K8s cluster.
  • If you want to configure anything or create any component, you first have to talk to the API server.
  • The way to talk to the API Server is through different clients
    • A UI like a dashboard
    • The Kubernetes API
    • A CLI kubectl: the most powerful of the three clients
  • Important to note that kubectl isn’t just for the minikube cluster; if you have a cloud cluster or a hybrid cluster, kubectl is the tool for interacting with any type of Kubernetes cluster setup.
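
  • For example (standard kubectl commands, assuming more than one cluster is configured in your kubeconfig):

    kubectl config get-contexts          # list the clusters/contexts kubectl knows about
    kubectl config use-context minikube  # point kubectl at the minikube cluster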

Minikube installation steps

  1. Install a hypervisor (hyperkit) with brew

    [2024/2/26] Installing hyperkit here also requires xcode.app first

    Since minikube has kubectl as a dependency, installing minikube installs kubectl as well

    brew update
    brew install hyperkit
    brew install minikube
    
  2. Verify the installation with the following commands

    kubectl
    minikube
    
  3. Tell Minikube to use the hyperkit hypervisor to start this virtual Minikube cluster

    minikube start --vm-driver=hyperkit
    

    Related commands: minikube stop, minikube delete

    [2024/2/26]

    Its --help says --vm-driver is deprecated; use --driver instead

    Also, I could not start minikube with the hyperkit driver:

    The driver 'hyperkit' is not supported on darwin/arm64

    So I ended up running it with the docker driver instead

    1. First delete all local minikube clusters: minikube delete --all --purge

    2. Open Docker Desktop, then start minikube with this command

      minikube start --driver=docker --force --extra-config=kubelet.cgroup-driver=systemd --cni calico --container-runtime=containerd --registry-mirror=https://registry.docker-cn.com

    Result:

    https://i.imgur.com/oMz1A9l.png

  4. Check the node status

    [2024/2/26]

    Here I got the error message: couldn't get current server API group list: Get "https://127.0.0.1:52807/api?timeout=32s": EOF

    Restarting with minikube start --driver=docker (Docker Desktop open, without the extra config) fixed it

    Result: (does anyone know why there was an EOF error at first?)

    https://i.imgur.com/LqNQHQI.png

    https://i.imgur.com/KOAUspr.png

    kubectl get nodes
    
  5. Check the status

  • host is running
  • kubelet (a service that actually runs the pods using a container runtime)
  • apiserver
  • kubeconfig

    minikube status
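
    The output of a healthy setup typically looks like this (exact fields can vary by minikube version):

    minikube
    type: Control Plane
    host: Running
    kubelet: Running
    apiserver: Running
    kubeconfig: Configured
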
  6. Check the installed Kubernetes version

    https://i.imgur.com/8RCWPGT.png

    [2024/2/26] Why does my run show no Server Version, and no version.Info{} (including major, minor, gitVersion, gitCommit, gitTreeState, buildDate, goVersion, compiler, platform)?

    kubectl version
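
    Possibly newer kubectl releases changed the default output and no longer print the version.Info{} Go struct (my assumption, not from the course); the structured fields are still available with:

    kubectl version --output=yaml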
    
    • kubectl CLI: for configuring the Minikube cluster (used for most things)
    • minikube CLI: for starting up/deleting the cluster

Main kubectl commands - K8s CLI

Basic kubectl commands

  • Status of different K8s components

kubectl get [nodes | pod | services]

https://i.imgur.com/m6eQhjf.png

  • CRUD commands

    1. Since there is no pod yet, the CREATE command:

      • Look up the create-related commands

        kubectl create -h
        

      Pod is the smallest unit of the K8s cluster, but usually in practice, you are not working with pods directly. There is a Deployment, which is an abstraction over Pods.

      kubectl create deployment <NAME> --image=image
      
      • For example: create an NGINX deployment. The following command pulls the latest nginx image from Docker Hub

        kubectl create deployment nginx-depl --image=nginx
        

        https://i.imgur.com/aQ5tZD0.png

        If READY shows 0/1, the pod is still in ContainerCreating

        Running the create command above does the following:

        1. Deployment has all the information or blueprint for creating pods
        2. This is the minimum/most basic configuration for deployment (name and image to use)
        3. Rest is just defaults
      • Between deployment and pod, there is another layer which is automatically managed by K8s deployment — ReplicaSet

        kubectl get replicaset
        

        https://i.imgur.com/Ubq38ZW.png

        Note the pod naming scheme: {deployment NAME}-{ReplicaSet ID}-{Pod's own ID}

      • ReplicaSet manages the replicas of a Pod; in practice, you will never have to create, delete or update a ReplicaSet in any way.

      Layers of Abstraction

      • Deployment manages a ReplicaSet.
      • ReplicaSet manages all the replicas of that pod.
      • Pod is an abstraction of a container.
      • Everything below Deployment is and should be handled by Kubernetes.
    2. UPDATE

      • Edit a deployment by its name
      kubectl edit deployment [name]
      
      • You get an auto-generated configuration file with default values

      • Pin the image version to use in the image section

        https://i.imgur.com/1r8o4Ho.png

      • After saving, you can see one more pod appear

        https://i.imgur.com/8GqDvOR.png

        6777bffb6f will be terminated and disappear from the list above once 6bdcdf7f5 starts running.

      • kubectl get replicaset: The old one has no pods in it, and a new one has been created as well.

      • Merely by editing the deployment, the pod, the deployment and the replicaset were all updated automatically by Kubernetes

    3. DELETE

      kubectl delete deployment [name]
      

      https://i.imgur.com/AsQKUbv.png

  • Debugging pods

    1. log to console

      • basically shows you what the application running inside the Pod actually logged
      kubectl logs [pod name]
      

      https://i.imgur.com/ZXkF4th.png

    2. get interactive terminal

      • -it stands for interactive terminal
      kubectl exec -it [pod name] -- bin/bash
      

      Example:

      https://i.imgur.com/5a26IUe.png

  • All the CRUD operations happen on the deployment level, and everything underneath just follows automatically (pod, replicaset, etc.).

  • But kubectl create deployment name image option1 option2 can take many configuration options. You can define them in a configuration file instead, and just tell K8s to apply that config file

    kubectl apply -f config-file.yaml
    

    Example:

    touch nginx-deployment.yaml
    kubectl apply -f nginx-deployment.yaml
    
    ❯ cat nginx-deployment.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment
      labels:
        app: nginx
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:1.16
            ports:
            - containerPort: 80
    
    1. spec.replicas:

      • How many replicas of the pod I want to create
      • specification for the deployment
    2. Blueprint for a pod: spec.template.spec section

      • Per above example, we just want one container inside the pod, with nginx image, and bind that on port 80.

    [2024/2/27] I ran into the following error here:

    error when creating “nginx-deployment.yaml”: Deployment in version “v1” cannot be handled as a Deployment: json: cannot unmarshal string into Go struct field LabelSelector.spec.selector.matchLabels of type map[string]string

    The fix was deleting the trailing whitespace after values and the blank whitespace below the last line, and making sure there is a space after each colon (app: nginx, not app:nginx)

    https://i.imgur.com/M7Qntuc.png

    3. If you change the yaml file above, Kubernetes automatically determines on apply whether to create or to update

      Example: update the replicas number from 1 to 2

      https://i.imgur.com/euXAybl.png
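
      The apply output tells you which one happened (standard kubectl behavior; shown here for the update case). "configured" means an existing object was updated; a fresh create prints "created":

      ❯ kubectl apply -f nginx-deployment.yaml
      deployment.apps/nginx-deployment configured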

Wrap-up

kubectl create deployment [name]
kubectl edit deployment [name]
kubectl delete deployment [name]

kubectl get nodes | pod | services | replicaset | deployment

kubectl logs [pod name]
kubectl exec -it [pod name] -- bin/bash
kubectl describe pod [pod name]

kubectl apply -f [file name]
kubectl delete -f [file name]

K8s YAML configuration file

Overview

  • The three parts of configuration file

    1. metadata

    2. specification

      spec:
        replicas: ...
        selector: ...
        template: ...
        ports: ...
      
    3. status - to be automatically generated and added by Kubernetes

      Where does K8s get the status data from? From etcd

  • Connecting deployments to Service to Pods

  • Demo

YAML configuration files

  • “Human friendly data serialization standard for all programming languages”

  • Syntax: strict indentation

    • The format is simple but very strict about indentation; any formatting mistake makes the file invalid

    • When your yaml file is two-hundred-some lines long, you can use an online yaml validator; there are also code editor plugins that provide YAML syntax validation

  • Store the config file with your code or own git repository

    • Usually it would be a part of the whole infrastructure-as-code concept
    • Or you can have its own repository just for the configuration

Blueprint for pods (Template)

  • Deployment manage pods.
  • When you expand spec.template, you see it also has its own metadata and spec section. This configuration applies to a Pod.
  • It defines things such as which image the pod should be based on, which port it should open, and what the name of the container will be

Connecting components (Labels & Selectors & Ports)

  • The metadata part contains labels, and the spec part contains selectors

    # nginx-deployment.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment
      labels:
        app: nginx
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
            - name: nginx
              image: nginx:1.16
              ports:
                - containerPort: 8080
    
  • In the metadata, you give components like deployment or pod a label, which is a key-value pair. It can be any key-value pair you choose for the component.

  • Pods get the label through the template blueprint.

  • This label is matched by the selector.
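
  • To see labels and selectors in action (standard kubectl flags; my example):

    kubectl get pods --show-labels   # show each pod together with its labels
    kubectl get pods -l app=nginx    # filter by label, the same way a selector matches pods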

Connect Services to Deployments

  • Deployment has its own label (e.g. app: nginx) used by the service selector.

  • In the specification of a service, we define a selector which makes the connection between the service (spec.selector.app) and the deployment or its pods (metadata.labels.app), because the service must know which pods are registered with it, i.e. which pods belong to that service. The connection between Service and Pods is established through the selector matching the labels.

    # nginx-service.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-service
    spec:
      selector:
        app: nginx
      ports:
        - protocol: TCP
          port: 80
          targetPort: 8080
    
  • In the service yaml above, you can see the service has its own ports configuration

  • The deployment yaml also defines a containerPort

Ports in Service and Pod

  • Service has a port where the service itself is accessible. If another service sends a request to the nginx service here, it needs to send it on port 80.

  • However, this service needs to know to which pod it should forward the request, and also on which port that pod is listening: that is the targetPort.

  • Therefore the service’s spec.ports.targetPort value must match the deployment’s spec.template.spec.containers.ports.containerPort value

  • Example:

    https://i.imgur.com/Apq9V4q.png

  • How do you verify that the service forwards requests to its pods on the right port? Use the following command

    kubectl describe service [the_service_name]
    

    https://i.imgur.com/MqGocYk.png

    • You can see all the status information listed, such as the selector, TargetPort and Endpoints defined in the yaml file.
    • Endpoints must be the Pod’s IP address plus the port
  • How do you find the Pod’s IP address?

    -o for output

    wide for more information

    kubectl get pod -o wide
    

    Check the IP column below to verify that the Endpoints above are correct

    https://i.imgur.com/VOewNVS.png

3rd Part of Configuration File : Status

  • How do you view the automatically generated Status?

    • View a deployment's output in yaml format
    • The following command returns the final/updated deployment configuration. It actually comes from etcd, because etcd stores the state of the whole cluster, including every component
    kubectl get deployment nginx-deployment -o yaml
    
    • Result:
    ...
    status:
      availableReplicas: 2
      conditions:
      - lastTransitionTime: "2024-02-27T06:49:06Z"
        lastUpdateTime: "2024-02-27T06:49:06Z"
        message: Deployment has minimum availability.
        reason: MinimumReplicasAvailable
        status: "True"
        type: Available
      - lastTransitionTime: "2024-02-27T06:49:05Z"
        lastUpdateTime: "2024-02-27T06:49:06Z"
        message: ReplicaSet "nginx-deployment-7b965f675d" has successfully progressed.
        reason: NewReplicaSetAvailable
        status: "True"
        type: Progressing
      observedGeneration: 1
      readyReplicas: 2
      replicas: 2
      updatedReplicas: 2
    
    • Save the result as a separate file

      kubectl get deployment nginx-deployment -o yaml > nginx-deployment-result.yaml
      
    • All this status is automatically edited and updated constantly by K8s.

    • You can see how many replicas are running, the state of those replicas, etc.

    • When debugging, reading this status section helps with troubleshooting

  • Besides Status, this yaml also shows that:

    1. K8s added other things to the metadata, such as

      creationTimestamp: when this component was created

    2. the same goes for the specification

  • If you want an automation script that clones an existing deployment into a new yaml, you have to remove this auto-generated content first:

    1. clean that deployment configuration file first
    2. create another deployment from that blueprint
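
    One way to do that cleanup (my own suggestion, assuming the mikefarah yq tool is installed; yq is not part of kubectl):

    kubectl get deployment nginx-deployment -o yaml \
      | yq 'del(.status) | del(.metadata.creationTimestamp) | del(.metadata.generation) | del(.metadata.resourceVersion) | del(.metadata.uid)' \
      > nginx-deployment-clean.yaml
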
  • Delete the deployment and the service using the configuration file

    kubectl delete -f nginx-deployment.yaml
    

    You can see that both nginx-deployment and nginx-service are gone

    https://i.imgur.com/LQzzJXf.png

Demo: MongoDB and Mongo Express

Set up a complete application using K8s components. This part deploys two applications: mongodb and mongo-express.

Why this choice: it demonstrates really well a typical setup of a web application and its database.

Overview of the K8s components

  • 2 Deployment / Pod
  • 2 Services
  • 1 ConfigMap
  • 1 Secret

Steps

  1. Create a MongoDB pod

  2. To be able to talk to this Pod, we need a Service: create an internal service

    • No external requests are allowed to the Pod; only components inside the same cluster can talk to it.
  3. Create a Mongo Express deployment

    • It needs a mongodb database URL, so that Mongo Express can connect to the DB
    • It needs credentials: the database username and password
      • Both are passed to Mongo Express via environment variables in the deployment.yaml
  4. Create a ConfigMap that holds the db url

  5. Create a Secret that holds the DB credentials

  6. Reference both components from the Deployment.yaml

Browser Request Flow through the K8s components

  • The request comes from the browser and it goes to the external service of the Express (Mongo Express External Service), which will then forward to the Express pod.
  • The Pod will then connect to internal service of MongoDB (MongoDB Internal Service), that is basically the database URL (as stored in ConfigMap).
  • It will forward them to the MongoDB Pod, where the request is authenticated using the credentials.

Mongo DB Deployment

  • Get the setup of all components in the cluster

    kubectl get all
    

    https://i.imgur.com/WpPCLDQ.png

  1. Create a MongoDB deployment

    Pod Blueprint:

    • name: mongodb

    • image: mongo

    • specify what port I want to expose, using ports.containerPort

    • for the environment variable names, refer to dockerhub_mongodb

    # mongo.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: mongodb-deployment
      labels:
        app: mongodb
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: mongodb
      template:
        metadata:
          labels:
            app: mongodb
        spec:
          containers:
          - name: mongodb
            image: mongo
            ports: 
            - containerPort: 27017
            env:
            - name: MONGO_INITDB_ROOT_USERNAME
              value: 
            - name: MONGO_INITDB_ROOT_PASSWORD
              value: 
    
  2. Note that the Deployment config file is checked into the repository, so usually you wouldn’t write the username and password inside the configuration file. Instead, create a Secret whose values are referenced from the config file

    • The Secret lives in K8s, not in the repository.

Create Secret

  • kind: Secret

  • metadata.name: a name of your choice

  • type:

    • Opaque - default for arbitrary key-value pairs
  • data:

    • the actual contents - in key-value pairs

    • key: the name you come up with

    • value: not plain text, MUST be base64 encoded

      Note: the -n flag is required (echo would otherwise append a newline, which would get encoded too)

      ❯ echo -n 'username' | base64
      dXNlcm5hbWU=
      ❯ echo -n 'password' | base64
      cGFzc3dvcmQ=
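
      To double-check an encoded value (standard base64 usage; my addition; on older macOS the decode flag is -D):

      ❯ echo 'dXNlcm5hbWU=' | base64 --decode
      username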
      
  • Secret must be created before the Deployment if you want to reference secret in deployment.

apiVersion: v1
kind: Secret
metadata:
  name: mongodb-secret
type: Opaque
data:
  mongo-root-username: dXNlcm5hbWU=
  mongo-root-password: cGFzc3dvcmQ=
  • Create a directory and put both yaml files in it

    mkdir k8s-configuration
    mv mongo.yaml k8s-configuration
    mv mongo-secret.yaml k8s-configuration
    cd k8s-configuration/
    
  • In this directory, run the following to create the secret and check the result

    ❯ kubectl apply -f mongo-secret.yaml
    secret/mongodb-secret created
    ❯ kubectl get secret
    NAME             TYPE     DATA   AGE
    mongodb-secret   Opaque   2      34s
    
  • In the deployment config file, use valueFrom instead of a literal value

    • valueFrom.secretKeyRef.name: the metadata.name value of the secret yaml
    • valueFrom.secretKeyRef.key: the name of the data key in the secret yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: mongodb-deployment
      labels:
        app: mongodb
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: mongodb
      template:
        metadata:
          labels:
            app: mongodb
        spec:
          containers:
          - name: mongodb
            image: mongo
            ports:
            - containerPort: 27017
            env:
            - name: MONGO_INITDB_ROOT_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-root-username
            - name: MONGO_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-root-password
    
  • After these changes, create the mongo deployment

    ❯ kubectl apply -f mongo.yaml
    deployment.apps/mongodb-deployment created
    
  • Then kubectl get all shows the newly created deployment, pod, and replicaset

  • If kubectl get pod --watch shows the STATUS stuck at ContainerCreating, use describe to debug

    • --watch: the shell keeps printing a new line whenever the pod’s status changes

      ❯ kubectl get pod --watch
      NAME                                 READY   STATUS    RESTARTS   AGE
      mongodb-deployment-699744c7d-6qp25   1/1     Running   0          6m36s
      
    • troubleshooting: kubectl describe pod [the_pod_name]

      The current progress is shown in the Events table’s message column

      kubectl describe pod mongodb-deployment-699744c7d-6qp25
      

MongoDB Internal Service

  • Create a service; usually the deployment and the service are placed in the same file

    • Use --- as the yaml document separator
    • kind: Service
    • metadata.name: a name of your choice
    • selector: to connect to the Pod through its label
    • ports.port: the service port
    • ports.targetPort: the containerPort of the Deployment
    # ... configs of deployment
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: mongodb-service
    spec:
      selector:
        app: mongodb
      ports:
        - protocol: TCP
          port: 27017
          targetPort: 27017
    
  • Create it with kubectl apply -f mongo.yaml

    ❯ kubectl apply -f mongo.yaml
    deployment.apps/mongodb-deployment unchanged
    service/mongodb-service created
    
    ❯ kubectl get service
    NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)     AGE
    kubernetes        ClusterIP   10.96.0.1       <none>        443/TCP     3d3h
    mongodb-service   ClusterIP   10.99.162.252   <none>        27017/TCP   62s
    
  • Verify that the service is attached to the K8s pod

    • Endpoints: IP address of the pod and the port where the application inside the pod is listening
    • kubectl get pod -o wide: to get additional output
    ❯ kubectl describe service mongodb-service
    Name:              mongodb-service
    Namespace:         default
    Labels:            <none>
    Annotations:       <none>
    Selector:          app=mongodb
    Type:              ClusterIP
    IP Family Policy:  SingleStack
    IP Families:       IPv4
    IP:                10.99.162.252
    IPs:               10.99.162.252
    Port:              <unset>  27017/TCP
    TargetPort:        27017/TCP
    Endpoints:         10.244.0.9:27017
    Session Affinity:  None
    Events:            <none>
    
    ❯ kubectl get pod -o wide
    NAME                                 READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
    mongodb-deployment-699744c7d-6qp25   1/1     Running   0          24m   10.244.0.9   minikube   <none>           <none>
    
  • To see all the components for one application:

 ❯ kubectl get all | grep mongodb

https://i.imgur.com/TGPEA9g.png

Mongo Express Deployment & Service & ConfigMap

Next, create the Mongo Express Deployment and Service, plus the external configuration (a ConfigMap holding the DB URL)

  • Below is the yaml for mongo-express

    • Three environment variables are needed; for the names, refer to dockerhub_mongo_express
      1. Which DB to connect to? The MongoDB address / internal service
        • ME_CONFIG_MONGODB_SERVER
      2. Which credentials to authenticate with?
        • ME_CONFIG_MONGODB_ADMINUSERNAME
        • ME_CONFIG_MONGODB_ADMINPASSWORD
    • Need to open ports (you can have multiple ports in a pod)
      • add the environment variable names; username and password are the same as before
      • Why the server address uses a ConfigMap:
        • external configuration
        • centralization: stored in one place
        • other components can use it too
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: mongo-express
      labels:
        app: mongo-express
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: mongo-express
      template:
        metadata:
          labels:
            app: mongo-express
        spec:
          containers:
          - name: mongo-express
            image: mongo-express
            ports:
            - containerPort: 8081
            env:
            - name: ME_CONFIG_MONGODB_ADMINUSERNAME
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-root-username
            - name: ME_CONFIG_MONGODB_ADMINPASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-root-password
            - name: ME_CONFIG_MONGODB_SERVER
              value: 
    
  • Next, the ConfigMap configuration file

    • The yaml structure is basically the same as the secret’s
    • kind: ConfigMap
    • metadata.name: a name of your choice
    • data: the actual contents - in key-value pairs
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: mongodb-configmap
    data:
      database_url: mongodb-service
    
  • As with the Secret, the creation order of the ConfigMap and Mongo Express matters: the ConfigMap must be applied first, so that Mongo Express can reference it when it is created

  • Referencing the configmap in the Mongo Express yaml works much like the Secret

    • only configMapKeyRef replaces secretKeyRef
    • configMapKeyRef.name is the configmap’s metadata.name
    • configMapKeyRef.key is the name of the data key defined in the configmap
            - name: ME_CONFIG_MONGODB_SERVER
              valueFrom:
                configMapKeyRef:
                  name: mongodb-configmap
                  key: database_url
    
  • Create the configmap and then the deployment, in that order

    ❯ kubectl apply -f mongo-configmap.yaml
    configmap/mongodb-configmap created
    
    ❯ kubectl apply -f mongo-express.yaml
    deployment.apps/mongo-express created
    
    ❯ kubectl get pod
    NAME                                 READY   STATUS    RESTARTS   AGE
    mongo-express-859f75dd4f-qr59v       1/1     Running   0          77s
    mongodb-deployment-699744c7d-6qp25   1/1     Running   0          63m
    
  • Check the log with kubectl logs [name_of_the_pod]

    ❯ kubectl logs mongo-express-859f75dd4f-qr59v
    Waiting for mongo:27017...
    ...
    Mongo Express server listening at http://0.0.0.0:8081
    Server is open to allow connections from anyone (0.0.0.0)
    basicAuth credentials are "admin:pass", it is recommended you change this in your config.js!
    
  • In mongo-express.yaml, add the config for the service

    • Because in practice you never have a deployment without a service, it makes sense to keep them together.
    • How to make it an External Service?
      • type: LoadBalancer
        • A bad name, because it can be confusing: the internal service also acts as a load balancer.
        • What this LoadBalancer type does is accept external requests, by assigning the service an external IP address
      • ports.nodePort:
        • Port for external IP address
        • Port you need to put into browser
        • Range must be between 30000 ~ 32767
        • The port where this external IP address will be open.
    # mongo express deployment
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: mongo-express-service
    spec: 
      selector:
        app: mongo-express
      type: LoadBalancer
      ports:
        - protocol: TCP
          port: 8081
          targetPort: 8081
          nodePort: 30100
    
  • Start the mongo express service

    ❯ kubectl apply -f mongo-express.yaml
    deployment.apps/mongo-express unchanged
    service/mongo-express-service created
    
  • In the output below, you can see

    • type ClusterIP:

      • Internal Service or ClusterIP is the default; you don’t have to define it when creating an internal service.
      • Difference from LoadBalancer: ClusterIP gives the service an internal IP address only
    • type LoadBalancer:

      • LoadBalancer also assigns the service an internal IP address, and in addition it opens the service to external requests through the nodePort
❯ kubectl get service
NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes              ClusterIP      10.96.0.1        <none>        443/TCP          3d5h
mongo-express-service   LoadBalancer   10.110.159.160   <pending>     8081:30000/TCP   58s
mongodb-service         ClusterIP      10.99.162.252    <none>        27017/TCP        86m
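
On minikube, the EXTERNAL-IP stays <pending>; one common way to reach the service (my addition; standard minikube usage) is minikube service, which gives the service a reachable URL and opens it in the browser:

❯ minikube service mongo-express-service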

To be continued …



## Organize your components with K8s namespaces



## K8s ingress explained



## Helm - Package Manager



## Persisting Data in K8s with Volumes



## Deploying Stateful Apps with StatefulSet



## K8s Services explained