Building the cluster
kubeadm init (run on the master node)
Initialize the control plane by running the following command.
sudo kubeadm init
The output looks like this:
wurly@k8s1:~$ sudo kubeadm init
I0203 16:34:26.708499   23499 version.go:256] remote version is much newer: v1.29.1; falling back to: stable-1.28
[init] Using Kubernetes version: v1.28.6
(snip)
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.201:6443 --token mu8v4r.5e7v4uibqkplwmxx \
        --discovery-token-ca-cert-hash sha256:de9e4adece0fbdcbd04b0e21282b2cb30d8060bbb972777294489891100bf960
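As an aside, this runs kubeadm init with no options at all. Many guides pass an explicit Pod network CIDR when a CNI such as Calico will be installed later, so that pod addresses cannot overlap with the LAN. A sketch of that variant (the CIDR value below is only an illustration, not what this walkthrough used):

sudo kubeadm init --pod-network-cidr=10.244.0.0/16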
The text printed by the command contains both a log of what was executed internally and the actions to perform next.
First, run these commands exactly as instructed.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
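These commands copy the cluster admin kubeconfig into place so kubectl works as a regular user. As an optional sanity check (a standard kubectl subcommand), you can confirm the API server is reachable:

kubectl cluster-info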
At this point the nodes look like this; the worker nodes have not joined yet.
wurly@k8s1:~$ kubectl get nodes
NAME   STATUS     ROLES           AGE     VERSION
k8s1   NotReady   control-plane   4m25s   v1.28.2
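The control-plane node reports NotReady because no pod network (CNI) add-on is installed yet. If you are curious, kubectl describe node should show the reason in the Ready condition (the exact message may vary by version):

kubectl describe node k8s1
# the Ready condition should mention something like NetworkReady=false reason:NetworkPluginNotReady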
Installing Calico (run on the master node)
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Next, we need to install one of the "Networking and Network Policy" add-ons from the list below.
https://kubernetes.io/docs/concepts/cluster-administration/addons/
There are plenty of options; following what most guides do, I installed Calico.
The state of the pods before installation:
wurly@k8s1:~$ kubectl get pods -n kube-system
NAME                           READY   STATUS    RESTARTS   AGE
coredns-5dd5756b68-2dfmx       0/1     Pending   0          7m32s
coredns-5dd5756b68-z2pxd       0/1     Pending   0          7m32s
etcd-k8s1                      1/1     Running   0          7m37s
kube-apiserver-k8s1            1/1     Running   0          7m36s
kube-controller-manager-k8s1   1/1     Running   0          7m35s
kube-proxy-kkv4r               1/1     Running   0          7m32s
kube-scheduler-k8s1            1/1     Running   0          7m37s
The command to run is:
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml
The output is as follows:
wurly@k8s1:~$ kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
Checking immediately afterwards, calico-node was still being created.
wurly@k8s1:~$ kubectl get pods -n kube-system
NAME                                       READY   STATUS     RESTARTS   AGE
calico-kube-controllers-658d97c59c-gjb8r   0/1     Pending    0          30s
calico-node-fhglf                          0/1     Init:1/3   0          30s
coredns-5dd5756b68-2dfmx                   0/1     Pending    0          9m42s
coredns-5dd5756b68-z2pxd                   0/1     Pending    0          9m42s
etcd-k8s1                                  1/1     Running    0          9m47s
kube-apiserver-k8s1                        1/1     Running    0          9m46s
kube-controller-manager-k8s1               1/1     Running    0          9m45s
kube-proxy-kkv4r                           1/1     Running    0          9m42s
kube-scheduler-k8s1                        1/1     Running    0          9m47s
After waiting a while, the pods come up.
wurly@k8s1:~$ kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-658d97c59c-gjb8r   1/1     Running   0          115s
calico-node-fhglf                          1/1     Running   0          115s
coredns-5dd5756b68-2dfmx                   1/1     Running   0          11m
coredns-5dd5756b68-z2pxd                   1/1     Running   0          11m
etcd-k8s1                                  1/1     Running   0          11m
kube-apiserver-k8s1                        1/1     Running   0          11m
kube-controller-manager-k8s1               1/1     Running   0          11m
kube-proxy-kkv4r                           1/1     Running   0          11m
kube-scheduler-k8s1                        1/1     Running   0          11m
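Instead of re-running get pods by hand, you can also watch the rollout or block until it finishes (both are standard kubectl options; k8s-app=kube-dns is the label the CoreDNS pods carry):

kubectl get pods -n kube-system -w
# or, to wait until CoreDNS is Ready:
kubectl wait --for=condition=Ready pod -l k8s-app=kube-dns -n kube-system --timeout=300s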
kubeadm join (run on the worker nodes)
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.1.201:6443 --token mu8v4r.5e7v4uibqkplwmxx \
        --discovery-token-ca-cert-hash sha256:de9e4adece0fbdcbd04b0e21282b2cb30d8060bbb972777294489891100bf960
Following the message printed on the master node, run kubeadm join on each machine that will become a worker node.
wurly@k8s2:~$ kubeadm join 192.168.1.201:6443 --token mu8v4r.5e7v4uibqkplwmxx \
> --discovery-token-ca-cert-hash sha256:de9e4adece0fbdcbd04b0e21282b2cb30d8060bbb972777294489891100bf960
accepts at most 1 arg(s), received 3
To see the stack trace of this error execute with --v=5 or higher
wurly@k8s3:~$ kubeadm join 192.168.1.201:6443 --token mu8v4r.5e7v4uibqkplwmxx \
> --discovery-token-ca-cert-hash sha256:de9e4adece0fbdcbd04b0e21282b2cb30d8060bbb972777294489891100bf960
accepts at most 1 arg(s), received 3
To see the stack trace of this error execute with --v=5 or higher
A puzzling message appeared. kubeadm join takes exactly one positional argument (the API server endpoint), so "received 3" suggests the pasted line break split the command and turned the remaining flags into extra arguments. Suspecting a copy-and-paste problem, I removed the line break and ran the command as a single line.
wurly@k8s2:~$ kubeadm join 192.168.1.201:6443 --token mu8v4r.5e7v4uibqkplwmxx --discovery-token-ca-cert-hash sha256:de9e4adece0fbdcbd04b0e21282b2cb30d8060bbb972777294489891100bf960
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR IsPrivilegedUser]: user is not running as root
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
Root privileges are required, so run it with sudo.
The result on k8s2 is as follows:
wurly@k8s2:~$ sudo kubeadm join 192.168.1.201:6443 --token mu8v4r.5e7v4uibqkplwmxx --discovery-token-ca-cert-hash sha256:de9e4adece0fbdcbd04b0e21282b2cb30d8060bbb972777294489891100bf960
[sudo] password for wurly:
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
The result on k8s3 is as follows:
wurly@k8s3:~$ sudo kubeadm join 192.168.1.201:6443 --token mu8v4r.5e7v4uibqkplwmxx --discovery-token-ca-cert-hash sha256:de9e4adece0fbdcbd04b0e21282b2cb30d8060bbb972777294489891100bf960
[sudo] password for wurly:
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
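One caveat worth noting: the bootstrap token embedded in the join command expires (by default after 24 hours). If a worker is joined later than that, a fresh join command can be generated on the master:

sudo kubeadm token create --print-join-command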
Confirming the cluster is formed (run on the master node)
Check the node status with kubectl get nodes. Running it repeatedly right after the joins, I could watch the worker nodes show up and then turn Ready.
wurly@k8s1:~$ kubectl get nodes
NAME   STATUS     ROLES           AGE   VERSION
k8s1   Ready      control-plane   16m   v1.28.2
k8s2   NotReady   <none>          31s   v1.28.2
k8s3   NotReady   <none>          8s    v1.28.2
wurly@k8s1:~$ kubectl get nodes
NAME   STATUS     ROLES           AGE   VERSION
k8s1   Ready      control-plane   16m   v1.28.2
k8s2   Ready      <none>          54s   v1.28.2
k8s3   NotReady   <none>          31s   v1.28.2
wurly@k8s1:~$ kubectl get nodes
NAME   STATUS   ROLES           AGE   VERSION
k8s1   Ready    control-plane   17m   v1.28.2
k8s2   Ready    <none>          72s   v1.28.2
k8s3   Ready    <none>          49s   v1.28.2
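The workers show ROLES <none> because kubeadm only labels the control-plane node. Purely for display purposes, you can add a role label yourself (the value "worker" is a convention, not a requirement):

kubectl label node k8s2 node-role.kubernetes.io/worker=
kubectl label node k8s3 node-role.kubernetes.io/worker=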
You can also watch the calico-node pods being created on the worker-node side.
wurly@k8s1:~$ kubectl get pods -n kube-system
NAME                                       READY   STATUS     RESTARTS   AGE
calico-kube-controllers-658d97c59c-gjb8r   1/1     Running    0          7m55s
calico-node-2xp2w                          0/1     Init:2/3   0          64s
calico-node-fhglf                          1/1     Running    0          7m55s
calico-node-zdnm7                          0/1     Running    0          87s
coredns-5dd5756b68-2dfmx                   1/1     Running    0          17m
coredns-5dd5756b68-z2pxd                   1/1     Running    0          17m
etcd-k8s1                                  1/1     Running    0          17m
kube-apiserver-k8s1                        1/1     Running    0          17m
kube-controller-manager-k8s1               1/1     Running    0          17m
kube-proxy-6db5z                           1/1     Running    0          64s
kube-proxy-kkv4r                           1/1     Running    0          17m
kube-proxy-llj5q                           1/1     Running    0          87s
kube-scheduler-k8s1                        1/1     Running    0          17m
wurly@k8s1:~$ kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-658d97c59c-gjb8r   1/1     Running   0          8m30s
calico-node-2xp2w                          1/1     Running   0          99s
calico-node-fhglf                          1/1     Running   0          8m30s
calico-node-zdnm7                          1/1     Running   0          2m2s
coredns-5dd5756b68-2dfmx                   1/1     Running   0          17m
coredns-5dd5756b68-z2pxd                   1/1     Running   0          17m
etcd-k8s1                                  1/1     Running   0          17m
kube-apiserver-k8s1                        1/1     Running   0          17m
kube-controller-manager-k8s1               1/1     Running   0          17m
kube-proxy-6db5z                           1/1     Running   0          99s
kube-proxy-kkv4r                           1/1     Running   0          17m
kube-proxy-llj5q                           1/1     Running   0          2m2s
kube-scheduler-k8s1                        1/1     Running   0          17m
As a quick test, let's use kubectl run to start a shell in a pod based on the nginx image.
wurly@k8s1:~$ kubectl run --image=nginx:1.16 --restart=Never --rm -it sample-debug --command -- /bin/sh
The pod is up and running.
wurly@k8s1:~$ kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS        AGE
default       sample-debug                               1/1     Running   0               75s
kube-system   calico-kube-controllers-658d97c59c-gjb8r   1/1     Running   1 (3h48m ago)   20h
kube-system   calico-node-2xp2w                          1/1     Running   2 (3h47m ago)   20h
kube-system   calico-node-fhglf                          1/1     Running   1 (3h48m ago)   20h
kube-system   calico-node-zdnm7                          1/1     Running   1 (3h48m ago)   20h
kube-system   coredns-5dd5756b68-2dfmx                   1/1     Running   1 (3h48m ago)   21h
kube-system   coredns-5dd5756b68-z2pxd                   1/1     Running   1 (3h48m ago)   21h
kube-system   etcd-k8s1                                  1/1     Running   1 (3h48m ago)   21h
kube-system   kube-apiserver-k8s1                        1/1     Running   1 (3h48m ago)   21h
kube-system   kube-controller-manager-k8s1               1/1     Running   1 (3h48m ago)   21h
kube-system   kube-proxy-6db5z                           1/1     Running   1 (3h48m ago)   20h
kube-system   kube-proxy-kkv4r                           1/1     Running   1 (3h48m ago)   21h
kube-system   kube-proxy-llj5q                           1/1     Running   1 (3h48m ago)   20h
kube-system   kube-scheduler-k8s1                        1/1     Running   1 (3h48m ago)   21h
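Since the control-plane node carries the node-role.kubernetes.io/control-plane:NoSchedule taint (visible in the kubeadm init log), the pod should have been scheduled onto one of the workers. While the shell session is still open (the --rm flag deletes the pod on exit), -o wide run from another terminal shows which node it landed on:

kubectl get pod sample-debug -o wide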
With this, the Kubernetes cluster build is complete.
(Reference) Deepening our understanding of Kubernetes
What I did
Because the cluster was built with a bare minimum of steps, we can learn a lot from it by reading the files it created and poking at it with kubectl.
Reading this material alongside Chapter 19, "Understanding the Kubernetes architecture," of Kubernetes完全ガイド 第2版 (Kubernetes Perfect Guide, 2nd ed., by Masaya Aoyama) deepened my understanding enormously.
Output of kubeadm init
The text printed by the command contains a log of what was executed internally as well as the actions to perform next. From the log you can read off, among other things, where the configuration files are placed.
wurly@k8s1:~$ sudo kubeadm init
I0203 16:34:26.708499   23499 version.go:256] remote version is much newer: v1.29.1; falling back to: stable-1.28
[init] Using Kubernetes version: v1.28.6
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0203 16:35:06.461263   23499 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.201]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s1 localhost] and IPs [192.168.1.201 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s1 localhost] and IPs [192.168.1.201 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 19.518434 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s1 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s1 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: mu8v4r.5e7v4uibqkplwmxx
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
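As the FYI line in the kubeadm join log earlier pointed out, the configuration that init stored in the kubeadm-config ConfigMap (the [upload-config] line above) can be inspected from the cluster at any time:

kubectl -n kube-system get cm kubeadm-config -o yaml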
Configuration files (master node)
The configuration files are created under /etc/kubernetes.
wurly@k8s1:~$ ls -la /etc/kubernetes
total 44
drwxr-xr-x  4 root root 4096 Feb  3 16:35 .
drwxr-xr-x 98 root root 4096 Feb  3 16:30 ..
-rw-------  1 root root 5649 Feb  3 16:35 admin.conf
-rw-------  1 root root 5681 Feb  3 16:35 controller-manager.conf
-rw-------  1 root root 1961 Feb  3 16:36 kubelet.conf
drwxr-xr-x  2 root root 4096 Feb  3 16:35 manifests
drwxr-xr-x  3 root root 4096 Feb  3 16:35 pki
-rw-------  1 root root 5629 Feb  3 16:35 scheduler.conf
Manifest files (master node)
wurly@k8s1:~$ ls -la /etc/kubernetes/manifests
total 24
drwxr-xr-x 2 root root 4096 Feb  3 16:35 .
drwxr-xr-x 4 root root 4096 Feb  3 16:35 ..
-rw------- 1 root root 2395 Feb  3 16:35 etcd.yaml
-rw------- 1 root root 3887 Feb  3 16:35 kube-apiserver.yaml
-rw------- 1 root root 3279 Feb  3 16:35 kube-controller-manager.yaml
-rw------- 1 root root 1463 Feb  3 16:35 kube-scheduler.yaml
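These are the static Pod manifests for etcd and the control-plane components: the kubelet runs them directly from this directory, without going through the scheduler. The directory is wired up in the kubelet configuration (staticPodPath is the field name in the kubelet config format):

sudo grep staticPodPath /var/lib/kubelet/config.yaml
# staticPodPath: /etc/kubernetes/manifests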
Certificates and keys, including the root CA (master node)
wurly@k8s1:~$ ls -la /etc/kubernetes/pki
total 68
drwxr-xr-x 3 root root 4096 Feb  3 16:35 .
drwxr-xr-x 4 root root 4096 Feb  3 16:35 ..
-rw-r--r-- 1 root root 1155 Feb  3 16:35 apiserver-etcd-client.crt
-rw------- 1 root root 1679 Feb  3 16:35 apiserver-etcd-client.key
-rw-r--r-- 1 root root 1164 Feb  3 16:35 apiserver-kubelet-client.crt
-rw------- 1 root root 1679 Feb  3 16:35 apiserver-kubelet-client.key
-rw-r--r-- 1 root root 1277 Feb  3 16:35 apiserver.crt
-rw------- 1 root root 1679 Feb  3 16:35 apiserver.key
-rw-r--r-- 1 root root 1107 Feb  3 16:35 ca.crt
-rw------- 1 root root 1675 Feb  3 16:35 ca.key
drwxr-xr-x 2 root root 4096 Feb  3 16:35 etcd
-rw-r--r-- 1 root root 1123 Feb  3 16:35 front-proxy-ca.crt
-rw------- 1 root root 1675 Feb  3 16:35 front-proxy-ca.key
-rw-r--r-- 1 root root 1119 Feb  3 16:35 front-proxy-client.crt
-rw------- 1 root root 1675 Feb  3 16:35 front-proxy-client.key
-rw------- 1 root root 1675 Feb  3 16:35 sa.key
-rw------- 1 root root  451 Feb  3 16:35 sa.pub
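Since kubeadm generated all of these certificates itself, it can also report their expiry dates with a built-in subcommand, which is good to know for later maintenance:

sudo kubeadm certs check-expiration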
admin.conf (master node)
wurly@k8s1:~$ sudo cat /etc/kubernetes/admin.conf
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tL...LS0K
    server: https://192.168.1.201:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tL...LS0tCg==
    client-key-data: LS0tL...LQo=
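The user entry embeds a client certificate. Out of curiosity, you can decode it to see who kubectl authenticates as (this assumes kubernetes-admin is the first users entry, as in the file above; the subject should show CN = kubernetes-admin in a privileged group):

kubectl config view --raw -o jsonpath='{.users[0].user.client-certificate-data}' \
  | base64 -d | openssl x509 -noout -subject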
Configuration files (worker node)
The configuration files are created under /etc/kubernetes here as well.
wurly@k8s2:~$ ls -la /etc/kubernetes/
total 20
drwxr-xr-x  4 root root 4096 Feb  3 16:52 .
drwxr-xr-x 98 root root 4096 Feb  3 16:30 ..
-rw-------  1 root root 1962 Feb  3 16:51 kubelet.conf
drwxr-xr-x  2 root root 4096 Jan 28 21:36 manifests
drwxr-xr-x  2 root root 4096 Feb  3 16:51 pki
Manifest files (worker node)
wurly@k8s2:~$ ls -la /etc/kubernetes/manifests
total 8
drwxr-xr-x 2 root root 4096 Jan 28 21:36 .
drwxr-xr-x 4 root root 4096 Feb  3 16:52 ..
Certificates and keys, including the root CA (worker node)
wurly@k8s2:~$ ls -la /etc/kubernetes/pki
total 12
drwxr-xr-x 2 root root 4096 Feb  3 16:51 .
drwxr-xr-x 4 root root 4096 Feb  3 16:52 ..
-rw-r--r-- 1 root root 1107 Feb  3 16:51 ca.crt
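Only the cluster CA certificate is present on a worker: it was fetched during token-based discovery. The kubelet's own client certificate, obtained through the TLS bootstrap seen in the join log, lives under the kubelet's data directory instead (default path; file names may vary slightly by version):

sudo ls /var/lib/kubelet/pki
# expect kubelet-client-current.pem and related files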
Namespaces, pod status, and more
wurly@k8s1:~$ kubectl get namespace
NAME              STATUS   AGE
default           Active   19h
kube-node-lease   Active   19h
kube-public       Active   19h
kube-system       Active   19h
wurly@k8s1:~$ kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS       AGE
kube-system   calico-kube-controllers-658d97c59c-gjb8r   1/1     Running   1 (128m ago)   19h
kube-system   calico-node-2xp2w                          1/1     Running   2 (127m ago)   19h
kube-system   calico-node-fhglf                          1/1     Running   1 (128m ago)   19h
kube-system   calico-node-zdnm7                          1/1     Running   1 (128m ago)   19h
kube-system   coredns-5dd5756b68-2dfmx                   1/1     Running   1 (128m ago)   19h
kube-system   coredns-5dd5756b68-z2pxd                   1/1     Running   1 (128m ago)   19h
kube-system   etcd-k8s1                                  1/1     Running   1 (128m ago)   19h
kube-system   kube-apiserver-k8s1                        1/1     Running   1 (128m ago)   19h
kube-system   kube-controller-manager-k8s1               1/1     Running   1 (128m ago)   19h
kube-system   kube-proxy-6db5z                           1/1     Running   1 (128m ago)   19h
kube-system   kube-proxy-kkv4r                           1/1     Running   1 (128m ago)   19h
kube-system   kube-proxy-llj5q                           1/1     Running   1 (128m ago)   19h
kube-system   kube-scheduler-k8s1                        1/1     Running   1 (128m ago)   19h
wurly@k8s1:~$ kubectl get ConfigMap -A
NAMESPACE         NAME                                                   DATA   AGE
default           kube-root-ca.crt                                       1      19h
kube-node-lease   kube-root-ca.crt                                       1      19h
kube-public       cluster-info                                           2      19h
kube-public       kube-root-ca.crt                                       1      19h
kube-system       calico-config                                          4      19h
kube-system       coredns                                                1      19h
kube-system       extension-apiserver-authentication                     6      19h
kube-system       kube-apiserver-legacy-service-account-token-tracking   1      19h
kube-system       kube-proxy                                             2      19h
kube-system       kube-root-ca.crt                                       1      19h
kube-system       kubeadm-config                                         1      19h
kube-system       kubelet-config                                         1      19h
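The kubelet-config ConfigMap visible here is the one created during init (the [kubelet] line in the log); joining nodes download it and write it out as /var/lib/kubelet/config.yaml, which ties the cluster-side and node-side views together:

kubectl -n kube-system get cm kubelet-config -o yaml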
Referenced sites
- How to Install Kubernetes Cluster on Ubuntu 22.04 (Step-by-Step Guide) | by Hakan Bayraktar | Medium
- おうちKubernetes2024実況
- 仮想化するときのブリッジにはフィルタリングを無効化しよう | mishimaの日記 | スラド
- home-kubernetes-2020/how-to-create-cluster-logical-kubeadm at master · CyberAgentHack/home-kubernetes-2020
- 自宅Kubernetesクラスタはじめました – えびサブレ
- kubeadm / containerd で Kubernetes #kubernetes – Qiita
- Installing kubeadm | Kubernetes
- Raspberry Pi 4でKubernetesクラスターを構築する【ソフトウェア編】 #Docker – Qiita
- おうちで「おうち Kubernetes インターン」を実施しました | CyberAgent Developers Blog
- net-toolsのインストール – 非推奨だがとりあえずifconfig、arp、route、netstatを使いたい場合 – Ubuntuサーバー構築入門 – Ubuntuサーバーでゼロから環境構築
- netplan/examples/static.yaml at main · canonical/netplan
- 【2023年最新版】Ubuntuで固定IPアドレスを使う設定をする
- Ubuntu 22.04 に Kubernetes をインストールして自宅クラウド | Rabbit Note
- kubeadmのインストール | Kubernetes
- home-kubernetes-2020/how-to-create-cluster-logical-hardway at master · CyberAgentHack/home-kubernetes-2020
Closing thoughts
My goal with this "おうちKubernetes" (Kubernetes at home) project was, first of all, to catch up on the technology, so I proceeded by looking up and confirming what each step meant and writing it down as I went.
Working this way, I never really got stuck along the way.
Since I could only work on it on weekends it took a few weeks, but the build itself with kubeadm turned out to be quite straightforward.
On the other hand, doing it hands-on deepened my understanding of Kubernetes far more than I had expected.
For anyone who uses Kubernetes day to day but has never built a cluster on real machines, I can recommend this exercise. (It does cost a certain amount of money, though.)
Next, I will use this environment to try out deployments and more.