Overview
Continuing from 2053.org, I'll upgrade my home Kubernetes cluster from v1.30.14 to v1.31.14.
Preliminary work
Checking versions and dependencies
wurly@rockers-ubuntu:~$ kubectl version --output=yaml
clientVersion:
  buildDate: "2025-06-17T18:36:17Z"
  compiler: gc
  gitCommit: 9e18483918821121abdf9aa82bc14d66df5d68cd
  gitTreeState: clean
  gitVersion: v1.30.14
  goVersion: go1.23.10
  major: "1"
  minor: "30"
  platform: linux/amd64
kustomizeVersion: v5.0.4-0.20230601165947-6ce0bf390ce3
serverVersion:
  buildDate: "2025-06-17T18:29:20Z"
  compiler: gc
  gitCommit: 9e18483918821121abdf9aa82bc14d66df5d68cd
  gitTreeState: clean
  gitVersion: v1.30.14
  goVersion: go1.23.10
  major: "1"
  minor: "30"
  platform: linux/arm64
$ kubeadm version
cilium version
kubectl -n kube-system get ds cilium -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
kubectl -n rook-ceph get deploy rook-ceph-operator -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
kubectl -n metallb-system get deploy metallb-controller -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
kubectl -n ingress-system get deploy ingress-nginx-controller -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
kubeadm version: &version.Info{Major:"1", Minor:"30", GitVersion:"v1.30.14", GitCommit:"9e18483918821121abdf9aa82bc14d66df5d68cd", GitTreeState:"clean", BuildDate:"2025-06-17T18:34:53Z", GoVersion:"go1.23.10", Compiler:"gc", Platform:"linux/arm64"}
cilium-cli: v0.16.10 compiled with go1.22.4 on linux/arm64
cilium image (default): v1.15.5
cilium image (stable): v1.19.1
cilium image (running): 1.17.13
quay.io/cilium/cilium:v1.17.13@sha256:1e3907ba8815e2e474ea8da25876911af2da0ae07c04eaa87a326ba4343aa539
rook/ceph:v1.14.8
quay.io/metallb/controller:v0.14.9
registry.k8s.io/ingress-nginx/controller:v1.11.3@sha256:d56f135b6462cfc476447cfe564b83a45e8bb7da2774963b00d12161112270b7
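While checking versions, it's also worth sanity-checking the kubelet/kube-apiserver version-skew policy: kubelet must not be newer than kube-apiserver and (since Kubernetes 1.28) may be at most three minor versions older. A minimal sketch; `skew_ok` is my own helper (it assumes `v1.x.y` version strings), not part of kubeadm or kubectl:

```shell
#!/bin/sh
# skew_ok <apiserver-version> <kubelet-version>
# Prints "ok" when the kubelet is not newer than the apiserver and at most
# three minor versions older; prints "skew" otherwise.
skew_ok() {
  api_minor=${1#v1.};     api_minor=${api_minor%%.*}
  kubelet_minor=${2#v1.}; kubelet_minor=${kubelet_minor%%.*}
  d=$((api_minor - kubelet_minor))
  [ "$d" -ge 0 ] && [ "$d" -le 3 ] && echo ok || echo skew
}

skew_ok v1.31.14 v1.30.14   # prints ok   (one minor behind)
skew_ok v1.31.14 v1.27.3    # prints skew (four minors behind)
```

For this upgrade the skew is only one minor version, so the one-node-at-a-time procedure below stays within policy throughout.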
Updating the apt repo
Run this on all nodes.
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /" \
  | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
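Since the same repo swap has to happen on every node, one option is to fan it out over ssh from the admin machine. A sketch under my own assumptions: `repo_cmd` is a hypothetical helper that only builds the command string for one node (hostnames are this cluster's; passwordless sudo over ssh is assumed if you actually run it):

```shell
#!/bin/sh
# Build (and optionally run) the repo-update command for each node.
REPO_LINE='deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /'

repo_cmd() {
  echo "ssh $1 \"echo '$REPO_LINE' | sudo tee /etc/apt/sources.list.d/kubernetes.list && sudo apt-get update\""
}

for n in k8s-ctrl1 k8s-ctrl2 k8s-ctrl3 k8s-worker1 k8s-worker2 k8s-worker3; do
  repo_cmd "$n"      # prints the command; pipe the loop to sh to execute
done
```

Printing first and piping to `sh` only after eyeballing the output is a cheap way to avoid fat-fingering all six nodes at once.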
upgrade plan
Run this on ctrl1.
sudo apt-get install -y kubeadm
kubeadm version
sudo kubeadm upgrade plan
wurly@k8s-ctrl1:~$ sudo kubeadm upgrade plan
[preflight] Running pre-flight checks.
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: 1.30.14
[upgrade/versions] kubeadm version: v1.31.14
I0228 20:33:56.405283  110686 version.go:261] remote version is much newer: v1.35.2; falling back to: stable-1.31
[upgrade/versions] Target version: v1.31.14
[upgrade/versions] Latest version in the v1.30 series: v1.30.14

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   NODE          CURRENT    TARGET
kubelet     k8s-ctrl1     v1.30.14   v1.31.14
kubelet     k8s-ctrl2     v1.30.14   v1.31.14
kubelet     k8s-ctrl3     v1.30.14   v1.31.14
kubelet     k8s-worker1   v1.30.14   v1.31.14
kubelet     k8s-worker2   v1.30.14   v1.31.14
kubelet     k8s-worker3   v1.30.14   v1.31.14

Upgrade to the latest stable version:

COMPONENT                 NODE        CURRENT    TARGET
kube-apiserver            k8s-ctrl1   v1.30.14   v1.31.14
kube-apiserver            k8s-ctrl2   v1.30.14   v1.31.14
kube-apiserver            k8s-ctrl3   v1.30.14   v1.31.14
kube-controller-manager   k8s-ctrl1   v1.30.14   v1.31.14
kube-controller-manager   k8s-ctrl2   v1.30.14   v1.31.14
kube-controller-manager   k8s-ctrl3   v1.30.14   v1.31.14
kube-scheduler            k8s-ctrl1   v1.30.14   v1.31.14
kube-scheduler            k8s-ctrl2   v1.30.14   v1.31.14
kube-scheduler            k8s-ctrl3   v1.30.14   v1.31.14
kube-proxy                            1.30.14    v1.31.14
CoreDNS                               v1.11.3    v1.11.3
etcd                      k8s-ctrl1   3.5.16-0   3.5.24-0
etcd                      k8s-ctrl2   3.5.16-0   3.5.24-0
etcd                      k8s-ctrl3   3.5.16-0   3.5.24-0

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.31.14

_____________________________________________________________________

The table below shows the current state of component configs as understood
by this version of kubeadm.

Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require
manual config upgrade or resetting to kubeadm defaults before a successful
upgrade can be performed. The version to manually upgrade to is denoted in the
"PREFERRED VERSION" column.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________
Upgrade from v1.30.14 to v1.31.14
0) First, take an etcd snapshot on ctrl1
sudo -i
ETCDCTL_API=3 etcdctl snapshot save /root/etcd-$(date +%F-%H%M)-pre131.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
exit
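Before relying on that snapshot as the rollback plan, it's worth confirming the file is actually readable. A minimal sketch: `snap_name` is my own helper that just rebuilds the date-stamped name used above, and the `etcdctl snapshot status` call (run as root on ctrl1; on etcd 3.5 it also exists as `etcdutl snapshot status`) prints the snapshot's hash, revision, key count, and size:

```shell
#!/bin/sh
# Rebuild the snapshot path used in the save command above.
# Note: date +%F-%H%M changes every minute, so capture the name once and reuse it.
snap_name() { echo "/root/etcd-$(date +%F-%H%M)-pre131.db"; }

# On ctrl1, as root (commented out here because it needs the live snapshot):
# ETCDCTL_API=3 etcdctl snapshot status "$(snap_name)" -w table

snap_name
```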
0) Mid-upgrade monitoring (Ceph)
In a separate terminal:
watch -n 2 'kubectl get nodes; echo; kubectl -n rook-ceph get pods -o wide | egrep "mon|mgr|osd|mds|rgw|operator"'
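Instead of only eyeballing the watch output, an explicit gate can block the next step until Ceph reports healthy again. `health_ok` is my own helper; the commented kubectl line assumes the rook-ceph-tools deployment is installed (it isn't on every Rook setup):

```shell
#!/bin/sh
# health_ok <status-string>: succeeds only for Ceph's HEALTH_OK.
health_ok() { [ "$1" = "HEALTH_OK" ]; }

# Example gate between upgrade steps (needs the live cluster, so commented out):
# until health_ok "$(kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph health)"; do
#   echo "waiting for Ceph..."; sleep 10
# done

health_ok HEALTH_OK && echo healthy
```

Waiting for `HEALTH_OK` (rather than merely `HEALTH_WARN`) before draining the next node keeps the OSD rebalancing from stacking up.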
1) ctrl1 (only the first node gets "apply")
# Confirm kubeadm is now 1.31.14
kubeadm version

# Pre-pull the images
sudo kubeadm config images pull

# The upgrade itself
sudo kubeadm upgrade apply v1.31.14

# Check the exact package version strings
apt-cache madison kubelet | grep 1.31.14
apt-cache madison kubectl | grep 1.31.14

# Bring kubelet/kubectl to 1.31.14
sudo apt-get update
sudo apt-get install -y kubelet=1.31.14-1.1 kubectl=1.31.14-1.1
sudo systemctl daemon-reload
sudo systemctl restart kubelet
Check on ctrl1
kubectl get nodes
kubectl -n kube-system get pods -o wide | egrep 'kube-apiserver|kube-controller|kube-scheduler|etcd|coredns|kube-proxy'
cilium status
$ kubectl get nodes
NAME          STATUS   ROLES           AGE    VERSION
k8s-ctrl1     Ready    control-plane   616d   v1.31.14
k8s-ctrl2     Ready    control-plane   616d   v1.30.14
k8s-ctrl3     Ready    control-plane   616d   v1.30.14
k8s-worker1   Ready    <none>          615d   v1.30.14
k8s-worker2   Ready    <none>          606d   v1.30.14
k8s-worker3   Ready    <none>          606d   v1.30.14
wurly@k8s-ctrl1:~/work$ cilium status
    /¯¯\
 /¯¯\__/¯¯\    Cilium:             OK
 \__/¯¯\__/    Operator:           OK
 /¯¯\__/¯¯\    Envoy DaemonSet:    OK
 \__/¯¯\__/    Hubble Relay:       disabled
    \__/       ClusterMesh:        disabled
Deployment cilium-operator Desired: 2, Ready: 2/2, Available: 2/2
DaemonSet cilium-envoy Desired: 6, Ready: 6/6, Available: 6/6
DaemonSet cilium Desired: 6, Ready: 6/6, Available: 6/6
Containers: cilium-envoy Running: 6
cilium Running: 6
cilium-operator Running: 2
Cluster Pods: 36/36 managed by Cilium
Helm chart version:
Image versions cilium quay.io/cilium/cilium:v1.17.13@sha256:1e3907ba8815e2e474ea8da25876911af2da0ae07c04eaa87a326ba4343aa539: 6
cilium-operator quay.io/cilium/operator-generic:v1.17.13@sha256:c2582d9eaeec598de9cd8815a3ed20caade17c26858eea672cff3240b0970983: 2
cilium-envoy quay.io/cilium/cilium-envoy:v1.35.9-1770554954-8ce3bb4eca04188f4a0a1bfbd0a06a40f90883de@sha256:da85124deeb42c8e56e55e9e6e155740f5df00e1064759a244bc246c3addb45d: 6
2) ctrl2 / ctrl3 (each node, in order)
Note: on these nodes it's upgrade node, not apply.
sudo apt-get update
sudo apt-get install -y kubeadm=1.31.14-1.1
sudo kubeadm config images pull
sudo kubeadm upgrade node
sudo apt-get install -y kubelet=1.31.14-1.1 kubectl=1.31.14-1.1
sudo systemctl daemon-reload
sudo systemctl restart kubelet
(After each ctrl node, confirm it comes back Ready with kubectl get nodes.)
3) Workers (strictly one at a time)
Example: worker1
# On the admin machine (rockers-ubuntu)
kubectl drain k8s-worker1 --ignore-daemonsets --delete-emptydir-data

# On worker1
sudo apt-get update
sudo apt-get install -y kubeadm=1.31.14-1.1
sudo kubeadm upgrade node
sudo apt-get install -y kubelet=1.31.14-1.1 kubectl=1.31.14-1.1
sudo systemctl daemon-reload
sudo systemctl restart kubelet

# On the admin machine
kubectl uncordon k8s-worker1
kubectl get nodes
Same procedure for worker2/3.
Verification
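The drain → upgrade → uncordon dance is identical per worker, so it could be scripted from the admin machine. A sketch with my own assumptions: `worker_steps` is a hypothetical helper that only prints the plan for one node (the ssh variant assumes passwordless sudo); actually piping it to `sh` is left to the reader:

```shell
#!/bin/sh
# Print the per-worker upgrade steps, one command per line.
worker_steps() {
  echo "kubectl drain $1 --ignore-daemonsets --delete-emptydir-data"
  echo "ssh $1 'sudo apt-get install -y kubeadm=1.31.14-1.1 && sudo kubeadm upgrade node'"
  echo "ssh $1 'sudo apt-get install -y kubelet=1.31.14-1.1 kubectl=1.31.14-1.1 && sudo systemctl restart kubelet'"
  echo "kubectl uncordon $1"
}

for w in k8s-worker1 k8s-worker2 k8s-worker3; do
  worker_steps "$w"
done
```

Even scripted, keep it strictly sequential: with Rook/Ceph on these workers, draining two nodes at once can take the cluster below its replica count.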
wurly@rockers-ubuntu:~$ k get nodes -o wide
NAME          STATUS   ROLES           AGE    VERSION    INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME
k8s-ctrl1     Ready    control-plane   616d   v1.31.14   192.168.10.11   <none>        Ubuntu 22.04.4 LTS   5.15.0-1073-raspi    containerd://1.6.33
k8s-ctrl2     Ready    control-plane   616d   v1.31.14   192.168.10.12   <none>        Ubuntu 22.04.4 LTS   5.15.0-1073-raspi    containerd://1.6.33
k8s-ctrl3     Ready    control-plane   616d   v1.31.14   192.168.10.13   <none>        Ubuntu 22.04.4 LTS   5.15.0-1073-raspi    containerd://1.6.33
k8s-worker1   Ready    <none>          615d   v1.31.14   192.168.10.21   <none>        Ubuntu 22.04.4 LTS   5.15.0-164-generic   containerd://1.7.18
k8s-worker2   Ready    <none>          606d   v1.31.14   192.168.10.22   <none>        Ubuntu 22.04.4 LTS   5.15.0-134-generic   containerd://1.7.18
k8s-worker3   Ready    <none>          606d   v1.31.14   192.168.10.23   <none>        Ubuntu 22.04.4 LTS   5.15.0-134-generic   containerd://1.7.18
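Scanning the VERSION column by eye works at six nodes, but it can also be scripted. `all_same` is my own one-liner over a whitespace-separated version list; the jsonpath expression pulls each node's reported kubelet version:

```shell
#!/bin/sh
# all_same <versions>: succeeds when every whitespace-separated token is identical.
all_same() { [ "$(echo "$1" | tr ' ' '\n' | sort -u | wc -l)" -eq 1 ]; }

# Live usage (assumes a working kubeconfig on the admin machine):
# all_same "$(kubectl get nodes -o jsonpath='{.items[*].status.nodeInfo.kubeletVersion}')" \
#   && echo "all nodes on the same version"

all_same "v1.31.14 v1.31.14 v1.31.14" && echo uniform
```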
wurly@k8s-ctrl1:~$ cilium status
    /¯¯\
 /¯¯\__/¯¯\    Cilium:             OK
 \__/¯¯\__/    Operator:           OK
 /¯¯\__/¯¯\    Envoy DaemonSet:    OK
 \__/¯¯\__/    Hubble Relay:       disabled
    \__/       ClusterMesh:        disabled
Deployment cilium-operator Desired: 2, Ready: 2/2, Available: 2/2
DaemonSet cilium-envoy Desired: 6, Ready: 6/6, Available: 6/6
DaemonSet cilium Desired: 6, Ready: 6/6, Available: 6/6
Containers: cilium-envoy Running: 6
cilium Running: 6
cilium-operator Running: 2
Cluster Pods: 28/28 managed by Cilium
Helm chart version:
Image versions cilium-envoy quay.io/cilium/cilium-envoy:v1.35.9-1770554954-8ce3bb4eca04188f4a0a1bfbd0a06a40f90883de@sha256:da85124deeb42c8e56e55e9e6e155740f5df00e1064759a244bc246c3addb45d: 6
cilium quay.io/cilium/cilium:v1.17.13@sha256:1e3907ba8815e2e474ea8da25876911af2da0ae07c04eaa87a326ba4343aa539: 6
cilium-operator quay.io/cilium/operator-generic:v1.17.13@sha256:c2582d9eaeec598de9cd8815a3ed20caade17c26858eea672cff3240b0970983: 2
