Upgrading a Kubernetes cluster (v1.29.15 -> v1.30.14)

Updating the apt repo

Move kubernetes-apt-keyring.gpg from “/etc/apt/trusted.gpg.d” to the standard location “/etc/apt/keyrings”.

sudo rm /etc/apt/trusted.gpg.d/kubernetes-apt-keyring.gpg

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key \
  | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /" \
  | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt-get update
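
Before moving on, the repo line just written can be sanity-checked. A minimal sketch: a temp file stands in for the real /etc/apt/sources.list.d/kubernetes.list, so it is safe to run anywhere.

```shell
# Sanity-check the kubernetes.list entry: it must reference the new keyring
# path and the v1.30 package stream. A temp file stands in for the real
# /etc/apt/sources.list.d/kubernetes.list so this runs without root.
list=$(mktemp)
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' > "$list"
keyring_ok=$(grep -c 'signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg' "$list")
stream_ok=$(grep -c 'core:/stable:/v1.30/deb' "$list")
echo "keyring_ok=$keyring_ok stream_ok=$stream_ok"   # both should be 1
rm -f "$list"
```

On a real node, run the same two greps against /etc/apt/sources.list.d/kubernetes.list itself.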

Upgrade plan (on ctrl1)

sudo apt-get install -y kubeadm=1.30.14-1.1
kubeadm version
sudo kubeadm upgrade plan

wurly@k8s-ctrl1:~/work$ kubeadm version
sudo kubeadm upgrade plan
kubeadm version: &version.Info{Major:"1", Minor:"30", GitVersion:"v1.30.14", GitCommit:"9e18483918821121abdf9aa82bc14d66df5d68cd", GitTreeState:"clean", BuildDate:"2025-06-17T18:34:53Z", GoVersion:"go1.23.10", Compiler:"gc", Platform:"linux/arm64"}
[preflight] Running pre-flight checks.
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: 1.29.15
[upgrade/versions] kubeadm version: v1.30.14
I0228 15:06:42.697792   99777 version.go:256] remote version is much newer: v1.35.2; falling back to: stable-1.30
[upgrade/versions] Target version: v1.30.14
[upgrade/versions] Latest version in the v1.29 series: v1.29.15

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   NODE          CURRENT    TARGET
kubelet     k8s-ctrl1     v1.29.15   v1.30.14
kubelet     k8s-ctrl2     v1.29.15   v1.30.14
kubelet     k8s-ctrl3     v1.29.15   v1.30.14
kubelet     k8s-worker1   v1.29.15   v1.30.14
kubelet     k8s-worker2   v1.29.15   v1.30.14
kubelet     k8s-worker3   v1.29.15   v1.30.14

Upgrade to the latest stable version:

COMPONENT                 NODE        CURRENT    TARGET
kube-apiserver            k8s-ctrl1   v1.29.15   v1.30.14
kube-apiserver            k8s-ctrl2   v1.29.15   v1.30.14
kube-apiserver            k8s-ctrl3   v1.29.15   v1.30.14
kube-controller-manager   k8s-ctrl1   v1.29.15   v1.30.14
kube-controller-manager   k8s-ctrl2   v1.29.15   v1.30.14
kube-controller-manager   k8s-ctrl3   v1.29.15   v1.30.14
kube-scheduler            k8s-ctrl1   v1.29.15   v1.30.14
kube-scheduler            k8s-ctrl2   v1.29.15   v1.30.14
kube-scheduler            k8s-ctrl3   v1.29.15   v1.30.14
kube-proxy                            1.29.15    v1.30.14
CoreDNS                               v1.11.1    v1.11.3
etcd                      k8s-ctrl1   3.5.16-0   3.5.15-0
etcd                      k8s-ctrl2   3.5.16-0   3.5.15-0
etcd                      k8s-ctrl3   3.5.16-0   3.5.15-0

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.30.14

_____________________________________________________________________


The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________

Upgrading from v1.29.15 to v1.30.14

Prerequisite: the v1.30 apt repo is configured on all nodes (i.e., kubeadm 1.30.14-1.1 is installable).
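
One more pre-flight worth doing: kubeadm upgrade apply only supports jumping one minor version at a time. The skew can be checked with a few lines of shell, using the version strings from this cluster:

```shell
# "kubeadm upgrade apply" only supports jumping one minor version at a time,
# so check the skew between the running cluster and the target first.
minor() { printf '%s\n' "$1" | cut -d. -f2; }
cluster=v1.29.15   # from "kubeadm upgrade plan": Cluster version
target=v1.30.14    # target of this upgrade
skew=$(( $(minor "$target") - $(minor "$cluster") ))
if [ "$skew" -ge 0 ] && [ "$skew" -le 1 ]; then
  echo "skew=$skew minor version(s): OK to upgrade"
else
  echo "skew=$skew: upgrade one minor version at a time" >&2
fi
```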

0) First, take an etcd snapshot on ctrl1 (strongly recommended)

sudo -i
ETCDCTL_API=3 etcdctl snapshot save /root/etcd-$(date +%F-%H%M)-pre130.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
exit
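
Assuming the snapshot command above succeeded, it is worth verifying the file before proceeding. A sketch: only the path construction runs anywhere; the etcdctl verification (commented out) needs root and the etcd certs on ctrl1.

```shell
# Reconstruct the timestamped snapshot path used above, then verify it.
snap="/root/etcd-$(date +%F-%H%M)-pre130.db"
echo "snapshot path: $snap"
# On ctrl1 (needs root and the etcd certs):
#   sudo ETCDCTL_API=3 etcdctl snapshot status "$snap" -w table
```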

1) ctrl1 (run “apply” on the first node only)

# Confirm kubeadm is now 1.30.14
kubeadm version

# (Optional) pre-pull the images
sudo kubeadm config images pull

# The upgrade itself
sudo kubeadm upgrade apply v1.30.14

# Upgrade kubelet/kubectl to 1.30.14
sudo apt-get update
sudo apt-get install -y kubelet=1.30.14-1.1 kubectl=1.30.14-1.1
sudo systemctl daemon-reload
sudo systemctl restart kubelet

Verify (on ctrl1):

kubectl get nodes
kubectl -n kube-system get pods -o wide | egrep 'kube-apiserver|kube-controller|kube-scheduler|etcd|coredns|kube-proxy'
cilium status

$ kubectl get nodes
NAME          STATUS   ROLES           AGE    VERSION
k8s-ctrl1     Ready    control-plane   616d   v1.30.14
k8s-ctrl2     Ready    control-plane   616d   v1.29.15
k8s-ctrl3     Ready    control-plane   616d   v1.29.15
k8s-worker1   Ready    <none>          615d   v1.29.15
k8s-worker2   Ready    <none>          606d   v1.29.15
k8s-worker3   Ready    <none>          606d   v1.29.15

wurly@k8s-ctrl1:~/work$ cilium status
    /¯¯\
 /¯¯\__/¯¯\    Cilium:             OK
 \__/¯¯\__/    Operator:           OK
 /¯¯\__/¯¯\    Envoy DaemonSet:    OK
 \__/¯¯\__/    Hubble Relay:       disabled
    \__/       ClusterMesh:        disabled

Deployment             cilium-operator    Desired: 2, Ready: 2/2, Available: 2/2
DaemonSet              cilium-envoy       Desired: 6, Ready: 6/6, Available: 6/6
DaemonSet              cilium             Desired: 6, Ready: 6/6, Available: 6/6
Containers:            cilium-envoy       Running: 6
                       cilium             Running: 6
                       cilium-operator    Running: 2
Cluster Pods:          36/36 managed by Cilium
Helm chart version:    
Image versions         cilium             quay.io/cilium/cilium:v1.17.13@sha256:1e3907ba8815e2e474ea8da25876911af2da0ae07c04eaa87a326ba4343aa539: 6
                       cilium-operator    quay.io/cilium/operator-generic:v1.17.13@sha256:c2582d9eaeec598de9cd8815a3ed20caade17c26858eea672cff3240b0970983: 2
                       cilium-envoy       quay.io/cilium/cilium-envoy:v1.35.9-1770554954-8ce3bb4eca04188f4a0a1bfbd0a06a40f90883de@sha256:da85124deeb42c8e56e55e9e6e155740f5df00e1064759a244bc246c3addb45d: 6

2) ctrl2 / ctrl3 (each node, one at a time, in order)

Note: on these nodes the command is "kubeadm upgrade node", not "apply"

sudo apt-get update
sudo apt-get install -y kubeadm=1.30.14-1.1
sudo kubeadm config images pull

sudo kubeadm upgrade node
sudo apt-get install -y kubelet=1.30.14-1.1 kubectl=1.30.14-1.1
sudo systemctl daemon-reload
sudo systemctl restart kubelet

(After each control-plane node, confirm on ctrl1 with kubectl get nodes that it comes back up.)
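
That per-node check can be scripted. A sketch that parses one line of kubectl get nodes output; a captured line stands in here so the sketch runs without a cluster:

```shell
# Gate: a control-plane node counts as done when it is Ready at the new version.
# In real use: line=$(kubectl get nodes k8s-ctrl2 --no-headers)
line="k8s-ctrl2     Ready    control-plane   616d   v1.30.14"
status=$(echo "$line" | awk '{print $2}')
version=$(echo "$line" | awk '{print $5}')
if [ "$status" = "Ready" ] && [ "$version" = "v1.30.14" ]; then
  echo "k8s-ctrl2: upgraded and Ready"
fi
```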

3) Workers (strictly one at a time)

Example: worker1

# On the admin machine (rockers-ubuntu)
kubectl drain k8s-worker1 --ignore-daemonsets --delete-emptydir-data

# On worker1
sudo apt-get update
sudo apt-get install -y kubeadm=1.30.14-1.1
sudo kubeadm upgrade node
sudo apt-get install -y kubelet=1.30.14-1.1 kubectl=1.30.14-1.1
sudo systemctl daemon-reload
sudo systemctl restart kubelet

# On the admin machine
kubectl uncordon k8s-worker1
kubectl get nodes

Repeat the same steps for worker2 and worker3.
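
The drain -> upgrade -> uncordon cycle can also be wrapped in a loop so no worker is skipped. A sketch with a DRY_RUN guard (the ssh step is left as a comment because the package commands run on the worker itself); with DRY_RUN=1 it only prints the commands, so it is safe to run as-is:

```shell
# One worker at a time: drain, upgrade on the node, uncordon, then the next.
# DRY_RUN=1 only prints the commands; set DRY_RUN= to actually execute.
DRY_RUN=1
run() { if [ -n "$DRY_RUN" ]; then echo "+ $*"; else "$@"; fi; }
for node in k8s-worker1 k8s-worker2 k8s-worker3; do
  run kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data
  # ssh "$node" and run the apt-get / kubeadm upgrade node / kubelet steps above
  run kubectl uncordon "$node"
done
```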

4) Monitoring along the way (recommended, since Ceph is running)

In a separate terminal:

watch -n 2 'kubectl get nodes; echo; kubectl -n rook-ceph get pods -o wide | egrep "mon|mgr|osd|mds|rgw|operator"'

First move

Start by running these two on ctrl1 and checking the results:

the last few lines of sudo kubeadm upgrade apply v1.30.14
kubectl get nodes (confirm ctrl1 now shows v1.30.14)

Once that works, the ctrl2 → ctrl3 → worker flow is mostly routine.

Final verification

wurly@rockers-ubuntu:~/temp$ kubectl get nodes -o wide
NAME          STATUS   ROLES           AGE    VERSION    INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME
k8s-ctrl1     Ready    control-plane   616d   v1.30.14   192.168.10.11   <none>        Ubuntu 22.04.4 LTS   5.15.0-1073-raspi    containerd://1.6.33
k8s-ctrl2     Ready    control-plane   616d   v1.30.14   192.168.10.12   <none>        Ubuntu 22.04.4 LTS   5.15.0-1073-raspi    containerd://1.6.33
k8s-ctrl3     Ready    control-plane   616d   v1.30.14   192.168.10.13   <none>        Ubuntu 22.04.4 LTS   5.15.0-1073-raspi    containerd://1.6.33
k8s-worker1   Ready    <none>          615d   v1.30.14   192.168.10.21   <none>        Ubuntu 22.04.4 LTS   5.15.0-164-generic   containerd://1.7.18
k8s-worker2   Ready    <none>          606d   v1.30.14   192.168.10.22   <none>        Ubuntu 22.04.4 LTS   5.15.0-134-generic   containerd://1.7.18
k8s-worker3   Ready    <none>          606d   v1.30.14   192.168.10.23   <none>        Ubuntu 22.04.4 LTS   5.15.0-134-generic   containerd://1.7.18

wurly@k8s-ctrl1:~/work$ cilium status
    /¯¯\
 /¯¯\__/¯¯\    Cilium:             OK
 \__/¯¯\__/    Operator:           OK
 /¯¯\__/¯¯\    Envoy DaemonSet:    OK
 \__/¯¯\__/    Hubble Relay:       disabled
    \__/       ClusterMesh:        disabled

Deployment             cilium-operator    Desired: 2, Ready: 2/2, Available: 2/2
DaemonSet              cilium-envoy       Desired: 6, Ready: 6/6, Available: 6/6
DaemonSet              cilium             Desired: 6, Ready: 6/6, Available: 6/6
Containers:            cilium             Running: 6
                       cilium-operator    Running: 2
                       cilium-envoy       Running: 6
Cluster Pods:          27/27 managed by Cilium
Helm chart version:    
Image versions         cilium             quay.io/cilium/cilium:v1.17.13@sha256:1e3907ba8815e2e474ea8da25876911af2da0ae07c04eaa87a326ba4343aa539: 6
                       cilium-operator    quay.io/cilium/operator-generic:v1.17.13@sha256:c2582d9eaeec598de9cd8815a3ed20caade17c26858eea672cff3240b0970983: 2
                       cilium-envoy       quay.io/cilium/cilium-envoy:v1.35.9-1770554954-8ce3bb4eca04188f4a0a1bfbd0a06a40f90883de@sha256:da85124deeb42c8e56e55e9e6e155740f5df00e1064759a244bc246c3addb45d: 6