When you try to create a virtual machine inside a VM in a Hyper-V environment, an error occurs.

Here is how to change the option so that the virtual machine supports nested virtualization.

 

Run the following command on the Hyper-V host (the target VM must be powered off).

Set-VMProcessor -VMName <VMName> -ExposeVirtualizationExtensions $true

 

Usage example

PS C:\Users\administrator> Set-VMProcessor -VMName win10 -ExposeVirtualizationExtensions $true
PS C:\Users\administrator>  (get-VMProcessor -VMName win10).ExposeVirtualizationExtensions
True

 

Reference

https://docs.microsoft.com/ko-kr/virtualization/hyper-v-on-windows/user-guide/nested-virtualization

 

To run a quick test on a Linux VM, I set up code-server.

 

 

code-server is a tool known as a web IDE or browser IDE.

It is easy to install and its UI is similar to VS Code, so it is simple to use.

 

Install the package following the guide at the link below.

https://github.com/coder/code-server

[root@labco7 ~]# curl -fsSL https://code-server.dev/install.sh | sh
CentOS Linux 7 (Core)
Installing v4.1.0 of the amd64 rpm package from GitHub.

+ mkdir -p ~/.cache/code-server
+ curl -#fL -o ~/.cache/code-server/code-server-4.1.0-amd64.rpm.incomplete -C - https://github.com/coder/code-server/releases/download/v4.1.0/code-server-4.1.0-amd64.rpm
######################################################################## 100.0%
+ mv ~/.cache/code-server/code-server-4.1.0-amd64.rpm.incomplete ~/.cache/code-server/code-server-4.1.0-amd64.rpm
+ rpm -i ~/.cache/code-server/code-server-4.1.0-amd64.rpm

rpm package has been installed.

To have systemd start code-server now and restart on boot:
  sudo systemctl enable --now code-server@$USER
Or, if you don't want/need a background service you can run:
  code-server
[root@labco7 ~]#

You can see that the script detects the OS and proceeds with the package installation. (Skimming the script, it also seems to handle Debian, so Ubuntu users should be able to use it as well.)

 

The install output shows how to run it either as a background service or directly as code-server.

Once the package is installed, running code-server . & creates a config file under ~/.config/code-server and prints the URL you can connect to.

[root@labco7 ~]# code-server . &
[1] 158353
[2022-03-18T07:21:57.394Z] info  code-server 4.1.0 9e620e90f53fb91338a2ba1aaa2e556d42ae52d5
[2022-03-18T07:21:57.395Z] info  Using user-data-dir ~/.local/share/code-server
[2022-03-18T07:21:57.424Z] info  Using config file ~/.config/code-server/config.yaml
[2022-03-18T07:21:57.424Z] info  HTTP server listening on http://127.0.0.1:8080/
[2022-03-18T07:21:57.424Z] info    - Authentication is enabled
[2022-03-18T07:21:57.424Z] info      - Using password from ~/.config/code-server/config.yaml
[2022-03-18T07:21:57.424Z] info    - Not serving HTTPS

However, in the default configuration bind-addr is 127.0.0.1, so it is only reachable from the server itself. If connecting via localhost is not possible, change it to the server IP as shown below; the binding changes and the server becomes reachable at that IP.

If multiple people use it, each person can also change the port number.

[root@labco7 ~]# sed -i 's/127.0.0.1/192.168.10.101/g' ~/.config/code-server/config.yaml
[root@labco7 ~]# cat ~/.config/code-server/config.yaml
bind-addr: 192.168.10.101:8080
auth: password
password: <string>
cert: false
[root@labco7 ~]# code-server . &
[2] 158486
[root@labco7 ~]# [2022-03-18T07:25:26.775Z] info  code-server 4.1.0 9e620e90f53fb91338a2ba1aaa2e556d42ae52d5
[2022-03-18T07:25:26.777Z] info  Using user-data-dir ~/.local/share/code-server
[2022-03-18T07:25:26.802Z] info  Using config file ~/.config/code-server/config.yaml
[2022-03-18T07:25:26.802Z] info  HTTP server listening on http://192.168.10.101:8080/
[2022-03-18T07:25:26.802Z] info    - Authentication is enabled
[2022-03-18T07:25:26.802Z] info      - Using password from ~/.config/code-server/config.yaml
[2022-03-18T07:25:26.802Z] info    - Not serving HTTPS

[root@labco7 ~]#

Opening the URL shows a password prompt; use the password from config.yaml.
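The bind-addr/port change and the password lookup are plain-text operations on config.yaml; here is a hedged local sketch of the same steps (throwaway temp file, placeholder IP/port/password, so nothing touches a real install):

```shell
# Work on a throwaway copy of a code-server config (placeholder values)
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
bind-addr: 127.0.0.1:8080
auth: password
password: s3cret
cert: false
EOF

# Rebind to the server IP and give this user their own port
sed -i 's|^bind-addr: .*|bind-addr: 192.168.10.101:9090|' "$cfg"

# Read the password back for the login prompt
grep '^password:' "$cfg" | awk '{print $2}'
```

Restart code-server after editing so the new bind address takes effect.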

 

Introduction

Kubernetes began offering production-level support for Windows worker nodes with version 1.14.

Kubernetes 1.14: Production-level support for Windows Nodes, Kubectl Updates, Persistent Local Volumes GA
https://kubernetes.io/blog/2019/03/25/kubernetes-1-14-release-announcement/

Azure-CNI, OVN-Kubernetes, and Flannel were introduced as CNIs that support Windows worker nodes, and the official kubernetes.io documentation currently provides a guide for adding Windows worker nodes based on Flannel. For tests or typical environments, Flannel is enough to set up Windows worker nodes.

However, if you have a requirement to use Network Policy, you need an option other than Flannel. Fortunately, the Calico project released Calico for Windows as open source in September 2020, making open-source Calico available as a CNI option.

Tigera Announces Open-Source Calico for Windows and Collaboration with Microsoft

In this post, I cover the process of adding a Windows worker node to Kubernetes with the Calico CNI, along with the Calico CNI itself. Assuming a Kubernetes cluster is already in place, the post focuses on steps ②, ④, and ⑤ below.

Windows worker node setup process

① Set up the Linux control plane ② Configure the Calico CNI and change settings for Windows worker nodes ③ Add Linux worker nodes ④ Add the Windows worker node ⑤ Test workloads

 

Test environment

A Linux control plane is required to set up a Windows worker node. In Kubernetes' Windows support, a Windows node cannot act as the control plane, and running Linux worker nodes alongside is recommended so you can use the various add-ons in the Kubernetes ecosystem.

 

Architecture

 

Changing Calico settings for Windows worker nodes

First, for the constraints on using Windows worker nodes with Calico, check the Requirements document below.

https://projectcalico.docs.tigera.io/getting-started/windows-calico/kubernetes/requirements

 

Changing the overlay mode to VXLAN

When using Calico's overlay mode, keep in mind that Windows worker nodes do not support IPIP mode, so you must use VXLAN.

You can switch from IPIP to VXLAN with the commands below.

root@k8s-m:~# calicoctl get ippool default-ipv4-ippool -o wide
NAME                  CIDR             NAT    IPIPMODE   VXLANMODE   DISABLED   DISABLEBGPEXPORT   SELECTOR
default-ipv4-ippool   192.168.0.0/16   true   Always     Never       false      false              all()
root@k8s-m:~# calicoctl get ippool default-ipv4-ippool -o yaml | sed -e "s/ipipMode: Always/ipipMode: Never/" | calicoctl apply -f -
Successfully applied 1 'IPPool' resource(s)
root@k8s-m:~# calicoctl get ippool default-ipv4-ippool -o yaml | sed -e "s/vxlanMode: Never/vxlanMode: Always/" | calicoctl apply -f -
Successfully applied 1 'IPPool' resource(s)
root@k8s-m:~# calicoctl get ippool default-ipv4-ippool -o wide
NAME                  CIDR             NAT    IPIPMODE   VXLANMODE   DISABLED   DISABLEBGPEXPORT   SELECTOR
default-ipv4-ippool   192.168.0.0/16   true   Never      Always      false      false              all()

# The steps above alone caused an error when adding the Windows worker node, so the following step was added
root@k8s-m:~# kubectl get felixconfigurations.crd.projectcalico.org default  -o yaml -n kube-system > felixconfig.yaml
root@k8s-m:~# cp felixconfig.yaml felixconfig.yaml.org
root@k8s-m:~# vi felixconfig.yaml
root@k8s-m:~# diff felixconfig.yaml felixconfig.yaml.org
13c13
<   ipipEnabled: false
---
>   ipipEnabled: true
root@k8s-m:~# kubectl apply -f felixconfig.yaml
Warning: resource felixconfigurations/default is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
felixconfiguration.crd.projectcalico.org/default configured
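The sed pipelines above simply rewrite fields in the YAML that calicoctl exports. You can rehearse the same edit on a saved local copy (values below mirror the pool output above) before piping the result back into calicoctl apply -f -:

```shell
# Local copy of the IPPool spec (fields mirror the calicoctl output above)
pool=$(mktemp)
cat > "$pool" <<'EOF'
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr: 192.168.0.0/16
  natOutgoing: true
  ipipMode: Always
  vxlanMode: Never
EOF

# Turn IPIP off and VXLAN on, as Windows worker nodes require
sed -i -e 's/ipipMode: Always/ipipMode: Never/' \
       -e 's/vxlanMode: Never/vxlanMode: Always/' "$pool"

grep -E 'ipipMode|vxlanMode' "$pool"
```

Once the file looks right, the edited copy can be applied with calicoctl apply -f on the cluster.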

 

For reference, here is a brief overview of the Calico network options available in on-premises environments.

non-overlay mode

Pod IPs communicate with the outside directly. Pod IPs are propagated between nodes via BGP by Calico's BIRD, and you can peer BGP directly with your network equipment so that the outside can reach Pod IPs directly. If BGP peering with the network equipment is difficult, you can instead propagate Pod IPs via BGP only inside the cluster, assuming an L2 network.

overlay mode

Overlay modes that encapsulate Pod IPs are supported. The overlay techniques are IPIP and VXLAN. IPIP encapsulates packets with an outer IP (IPIP) header before sending them.

VXLAN encapsulates packets using the VXLAN technique. Note that, as shown in the figure below, it does not use BGP routing. (BIRD is not used; Felix fetches other nodes' Pod CIDR information via etcd and updates the routes.)

cross-subnet mode

A special mode that communicates in non-overlay mode where routing is not needed and in overlay mode where it is. You get overhead-free communication within L2 segments via non-overlay mode while falling back to overlay mode only when necessary. Note that cross-subnet is not supported with Windows worker nodes.
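One practical consequence of the encapsulation choice is MTU: IPIP adds a 20-byte outer IP header, while VXLAN adds roughly 50 bytes, so pod interfaces should be sized below the underlay MTU. Calico can manage this automatically; the numbers below are the commonly cited overheads, sketched for a 1500-byte underlay:

```shell
# Pod interface MTU = underlay MTU minus encapsulation overhead
UNDERLAY_MTU=1500
IPIP_OVERHEAD=20     # one extra outer IPv4 header
VXLAN_OVERHEAD=50    # inner Ethernet 14 + VXLAN 8 + UDP 8 + outer IPv4 20

echo "IPIP pod MTU:  $((UNDERLAY_MTU - IPIP_OVERHEAD))"    # 1480
echo "VXLAN pod MTU: $((UNDERLAY_MTU - VXLAN_OVERHEAD))"   # 1450
```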

 

Modifying the IPAM option

Calico has an option called IP borrowing, where a node that runs out of IPs borrows them from another node. Windows worker nodes do not support the IP borrowing mechanism, so you must change the option as follows.

root@k8s-m:~# calicoctl ipam configure --strictaffinity=true
Successfully set StrictAffinity to: true

 

Adding the Windows worker node

1) Prerequisites

Installing updates

According to the official kubernetes.io documentation on adding Windows nodes, KB4489899 must be installed beforehand to use VXLAN/overlay on a Windows worker node. (Note that KB4489899 is no longer offered. Since Windows Server 2019 updates are cumulative, installing any update released after it is sufficient.)

You also need to install Docker or containerd in advance; see the documents below for the procedures.

Installing Docker EE

https://docs.microsoft.com/ko-kr/virtualization/windowscontainers/quick-start/set-up-environment?tabs=Windows-Server#install-docker

Installing containerd

https://kubernetes.io/ko/docs/setup/production-environment/container-runtimes/#containerd

Microsoft announced that it would no longer produce Docker EE builds as of September 2021, so containerd may be the more appropriate container runtime for Windows Server. Here, however, I installed Docker to match the Linux nodes' environment.

For reference, Docker EE became stable in Kubernetes 1.14, and containerd became stable in Kubernetes 1.20.
https://kubernetes.io/ko/docs/setup/production-environment/windows/intro-windows-in-kubernetes/#%EC%BB%A8%ED%85%8C%EC%9D%B4%EB%84%88-%EB%9F%B0%ED%83%80%EC%9E%84

 

2) Installing the Calico components

The steps below follow the Quickstart in the official Calico documentation.

# Prepare the directory for Kubernetes files on Windows node
PS C:\Users\Administrator> mkdir c:\k


    Directory: C:\


Mode                LastWriteTime         Length Name
----                -------------         ------ ----
d-----     2022-03-14   1:55 AM                k

# Copy the Kubernetes kubeconfig file from the master node
PS C:\Users\Administrator> scp root@172.16.3.170:~/.kube/config c:\k\
root@172.16.3.170's password:
config                                                                                100% 5640   352.6KB/s   00:00

# Download the PowerShell script, install-calico-windows.ps1
PS C:\Users\Administrator> Invoke-WebRequest https://projectcalico.docs.tigera.io/scripts/install-calico-windows.ps1 -OutFile c:\install-calico-windows.ps1

# Install Calico for Windows for your datastore using the default parameters
# The PowerShell script downloads Calico for Windows release binary, Kubernetes binaries, Windows utilities files, configures Calico for Windows, and starts the Calico service.
PS C:\Users\Administrator> c:\install-calico-windows.ps1 -KubeVersion 1.22.6 -ServiceCidr 10.96.0.0/12 -DNSServerIPs 10.96.0.10
WARNING: The names of some imported commands from the module 'helper' include unapproved verbs that might make them
less discoverable. To find the commands with unapproved verbs, run the Import-Module command again with the Verbose
parameter. For a list of approved verbs, type Get-Verb.
Creating CNI directory
(omitted)
Waiting for Calico initialisation to finish...StoredLastBootTime , CurrentLastBootTime 2022-03-14 1:48:58 AM
Waiting for Calico initialisation to finish...StoredLastBootTime , CurrentLastBootTime 2022-03-14 1:48:58 AM
Waiting for Calico initialisation to finish...StoredLastBootTime , CurrentLastBootTime 2022-03-14 1:48:58 AM
Calico initialisation finished.
Done, the Calico services are running:

Status      : Running
Name        : CalicoFelix
DisplayName : Calico Windows Agent


Status      : Running
Name        : CalicoNode
DisplayName : Calico Windows Startup


Calico for Windows Started

CalicoFelix and CalicoNode are now running as services.

 

3) Installing the Kubernetes components

Running c:\install-calico-windows.ps1 above creates install-kube-services.ps1 under C:\CalicoWindows\kubernetes. Running this PowerShell script installs kubelet and kube-proxy as services.

Finally, start each service with Start-Service.

PS C:\Users\Administrator> C:\CalicoWindows\kubernetes\install-kube-services.ps1
the param is
Installing kubelet service...
Service "kubelet" installed successfully!
Set parameter "AppParameters" for service "kubelet".
Set parameter "AppDirectory" for service "kubelet".
Set parameter "DisplayName" for service "kubelet".
Set parameter "Description" for service "kubelet".
Set parameter "Start" for service "kubelet".
Reset parameter "ObjectName" for service "kubelet" to its default.
Set parameter "Type" for service "kubelet".
Reset parameter "AppThrottle" for service "kubelet" to its default.
Set parameter "AppStdout" for service "kubelet".
Set parameter "AppStderr" for service "kubelet".
Set parameter "AppRotateFiles" for service "kubelet".
Set parameter "AppRotateOnline" for service "kubelet".
Set parameter "AppRotateSeconds" for service "kubelet".
Set parameter "AppRotateBytes" for service "kubelet".
Done installing kubelet service.
Installing kube-proxy service...
Service "kube-proxy" installed successfully!
Set parameter "AppParameters" for service "kube-proxy".
Set parameter "AppDirectory" for service "kube-proxy".
Set parameter "DisplayName" for service "kube-proxy".
Set parameter "Description" for service "kube-proxy".
Set parameter "Start" for service "kube-proxy".
Reset parameter "ObjectName" for service "kube-proxy" to its default.
Set parameter "Type" for service "kube-proxy".
Reset parameter "AppThrottle" for service "kube-proxy" to its default.
Set parameter "AppStdout" for service "kube-proxy".
Set parameter "AppStderr" for service "kube-proxy".
Set parameter "AppRotateFiles" for service "kube-proxy".
Set parameter "AppRotateOnline" for service "kube-proxy".
Set parameter "AppRotateSeconds" for service "kube-proxy".
Set parameter "AppRotateBytes" for service "kube-proxy".
Done installing kube-proxy service.
PS C:\Users\Administrator> Get-Service -Name kubelet

Status   Name               DisplayName
------   ----               -----------
Stopped  kubelet            kubelet service


PS C:\Users\Administrator> Get-Service -Name kube-proxy

Status   Name               DisplayName
------   ----               -----------
Stopped  kube-proxy         kube-proxy service


PS C:\Users\Administrator> Start-Service -Name kubelet
PS C:\Users\Administrator> Start-Service -Name kube-proxy
PS C:\Users\Administrator> Get-Service -Name kubelet

Status   Name               DisplayName
------   ----               -----------
Running  kubelet            kubelet service


PS C:\Users\Administrator> Get-Service -Name kube-proxy

Status   Name               DisplayName
------   ----               -----------
Running  kube-proxy         kube-proxy service

After starting kubelet and kube-proxy, check from the control plane a moment later and the Windows worker node appears as shown below.

root@k8s-m:~# kubectl get no -owide
NAME     STATUS   ROLES                  AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                                    KERNEL-VERSION     CONTAINER-RUNTIME
k8s-lw   Ready    <none>                 87m   v1.22.6   172.16.3.171   <none>        Ubuntu 20.04.2 LTS                          5.4.0-77-generic   docker://20.10.13
k8s-m    Ready    control-plane,master   89m   v1.22.6   172.16.3.170   <none>        Ubuntu 20.04.2 LTS                          5.4.0-77-generic   docker://20.10.13
k8s-ww   Ready    <none>                 80s   v1.22.6   172.16.3.172   <none>        Windows Server 2019 Datacenter Evaluation   10.0.17763.1999    docker://20.10.9

Unlike joining a Linux worker node with kubeadm, simply running these scripts is enough to add the Windows worker node.

 

Workload test

The sample applications for the workload test come from the Calico GitHub repository below.

https://github.com/tigera-solutions/install-calico-for-windows

 

Deploying the applications

root@k8s-m:~/app# ls
netshoot.yml  stack-iis.yml  stack-nginx.yml
root@k8s-m:~/app# kubectl apply -f ./
pod/netshoot created
deployment.apps/iis created
service/iis-svc created
deployment.apps/nginx created
service/nginx-svc created
root@k8s-m:~/app# kubectl get po -w
NAME                     READY   STATUS              RESTARTS   AGE
iis-7dfbf869dd-24zzx     0/1     ContainerCreating   0          8s
netshoot                 0/1     ContainerCreating   0          8s
nginx-868547d6bf-kv858   0/1     ContainerCreating   0          8s
iis-7dfbf869dd-24zzx     0/1     ContainerCreating   0          11s
netshoot                 1/1     Running             0          43s
nginx-868547d6bf-kv858   1/1     Running             0          51s
root@k8s-m:~# kubectl exec -it netshoot -- bash
bash-5.1# nslookup nginx-svc
Server:         10.96.0.10
Address:        10.96.0.10#53

Name:   nginx-svc.default.svc.cluster.local
Address: 10.103.180.29

bash-5.1# nslookup iis-svc
Server:         10.96.0.10
Address:        10.96.0.10#53

Name:   iis-svc.default.svc.cluster.local
Address: 10.101.147.244

bash-5.1# curl -Is http://nginx-svc | grep -i http
HTTP/1.1 200 OK
bash-5.1# curl -Is http://iis-svc | grep -i http
HTTP/1.1 200 OK
bash-5.1# exit
exit

IIS is deployed to the Windows worker node and Nginx to the Linux worker node. (Windows container images are large, so pulling takes quite a while.)

Each service responds normally when called from the netshoot pod.

 

Applying a Network Policy

Create k8s.allow-nginx-ingress-from-iis.yaml below so that only the IIS pod can call the Nginx pod.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-nginx-ingress-from-iis
  namespace: default
spec:
  podSelector:
    matchLabels:
      run: nginx
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          run: iis
    ports:
    - protocol: TCP
      port: 80
---

The IIS pod can call nginx-svc, but the netshoot pod cannot.

# netshoot -> nginx : success (before the policy)
root@k8s-m:~/app# kubectl exec -t netshoot -- sh -c 'SVC=nginx-svc; curl -m 5 -sI http://$SVC 2>/dev/null | grep -i http'
HTTP/1.1 200 OK

# create the NetworkPolicy
root@k8s-m:~/app# kubectl apply -f k8s.allow-nginx-ingress-from-iis.yaml
networkpolicy.networking.k8s.io/allow-nginx-ingress-from-iis created


# iis -> nginx : success

root@k8s-m:~/app# IIS_POD=$(kubectl get pod -l run=iis -o jsonpath='{.items[*].metadata.name}')
root@k8s-m:~/app# kubectl exec -t $IIS_POD -- powershell -command 'iwr -UseBasicParsing  -TimeoutSec 5 http://nginx-svc'


StatusCode        : 200
StatusDescription : OK
Content           : <!DOCTYPE html>
                    <html>
                    <head>
                    <title>Welcome to nginx!</title>
                    <style>
                    html { color-scheme: light dark; }
                    body { width: 35em; margin: 0 auto;
                    font-family: Tahoma, Verdana, Arial, sans-serif; }
                    </style...
RawContent        : HTTP/1.1 200 OK
                    Connection: keep-alive
                    Accept-Ranges: bytes
                    Content-Length: 615
                    Content-Type: text/html
                    Date: Sun, 13 Mar 2022 18:47:39 GMT
                    ETag: "61f0168e-267"
                    Last-Modified: Tue, 25 Jan 2022 ...
Forms             :
Headers           : {[Connection, keep-alive], [Accept-Ranges, bytes],
                    [Content-Length, 615], [Content-Type, text/html]...}
Images            : {}
InputFields       : {}
Links             : {@{outerHTML=<a href="http://nginx.org/">nginx.org</a>;
                    tagName=A; href=http://nginx.org/}, @{outerHTML=<a
                    href="http://nginx.com/">nginx.com</a>; tagName=A;
                    href=http://nginx.com/}}
ParsedHtml        :
RawContentLength  : 615


# netshoot -> nginx : failure
root@k8s-m:~/app# kubectl exec -t netshoot -- sh -c 'SVC=nginx-svc; curl -m 5 -sI http://$SVC 2>/dev/null | grep -i http'
command terminated with exit code 1

 

Closing

This post was written as a graduation assignment for the Kubernetes Advanced Networking Study (KANS), run by the Kubernetes Korea Group from January to early March. The understanding of Kubernetes networking and CNIs gained in that study made smooth testing possible.

Thank you for reading this long post; if you find any errors, please let me know.

 

References

Intro to windows in kubernetes

https://kubernetes.io/docs/setup/production-environment/windows/intro-windows-in-kubernetes/#windows-os-version-support

Adding Windows nodes

https://kubernetes.io/ko/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/

Windows nodes - installing a container runtime

https://kubernetes.io/ko/docs/tasks/administer-cluster/kubeadm/adding-windows-nodes/#%EC%9C%88%EB%8F%84%EC%9A%B0-%EC%9B%8C%EC%BB%A4-%EB%85%B8%EB%93%9C-%EC%A1%B0%EC%9D%B8-joining

Choosing a Calico network mode

https://projectcalico.docs.tigera.io/networking/determine-best-networking#on-prem

Windows Calico QuickStart

https://projectcalico.docs.tigera.io/getting-started/windows-calico/quickstart

Reference videos on Calico for Windows

https://tigera.wistia.com/medias/gvc1f5132d

https://www.youtube.com/watch?v=DMKS43POa5s

 

When fetching a file with curl, only a 301 (Moved Permanently) response gets saved.

 

root@k8s-m:~/guestbook# curl -O https://k8s.io/examples/application/guestbook/redis-leader-service.yaml
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   178  100   178    0     0    340      0 --:--:-- --:--:-- --:--:--   340
root@k8s-m:~/guestbook# cat redis-leader-service.yaml
<html>
<head><title>301 Moved Permanently</title></head>
<body bgcolor="white">
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx</center>
</body>
</html>
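A quick way to catch this failure mode: an HTML error page saved where YAML was expected starts with a markup tag, so a simple check right after the download can fail fast. A sketch (using a temp file standing in for the downloaded manifest):

```shell
# Simulate a download that actually saved an HTML error page (stand-in file)
f=$(mktemp)
cat > "$f" <<'EOF'
<html>
<head><title>301 Moved Permanently</title></head>
</html>
EOF

# A YAML manifest never starts with '<'; an HTML error page does
if head -c1 "$f" | grep -q '<'; then
  echo "looks like an HTML error page; retry with curl -L"
fi
```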

 

Using the -L option, curl follows the redirect and fetches the content of the target page.

 

root@k8s-m:~/guestbook# curl -L -O https://k8s.io/examples/application/guestbook/redis-leader-service.yaml
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   178  100   178    0     0    206      0 --:--:-- --:--:-- --:--:--   206
100   310  100   310    0     0    169      0  0:00:01  0:00:01 --:--:--   536

root@k8s-m:~/guestbook# cat redis-leader-service.yaml
# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook
apiVersion: v1
kind: Service
metadata:
  name: redis-leader
  labels:
    app: redis
    role: leader
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: leader
    tier: backendroot@k8s-m:~/guestbook#

 

Here is how to run GUI features of an Ubuntu system that is running in CLI mode and accessed from Windows, so that they display on Windows.

 

 

First, install the packages below on Ubuntu.

Run on Ubuntu

  apt update
  apt install x11-apps xorg openbox
  
  ## installed additionally to use firefox
  apt install firefox

 

 

Back on Windows, install Xming and configure SSH forwarding in Tera Term.

Run on Windows

Installing Xming

Download and install from the link below:

https://sourceforge.net/projects/xming/

 

Configure SSH forwarding in Tera Term

Setup > SSH Forwarding > check "Display remote X application on local X server"

 

Setup > Save Setup

 

 

Running a GUI program in a new session

 

 

Open a new session with Tera Term and run echo $DISPLAY.

 

If it shows a value like the one below, forwarding is set up correctly.

ubuntu@u2004iac:~$ echo $DISPLAY
localhost:10.0
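The DISPLAY value encodes where X clients send their output, in the form host:displaynumber.screen. Here localhost:10.0 means display 10 on localhost, which is the local end of the SSH X11 forwarding tunnel. A small sketch pulling the pieces apart with shell parameter expansion (example value only):

```shell
# DISPLAY has the form host:displaynumber.screen (example value from above)
DISPLAY_EXAMPLE="localhost:10.0"

host=${DISPLAY_EXAMPLE%%:*}   # localhost
rest=${DISPLAY_EXAMPLE#*:}    # 10.0
number=${rest%%.*}            # 10
screen=${rest#*.}             # 0

echo "host=$host display=$number screen=$screen"
```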

Running firefox now opens it through Xming.

ubuntu@u2004iac:~$ firefox

Firefox running in a new window


Check the Security Group used by the Targets in the ALB's Target Group.

The error occurred because the instance's Security Group had its Source set to a different Security Group.

I changed the Source to the ALB's Security Group.

This can also happen when the ALB itself is assigned the wrong Security Group.

 

 

In this post, I will walk through manually upgrading a Kubernetes cluster.

For the official kubernetes.io guide, see the links in the references at the bottom.

 

Test environment

## Ubuntu version
root@k8s-m0:~# cat /etc/os-release | grep PRETTY
PRETTY_NAME="Ubuntu 20.04.3 LTS"
root@k8s-m0:~# uname -a
Linux k8s-m0 5.4.0-96-generic #109-Ubuntu SMP Wed Jan 12 16:49:16 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

## 3-node HA control plane and 1 worker
root@k8s-m0:~# kubectl get no
NAME     STATUS   ROLES                  AGE     VERSION
k8s-m0   Ready    control-plane,master   68m     v1.21.7
k8s-m1   Ready    control-plane,master   43m     v1.21.7
k8s-m2   Ready    control-plane,master   20m     v1.21.7
k8s-w1   Ready    <none>                 5m27s   v1.21.7

## check the versions of the main binaries
root@k8s-m0:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.7", GitCommit:"1f86634ff08f37e54e8bfcd86bc90b61c98f84d4", GitTreeState:"clean", BuildDate:"2021-11-17T14:40:08Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/amd64"}
root@k8s-m0:~# kubelet --version
Kubernetes v1.21.7
root@k8s-m0:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.7", GitCommit:"1f86634ff08f37e54e8bfcd86bc90b61c98f84d4", GitTreeState:"clean", BuildDate:"2021-11-17T14:41:19Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.7", GitCommit:"1f86634ff08f37e54e8bfcd86bc90b61c98f84d4", GitTreeState:"clean", BuildDate:"2021-11-17T14:35:38Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/amd64"}

 

Kubernetes upgrade procedure summary

  1. Upgrade the primary control plane.
  2. Upgrade the additional control planes.
  3. Upgrade the worker nodes.

 

Checking upgradable versions

First run apt update, then check the versions available for upgrade in the apt cache.

This test upgrades to 1.22.3.

root@k8s-m0:~# apt update
Get:1 http://security.ubuntu.com/ubuntu focal-security InRelease [114 kB]
Hit:2 https://download.docker.com/linux/ubuntu focal InRelease
Hit:3 http://archive.ubuntu.com/ubuntu focal InRelease
Get:4 http://archive.ubuntu.com/ubuntu focal-updates InRelease [114 kB]
Get:6 http://archive.ubuntu.com/ubuntu focal-backports InRelease [108 kB]
Get:5 https://packages.cloud.google.com/apt kubernetes-xenial InRelease [9383 B]
Fetched 345 kB in 23s (14.8 kB/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
19 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@k8s-m0:~# apt-cache madison kubeadm
   kubeadm |  1.23.3-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
   kubeadm |  1.23.2-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
   kubeadm |  1.23.1-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
   kubeadm |  1.23.0-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
   kubeadm |  1.22.6-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
   (omitted)
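The target version can also be picked out of the madison output programmatically; a sketch over a captured sample of the output above (awk takes the version column between the pipes, and the first row is the newest):

```shell
# Captured sample of `apt-cache madison kubeadm` output (newest first)
madison='   kubeadm |  1.23.3-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
   kubeadm |  1.23.2-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
   kubeadm |  1.22.6-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages'

# Field 2 (between the pipes) is the version; row 1 is the newest entry
latest=$(printf '%s\n' "$madison" | awk -F'|' 'NR==1 {gsub(/ /, "", $2); print $2}')
echo "$latest"
```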

 

Now let's upgrade the primary control plane first.

 

Upgrading the primary control plane

The upgrade procedure for the primary control plane is, briefly:

1. Upgrade the kubeadm binary

2. Pre-check with kubeadm upgrade plan

3. Upgrade with kubeadm upgrade apply (at this point the Server Version reported by kubectl version is upgraded)

4. Drain the node

5. Upgrade the kubectl and kubelet binaries

6. daemon-reload and restart kubelet

7. Uncordon the node

 

Let's look at the results.

# 1. Upgrade the kubeadm binary
root@k8s-m0:~# apt-mark unhold kubeadm && apt-get update && apt-get install -y kubeadm=1.22.3-00 && apt-mark hold kubeadm
Canceled hold on kubeadm.
Hit:1 https://download.docker.com/linux/ubuntu focal InRelease
(omitted)
Fetched 336 kB in 24s (14.2 kB/s)
Reading package lists... Done
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be upgraded:
  kubeadm
1 upgraded, 0 newly installed, 0 to remove and 18 not upgraded.
Need to get 8712 kB of archives.
After this operation, 971 kB of additional disk space will be used.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.22.3-00 [8712 kB]
Fetched 8712 kB in 23s (372 kB/s)
(Reading database ... 64135 files and directories currently installed.)
Preparing to unpack .../kubeadm_1.22.3-00_amd64.deb ...
Unpacking kubeadm (1.22.3-00) over (1.21.7-00) ...
Setting up kubeadm (1.22.3-00) ...
kubeadm set on hold.
root@k8s-m0:~#
[Note] The meaning of apt-mark unhold and hold
Running apt-mark showhold before the upgrade shows that kubeadm, kubectl, and kubelet are on hold so that they are not automatically updated or removed. To update a binary, the sequence is unhold -> update the package -> hold again.

 

# 2. Pre-check with kubeadm upgrade plan
root@k8s-m0:~# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.21.7
[upgrade/versions] kubeadm version: v1.22.3
W0205 10:07:29.475001   87282 version.go:103] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable.txt": Get "https://storage.googleapis.com/kubernetes-release/release/stable.txt": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
W0205 10:07:29.476348   87282 version.go:104] falling back to the local client version: v1.22.3
[upgrade/versions] Target version: v1.22.3
W0205 10:07:39.796859   87282 version.go:103] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.21.txt": Get "https://storage.googleapis.com/kubernetes-release/release/stable-1.21.txt": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
W0205 10:07:39.797158   87282 version.go:104] falling back to the local client version: v1.22.3
[upgrade/versions] Latest version in the v1.21 series: v1.22.3

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       TARGET
kubelet     4 x v1.21.7   v1.22.3

Upgrade to the latest version in the v1.21 series:

COMPONENT                 CURRENT    TARGET
kube-apiserver            v1.21.7    v1.22.3
kube-controller-manager   v1.21.7    v1.22.3
kube-scheduler            v1.21.7    v1.22.3
kube-proxy                v1.21.7    v1.22.3
CoreDNS                   v1.8.0     v1.8.4
etcd                      3.4.13-0   3.5.0-0

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.22.3

_____________________________________________________________________


The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________

 

# 3. Upgrade with kubeadm upgrade apply (at this point the Server Version reported by kubectl version is upgraded)
# [upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y  ## <--- enter y

root@k8s-m0:~# kubeadm upgrade apply v1.22.3
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.22.3"
[upgrade/versions] Cluster version: v1.21.7
[upgrade/versions] kubeadm version: v1.22.3
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.22.3"...
Static pod: kube-apiserver-k8s-m0 hash: d8693cdc9fb5dd3fefac2a158d5685e8
Static pod: kube-controller-manager-k8s-m0 hash: c3b44cc1fd7121c166d49b72c6a30be6
Static pod: kube-scheduler-k8s-m0 hash: 341d83fb10c50335ea71f49ea327305c
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-k8s-m0 hash: 39ec3d381c0aa337ca22b0433ae46cf4
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-02-05-10-21-19/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-k8s-m0 hash: 39ec3d381c0aa337ca22b0433ae46cf4
<omitted>
Static pod: etcd-k8s-m0 hash: 39ec3d381c0aa337ca22b0433ae46cf4
Static pod: etcd-k8s-m0 hash: a6bfdd1cf0b0bfcde91c69798c3c2f61
[apiclient] Found 3 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests139287456"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-02-05-10-21-19/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-k8s-m0 hash: d8693cdc9fb5dd3fefac2a158d5685e8
<omitted>
Static pod: kube-apiserver-k8s-m0 hash: d8693cdc9fb5dd3fefac2a158d5685e8
Static pod: kube-apiserver-k8s-m0 hash: dff5e288e0f790320cd77e7e1a9aa3db
[apiclient] Found 3 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-02-05-10-21-19/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-k8s-m0 hash: c3b44cc1fd7121c166d49b72c6a30be6
<omitted>
Static pod: kube-controller-manager-k8s-m0 hash: c3b44cc1fd7121c166d49b72c6a30be6
Static pod: kube-controller-manager-k8s-m0 hash: aa3b83543a7ff5ed8c12c460f2ca4750
[apiclient] Found 3 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-02-05-10-21-19/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-k8s-m0 hash: 341d83fb10c50335ea71f49ea327305c
<omitted>
Static pod: kube-scheduler-k8s-m0 hash: 341d83fb10c50335ea71f49ea327305c
Static pod: kube-scheduler-k8s-m0 hash: 3d3feecdbe65a70d9d6a0cae39754db9
[apiclient] Found 3 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade/postupgrade] Applying label node-role.kubernetes.io/control-plane='' to Nodes with label node-role.kubernetes.io/master='' (deprecated)
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.22.3". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

# Check versions (only the primary control plane has been upgraded, so a node whose Server Version has not been upgraded yet may answer the request)
root@k8s-m0:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.7", GitCommit:"1f86634ff08f37e54e8bfcd86bc90b61c98f84d4", GitTreeState:"clean", BuildDate:"2021-11-17T14:41:19Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.3", GitCommit:"c92036820499fedefec0f847e2054d824aea6cd1", GitTreeState:"clean", BuildDate:"2021-10-27T18:35:25Z", GoVersion:"go1.16.9", Compiler:"gc", Platform:"linux/amd64"}

 

[Note] kubeadm upgrade automatically renews the certificates it manages. If you want to skip certificate renewal, add the --certificate-renewal=false option.
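Before running kubeadm upgrade apply, it is also worth validating the size of the version jump: kubeadm supports upgrading only one minor version at a time. A minimal shell sketch (the `can_upgrade` helper is hypothetical, not part of kubeadm):

```shell
# Hypothetical helper: allow at most a one-minor-version jump,
# e.g. v1.21.x -> v1.22.x is fine, v1.21.x -> v1.23.x is not.
can_upgrade() {
  cur_minor=$(echo "$1" | cut -d. -f2)
  tgt_minor=$(echo "$2" | cut -d. -f2)
  diff=$((tgt_minor - cur_minor))
  [ "$diff" -ge 0 ] && [ "$diff" -le 1 ]
}

can_upgrade v1.21.7 v1.22.3 && echo "v1.22.3: ok"
can_upgrade v1.21.7 v1.23.1 || echo "v1.23.1: skips a minor version"
```

Upgrading across two minor versions (v1.21 to v1.23) requires going through v1.22 first.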

 

# 4. Drain the node
root@k8s-m0:~# kubectl drain k8s-m0 --ignore-daemonsets
node/k8s-m0 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-589g7, kube-system/kube-proxy-hqmll
node/k8s-m0 drained
root@k8s-m0:~# kubectl get no
NAME     STATUS                     ROLES                  AGE     VERSION
k8s-m0   Ready,SchedulingDisabled   control-plane,master   3h48m   v1.21.7
k8s-m1   Ready                      control-plane,master   3h23m   v1.21.7
k8s-m2   Ready                      control-plane,master   3h      v1.21.7
k8s-w1   Ready                      <none>                 165m    v1.21.7

 

[Note] Before upgrading kubelet and kubectl, drain the node so it stops receiving pod scheduling, and evict the workloads running on it.

 

# 5. Upgrade the kubectl and kubelet binaries
root@k8s-m0:~# apt-mark unhold kubelet kubectl && apt-get update && apt-get install -y kubelet=1.22.3-00 kubectl=1.22.3-00 && apt-mark hold kubelet kubectl
Canceled hold on kubelet.
Canceled hold on kubectl.
Hit:1 https://download.docker.com/linux/ubuntu focal InRelease
Hit:2 http://archive.ubuntu.com/ubuntu focal InRelease
Get:3 http://security.ubuntu.com/ubuntu focal-security InRelease [114 kB]
Get:5 http://archive.ubuntu.com/ubuntu focal-updates InRelease [114 kB]
Get:6 http://archive.ubuntu.com/ubuntu focal-backports InRelease [108 kB]
Get:4 https://packages.cloud.google.com/apt kubernetes-xenial InRelease [9383 B]
Fetched 345 kB in 24s (14.5 kB/s)
Reading package lists... Done
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be upgraded:
  kubectl kubelet
2 upgraded, 0 newly installed, 0 to remove and 17 not upgraded.
Need to get 28.2 MB of archives.
After this operation, 3028 kB of additional disk space will be used.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubectl amd64 1.22.3-00 [9039 kB]
Get:2 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.22.3-00 [19.1 MB]
Fetched 28.2 MB in 28s (999 kB/s)
(Reading database ... 64135 files and directories currently installed.)
Preparing to unpack .../kubectl_1.22.3-00_amd64.deb ...
Unpacking kubectl (1.22.3-00) over (1.21.7-00) ...
Preparing to unpack .../kubelet_1.22.3-00_amd64.deb ...
Unpacking kubelet (1.22.3-00) over (1.21.7-00) ...
Setting up kubectl (1.22.3-00) ...
Setting up kubelet (1.22.3-00) ...
kubelet set on hold.
kubectl set on hold.


# 6. kubelet daemon-reload, restart
root@k8s-m0:~# systemctl daemon-reload && systemctl restart kubelet

# 7. Uncordon the node
root@k8s-m0:~# kubectl uncordon k8s-m0
node/k8s-m0 uncordoned
root@k8s-m0:~# kubectl get no
NAME     STATUS   ROLES                  AGE     VERSION
k8s-m0   Ready    control-plane,master   4h16m   v1.22.3
k8s-m1   Ready    control-plane,master   3h51m   v1.21.7
k8s-m2   Ready    control-plane,master   3h28m   v1.21.7
k8s-w1   Ready    <none>                 3h13m   v1.21.7

The primary control plane has been upgraded to v1.22.3.

 

[Note] The kubeadm upgrade command upgrades the static pod manifests. Comparing against the old manifests shows that the image version has changed, as below.
root@k8s-m0:~/manifests_bk# diff kube-scheduler.yaml /etc/kubernetes/manifests/kube-scheduler.yaml
20c20
< image: k8s.gcr.io/kube-scheduler:v1.21.7
---
> image: k8s.gcr.io/kube-scheduler:v1.22.3

 

Now let's upgrade the remaining control planes.

 

Additional Control Plane Upgrade

The upgrade procedure for the additional control planes is as follows.

1. Upgrade the kubeadm binary

2. Upgrade with kubeadm upgrade node (this step uses kubeadm upgrade node instead of kubeadm upgrade apply)

3. Drain the node

4. Upgrade the kubectl and kubelet binaries

5. Run kubelet daemon-reload and restart

6. Uncordon the node
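The six steps above can be condensed into a per-node sketch. `echo` is used deliberately so the sequence can be reviewed without touching a live cluster (the node names and the 1.22.3-00 package version are from this lab):

```shell
# Dry-run sketch of the additional control plane upgrade loop;
# each echoed line is a command to run on (or against) that node.
ver=1.22.3-00
for node in k8s-m1 k8s-m2; do
  echo "[$node] apt-mark unhold kubeadm && apt-get install -y kubeadm=$ver && apt-mark hold kubeadm"
  echo "[$node] kubeadm upgrade node"
  echo "[$node] kubectl drain $node --ignore-daemonsets"
  echo "[$node] apt-get install -y kubelet=$ver kubectl=$ver"
  echo "[$node] systemctl daemon-reload && systemctl restart kubelet"
  echo "[$node] kubectl uncordon $node"
done
```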

 

Let's walk through the results.

# 1. Upgrade the kubeadm binary
root@k8s-m1:~# apt-mark unhold kubeadm && apt-get update && apt-get install -y kubeadm=1.22.3-00 && apt-mark hold kubeadm
Canceled hold on kubeadm.
Hit:1 https://download.docker.com/linux/ubuntu focal InRelease
Get:2 http://security.ubuntu.com/ubuntu focal-security InRelease [114 kB]
Hit:3 http://archive.ubuntu.com/ubuntu focal InRelease
Get:4 http://archive.ubuntu.com/ubuntu focal-updates InRelease [114 kB]
Get:6 http://archive.ubuntu.com/ubuntu focal-backports InRelease [108 kB]
Get:5 https://packages.cloud.google.com/apt kubernetes-xenial InRelease [9383 B]
Fetched 345 kB in 23s (14.8 kB/s)
Reading package lists... Done
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be upgraded:
  kubeadm
1 upgraded, 0 newly installed, 0 to remove and 18 not upgraded.
Need to get 8712 kB of archives.
After this operation, 971 kB of additional disk space will be used.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.22.3-00 [8712 kB]
Fetched 8712 kB in 24s (366 kB/s)
(Reading database ... 64135 files and directories currently installed.)
Preparing to unpack .../kubeadm_1.22.3-00_amd64.deb ...
Unpacking kubeadm (1.22.3-00) over (1.21.7-00) ...
Setting up kubeadm (1.22.3-00) ...
kubeadm set on hold.


# 2. Upgrade with kubeadm upgrade node (this step uses kubeadm upgrade node instead of kubeadm upgrade apply)
(admin-k8s:default) root@k8s-m1:~# kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade] Upgrading your Static Pod-hosted control plane instance to version "v1.22.3"...
Static pod: kube-apiserver-k8s-m1 hash: 121c5f0b29db118a71b4afb666436a28
Static pod: kube-controller-manager-k8s-m1 hash: 6c9d416769ffcfda2b9b457c2278465b
Static pod: kube-scheduler-k8s-m1 hash: 5b40ca2fc4b25745dd3e0a5fbb7cd7f1
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-k8s-m1 hash: 52933cc98a0e0db8d2740d5d66b76a50
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-02-05-12-09-38/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-k8s-m1 hash: 52933cc98a0e0db8d2740d5d66b76a50
Static pod: etcd-k8s-m1 hash: 52933cc98a0e0db8d2740d5d66b76a50
<omitted>
Static pod: etcd-k8s-m1 hash: 52933cc98a0e0db8d2740d5d66b76a50
Static pod: etcd-k8s-m1 hash: 52933cc98a0e0db8d2740d5d66b76a50
Static pod: etcd-k8s-m1 hash: 572bcedba634f1d0a840c50caf2c2c89
[apiclient] Found 3 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests160626215"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-02-05-12-09-38/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-k8s-m1 hash: 121c5f0b29db118a71b4afb666436a28
Static pod: kube-apiserver-k8s-m1 hash: 121c5f0b29db118a71b4afb666436a28
<omitted>
Static pod: kube-apiserver-k8s-m1 hash: 121c5f0b29db118a71b4afb666436a28
Static pod: kube-apiserver-k8s-m1 hash: 121c5f0b29db118a71b4afb666436a28
Static pod: kube-apiserver-k8s-m1 hash: ff13ada2027cc50e6a9db21ef2dd0678
[apiclient] Found 3 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-02-05-12-09-38/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-k8s-m1 hash: 6c9d416769ffcfda2b9b457c2278465b
Static pod: kube-controller-manager-k8s-m1 hash: 6c9d416769ffcfda2b9b457c2278465b
<omitted>
Static pod: kube-controller-manager-k8s-m1 hash: 6c9d416769ffcfda2b9b457c2278465b
Static pod: kube-controller-manager-k8s-m1 hash: 6c9d416769ffcfda2b9b457c2278465b
Static pod: kube-controller-manager-k8s-m1 hash: 9894d6b6867df03a62434e48ca7456c3
[apiclient] Found 3 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2022-02-05-12-09-38/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-k8s-m1 hash: 5b40ca2fc4b25745dd3e0a5fbb7cd7f1
Static pod: kube-scheduler-k8s-m1 hash: 5b40ca2fc4b25745dd3e0a5fbb7cd7f1
<omitted>
Static pod: kube-scheduler-k8s-m1 hash: 5b40ca2fc4b25745dd3e0a5fbb7cd7f1
Static pod: kube-scheduler-k8s-m1 hash: 5b40ca2fc4b25745dd3e0a5fbb7cd7f1
Static pod: kube-scheduler-k8s-m1 hash: 6e1d9a4253856a2c13140b36a8811869
[apiclient] Found 3 Pods for label selector component=kube-scheduler
[apiclient] Error getting Pods with label selector "component=kube-scheduler" [Get "https://192.168.100.100:16443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler": context deadline exceeded (Client.Timeout exceeded while awaiting headers)]
[apiclient] Error getting Pods with label selector "component=kube-scheduler" [Get "https://192.168.100.100:16443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler": net/http: request canceled (Client.Timeout exceeded while awaiting headers)]
[apiclient] Error getting Pods with label selector "component=kube-scheduler" [Get "https://192.168.100.100:16443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler": net/http: request canceled (Client.Timeout exceeded while awaiting headers)]
[apiclient] Error getting Pods with label selector "component=kube-scheduler" [Get "https://192.168.100.100:16443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler": net/http: request canceled (Client.Timeout exceeded while awaiting headers)]
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade] The control plane instance for this node was successfully updated!
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.


# 3. Drain the node
root@k8s-m1:~# kubectl drain k8s-m1 --ignore-daemonsets
node/k8s-m1 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-p2r67, kube-system/kube-proxy-xvrqf
evicting pod kube-system/coredns-78fcd69978-njbnw
pod/coredns-78fcd69978-njbnw evicted
node/k8s-m1 evicted


# 4. Upgrade the kubectl and kubelet binaries
root@k8s-m1:~# apt-mark unhold kubelet kubectl && apt-get update && apt-get install -y kubelet=1.22.3-00 kubectl=1.22.3-00 && apt-mark hold kubelet kubectl
Canceled hold on kubelet.
Canceled hold on kubectl.
Hit:1 https://download.docker.com/linux/ubuntu focal InRelease
Get:2 http://security.ubuntu.com/ubuntu focal-security InRelease [114 kB]
Hit:4 http://archive.ubuntu.com/ubuntu focal InRelease
Get:5 http://archive.ubuntu.com/ubuntu focal-updates InRelease [114 kB]
Get:6 http://archive.ubuntu.com/ubuntu focal-backports InRelease [108 kB]
Hit:3 https://packages.cloud.google.com/apt kubernetes-xenial InRelease
Fetched 336 kB in 23s (14.9 kB/s)
Reading package lists... Done
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be upgraded:
  kubectl kubelet
2 upgraded, 0 newly installed, 0 to remove and 17 not upgraded.
Need to get 28.2 MB of archives.
After this operation, 3028 kB of additional disk space will be used.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubectl amd64 1.22.3-00 [9039 kB]
Get:2 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.22.3-00 [19.1 MB]
Fetched 28.2 MB in 29s (970 kB/s)
(Reading database ... 64135 files and directories currently installed.)
Preparing to unpack .../kubectl_1.22.3-00_amd64.deb ...
Unpacking kubectl (1.22.3-00) over (1.21.7-00) ...
Preparing to unpack .../kubelet_1.22.3-00_amd64.deb ...
Unpacking kubelet (1.22.3-00) over (1.21.7-00) ...
Setting up kubectl (1.22.3-00) ...
Setting up kubelet (1.22.3-00) ...
kubelet set on hold.
kubectl set on hold.


# 5. kubelet daemon-reload, restart
root@k8s-m1:~# systemctl daemon-reload && systemctl restart kubelet


# 6. Uncordon the node
root@k8s-m1:~# kubectl uncordon k8s-m1
node/k8s-m1 uncordoned
root@k8s-m1:~# kubectl get no
NAME     STATUS   ROLES                  AGE     VERSION
k8s-m0   Ready    control-plane,master   4h58m   v1.22.3
k8s-m1   Ready    control-plane,master   4h34m   v1.22.3
k8s-m2   Ready    control-plane,master   4h10m   v1.22.3
k8s-w1   Ready    <none>                 3h55m   v1.21.7

 

 

Worker Node Upgrade

The worker node upgrade procedure is the same as for the additional control planes.

Let's walk through the results.

 

root@k8s-w1:~# apt-mark unhold kubeadm && apt-get update && apt-get install -y kubeadm=1.22.3-00 && apt-mark hold kubeadm
Canceled hold on kubeadm.
Get:1 http://security.ubuntu.com/ubuntu focal-security InRelease [114 kB]
Hit:2 http://archive.ubuntu.com/ubuntu focal InRelease
Hit:3 https://download.docker.com/linux/ubuntu focal InRelease
Get:4 http://archive.ubuntu.com/ubuntu focal-updates InRelease [114 kB]
Get:6 http://archive.ubuntu.com/ubuntu focal-backports InRelease [108 kB]
Get:5 https://packages.cloud.google.com/apt kubernetes-xenial InRelease [9383 B]
Fetched 345 kB in 23s (15.2 kB/s)
Reading package lists... Done
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be upgraded:
  kubeadm
1 upgraded, 0 newly installed, 0 to remove and 18 not upgraded.
Need to get 8712 kB of archives.
After this operation, 971 kB of additional disk space will be used.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.22.3-00 [8712 kB]
Fetched 8712 kB in 22s (390 kB/s)
(Reading database ... 63715 files and directories currently installed.)
Preparing to unpack .../kubeadm_1.22.3-00_amd64.deb ...
Unpacking kubeadm (1.22.3-00) over (1.21.7-00) ...
Setting up kubeadm (1.22.3-00) ...
kubeadm set on hold.
root@k8s-w1:~# kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks
[preflight] Skipping prepull. Not a control plane node.
[upgrade] Skipping phase. Not a control plane node.
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.

On a worker, kubeadm upgrade node finishes quickly because there is no image pulling for the control plane static pods.

Run the node drain below from a control plane.

# Run on a Control Plane node
root@k8s-m0:~# kubectl drain k8s-w1 --ignore-daemonsets
node/k8s-w1 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-4jqg8, kube-system/kube-proxy-rhzmv
evicting pod kube-system/coredns-78fcd69978-nwf2f
evicting pod kube-system/coredns-78fcd69978-j6p7g
pod/coredns-78fcd69978-nwf2f evicted
pod/coredns-78fcd69978-j6p7g evicted
node/k8s-w1 evicted
root@k8s-m0:~# kubectl get no
NAME     STATUS                     ROLES                  AGE     VERSION
k8s-m0   Ready                      control-plane,master   5h2m    v1.22.3
k8s-m1   Ready                      control-plane,master   4h38m   v1.22.3
k8s-m2   Ready                      control-plane,master   4h14m   v1.22.3
k8s-w1   Ready,SchedulingDisabled   <none>                 3h59m   v1.21.7

Next, run the remaining upgrade steps on the worker node.

root@k8s-w1:~# apt-mark unhold kubelet kubectl && apt-get update && apt-get install -y kubelet=1.22.3-00 kubectl=1.22.3-00 && apt-mark hold kubelet kubectl

Canceled hold on kubelet.
Canceled hold on kubectl.
Hit:1 https://download.docker.com/linux/ubuntu focal InRelease
Hit:2 http://archive.ubuntu.com/ubuntu focal InRelease
Get:3 http://security.ubuntu.com/ubuntu focal-security InRelease [114 kB]
Get:5 http://archive.ubuntu.com/ubuntu focal-updates InRelease [114 kB]
Get:6 http://archive.ubuntu.com/ubuntu focal-backports InRelease [108 kB]
Hit:4 https://packages.cloud.google.com/apt kubernetes-xenial InRelease
Fetched 336 kB in 22s (15.0 kB/s)
Reading package lists... Done
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be upgraded:
  kubectl kubelet
2 upgraded, 0 newly installed, 0 to remove and 17 not upgraded.
Need to get 28.2 MB of archives.
After this operation, 3028 kB of additional disk space will be used.
Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubectl amd64 1.22.3-00 [9039 kB]
Get:2 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.22.3-00 [19.1 MB]
Fetched 28.2 MB in 25s (1105 kB/s)
(Reading database ... 63715 files and directories currently installed.)
Preparing to unpack .../kubectl_1.22.3-00_amd64.deb ...
Unpacking kubectl (1.22.3-00) over (1.21.7-00) ...
Preparing to unpack .../kubelet_1.22.3-00_amd64.deb ...
Unpacking kubelet (1.22.3-00) over (1.21.7-00) ...
Setting up kubectl (1.22.3-00) ...
Setting up kubelet (1.22.3-00) ...
kubelet set on hold.
kubectl set on hold.
root@k8s-w1:~# systemctl daemon-reload && systemctl restart kubelet
root@k8s-w1:~#

Finally, uncordon the node from a control plane.

# Run on a Control Plane node
root@k8s-m0:~# kubectl uncordon k8s-w1
node/k8s-w1 uncordoned
root@k8s-m0:~# kubectl get no
NAME     STATUS   ROLES                  AGE     VERSION
k8s-m0   Ready    control-plane,master   5h7m    v1.22.3
k8s-m1   Ready    control-plane,master   4h42m   v1.22.3
k8s-m2   Ready    control-plane,master   4h18m   v1.22.3
k8s-w1   Ready    <none>                 4h4m    v1.22.3

This completes the upgrade of every node in the Kubernetes cluster.

 

Verifying the Kubernetes Upgrade

After finishing the upgrade, I checked the final state.

root@k8s-m0:~# kubectl get no
NAME     STATUS   ROLES                  AGE     VERSION
k8s-m0   Ready    control-plane,master   5h8m    v1.22.3
k8s-m1   Ready    control-plane,master   4h44m   v1.22.3
k8s-m2   Ready    control-plane,master   4h20m   v1.22.3
k8s-w1   Ready    <none>                 4h5m    v1.22.3
root@k8s-m0:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.3", GitCommit:"c92036820499fedefec0f847e2054d824aea6cd1", GitTreeState:"clean", BuildDate:"2021-10-27T18:41:28Z", GoVersion:"go1.16.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.3", GitCommit:"c92036820499fedefec0f847e2054d824aea6cd1", GitTreeState:"clean", BuildDate:"2021-10-27T18:35:25Z", GoVersion:"go1.16.9", Compiler:"gc", Platform:"linux/amd64"}
root@k8s-m0:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.3", GitCommit:"c92036820499fedefec0f847e2054d824aea6cd1", GitTreeState:"clean", BuildDate:"2021-10-27T18:40:11Z", GoVersion:"go1.16.9", Compiler:"gc", Platform:"linux/amd64"}
root@k8s-m0:~# kubelet --version
Kubernetes v1.22.3
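The per-node check above can also be scripted. A hedged sketch: the node list here is hard-coded from the kubectl get no output, while the live-cluster query is shown as a comment:

```shell
# Hypothetical post-upgrade check: fail if any node still reports an old
# kubelet version. Each entry is "name:version".
target=v1.22.3
nodes="k8s-m0:v1.22.3 k8s-m1:v1.22.3 k8s-m2:v1.22.3 k8s-w1:v1.22.3"
# On a live cluster, build the list from the API instead:
# nodes=$(kubectl get no -o jsonpath='{range .items[*]}{.metadata.name}:{.status.nodeInfo.kubeletVersion} {end}')
for n in $nodes; do
  [ "${n#*:}" = "$target" ] || { echo "NOT upgraded: ${n%%:*}"; exit 1; }
done
echo "all nodes at $target"
```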

 

References

https://kubernetes.io/ko/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/

https://wnw1005.tistory.com/363

Windows Terminal is available on Windows 10 and later.

In addition, the OS now ships with an ssh client, which makes managing WSL or Linux hosts much more convenient than it used to be from cmd or powershell.

 

Installing Windows Terminal

https://docs.microsoft.com/ko-kr/windows/terminal/install

Search for Windows Terminal in the Microsoft Store and install it.

The Pane Feature of Windows Terminal

Splitting a single session into panes is visually more effective than opening a new session or another tab. It works much like tmux on Linux.

 

Pane Shortcuts in Windows Terminal

New pane (split vertically): [alt] + [shift] + [+]

New pane (split horizontally): [alt] + [shift] + [-]

Resize pane: [alt] + [shift] + [arrow keys]

Move between panes: [alt] + [arrow keys]
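These defaults can also be remapped in Windows Terminal's settings.json. A sketch of an actions entry (key choices illustrative, schema per the official documentation):

```json
{
  "actions": [
    { "command": { "action": "splitPane", "split": "vertical" },   "keys": "alt+shift+plus" },
    { "command": { "action": "splitPane", "split": "horizontal" }, "keys": "alt+shift+-" },
    { "command": { "action": "moveFocus", "direction": "right" },  "keys": "alt+right" }
  ]
}
```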

 

Pane Usage Examples in Windows Terminal

New pane (split vertically, to the side): [alt] + [shift] + [+]

New pane (split horizontally, below): [alt] + [shift] + [-]

Resize pane: [alt] + [shift] + [arrow keys]

Move between panes: [alt] + [arrow keys]

This last shortcut moves the focus from one pane to another.

 

 

In this post, we looked at how to use panes in Windows Terminal.

Simple enough.

Continuing from the previous post, let's look at a few more modules.

 

(6) Running Commands

This scenario runs specific commands on the target host.

[root@labco7ans ansible]# cat 07.command.yml
---
- hosts: win
  gather_facts: no
  tasks:
  - name: run an executable using win_command
    win_command: whoami.exe

  - name: run a cmd command
    win_command: cmd.exe /c mkdir C:\test
    
[root@labco7ans ansible]# ansible-playbook -i hosts 07.command.yml

PLAY [win] ***********************************************************************************************************************************************************

TASK [run an executable using win_command] ***************************************************************************************************************************
changed: [172.16.3.106]

TASK [run a cmd command] *********************************************************************************************************************************************
changed: [172.16.3.106]

PLAY RECAP ***********************************************************************************************************************************************************
172.16.3.106               : ok=2    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

The directory was created.

PS C:\> dir

    Directory: C:\

Mode                LastWriteTime         Length Name
----                -------------         ------ ----
<omitted>
d-----        6/29/2020   4:33 PM                Temp
d-----        6/30/2020  10:35 AM                test
d-r---        6/29/2020   4:46 PM                Users
d-----        6/29/2020   4:45 PM                Windows

But what happens if we run it one more time?

[root@labco7ans ansible]# ansible-playbook -i hosts 07.command.yml

PLAY [win] ***********************************************************************************************************************************************************

TASK [run an executable using win_command] ***************************************************************************************************************************
changed: [172.16.3.106]

TASK [run a cmd command] *********************************************************************************************************************************************
fatal: [172.16.3.106]: FAILED! => {"changed": true, "cmd": "cmd.exe /c mkdir C:\\test", "delta": "0:00:00.140629", "end": "2020-06-30 01:35:51.440674", "msg": "non-zero return code", "rc": 1, "start": "2020-06-30 01:35:51.300045", "stderr": "A subdirectory or file C:\\test already exists.\r\n", "stderr_lines": ["A subdirectory or file C:\\test already exists."], "stdout": "", "stdout_lines": []}

PLAY RECAP ***********************************************************************************************************************************************************
172.16.3.106               : ok=1    changed=1    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0

The second task fails, writing "A subdirectory or file C:\\test already exists.\r\n" to stderr.

win_command is not idempotent: each run is treated as a separate action, so re-running mkdir against an existing directory fails.
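If a task like this needs to be safe to re-run, win_command accepts a `creates` argument that skips the task when the given path already exists (a sketch based on the module documentation):

```yaml
# Skipped (reported as "ok", not "changed") when C:\test already exists.
- name: run a cmd command, but only when C:\test is absent
  win_command: cmd.exe /c mkdir C:\test
  args:
    creates: C:\test
```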

One more thing worth revisiting is register, which also appeared in an earlier example. It can be used here to capture the output of whoami.exe. The example below is slightly modified for that.

[root@labco7ans ansible]# cat 07.command2.yml
---
- hosts: win
  gather_facts: no
  tasks:
  - name: run an executable using win_command
    win_command: whoami.exe
    register: output
  - debug: msg="{{ output.stdout }}"

[root@labco7ans ansible]# ansible-playbook -i hosts 07.command2.yml

PLAY [win] ***********************************************************************************************************************************************************

TASK [run an executable using win_command] ***************************************************************************************************************************
changed: [172.16.3.106]

TASK [debug] *********************************************************************************************************************************************************
ok: [172.16.3.106] => {
    "msg": "labw16ans\\administrator\r\n"
}

PLAY RECAP ***********************************************************************************************************************************************************
172.16.3.106               : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

 

(7) Setting Environment Variables

This scenario adds a system environment variable.

[root@labco7ans ansible]# cat 08.env.yml
---
- hosts: win
  gather_facts: no
  tasks:
  - name: Set an environment variable for all users
    win_environment:
      state: present
      name: NewVariable
      value: New Value
      level: machine

[root@labco7ans ansible]# ansible-playbook -i hosts 08.env.yml

PLAY [win] ***********************************************************************************************************************************************************

TASK [Set an environment variable for all users] *********************************************************************************************************************
changed: [172.16.3.106]

PLAY RECAP ***********************************************************************************************************************************************************
172.16.3.106               : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

The variable was created as shown below.

C:\Users\User1>set |findstr New
NewVariable=New Value

Environment variables on a Windows server are divided into system variables and user variables; this is specified with the level parameter. See the reference below.
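For comparison, a user-level variable can be set with level: user. A hedged sketch (the variable name is illustrative):

```yaml
---
- hosts: win
  gather_facts: no
  tasks:
  - name: Set a variable for the connecting user only
    win_environment:
      state: present
      name: NewUserVariable
      value: user value
      level: user   # 'machine' = system variable, 'user' = user variable
```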

As also mentioned in the Notes section of the official documentation, this module does not broadcast change events. A new session or a reboot may therefore be required for the change to be reflected in existing sessions or the system.

Reference: https://docs.ansible.com/ansible/latest/modules/win_environment_module.html#win-environment-module

 

As a companion module, win_path can manage individual path-style elements. For plain environment variable additions, use win_environment.

Reference: https://docs.ansible.com/ansible/latest/modules/win_path_module.html#win-path-module
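For instance, adding a directory to the system PATH with win_path might look like this (the directory is illustrative):

```yaml
---
- hosts: win
  gather_facts: no
  tasks:
  - name: Ensure a directory is an element of the system PATH
    win_path:
      name: PATH   # PATH is the default variable name
      elements: C:\Program Files\MyTool\bin
      state: present
```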

 

(8) Managing the Registry

The registry management scenario is as follows.

Set CrashOnCtrlScroll to 1 under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Kbdhid\Parameters

[root@labco7ans ansible]# cat 09.reg.yml
---
- hosts: win
  gather_facts: no
  tasks:
  - name: Add or update registry with dword entry 'CrashOnCtrlScroll', and containing 1 as the hex value
    win_regedit:
      path: HKLM:\SYSTEM\CurrentControlSet\Services\Kbdhid\Parameters
      name: CrashOnCtrlScroll
      data: 0x1
      type: dword
      
[root@labco7ans ansible]# ansible-playbook -i hosts 09.reg.yml

PLAY [win] ***********************************************************************************************************************************************************

TASK [Add or update registry with dword entry 'CrashOnCtrlScroll', and containing 1 as the hex value] ****************************************************************
changed: [172.16.3.106]

PLAY RECAP ***********************************************************************************************************************************************************
172.16.3.106               : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

The value was written as shown below.

<Before>
C:\>reg query HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Kbdhid\Parameters

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Kbdhid\Parameters
    WorkNicely    REG_DWORD    0x0

<After>
C:\>reg query HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Kbdhid\Parameters

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Kbdhid\Parameters
    WorkNicely    REG_DWORD    0x0
    CrashOnCtrlScroll    REG_DWORD    0x1

Another approach is to write a separate .reg file and apply it with win_regmerge. In that case, you can deliver the required .reg file with win_copy (covered earlier) and then apply it.

Reference: https://docs.ansible.com/ansible/latest/modules/win_regmerge_module.html#win-regmerge-module
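A sketch of that win_copy + win_regmerge combination (file names illustrative):

```yaml
---
- hosts: win
  gather_facts: no
  tasks:
  - name: Deliver the .reg file to the target
    win_copy:
      src: /root/ansible/files/settings.reg
      dest: C:\Temp\settings.reg

  - name: Merge the registry file
    win_regmerge:
      path: C:\Temp\settings.reg
```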

 

(9) Configuring the Page File

This scenario configures the page file.

[root@labco7ans ansible]# ansible win -i hosts -m win_pagefile
172.16.3.106 | SUCCESS => {
    "automatic_managed_pagefiles": true,
    "changed": false,
    "pagefiles": []
}

[root@labco7ans ansible]# cat 10.pagefile.yml
---
- hosts: win
  gather_facts: no
  tasks:
  - name: Disable AutomaticManagedPagefile and set C pagefile
    win_pagefile:
      drive: C
      initial_size: 2048
      maximum_size: 2048
      automatic: no
      state: present

[root@labco7ans ansible]# ansible-playbook -i hosts 10.pagefile.yml

PLAY [win] ***********************************************************************************************************************************************************

TASK [Disable AutomaticManagedPagefile and set C pagefile] ***********************************************************************************************************
changed: [172.16.3.106]

PLAY RECAP ***********************************************************************************************************************************************************
172.16.3.106               : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

[root@labco7ans ansible]# ansible win -i hosts -m win_pagefile
172.16.3.106 | SUCCESS => {
    "automatic_managed_pagefiles": false,
    "changed": false,
    "pagefiles": [
        {
            "caption": "C:\\ 'pagefile.sys'",
            "description": "'pagefile.sys' @ C:\\",
            "initial_size": 2048,
            "maximum_size": 2048,
            "name": "C:\\pagefile.sys"
        }
    ]
}

A reboot may be required for the page file settings to take effect, so I added a reboot step: if the pagefile task reports changed, the server is rebooted.

[root@labco7ans ansible]# cat 10.pagefile2.yml
---
- hosts: win
  gather_facts: no
  tasks:
  - name: Disable AutomaticManagedPagefile and set C pagefile
    win_pagefile:
      drive: C
      initial_size: 4096
      maximum_size: 4096
      automatic: no
      state: present
    register: pagefile

  - name: Reboot server
    win_reboot:
      msg: "Page file moved,rebooting..."
      pre_reboot_delay: 15
    when: pagefile.changed
    
[root@labco7ans ansible]# ansible-playbook -i hosts 10.pagefile2.yml

PLAY [win] ***********************************************************************************************************************************************************

TASK [Disable AutomaticManagedPagefile and set C pagefile] ***********************************************************************************************************
changed: [172.16.3.106]

TASK [Reboot server] *************************************************************************************************************************************************
changed: [172.16.3.106]

PLAY RECAP ***********************************************************************************************************************************************************
172.16.3.106               : ok=2    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

[root@labco7ans ansible]# ansible win -i hosts -m win_pagefile
172.16.3.106 | SUCCESS => {
    "automatic_managed_pagefiles": false,
    "changed": false,
    "pagefiles": [
        {
            "caption": "C:\\ 'pagefile.sys'",
            "description": "'pagefile.sys' @ C:\\",
            "initial_size": 4096,
            "maximum_size": 4096,
            "name": "C:\\pagefile.sys"
        }
    ]
}

 

(10) Managing Local Security Policy

This scenario changes the local security policy (secpol).

First, export the current policy with secedit to inspect the value to change.

C:\>SecEdit.exe /export /cfg C:\temp\output.ini

The task has completed successfully.
See log %windir%\security\logs\scesrv.log for detail info.

C:\>type C:\temp\output.ini
[Unicode]
Unicode=yes
[System Access]
MinimumPasswordAge = 0
MaximumPasswordAge = 42
MinimumPasswordLength = 0
PasswordComplexity = 1
<snip>

Let's run a scenario that changes MaximumPasswordAge to 90.

Based on what we saw in the exported file, set section to System Access and specify the key and value to change.

[root@labco7ans ansible]# cat 11.secpol.yml
---
- hosts: win
  gather_facts: no
  tasks:
  - name: Set the maximum password age
    win_security_policy:
      section: System Access
      key: MaximumPasswordAge
      value: 90
      
[root@labco7ans ansible]# ansible-playbook -i hosts 11.secpol.yml

PLAY [win] ***********************************************************************************************************************************************************

TASK [Set the maximum password age] **********************************************************************************************************************************
changed: [172.16.3.106]

PLAY RECAP ***********************************************************************************************************************************************************
172.16.3.106               : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

The value was changed as shown below.

C:\>SecEdit.exe /export /cfg C:\temp\output2.ini

The task has completed successfully.
See log %windir%\security\logs\scesrv.log for detail info.

C:\>type C:\temp\output2.ini | findstr Password
MinimumPasswordAge = 0
MaximumPasswordAge = 90
MinimumPasswordLength = 0
PasswordComplexity = 1
PasswordHistorySize = 0
<snip>

 

(11) Applying Updates

Windows patching can be performed as follows.

- name: Install all critical and security updates
  win_updates:
    category_names:
    - CriticalUpdates
    - SecurityUpdates
    state: installed
  register: update_result

- name: Reboot host if required
  win_reboot:
  when: update_result.reboot_required

However, Windows server updates involve some complexity. The scenario above assumes the host can reach Microsoft Update or a WSUS server.

In an air-gapped environment, a scenario like the following is also an option.

  • Set up a web server > copy the update files to the docroot > playbook (install with win_get_url + win_hotfix)
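That air-gapped flow could be sketched as follows, assuming the update file has already been copied to an internal web server (the URL is hypothetical):

```yaml
---
- hosts: win
  gather_facts: no
  tasks:
  - name: Fetch the update from the internal web server
    win_get_url:
      url: http://repo.internal.example/updates/KB4550994.msu  # hypothetical internal docroot
      dest: C:\temp\KB4550994.msu

  - name: Install the update
    win_hotfix:
      hotfix_kb: KB4550994
      source: C:\temp\KB4550994.msu
      state: present
    register: hotfix_result

  - name: Reboot host if required
    win_reboot:
    when: hotfix_result.reboot_required
```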

win_copy might come to mind for moving update files, but since win_copy runs over WinRM, it is not effective for large files (Because win_copy runs over WinRM, it is not a very efficient transfer mechanism. If sending large files consider hosting them on a web service and using [win_get_url](https://docs.ansible.com/ansible/latest/modules/win_get_url_module.html#win-get-url-module) instead.). Following this comment from the official documentation, I used win_get_url.

Alternatively, if download.windowsupdate.com can be allowed through a URL firewall, you can proceed as follows.

- name: Download KB3172729 for Server 2012 R2
  win_get_url:
    url: http://download.windowsupdate.com/d/msdownload/update/software/secu/2016/07/windows8.1-kb3172729-x64_e8003822a7ef4705cbb65623b72fd3cec73fe222.msu
    dest: C:\temp\KB3172729.msu

- name: Install hotfix
  win_hotfix:
    hotfix_kb: KB3172729
    source: C:\temp\KB3172729.msu
    state: present
  register: hotfix_result

- name: Reboot host if required
  win_reboot:
  when: hotfix_result.reboot_required

As a final scenario, let's apply an SSU update.

[root@labco7ans ansible]# cat 12.update.yml
---
- hosts: win
  gather_facts: no
  tasks:
  - name: Download SSU(KB4550994) for Server 2016
    win_get_url:
      url: http://download.windowsupdate.com/c/msdownload/update/software/secu/2020/04/windows10.0-kb4550994-x64_1df8a8ea245041495f3b219fb22f3849908d8e27.msu
      dest: C:\temp\KB4550994.msu

  - name: Install hotfix
    win_hotfix:
      hotfix_kb: KB4550994
      source: C:\temp\KB4550994.msu
      state: present
    register: hotfix_result

  - name: Reboot host if required
    win_reboot:
    when: hotfix_result.reboot_required
    
[root@labco7ans ansible]# ansible-playbook -i hosts 12.update.yml

PLAY [win] ***********************************************************************************************************************************************************

TASK [Download SSU(KB4550994) for Server 2016] ***********************************************************************************************************************
changed: [172.16.3.106]

TASK [Install hotfix] ************************************************************************************************************************************************
changed: [172.16.3.106]

TASK [Reboot host if required] ***************************************************************************************************************************************
skipping: [172.16.3.106]

PLAY RECAP ***********************************************************************************************************************************************************
172.16.3.106               : ok=2    changed=2    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0

Since this is an SSU patch, the reboot task was skipped, and the output below shows that KB4550994 was installed.

C:\>wmic qfe
Caption                                     CSName     Description      FixComments  HotFixID   InstallDate  InstalledBy              InstalledOn  Name  ServicePackInEffect  Status
http://support.microsoft.com/?kbid=3192137  LABW16ANS  Update                        KB3192137               NT AUTHORITY\SYSTEM      9/12/2016
http://support.microsoft.com/?kbid=4550994  LABW16ANS  Security Update               KB4550994               LABW16ANS\Administrator  6/30/2020

That wraps up this look at Windows modules worth using to manage Windows servers with Ansible.

 

 

References

https://docs.ansible.com/ansible/latest/modules/win_feature_module.html#win-feature-module

https://docs.ansible.com/ansible/latest/user_guide/windows_usage.html#use-cases

https://geekflare.com/ansible-playbook-windows-example/

ansible - windows module (1) (2022.02.04)

Let's look at a few modules that can be used to manage Windows servers with Ansible.

 

 

(1) Gathering Information

Inspecting the setup module shows that a variety of information is available through ansible_facts.

[root@labco7ans ansible]# ansible -i hosts win -m setup
172.16.3.106 | SUCCESS => {
    "ansible_facts": {
        "ansible_architecture": "64-bit",
        "ansible_bios_date": "11/26/2012",
        "ansible_bios_version": "Hyper-V UEFI Release v1.0",
<snip>
}

This can be used as follows.

[root@labco7ans ansible]# cat 01.facts.yml
---
- hosts: win
  remote_user: ansible
  tasks:
  - name: gathering facts with ansible
    debug:
      msg:
        - " OS: {{ ansible_distribution }} "
        - " Version: {{ ansible_distribution_version }} "
        - " Hostname: {{ ansible_hostname }} "
        - " IPaddress: {{ ansible_ip_addresses }} "
        - " Core: {{ ansible_processor_cores }}"
        - " Mem: {{ ansible_memtotal_mb }}"
[root@labco7ans ansible]# ansible-playbook -i hosts 01.facts.yml

PLAY [win] *************************************************************************************************************

TASK [Gathering Facts] *************************************************************************************************
ok: [172.16.3.106]

TASK [gathering facts with ansible] ************************************************************************************
ok: [172.16.3.106] => {
    "msg": [
        " OS: Microsoft Windows Server 2016 Datacenter ",
        " Version: 10.0.14393.0 ",
        " Hostname: labw16ans ",
        " IPaddress: [u'172.16.3.106', u'fe80::4183:9a41:126b:5eb'] ",
        " Core: 4",
        " Mem: 8192"
    ]
}

PLAY RECAP *************************************************************************************************************
172.16.3.106               : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

This would look even more effective against multiple servers.

Note that disk information is gathered with a separate module, win_disk_facts.

Reference: https://docs.ansible.com/ansible/latest/modules/win_disk_facts_module.html#win-disk-facts-module
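A minimal usage sketch of win_disk_facts:

```yaml
---
- hosts: win
  gather_facts: no
  tasks:
  - name: Gather disk facts into ansible_facts.disks
    win_disk_facts:

  - name: Show the size of the first physical disk
    debug:
      var: ansible_facts.disks[0].size
```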

 

Additionally, I checked and there is no module for performance counters; that seems better handled by a monitoring tool.

Alternatively, the win_command module can run wmic commands.

---
- hosts: win
  tasks:
   - name: run wmic command
     win_command: wmic cpu get caption, deviceid, name, numberofcores, maxclockspeed, status
     register: usage
   - debug: msg="{{ usage.stdout }}"

As the [Gathering Facts] step in the earlier run shows, playbooks collect host information via ansible_facts by default. If you do not use this information, skipping fact collection speeds up execution; set gather_facts: no to skip it.

[root@labco7ans ansible]# cat ping2.yml
---
- hosts: win
  gather_facts: no
  remote_user: ansible
  tasks:
  - name: ping test
    win_ping:
[root@labco7ans ansible]# ansible-playbook -i hosts ping2.yml

PLAY [win] *************************************************************************************************************

TASK [ping test] *******************************************************************************************************
ok: [172.16.3.106]

PLAY RECAP *************************************************************************************************************
172.16.3.106               : ok=1    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Reference: https://ossian.tistory.com/98

 

(2) Domain/Account Management

Domain Join

Joining many servers to AD normally means repeating connect > enter details > authenticate > reboot the OS > verify.

As an example, let's join the VM below to a domain and configure it.

PS C:\Users\Administrator> Get-CimInstance Win32_ComputerSystem

Name             PrimaryOwnerName     Domain               TotalPhysicalMemory  Model               Manufacturer
----             ----------------     ------               -------------------  -----               ------------
WIN-VS4UITP6P9O  Windows User         WORKGROUP            8588902400           Virtual Machine     Microsoft Corpor...

Run the playbook below.

[root@labco7ans ansible]# cat 02.ad_join.yml
- hosts: win_ad
  gather_facts: no
  tasks:
  - win_domain_membership:
      dns_domain_name: winlab.citec.com
      hostname: labw16ansad
      domain_admin_user: winlab\administrator
      domain_admin_password: dnlsehdn123$
      state: domain
    register: dmout

  - win_reboot:
    when: dmout.reboot_required

[root@labco7ans ansible]# ansible-playbook -i hosts 02.ad_join.yml

PLAY [win_ad] ********************************************************************************************************************************************************

TASK [win_domain_membership] *****************************************************************************************************************************************
changed: [172.16.3.107]

TASK [win_reboot] ****************************************************************************************************************************************************
changed: [172.16.3.107]

PLAY RECAP ***********************************************************************************************************************************************************
172.16.3.107               : ok=2    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

If multiple IPs are registered under win_ad in the inventory file, many servers can be configured at once.
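For reference, such a win_ad group in the inventory might look like this (the IPs are illustrative):

```
[win_ad]
172.16.3.107
172.16.3.108
172.16.3.109
```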

The server restarts during the TASK [win_reboot] step and becomes reachable again once the restart completes.

The AD join and the hostname change can be confirmed as shown below.

PS C:\Users\administrator.WINLAB> Get-CimInstance Win32_ComputerSystem

Name             PrimaryOwnerName     Domain               TotalPhysicalMemory  Model               Manufacturer
----             ----------------     ------               -------------------  -----               ------------
LABW16ANSAD      Windows User         winlab.citec.com     8588902400           Virtual Machine     Microsoft Corpor...

 

User & Group 관리

Below is a scenario for managing local accounts and groups.

[root@labco7ans ansible]# cat 02.user_group.yml
- hosts: win
  gather_facts: no
  tasks:
  - name: Create local group to contain new users
    win_group:
      name: LocalGroup
      description: LocalGroup

  - name: Create local user
    win_user:
      name: '{{ item.name }}'
      password: '{{ item.password }}'
      groups: LocalGroup, Remote Desktop Users
      password_never_expires: yes
    loop:
    - name: User1
      password: Password1
    - name: User2
      password: Password2

[root@labco7ans ansible]# ansible-playbook -i hosts 02.user_group.yml

PLAY [win] ***********************************************************************************************************************************************************

TASK [Create local group to contain new users] ***********************************************************************************************************************
ok: [172.16.3.106]

TASK [Create local user] *********************************************************************************************************************************************
changed: [172.16.3.106] => (item={u'password': u'Password1', u'name': u'User1'})
changed: [172.16.3.106] => (item={u'password': u'Password2', u'name': u'User2'})

PLAY RECAP ***********************************************************************************************************************************************************
172.16.3.106               : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

 

(3) Managing Features

This scenario manages features on the server.

[root@labco7ans ansible]# cat 03.feature.yml
- hosts: win
  gather_facts: no
  tasks:
  - name: Install IIS Web-Server with sub features and management tools
    win_feature:
      name: Web-Server
      state: present
      include_sub_features: yes
      include_management_tools: yes
    register: win_feature

  - name: Reboot if installing Web-Server feature requires it
    win_reboot:
    when: win_feature.reboot_required

[root@labco7ans ansible]# ansible-playbook -i hosts 03.feature.yml

PLAY [win] ***********************************************************************************************************************************************************

TASK [Install IIS Web-Server with sub features and management tools] *************************************************************************************************
changed: [172.16.3.106]

TASK [Reboot if installing Web-Server feature requires it] ***********************************************************************************************************
skipping: [172.16.3.106]

PLAY RECAP ***********************************************************************************************************************************************************
172.16.3.106               : ok=1    changed=1    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0

Connecting with curl returns the default website.

[root@labco7ans ansible]# curl 172.16.3.106
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
<title>IIS Windows Server</title>
<style type="text/css">
<!--
snip
-->
</style>
</head>
<body>
<div id="container">
<a href="http://go.microsoft.com/fwlink/?linkid=66138&amp;clcid=0x409"><img src="iisstart.png" alt="IIS" width="960" height="600" /></a>
</div>
</body>
 
(4) Managing Services

This scenario manages a service on the server.

As you probably know, state is the service state and start_mode is the startup type.

Here is a simple example that restarts a service.

[root@labco7ans ansible]# cat 04.service.yml
- hosts: win
  gather_facts: no
  tasks:
  - name: Restart a service
    win_service:
      name: spooler
      state: restarted

[root@labco7ans ansible]# ansible-playbook -i hosts 04.service.yml

PLAY [win] ***********************************************************************************************************************************************************

TASK [Restart a service] *********************************************************************************************************************************************
changed: [172.16.3.106]

PLAY RECAP ***********************************************************************************************************************************************************
172.16.3.106               : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Reference: https://docs.ansible.com/ansible/latest/modules/win_service_module.html
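Beyond restarting, start_mode can pin the startup type as well; a hedged sketch:

```yaml
---
- hosts: win
  gather_facts: no
  tasks:
  - name: Ensure spooler starts automatically and is running
    win_service:
      name: spooler
      start_mode: auto   # auto / manual / disabled / delayed
      state: started
```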

 

 

(5) Copying Files

This scenario transfers a file.

[root@labco7ans ansible]# cat 05.copyto.yml
---
- hosts: win
  gather_facts: no
  tasks:
  - name: Copy a single file to remote
    win_copy:
      src: /root/ansible/files/foo.conf
      dest: C:\Temp\renamed-foo.conf

[root@labco7ans ansible]# ansible-playbook -i hosts 05.copyto.yml

PLAY [win] ***********************************************************************************************************************************************************

TASK [Copy a single file to remote] **********************************************************************************************************************************
changed: [172.16.3.106]

PLAY RECAP ***********************************************************************************************************************************************************
172.16.3.106               : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

File transfer succeeded!

c:\Temp>type renamed-foo.conf
foo conf file

If the destination is an existing configuration file, the backup: yes option can preserve the original.
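A sketch of the same copy with backup enabled:

```yaml
---
- hosts: win
  gather_facts: no
  tasks:
  - name: Replace the config, keeping a backup of the existing file
    win_copy:
      src: /root/ansible/files/foo.conf
      dest: C:\Temp\renamed-foo.conf
      backup: yes   # existing file is saved with a timestamp suffix
```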

I spent a while confused by the meaning of "remote" in the remote_src: yes option. It does not mean remote > Ansible node; it means remote src > remote dest, i.e. a copy between paths on the managed host.

[root@labco7ans ansible]# cat 05.copyremote.yml
---
- hosts: win
  tasks:
  - name: Copy File
    win_copy:
      src: C:\Windows\System32\drivers\etc\hosts
      dest: C:\Temp\hosts_backup
      remote_src: yes
      
[root@labco7ans ansible]# ansible-playbook -i hosts 05.copyremote.yml

PLAY [win] ***********************************************************************************************************************************************************

TASK [Gathering Facts] ***********************************************************************************************************************************************
ok: [172.16.3.106]

TASK [Copy File] *****************************************************************************************************************************************************
changed: [172.16.3.106]

PLAY RECAP ***********************************************************************************************************************************************************
172.16.3.106               : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

The result is as follows.

c:\Temp>dir
 Directory of c:\Temp

06/29/2020  04:33 PM    <DIR>          .
06/29/2020  04:33 PM    <DIR>          ..
07/16/2016  10:21 PM               824 hosts_backup
06/29/2020  04:05 PM                14 renamed-foo.conf
               2 File(s)            838 bytes
               2 Dir(s)  122,811,580,416 bytes free

 

 

Reference: https://docs.ansible.com/ansible/latest/modules/win_copy_module.html#win-copy-module

TIPs for reading Parameters on docs.ansible.com
1) The parameter descriptions show the version a parameter became available in (e.g. added in 2.8) and whether it is required.
2) The Choices/Defaults column lists the allowed values; the value shown in blue is the default and applies when not specified.

 

 

References

https://docs.ansible.com/ansible/latest/modules/win_feature_module.html#win-feature-module

https://docs.ansible.com/ansible/latest/user_guide/windows_usage.html#use-cases

https://geekflare.com/ansible-playbook-windows-example/
