<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>a story</title>
    <link>https://a-person.tistory.com/</link>
    <description>A technical blog about Cloud Native.</description>
    <language>ko</language>
    <pubDate>Wed, 15 Apr 2026 06:38:30 +0900</pubDate>
    <generator>TISTORY</generator>
    <ttl>100</ttl>
    <managingEditor>한명</managingEditor>
    <image>
      <title>a story</title>
      <url>https://tistory1.daumcdn.net/tistory/4923199/attach/399f61bf62234ffe9d9176c2df7fa670</url>
      <link>https://a-person.tistory.com</link>
    </image>
    <item>
      <title>[4] MinIO - MNMD Deployment</title>
      <link>https://a-person.tistory.com/64</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;In this post, we will set up MinIO in an MNMD (Multi-Node Multi-Drive) topology.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;We considered running the lab in the cloud, but standing up a non-managed Kubernetes cluster there is cumbersome, and with managed Kubernetes it is hard to attach local disks to the nodes.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For these reasons, we will build a local Kubernetes environment with Vagrant, attach several disks to each node, configure the drives with DirectPV, and then test a MinIO MNMD deployment.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Table of Contents&lt;/h2&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;Setting up a local Kubernetes environment&lt;/li&gt;
&lt;li&gt;Configuring DirectPV&lt;/li&gt;
&lt;li&gt;Deploying MinIO MNMD&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;1. Setting Up a Local Kubernetes Environment&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;With Vagrant, we will create an environment of 4 worker nodes, each with 4 local disks.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Create the Vagrantfile below and bring up the environment with &lt;code&gt;vagrant up&lt;/code&gt;.&lt;/p&gt;
&lt;pre class=&quot;ruby&quot;&gt;&lt;code&gt;# Variables
K8SV = '1.33.2-1.1' # Kubernetes Version : apt list -a kubelet , ex) 1.32.5-1.1
CONTAINERDV = '1.7.27-1' # Containerd Version : apt list -a containerd.io , ex) 1.6.33-1
CILIUMV = '1.17.6' # Cilium CNI Version : https://github.com/cilium/cilium/tags
N = 4 # max number of worker nodes

# Base Image  https://portal.cloud.hashicorp.com/vagrant/discover/bento/ubuntu-24.04
BOX_IMAGE = &quot;bento/ubuntu-24.04&quot;
BOX_VERSION = &quot;202502.21.0&quot;

Vagrant.configure(&quot;2&quot;) do |config|
#-ControlPlane Node
    config.vm.define &quot;k8s-ctr&quot; do |subconfig|
      subconfig.vm.box = BOX_IMAGE

      subconfig.vm.box_version = BOX_VERSION
      subconfig.vm.provider &quot;virtualbox&quot; do |vb|
        vb.customize [&quot;modifyvm&quot;, :id, &quot;--groups&quot;, &quot;/Cilium-Lab&quot;]
        vb.customize [&quot;modifyvm&quot;, :id, &quot;--nicpromisc2&quot;, &quot;allow-all&quot;]
        vb.name = &quot;k8s-ctr&quot;
        vb.cpus = 2
        vb.memory = 2048
        vb.linked_clone = true
      end
      subconfig.vm.host_name = &quot;k8s-ctr&quot;
      subconfig.vm.network &quot;private_network&quot;, ip: &quot;192.168.10.100&quot;
      subconfig.vm.network &quot;forwarded_port&quot;, guest: 22, host: 60000, auto_correct: true, id: &quot;ssh&quot;
      subconfig.vm.synced_folder &quot;./&quot;, &quot;/vagrant&quot;, disabled: true
      subconfig.vm.provision &quot;shell&quot;, path: &quot;https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/cilium-study/2w/init_cfg.sh&quot;, args: [ K8SV, CONTAINERDV ]
      subconfig.vm.provision &quot;shell&quot;, path: &quot;https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/cilium-study/2w/k8s-ctr.sh&quot;, args: [ N, CILIUMV ]
    end

#-Worker Nodes Subnet1
  (1..N).each do |i|
    config.vm.define &quot;k8s-w#{i}&quot; do |subconfig|
      subconfig.vm.box = BOX_IMAGE
      subconfig.vm.box_version = BOX_VERSION
      subconfig.vm.provider &quot;virtualbox&quot; do |vb|
        vb.customize [&quot;modifyvm&quot;, :id, &quot;--groups&quot;, &quot;/Cilium-Lab&quot;]
        vb.customize [&quot;modifyvm&quot;, :id, &quot;--nicpromisc2&quot;, &quot;allow-all&quot;]
        vb.name = &quot;k8s-w#{i}&quot;
        vb.cpus = 2
        vb.memory = 1536
        vb.linked_clone = true

        (1..4).each do |d|
          disk_path = &quot;disk-w#{i}-#{d}.vdi&quot;
          vb.customize [&quot;createhd&quot;, &quot;--filename&quot;, disk_path, &quot;--size&quot;, 10240] # 10GB
          vb.customize [&quot;storageattach&quot;, :id, &quot;--storagectl&quot;, &quot;SATA Controller&quot;, &quot;--port&quot;, d, &quot;--device&quot;, 0, &quot;--type&quot;, &quot;hdd&quot;, &quot;--medium&quot;, disk_path]
        end
      end
      subconfig.vm.host_name = &quot;k8s-w#{i}&quot;
      subconfig.vm.network &quot;private_network&quot;, ip: &quot;192.168.10.10#{i}&quot;
      subconfig.vm.network &quot;forwarded_port&quot;, guest: 22, host: &quot;6000#{i}&quot;, auto_correct: true, id: &quot;ssh&quot;
      subconfig.vm.synced_folder &quot;./&quot;, &quot;/vagrant&quot;, disabled: true
      subconfig.vm.provision &quot;shell&quot;, path: &quot;https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/cilium-study/2w/init_cfg.sh&quot;, args: [ K8SV, CONTAINERDV]
      subconfig.vm.provision &quot;shell&quot;, path: &quot;https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/cilium-study/2w/k8s-w.sh&quot;
    end
  end
end&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;blockquote data-ke-style=&quot;style3&quot;&gt;[Note] The Vagrantfile and the shell scripts it references were provided through the CloudNet study group.&lt;/blockquote&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For reference, if &lt;code&gt;vagrant up&lt;/code&gt; does not complete successfully, the host may simply be short on resources.&lt;/p&gt;
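If it keeps failing, a quick sanity check of the host's capacity can help; a minimal sketch for a Linux host (the totals in the comments are derived from the Vagrantfile's vb.cpus and vb.memory settings above):

```shell
# Quick host-capacity sanity check (Linux host assumed).
# The Vagrantfile above requests 2 vCPUs per VM (5 VMs = 10 vCPUs)
# and 2048 + 4*1536 = 8192 MiB of RAM in total.
nproc     # logical CPUs on the host
free -m   # memory in MiB; compare the "available" column against 8192
```

If the host is clearly over-committed, lowering N or the per-VM memory in the Vagrantfile is the simplest fix.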
&lt;p data-ke-size=&quot;size16&quot;&gt;Once &lt;code&gt;vagrant up&lt;/code&gt; completes, connect to the control plane with &lt;code&gt;vagrant ssh k8s-ctr&lt;/code&gt; and continue the rest of the lab from there.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get no -owide
NAME      STATUS   ROLES           AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
k8s-ctr   Ready    control-plane   67m   v1.33.2   192.168.10.100   &amp;lt;none&amp;gt;        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27
k8s-w1    Ready    &amp;lt;none&amp;gt;          56m   v1.33.2   192.168.10.101   &amp;lt;none&amp;gt;        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27
k8s-w2    Ready    &amp;lt;none&amp;gt;          51m   v1.33.2   192.168.10.102   &amp;lt;none&amp;gt;        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27
k8s-w3    Ready    &amp;lt;none&amp;gt;          46m   v1.33.2   192.168.10.103   &amp;lt;none&amp;gt;        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27
k8s-w4    Ready    &amp;lt;none&amp;gt;          27m   v1.33.2   192.168.10.104   &amp;lt;none&amp;gt;        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Each worker node, apart from the control plane &lt;code&gt;k8s-ctr&lt;/code&gt;, has 4 disks attached.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;root@k8s-w1:~# lsblk
NAME                      MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda                         8:0    0  64G  0 disk
├─sda1                      8:1    0   1M  0 part
├─sda2                      8:2    0   2G  0 part /boot
└─sda3                      8:3    0  62G  0 part
  └─ubuntu--vg-ubuntu--lv 252:0    0  31G  0 lvm  /
sdb                         8:16   0  10G  0 disk
sdc                         8:32   0  10G  0 disk
sdd                         8:48   0  10G  0 disk
sde                         8:64   0  10G  0 disk&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now, to manage these local disks, let's start by installing MinIO's DirectPV.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;2. Configuring DirectPV&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;DirectPV can be installed through the kubectl krew plugin manager.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;First, install krew itself.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# Install Krew
wget -P /root &quot;https://github.com/kubernetes-sigs/krew/releases/latest/download/krew-linux_amd64.tar.gz&quot;
tar zxvf &quot;/root/krew-linux_amd64.tar.gz&quot; --warning=no-unknown-keyword
./krew-linux_amd64 install krew
export PATH=&quot;${KREW_ROOT:-$HOME/.krew}/bin:$PATH&quot; # export PATH=&quot;$PATH:/root/.krew/bin&quot;
echo 'export PATH=&quot;$PATH:/root/.krew/bin:/root/go/bin&quot;' &amp;gt;&amp;gt; /etc/profile
kubectl krew list

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl krew list
PLUGIN  VERSION
krew    v0.4.5&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Next, install the directpv plugin with krew as shown below.&lt;/p&gt;
&lt;pre class=&quot;sql&quot;&gt;&lt;code&gt;# Install the directpv plugin
kubectl krew install directpv
kubectl directpv -h

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl directpv -h
Kubectl plugin for managing DirectPV drives and volumes.

USAGE:
  directpv [command]

FLAGS:
      --kubeconfig string   Path to the kubeconfig file to use for CLI requests
      --quiet               Suppress printing error messages
  -h, --help                help for directpv
      --version             version for directpv

AVAILABLE COMMANDS:
  install     Install DirectPV in Kubernetes
  discover    Discover new drives
  init        Initialize the drives
  info        Show information about DirectPV installation
  list        List drives and volumes
  label       Set labels to drives and volumes
  cordon      Mark drives as unschedulable
  uncordon    Mark drives as schedulable
  migrate     Migrate drives and volumes from legacy DirectCSI
  move        Move volumes excluding data from source drive to destination drive on a same node
  clean       Cleanup stale volumes
  suspend     Suspend drives and volumes
  resume      Resume suspended drives and volumes
  repair      Repair filesystem of drives
  remove      Remove unused drives from DirectPV
  uninstall   Uninstall DirectPV in Kubernetes

Use &quot;directpv [command] --help&quot; for more information about this command.&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Once the directpv plugin is installed in krew, install DirectPV into the Kubernetes cluster.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# Install DirectPV
kubectl directpv install

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl directpv install


 ███████████████████████████████████████████████████████████████████████████ 100%

┌──────────────────────────────────────┬──────────────────────────┐
│ NAME                                 │ KIND                     │
├──────────────────────────────────────┼──────────────────────────┤
│ directpv                             │ Namespace                │
│ directpv-min-io                      │ ServiceAccount           │
│ directpv-min-io                      │ ClusterRole              │
│ directpv-min-io                      │ ClusterRoleBinding       │
│ directpv-min-io                      │ Role                     │
│ directpv-min-io                      │ RoleBinding              │
│ directpvdrives.directpv.min.io       │ CustomResourceDefinition │
│ directpvvolumes.directpv.min.io      │ CustomResourceDefinition │
│ directpvnodes.directpv.min.io        │ CustomResourceDefinition │
│ directpvinitrequests.directpv.min.io │ CustomResourceDefinition │
│ directpv-min-io                      │ CSIDriver                │
│ directpv-min-io                      │ StorageClass             │
│ node-server                          │ Daemonset                │
│ controller                           │ Deployment               │
└──────────────────────────────────────┴──────────────────────────┘

DirectPV installed successfully

# Verify
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get sc
NAME              PROVISIONER       RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
directpv-min-io   directpv-min-io   Delete          WaitForFirstConsumer   true                   60s
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get all -n directpv
NAME                              READY   STATUS    RESTARTS   AGE
pod/controller-7fcf6ddd76-lj8kn   2/3     Running   0          65s
pod/controller-7fcf6ddd76-wxlpz   2/3     Running   0          65s
pod/controller-7fcf6ddd76-zdlr2   2/3     Running   0          65s
pod/node-server-2vv5h             3/4     Running   0          65s
pod/node-server-7bkv6             3/4     Running   0          66s
pod/node-server-nww89             3/4     Running   0          66s
pod/node-server-xbsw4             3/4     Running   0          66s

NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/node-server   4         4         0       4            0           &amp;lt;none&amp;gt;          66s

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/controller   0/3     3            0           66s

NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/controller-7fcf6ddd76   3         3         0       66s&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;After a short wait, the controller pods reach Ready as well.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Next, let's discover and initialize the disks through DirectPV.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# List the drives managed by DirectPV
kubectl directpv info

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl directpv info
┌──────────┬──────────┬───────────┬─────────┬────────┐
│ NODE     │ CAPACITY │ ALLOCATED │ VOLUMES │ DRIVES │
├──────────┼──────────┼───────────┼─────────┼────────┤
│ &amp;bull; k8s-w1 │ -        │ -         │ -       │ -      │
│ &amp;bull; k8s-w2 │ -        │ -         │ -       │ -      │
│ &amp;bull; k8s-w3 │ -        │ -         │ -       │ -      │
│ &amp;bull; k8s-w4 │ -        │ -         │ -       │ -      │
└──────────┴──────────┴───────────┴─────────┴────────┘

0 B/0 B used, 0 volumes, 0 drives

# Run discover
kubectl directpv discover

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl directpv discover

 Discovered node 'k8s-w1' ✔
 Discovered node 'k8s-w2' ✔
 Discovered node 'k8s-w3' ✔
 Discovered node 'k8s-w4' ✔

┌─────────────────────┬────────┬───────┬────────┬────────────┬───────────────────┬───────────┬─────────────┐
│ ID                  │ NODE   │ DRIVE │ SIZE   │ FILESYSTEM │ MAKE              │ AVAILABLE │ DESCRIPTION │
├─────────────────────┼────────┼───────┼────────┼────────────┼───────────────────┼───────────┼─────────────┤
│ 8:16$poCi/BftVMA... │ k8s-w1 │ sdb   │ 10 GiB │ -          │ ATA VBOX_HARDDISK │ YES       │ -           │
│ 8:32$22H3uom5IOY... │ k8s-w1 │ sdc   │ 10 GiB │ -          │ ATA VBOX_HARDDISK │ YES       │ -           │
│ 8:48$u3HIe1sE+p6... │ k8s-w1 │ sdd   │ 10 GiB │ -          │ ATA VBOX_HARDDISK │ YES       │ -           │
│ 8:64$yics0QWKvft... │ k8s-w1 │ sde   │ 10 GiB │ -          │ ATA VBOX_HARDDISK │ YES       │ -           │
│ 8:16$b3pnXr9RpwI... │ k8s-w2 │ sdb   │ 10 GiB │ -          │ ATA VBOX_HARDDISK │ YES       │ -           │
│ 8:32$ymtWBIelp6q... │ k8s-w2 │ sdc   │ 10 GiB │ -          │ ATA VBOX_HARDDISK │ YES       │ -           │
│ 8:48$G2YkrXtl+uz... │ k8s-w2 │ sdd   │ 10 GiB │ -          │ ATA VBOX_HARDDISK │ YES       │ -           │
│ 8:64$hIDV0oBlCCV... │ k8s-w2 │ sde   │ 10 GiB │ -          │ ATA VBOX_HARDDISK │ YES       │ -           │
│ 8:16$EURY5fbb8T8... │ k8s-w3 │ sdb   │ 10 GiB │ -          │ ATA VBOX_HARDDISK │ YES       │ -           │
│ 8:32$n/4+uDq2Gn1... │ k8s-w3 │ sdc   │ 10 GiB │ -          │ ATA VBOX_HARDDISK │ YES       │ -           │
│ 8:48$BTigOfLE531... │ k8s-w3 │ sdd   │ 10 GiB │ -          │ ATA VBOX_HARDDISK │ YES       │ -           │
│ 8:64$XqEdziKUCmD... │ k8s-w3 │ sde   │ 10 GiB │ -          │ ATA VBOX_HARDDISK │ YES       │ -           │
│ 8:16$X83+Qf4i0g2... │ k8s-w4 │ sdb   │ 10 GiB │ -          │ ATA VBOX_HARDDISK │ YES       │ -           │
│ 8:32$FwpJO3yFPEu... │ k8s-w4 │ sdc   │ 10 GiB │ -          │ ATA VBOX_HARDDISK │ YES       │ -           │
│ 8:48$6PCJY7rbVop... │ k8s-w4 │ sdd   │ 10 GiB │ -          │ ATA VBOX_HARDDISK │ YES       │ -           │
│ 8:64$UvZbw4QKXA1... │ k8s-w4 │ sde   │ 10 GiB │ -          │ ATA VBOX_HARDDISK │ YES       │ -           │
└─────────────────────┴────────┴───────┴────────┴────────────┴───────────────────┴───────────┴─────────────┘

Generated 'drives.yaml' successfully.&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Four disks are detected on each node.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Running the discover command generates a &lt;code&gt;drives.yaml&lt;/code&gt; file in the current directory. Passing this file to init initializes the drives.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# (Note) To exclude a drive from init, set select: &quot;no&quot;
cat drives.yaml

(⎈|HomeLab:N/A) root@k8s-ctr:~# cat drives.yaml
version: v1
nodes:
    - name: k8s-w2
      drives:
        - id: 8:32$ymtWBIelp6qh/m1COfQEByjhTh3b3bSAd/UTRh6XRSw=
          name: sdc
          size: 10737418240
          make: ATA VBOX_HARDDISK
          select: &quot;yes&quot;
        - id: 8:48$G2YkrXtl+uz+QqUW3KmxtlmoyhDNVWLRJqtRMh9OW/0=
          name: sdd
          size: 10737418240
          make: ATA VBOX_HARDDISK
          select: &quot;yes&quot;
        - id: 8:64$hIDV0oBlCCV+OpcFmqH1cHBDDxjWfi6JkaGVcbUf4RA=
          name: sde
          size: 10737418240
          make: ATA VBOX_HARDDISK
          select: &quot;yes&quot;
        - id: 8:16$b3pnXr9RpwIz1mbdQC/GCJ5Nrvm4DpwXUAH8hEZIDtw=
          name: sdb
          size: 10737418240
          make: ATA VBOX_HARDDISK
          select: &quot;yes&quot;
    - name: k8s-w3
      drives:
        - id: 8:64$XqEdziKUCmDkPCGbe2khHIpjlxV8eL8W72CDriSf9fw=
          name: sde
          size: 10737418240
          make: ATA VBOX_HARDDISK
          select: &quot;yes&quot;
        - id: 8:32$n/4+uDq2Gn14BIitmPY0yU3fK3Y/bFq1vBGMT0pvf1Y=
          name: sdc
          size: 10737418240
          make: ATA VBOX_HARDDISK
          select: &quot;yes&quot;
        - id: 8:16$EURY5fbb8T8KNvOTNYyEEP73RsfObj+jakjzAMn5cAY=
          name: sdb
          size: 10737418240
          make: ATA VBOX_HARDDISK
          select: &quot;yes&quot;
        - id: 8:48$BTigOfLE531fSrM10Pe6GkIySn3Y16Puiq1dOnMQLfY=
          name: sdd
          size: 10737418240
          make: ATA VBOX_HARDDISK
          select: &quot;yes&quot;
    - name: k8s-w4
      drives:
        - id: 8:64$UvZbw4QKXA1jsfwAdWMUZFW6Z232EXYI/UmWhZh3Oi4=
          name: sde
          size: 10737418240
          make: ATA VBOX_HARDDISK
          select: &quot;yes&quot;
        - id: 8:48$6PCJY7rbVoprbVT8BFr4AyPgKyu4AdyFcHGrGC9AHSk=
          name: sdd
          size: 10737418240
          make: ATA VBOX_HARDDISK
          select: &quot;yes&quot;
        - id: 8:32$FwpJO3yFPEugMOPzApdxFMYCV+nz29NadD4P32+Zt98=
          name: sdc
          size: 10737418240
          make: ATA VBOX_HARDDISK
          select: &quot;yes&quot;
        - id: 8:16$X83+Qf4i0g2z0JW4ZxyR6Km7QPFY9n1vxPJqGITwlhU=
          name: sdb
          size: 10737418240
          make: ATA VBOX_HARDDISK
          select: &quot;yes&quot;
    - name: k8s-w1
      drives:
        - id: 8:16$poCi/BftVMAUIio+XwTBRXyDD/SVcCoeAOaTqA16X8U=
          name: sdb
          size: 10737418240
          make: ATA VBOX_HARDDISK
          select: &quot;yes&quot;
        - id: 8:32$22H3uom5IOYEbYA2m98gPa9UBU5bvi3GwKsC7DZwqfc=
          name: sdc
          size: 10737418240
          make: ATA VBOX_HARDDISK
          select: &quot;yes&quot;
        - id: 8:48$u3HIe1sE+p6ap/9LamRZsq9eFlwaNuJjyQ0CJDvJwxQ=
          name: sdd
          size: 10737418240
          make: ATA VBOX_HARDDISK
          select: &quot;yes&quot;
        - id: 8:64$yics0QWKvftlG6rBfwjzz9T0PtE99s8BQO0DErOb1H0=
          name: sde
          size: 10737418240
          make: ATA VBOX_HARDDISK
          select: &quot;yes&quot;

# Initialize (this erases the data on the drives!)
kubectl directpv init drives.yaml --dangerous

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl directpv init drives.yaml --dangerous

 ███████████████████████████████████████████████████████████████████████████ 100%

 Processed initialization request '026f7148-cce3-4758-9147-9050a41516dd' for node 'k8s-w4' ✔
 Processed initialization request '2c9d4f0e-0511-4867-be8d-082189a96b7b' for node 'k8s-w1' ✔
 Processed initialization request '43965296-bcee-47cf-96a5-67d063fd43d6' for node 'k8s-w2' ✔
 Processed initialization request 'd757d040-b0d7-4ee2-8b80-3b803f840a0a' for node 'k8s-w3' ✔

┌──────────────────────────────────────┬────────┬───────┬─────────┐
│ REQUEST_ID                           │ NODE   │ DRIVE │ MESSAGE │
├──────────────────────────────────────┼────────┼───────┼─────────┤
│ 2c9d4f0e-0511-4867-be8d-082189a96b7b │ k8s-w1 │ sdb   │ Success │
│ 2c9d4f0e-0511-4867-be8d-082189a96b7b │ k8s-w1 │ sdc   │ Success │
│ 2c9d4f0e-0511-4867-be8d-082189a96b7b │ k8s-w1 │ sdd   │ Success │
│ 2c9d4f0e-0511-4867-be8d-082189a96b7b │ k8s-w1 │ sde   │ Success │
│ 43965296-bcee-47cf-96a5-67d063fd43d6 │ k8s-w2 │ sdb   │ Success │
│ 43965296-bcee-47cf-96a5-67d063fd43d6 │ k8s-w2 │ sdc   │ Success │
│ 43965296-bcee-47cf-96a5-67d063fd43d6 │ k8s-w2 │ sdd   │ Success │
│ 43965296-bcee-47cf-96a5-67d063fd43d6 │ k8s-w2 │ sde   │ Success │
│ d757d040-b0d7-4ee2-8b80-3b803f840a0a │ k8s-w3 │ sdb   │ Success │
│ d757d040-b0d7-4ee2-8b80-3b803f840a0a │ k8s-w3 │ sdc   │ Success │
│ d757d040-b0d7-4ee2-8b80-3b803f840a0a │ k8s-w3 │ sdd   │ Success │
│ d757d040-b0d7-4ee2-8b80-3b803f840a0a │ k8s-w3 │ sde   │ Success │
│ 026f7148-cce3-4758-9147-9050a41516dd │ k8s-w4 │ sdb   │ Success │
│ 026f7148-cce3-4758-9147-9050a41516dd │ k8s-w4 │ sdc   │ Success │
│ 026f7148-cce3-4758-9147-9050a41516dd │ k8s-w4 │ sdd   │ Success │
│ 026f7148-cce3-4758-9147-9050a41516dd │ k8s-w4 │ sde   │ Success │

# Check the drives
kubectl directpv list drives

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl directpv list drives
┌────────┬──────┬───────────────────┬────────┬────────┬─────────┬────────┐
│ NODE   │ NAME │ MAKE              │ SIZE   │ FREE   │ VOLUMES │ STATUS │
├────────┼──────┼───────────────────┼────────┼────────┼─────────┼────────┤
│ k8s-w1 │ sdb  │ ATA VBOX_HARDDISK │ 10 GiB │ 10 GiB │ -       │ Ready  │
│ k8s-w1 │ sdc  │ ATA VBOX_HARDDISK │ 10 GiB │ 10 GiB │ -       │ Ready  │
│ k8s-w1 │ sdd  │ ATA VBOX_HARDDISK │ 10 GiB │ 10 GiB │ -       │ Ready  │
│ k8s-w1 │ sde  │ ATA VBOX_HARDDISK │ 10 GiB │ 10 GiB │ -       │ Ready  │
│ k8s-w2 │ sdb  │ ATA VBOX_HARDDISK │ 10 GiB │ 10 GiB │ -       │ Ready  │
│ k8s-w2 │ sdc  │ ATA VBOX_HARDDISK │ 10 GiB │ 10 GiB │ -       │ Ready  │
│ k8s-w2 │ sdd  │ ATA VBOX_HARDDISK │ 10 GiB │ 10 GiB │ -       │ Ready  │
│ k8s-w2 │ sde  │ ATA VBOX_HARDDISK │ 10 GiB │ 10 GiB │ -       │ Ready  │
│ k8s-w3 │ sdb  │ ATA VBOX_HARDDISK │ 10 GiB │ 10 GiB │ -       │ Ready  │
│ k8s-w3 │ sdc  │ ATA VBOX_HARDDISK │ 10 GiB │ 10 GiB │ -       │ Ready  │
│ k8s-w3 │ sdd  │ ATA VBOX_HARDDISK │ 10 GiB │ 10 GiB │ -       │ Ready  │
│ k8s-w3 │ sde  │ ATA VBOX_HARDDISK │ 10 GiB │ 10 GiB │ -       │ Ready  │
│ k8s-w4 │ sdb  │ ATA VBOX_HARDDISK │ 10 GiB │ 10 GiB │ -       │ Ready  │
│ k8s-w4 │ sdc  │ ATA VBOX_HARDDISK │ 10 GiB │ 10 GiB │ -       │ Ready  │
│ k8s-w4 │ sdd  │ ATA VBOX_HARDDISK │ 10 GiB │ 10 GiB │ -       │ Ready  │
│ k8s-w4 │ sde  │ ATA VBOX_HARDDISK │ 10 GiB │ 10 GiB │ -       │ Ready  │
└────────┴──────┴───────────────────┴────────┴────────┴─────────┴────────┘

# 4 drives per node are recognized, 16 in total
kubectl directpv info

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl directpv info
┌──────────┬──────────┬───────────┬─────────┬────────┐
│ NODE     │ CAPACITY │ ALLOCATED │ VOLUMES │ DRIVES │
├──────────┼──────────┼───────────┼─────────┼────────┤
│ &amp;bull; k8s-w1 │ 40 GiB   │ 0 B       │ 0       │ 4      │
│ &amp;bull; k8s-w2 │ 40 GiB   │ 0 B       │ 0       │ 4      │
│ &amp;bull; k8s-w3 │ 40 GiB   │ 0 B       │ 0       │ 4      │
│ &amp;bull; k8s-w4 │ 40 GiB   │ 0 B       │ 0       │ 4      │
└──────────┴──────────┴───────────┴─────────┴────────┘

0 B/160 GiB used, 0 volumes, 16 drives

# Verify
lsblk

# Before init
root@k8s-w1:~# lsblk
NAME                      MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda                         8:0    0  64G  0 disk
├─sda1                      8:1    0   1M  0 part
├─sda2                      8:2    0   2G  0 part /boot
└─sda3                      8:3    0  62G  0 part
  └─ubuntu--vg-ubuntu--lv 252:0    0  31G  0 lvm  /
sdb                         8:16   0  10G  0 disk
sdc                         8:32   0  10G  0 disk
sdd                         8:48   0  10G  0 disk
sde                         8:64   0  10G  0 disk

# After init
root@k8s-w1:~# lsblk
NAME                      MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
loop0                       7:0    0  16M  0 loop
sda                         8:0    0  64G  0 disk
├─sda1                      8:1    0   1M  0 part
├─sda2                      8:2    0   2G  0 part /boot
└─sda3                      8:3    0  62G  0 part
  └─ubuntu--vg-ubuntu--lv 252:0    0  31G  0 lvm  /
sdb                         8:16   0  10G  0 disk /var/lib/directpv/mnt/b3bf1d12-430f-413d-8d6d-4f2300ac7e2d
sdc                         8:32   0  10G  0 disk /var/lib/directpv/mnt/83242820-7ec4-4018-95ee-33d6e477c9b1
sdd                         8:48   0  10G  0 disk /var/lib/directpv/mnt/ad5cdb87-4dde-4ddb-ae6f-54d2b93303e7
sde                         8:64   0  10G  0 disk /var/lib/directpv/mnt/1ec669cd-106d-42eb-9a75-c74acace67d6

# The disks are formatted as xfs and mounted
df -hT --type xfs

root@k8s-w1:~# df -hT --type xfs
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/sdd       xfs    10G  104M  9.9G   2% /var/lib/directpv/mnt/ad5cdb87-4dde-4ddb-ae6f-54d2b93303e7
/dev/sde       xfs    10G  104M  9.9G   2% /var/lib/directpv/mnt/1ec669cd-106d-42eb-9a75-c74acace67d6
/dev/sdb       xfs    10G  104M  9.9G   2% /var/lib/directpv/mnt/b3bf1d12-430f-413d-8d6d-4f2300ac7e2d
/dev/sdc       xfs    10G  104M  9.9G   2% /var/lib/directpv/mnt/83242820-7ec4-4018-95ee-33d6e477c9b1


tree -h /var/lib/directpv/

root@k8s-w1:~# tree -h /var/lib/directpv/
[4.0K]  /var/lib/directpv/
├── [4.0K]  mnt
│   ├── [  75]  1ec669cd-106d-42eb-9a75-c74acace67d6
│   ├── [  75]  83242820-7ec4-4018-95ee-33d6e477c9b1
│   ├── [  75]  ad5cdb87-4dde-4ddb-ae6f-54d2b93303e7
│   └── [  75]  b3bf1d12-430f-413d-8d6d-4f2300ac7e2d
└── [  40]  tmp

7 directories, 0 files

# Each drive is registered as a directpvdrives resource
kubectl get directpvdrives.directpv.min.io -o yaml | yq

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get directpvdrives.directpv.min.io
NAME                                   AGE
11679efe-44ef-4849-8755-136085abe018   3m8s
1ec669cd-106d-42eb-9a75-c74acace67d6   3m8s
254b8fc1-9159-471e-9df5-8b7467149ac4   3m8s
2f708625-91a1-4751-94f1-4560f494afc8   3m8s
3b3ef65c-42f6-4b50-a44a-44ffc28cbbac   3m8s
696c0595-1854-459d-a49d-704dc8141389   3m8s
7f34ec30-ad93-4506-a0ca-657163eb5fc3   3m7s
83242820-7ec4-4018-95ee-33d6e477c9b1   3m8s
ad5cdb87-4dde-4ddb-ae6f-54d2b93303e7   3m8s
b305dca2-ef12-4a61-955a-9ea12db10740   3m8s
b3bf1d12-430f-413d-8d6d-4f2300ac7e2d   3m8s
cb4ac76d-e415-440d-817d-83a0c095e249   3m8s
cba257bb-7576-4fb2-8c1d-1b7200e6fe03   3m8s
d1e0638b-b1bc-4276-a3a9-71ae8366f11b   3m8s
e9e46095-447a-4e09-afa0-817a75e36893   3m7s
f9474487-102f-4435-97a2-5a0a50fa98e8   3m8s&lt;/code&gt;&lt;/pre&gt;
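As the select: &quot;no&quot; comment above suggests, individual drives can be excluded before running init by editing drives.yaml. A minimal sketch of that edit, run against a fabricated two-drive sample rather than the real file (the sed pattern is illustrative only; adapt it to target the intended entry):

```shell
# Illustrative only: flip select to "no" for drive sdb in a sample file,
# so that a later "kubectl directpv init" would skip that drive.
cat > /tmp/drives-sample.yaml <<'EOF'
- name: sdb
  select: "yes"
- name: sdc
  select: "yes"
EOF
# Change the select line that follows the sdb entry (GNU sed syntax).
sed -i '/name: sdb/,/select:/ s/select: "yes"/select: "no"/' /tmp/drives-sample.yaml
grep -A1 'name: sdb' /tmp/drives-sample.yaml
```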
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now that DirectPV recognizes the local volumes on each node, let's install MinIO.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;3. Deploying MinIO MNMD&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now let's install MinIO and continue the lab.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Add the helm repo
helm repo add minio-operator https://operator.min.io

# https://github.com/minio/operator/blob/master/helm/operator/values.yaml
cat &amp;lt;&amp;lt; EOF &amp;gt; minio-operator-values.yaml
operator:  
  replicaCount: 1
EOF
helm install --namespace minio-operator --create-namespace minio-operator minio-operator/operator --values minio-operator-values.yaml


(⎈|HomeLab:N/A) root@k8s-ctr:~# helm install --namespace minio-operator --create-namespace minio-operator minio-operator/operator --values minio-operator-values.yaml
NAME: minio-operator
LAST DEPLOYED: Wed Sep 24 00:07:01 2025
NAMESPACE: minio-operator
STATUS: deployed
REVISION: 1
TEST SUITE: None


# Verify
kubectl get all -n minio-operator
kubectl get pod,svc,ep -n minio-operator
kubectl get crd

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get all -n minio-operator
NAME                                  READY   STATUS    RESTARTS   AGE
pod/minio-operator-75946dc4db-qz6j6   1/1     Running   0          14s

NAME               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/operator   ClusterIP   10.96.80.111   &amp;lt;none&amp;gt;        4221/TCP   16s
service/sts        ClusterIP   10.96.8.120    &amp;lt;none&amp;gt;        4223/TCP   16s

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/minio-operator   1/1     1            1           15s

NAME                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/minio-operator-75946dc4db   1         1         1       15s
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod,svc,ep -n minio-operator
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME                                  READY   STATUS    RESTARTS   AGE
pod/minio-operator-75946dc4db-qz6j6   1/1     Running   0          29s

NAME               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/operator   ClusterIP   10.96.80.111   &amp;lt;none&amp;gt;        4221/TCP   31s
service/sts        ClusterIP   10.96.8.120    &amp;lt;none&amp;gt;        4223/TCP   31s

NAME                 ENDPOINTS           AGE
endpoints/operator   172.20.3.244:4221   30s
endpoints/sts        172.20.3.244:4223   30s
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get crd
NAME                                         CREATED AT
ciliumcidrgroups.cilium.io                   2025-09-23T13:57:26Z
ciliumclusterwidenetworkpolicies.cilium.io   2025-09-23T13:57:29Z
ciliumendpoints.cilium.io                    2025-09-23T13:57:26Z
ciliumexternalworkloads.cilium.io            2025-09-23T13:57:26Z
ciliumidentities.cilium.io                   2025-09-23T13:57:26Z
ciliuml2announcementpolicies.cilium.io       2025-09-23T13:57:26Z
ciliumloadbalancerippools.cilium.io          2025-09-23T13:57:26Z
ciliumnetworkpolicies.cilium.io              2025-09-23T13:57:29Z
ciliumnodeconfigs.cilium.io                  2025-09-23T13:57:26Z
ciliumnodes.cilium.io                        2025-09-23T13:57:27Z
ciliumpodippools.cilium.io                   2025-09-23T13:57:27Z
directpvdrives.directpv.min.io               2025-09-23T14:58:17Z
directpvinitrequests.directpv.min.io         2025-09-23T14:58:17Z
directpvnodes.directpv.min.io                2025-09-23T14:58:17Z
directpvvolumes.directpv.min.io              2025-09-23T14:58:17Z
policybindings.sts.min.io                    2025-09-23T15:07:03Z
tenants.minio.min.io                         2025-09-23T15:07:03Z&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;With the MinIO Operator installed, let's move on to deploying a tenant.&lt;/p&gt;
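&lt;p data-ke-size=&quot;size16&quot;&gt;The tenant chart is served from the same Helm repository as the operator. Below is a minimal sketch in case the repository has not been added yet; the alias minio-operator matches the chart reference used in the install commands later in this post.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# add the MinIO operator/tenant chart repository
helm repo add minio-operator https://operator.min.io
helm repo update&lt;/code&gt;&lt;/pre&gt;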
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# tenant values : https://github.com/minio/operator/blob/master/helm/tenant/values.yaml
cat &amp;lt;&amp;lt; EOF &amp;gt; minio-tenant-1-values.yaml
tenant:
  name: tenant1

  configSecret:
    name: tenant1-env-configuration
    accessKey: minio
    secretKey: minio123

  pools:
    - servers: 4
      name: pool-0
      volumesPerServer: 4
      size: 10Gi
      storageClassName: directpv-min-io # use DirectPV as the StorageClass
  env:
    - name: MINIO_STORAGE_CLASS_STANDARD
      value: &quot;EC:1&quot;

  metrics:
    enabled: true
    port: 9000
    protocol: http
EOF

&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The following documents an issue I ran into while creating the tenant, kept here for reference.&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Reference: tenant creation failure&lt;/h3&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;helm install --namespace tenant1 --create-namespace --values minio-tenant-1-values.yaml tenant1 minio-operator/tenant \
 &amp;amp;&amp;amp; kubectl get tenants -A -w

(⎈|HomeLab:N/A) root@k8s-ctr:~# helm install --namespace tenant1 --create-namespace --values minio-tenant-1-values.yaml tenant1 minio-operator/tenant \
 &amp;amp;&amp;amp; kubectl get tenants -A -w
NAME: tenant1
LAST DEPLOYED: Wed Sep 24 00:12:40 2025
NAMESPACE: tenant1
STATUS: deployed
REVISION: 1
TEST SUITE: None
NAMESPACE   NAME      STATE   HEALTH   AGE
tenant1     tenant1                    0s
tenant1     tenant1                    5s
tenant1     tenant1                    5s
tenant1     tenant1   Waiting for MinIO TLS Certificate            5s
tenant1     tenant1   Provisioning MinIO Cluster IP Service            15s
tenant1     tenant1   Provisioning Console Service                     15s
tenant1     tenant1   Provisioning MinIO Headless Service              16s
tenant1     tenant1   Provisioning MinIO Headless Service              17s
tenant1     tenant1   Provisioning MinIO Statefulset                   17s
tenant1     tenant1   Provisioning MinIO Statefulset                   17s
tenant1     tenant1   Provisioning MinIO Statefulset                   17s
tenant1     tenant1   Waiting for Tenant to be healthy                 17s&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Checking the state, every MinIO server pod is Pending. Describing a pod shows it was never scheduled because the &lt;code&gt;VolumeBinding&lt;/code&gt; plugin could not bind its volumes.&lt;/p&gt;
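&lt;p data-ke-size=&quot;size16&quot;&gt;A quick way to surface the underlying scheduling and provisioning errors, before describing each object individually, is to filter the namespace events (a sketch; the matched reasons come from the scheduler and the external provisioner):&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# show recent events related to scheduling / volume provisioning
kubectl get events -n tenant1 --sort-by=.lastTimestamp | grep -Ei 'provision|schedul'&lt;/code&gt;&lt;/pre&gt;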
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get all -n tenant1
NAME                   READY   STATUS    RESTARTS   AGE
pod/tenant1-pool-0-0   0/2     Pending   0          6m28s
pod/tenant1-pool-0-1   0/2     Pending   0          6m27s
pod/tenant1-pool-0-2   0/2     Pending   0          6m27s
pod/tenant1-pool-0-3   0/2     Pending   0          6m26s

NAME                      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/minio             ClusterIP   10.96.139.12   &amp;lt;none&amp;gt;        443/TCP    6m31s
service/tenant1-console   ClusterIP   10.96.15.45    &amp;lt;none&amp;gt;        9443/TCP   6m30s
service/tenant1-hl        ClusterIP   None           &amp;lt;none&amp;gt;        9000/TCP   6m30s

NAME                              READY   AGE
statefulset.apps/tenant1-pool-0   0/4     6m29s
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl describe po -n tenant1 tenant1-pool-0-0
Name:             tenant1-pool-0-0
Namespace:        tenant1
Priority:         0
Service Account:  tenant1-sa
Node:             &amp;lt;none&amp;gt;
Labels:           apps.kubernetes.io/pod-index=0
                  controller-revision-hash=tenant1-pool-0-b5b7b8c97
                  statefulset.kubernetes.io/pod-name=tenant1-pool-0-0
                  v1.min.io/console=tenant1-console
                  v1.min.io/pool=pool-0
                  v1.min.io/tenant=tenant1
Annotations:      min.io/revision: 0
Status:           Pending
IP:
IPs:              &amp;lt;none&amp;gt;
Controlled By:    StatefulSet/tenant1-pool-0
Init Containers:
  validate-arguments:
    Image:      quay.io/minio/operator-sidecar:v7.0.1
    Port:       &amp;lt;none&amp;gt;
    Host Port:  &amp;lt;none&amp;gt;
    Args:
      validate
      --tenant
      tenant1
    Environment:
      CLUSTER_DOMAIN:  cluster.local
    Mounts:
      /tmp/minio-config from configuration (rw)
      /tmp/minio/ from cfg-vol (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pmj9n (ro)
Containers:
  minio:
    Image:       quay.io/minio/minio:RELEASE.2025-04-08T15-41-24Z
    Ports:       9000/TCP, 9443/TCP
    Host Ports:  0/TCP, 0/TCP
    Args:
      server
      --certs-dir
      /tmp/certs
      --console-address
      :9443
    Environment:
      MINIO_CONFIG_ENV_FILE:  /tmp/minio/config.env
    Mounts:
      /export0 from data0 (rw)
      /export1 from data1 (rw)
      /export2 from data2 (rw)
      /export3 from data3 (rw)
      /tmp/certs from tenant1-tls (rw)
      /tmp/minio/ from cfg-vol (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pmj9n (ro)
  sidecar:
    Image:      quay.io/minio/operator-sidecar:v7.0.1
    Port:       &amp;lt;none&amp;gt;
    Host Port:  &amp;lt;none&amp;gt;
    Args:
      sidecar
      --tenant
      tenant1
      --config-name
      tenant1-env-configuration
    Readiness:  http-get http://:4444/ready delay=5s timeout=1s period=1s #success=1 #failure=1
    Environment:
      CLUSTER_DOMAIN:  cluster.local
    Mounts:
      /tmp/minio-config from configuration (rw)
      /tmp/minio/ from cfg-vol (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pmj9n (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  data3:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data3-tenant1-pool-0-0
    ReadOnly:   false
  data0:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data0-tenant1-pool-0-0
    ReadOnly:   false
  data1:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data1-tenant1-pool-0-0
    ReadOnly:   false
  data2:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data2-tenant1-pool-0-0
    ReadOnly:   false
  cfg-vol:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  &amp;lt;unset&amp;gt;
  tenant1-tls:
    Type:        Projected (a volume that contains injected data from multiple sources)
    SecretName:  tenant1-tls
    Optional:    false
  configuration:
    Type:        Projected (a volume that contains injected data from multiple sources)
    SecretName:  tenant1-env-configuration
    Optional:    false
  kube-api-access-pmj9n:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              &amp;lt;none&amp;gt;
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                    From               Message
  ----     ------            ----                   ----               -------
  Warning  FailedScheduling  6m43s                  default-scheduler  running PreBind plugin &quot;VolumeBinding&quot;: binding volumes: provisioning failed for PVC &quot;data3-tenant1-pool-0-0&quot;
  Warning  FailedScheduling  6m36s (x2 over 6m40s)  default-scheduler  running PreBind plugin &quot;VolumeBinding&quot;: binding volumes: provisioning failed for PVC &quot;data3-tenant1-pool-0-0&quot;

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pvc -n tenant1
NAME                     STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS      VOLUMEATTRIBUTESCLASS   AGE
data0-tenant1-pool-0-0   Pending                                      directpv-min-io   &amp;lt;unset&amp;gt;                 6m57s
data0-tenant1-pool-0-1   Pending                                      directpv-min-io   &amp;lt;unset&amp;gt;                 6m57s
data0-tenant1-pool-0-2   Pending                                      directpv-min-io   &amp;lt;unset&amp;gt;                 6m57s
data0-tenant1-pool-0-3   Pending                                      directpv-min-io   &amp;lt;unset&amp;gt;                 6m56s
data1-tenant1-pool-0-0   Pending                                      directpv-min-io   &amp;lt;unset&amp;gt;                 6m57s
data1-tenant1-pool-0-1   Pending                                      directpv-min-io   &amp;lt;unset&amp;gt;                 6m57s
data1-tenant1-pool-0-2   Pending                                      directpv-min-io   &amp;lt;unset&amp;gt;                 6m57s
data1-tenant1-pool-0-3   Pending                                      directpv-min-io   &amp;lt;unset&amp;gt;                 6m56s
data2-tenant1-pool-0-0   Pending                                      directpv-min-io   &amp;lt;unset&amp;gt;                 6m58s
data2-tenant1-pool-0-1   Pending                                      directpv-min-io   &amp;lt;unset&amp;gt;                 6m57s
data2-tenant1-pool-0-2   Pending                                      directpv-min-io   &amp;lt;unset&amp;gt;                 6m57s
data2-tenant1-pool-0-3   Pending                                      directpv-min-io   &amp;lt;unset&amp;gt;                 6m55s
data3-tenant1-pool-0-0   Pending                                      directpv-min-io   &amp;lt;unset&amp;gt;                 6m57s
data3-tenant1-pool-0-1   Pending                                      directpv-min-io   &amp;lt;unset&amp;gt;                 6m57s
data3-tenant1-pool-0-2   Pending                                      directpv-min-io   &amp;lt;unset&amp;gt;                 6m57s
data3-tenant1-pool-0-3   Pending                                      directpv-min-io   &amp;lt;unset&amp;gt;                 6m56s
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl describe pvc -n tenant1 data0-tenant1-pool-0-0
Name:          data0-tenant1-pool-0-0
Namespace:     tenant1
StorageClass:  directpv-min-io
Status:        Pending
Volume:
Labels:        v1.min.io/console=tenant1-console
               v1.min.io/pool=pool-0
               v1.min.io/tenant=tenant1
Annotations:   volume.beta.kubernetes.io/storage-provisioner: directpv-min-io
               volume.kubernetes.io/storage-provisioner: directpv-min-io
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       tenant1-pool-0-0
Events:
  Type     Reason                Age                     From                                                                              Message
  ----     ------                ----                    ----                                                                              -------
  Normal   WaitForFirstConsumer  7m11s                   persistentvolume-controller                                                       waiting for first consumer to be created before binding
  Warning  ProvisioningFailed    7m10s                   directpv-min-io_controller-7fcf6ddd76-zdlr2_fb2d2b49-6e1b-466b-9da6-bf74b5151674  failed to provision volume with StorageClass &quot;directpv-min-io&quot;: rpc error: code = ResourceExhausted desc = no drive found for requested topology; requested node(s): k8s-w4; requested size: 10737418240 bytes
  Normal   ExternalProvisioning  5m59s (x11 over 7m11s)  persistentvolume-controller                                                       Waiting for a volume to be created either by the external provisioner 'directpv-min-io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
  Warning  ProvisioningFailed    5m27s (x11 over 7m14s)  directpv-min-io_controller-7fcf6ddd76-zdlr2_fb2d2b49-6e1b-466b-9da6-bf74b5151674  failed to provision volume with StorageClass &quot;directpv-min-io&quot;: rpc error: code = ResourceExhausted desc = no drive found for requested topology; requested node(s): k8s-w3; requested size: 10737418240 bytes
  Normal   WaitForPodScheduled   2m11s (x44 over 7m10s)  persistentvolume-controller                                                       waiting for pod tenant1-pool-0-0 to be scheduled
  Normal   Provisioning          2m3s (x29 over 7m14s)   directpv-min-io_controller-7fcf6ddd76-zdlr2_fb2d2b49-6e1b-466b-9da6-bf74b5151674  External provisioner is provisioning volume for claim &quot;tenant1/data0-tenant1-pool-0-0&quot;&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The PVCs are Pending, and their events show &lt;code&gt;ResourceExhausted&lt;/code&gt;. Each node has four 10240 MB local disks attached, but filesystem formatting overhead apparently leaves slightly less than 10 GiB usable per drive, so the 10Gi requests cannot be satisfied.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's recreate the tenant with a slightly smaller size. Note that when a tenant install fails, you must uninstall the tenant with helm and then delete all of its PVCs as well.&lt;/p&gt;
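&lt;p data-ke-size=&quot;size16&quot;&gt;A minimal cleanup sketch for the failed install (assuming the release and namespace names used in this post; helm uninstall does not remove PVCs created through the StatefulSet volumeClaimTemplates):&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# remove the failed tenant release
helm uninstall tenant1 --namespace tenant1

# the PVCs survive the uninstall and must be deleted explicitly,
# otherwise the retry reuses the old Pending claims
kubectl delete pvc -n tenant1 --all&lt;/code&gt;&lt;/pre&gt;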
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Since this is a test environment, changing the size to 5Gi and rerunning the install shows that initialization now completes normally.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;cat &amp;lt;&amp;lt; EOF &amp;gt; minio-tenant-1-values.yaml
tenant:
  name: tenant1

  configSecret:
    name: tenant1-env-configuration
    accessKey: minio
    secretKey: minio123

  pools:
    - servers: 4
      name: pool-0
      volumesPerServer: 4
      size: 5Gi
      storageClassName: directpv-min-io # use DirectPV as the StorageClass
  env:
    - name: MINIO_STORAGE_CLASS_STANDARD
      value: &quot;EC:1&quot;

  metrics:
    enabled: true
    port: 9000
    protocol: http
EOF


(⎈|HomeLab:N/A) root@k8s-ctr:~# helm install --namespace tenant1 --create-namespace --values minio-tenant-1-values.yaml tenant1 minio-operator/tenant  &amp;amp;&amp;amp; kubectl get tenants -A -w
NAME: tenant1
LAST DEPLOYED: Wed Sep 24 00:30:37 2025
NAMESPACE: tenant1
STATUS: deployed
REVISION: 1
TEST SUITE: None
NAMESPACE   NAME      STATE   HEALTH   AGE
tenant1     tenant1                    1s
tenant1     tenant1                    5s
tenant1     tenant1                    5s
tenant1     tenant1   Waiting for MinIO TLS Certificate            10s
tenant1     tenant1   Provisioning MinIO Cluster IP Service            16s
tenant1     tenant1   Provisioning Console Service                     17s
tenant1     tenant1   Provisioning MinIO Headless Service              17s
tenant1     tenant1   Provisioning MinIO Headless Service              18s
tenant1     tenant1   Provisioning MinIO Statefulset                   18s
tenant1     tenant1   Provisioning MinIO Statefulset                   19s
tenant1     tenant1   Provisioning MinIO Statefulset                   19s
tenant1     tenant1   Waiting for Tenant to be healthy                 19s
tenant1     tenant1   Waiting for Tenant to be healthy        red      65s
tenant1     tenant1   Waiting for Tenant to be healthy        green    67s
tenant1     tenant1   Waiting for Tenant to be healthy        green    68s
tenant1     tenant1   Waiting for Tenant to be healthy        green    69s
tenant1     tenant1   Initialized                             green    69s
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now that the tenant is configured, let's check its state.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# 확인
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get all -n tenant1
NAME                   READY   STATUS    RESTARTS   AGE
pod/tenant1-pool-0-0   2/2     Running   0          20h
pod/tenant1-pool-0-1   2/2     Running   0          20h
pod/tenant1-pool-0-2   2/2     Running   0          20h
pod/tenant1-pool-0-3   2/2     Running   0          20h

NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/minio             NodePort    10.96.130.108   &amp;lt;none&amp;gt;        443:30002/TCP    20h
service/tenant1-console   NodePort    10.96.116.65    &amp;lt;none&amp;gt;        9443:30001/TCP   20h
service/tenant1-hl        ClusterIP   None            &amp;lt;none&amp;gt;        9000/TCP         20h

NAME                              READY   AGE
statefulset.apps/tenant1-pool-0   4/4     20h

(⎈|default:N/A) root@k3s-s:~# kubectl describe tenants -n tenant1
Name:         tenant1
Namespace:    tenant1
Labels:       app=minio
              app.kubernetes.io/managed-by=Helm
Annotations:  meta.helm.sh/release-name: tenant1
              meta.helm.sh/release-namespace: tenant1
              prometheus.io/path: /minio/v2/metrics/cluster
              prometheus.io/port: 9000
              prometheus.io/scheme: http
              prometheus.io/scrape: true
API Version:  minio.min.io/v2
Kind:         Tenant
Metadata:
  Creation Timestamp:  2025-09-17T14:56:25Z
  Generation:          1
  Resource Version:    6390
  UID:                 12a0ce88-64ad-4212-bcfb-63ca4269b203
Spec:
  Configuration:
    Name:  tenant1-env-configuration
  Env:
    Name:   MINIO_STORAGE_CLASS_STANDARD
    Value:  EC:1
  Features:
    Bucket DNS:           false
    Enable SFTP:          false
  Image:                  quay.io/minio/minio:RELEASE.2025-04-08T15-41-24Z
  Image Pull Policy:      IfNotPresent
  Mount Path:             /export
  Pod Management Policy:  Parallel
  Pools:
    Name:     pool-0
    Servers:  1
    Volume Claim Template:
      Metadata:
        Name:  data
      Spec:
        Access Modes:
          ReadWriteOnce
        Resources:
          Requests:
            Storage:         10Gi
        Storage Class Name:  directpv-min-io
    Volumes Per Server:      4
  Pools Metadata:
    Annotations:
    Labels:
  Prometheus Operator:  false
  Request Auto Cert:    true
  Sub Path:             /data
Status:
  Available Replicas:  1
  Certificates:
    Auto Cert Enabled:  true
    Custom Certificates:
  Current State:  Initialized
  Drives Online:  4
  Health Status:  green
  Pools:
    Legacy Security Context:  false
    Ss Name:                  tenant1-pool-0
    State:                    PoolInitialized
  Revision:                   0
  Sync Version:               v6.0.0
  Usage:
    Capacity:      32212193280
    Raw Capacity:  42949591040
    Raw Usage:     81920
    Usage:         61440
  Write Quorum:    3
Events:
  Type     Reason                 Age                  From            Message
  ----     ------                 ----                 ----            -------
  Normal   CSRCreated             2m20s                minio-operator  MinIO CSR Created
  Normal   SvcCreated             2m9s                 minio-operator  MinIO Service Created
  Normal   SvcCreated             2m9s                 minio-operator  Console Service Created
  Normal   SvcCreated             2m9s                 minio-operator  Headless Service created
  Normal   PoolCreated            2m9s                 minio-operator  Tenant pool pool-0 created
  Normal   Updated                2m4s                 minio-operator  Headless Service Updated
  Warning  WaitingMinIOIsHealthy  114s (x4 over 2m8s)  minio-operator  Waiting for MinIO to be ready&lt;/code&gt;&lt;/pre&gt;
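&lt;p data-ke-size=&quot;size16&quot;&gt;The Usage figures in the Tenant status above can be cross-checked with a quick calculation. With EC:1, one of the four shards in each stripe is parity, so roughly 3/4 of the raw pool capacity is usable. The sketch below is a simplification under that assumption; the values MinIO actually reports are slightly smaller because of filesystem overhead.&lt;/p&gt;
&lt;pre class=&quot;python&quot;&gt;&lt;code&gt;GIB = 1024 ** 3

def usable_capacity(drives: int, drive_gib: int, parity: int) -&amp;gt; int:
    # usable bytes = data shards / total shards of the raw pool size
    raw = drives * drive_gib * GIB
    return raw * (drives - parity) // drives

print(4 * 10 * GIB)                      # 42949672960, close to Raw Capacity above
print(usable_capacity(4, 10, parity=1))  # 32212254720, close to Capacity above&lt;/code&gt;&lt;/pre&gt;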
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The tenant above gave the pool four volumes per server (volumesPerServer: 4) and a storage size of 10Gi (size: 10Gi). Let's see how the drive and volume information changed as a result.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# 확인
lsblk
kubectl directpv info
kubectl directpv list drives
kubectl directpv list volumes

(⎈|default:N/A) root@k3s-s:~# lsblk
NAME         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
loop0          7:0    0 73.9M  1 loop /snap/core22/2111
loop1          7:1    0 27.6M  1 loop /snap/amazon-ssm-agent/11797
loop2          7:2    0 50.8M  1 loop /snap/snapd/25202
loop3          7:3    0   16M  0 loop
nvme1n1      259:0    0   30G  0 disk /var/lib/kubelet/pods/72ee3a1a-4a61-44f0-94b6-a7cc861eb829/volumes/kubernetes.io~csi/pvc-9eba11d7-3331-423c-91d7-2a8f45a08ce3/mount
                                      /var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/f94049e38beb31a7b9cf88a9d48e54c8af90509d141e70ff851eb8cdf87b09f2/globalmount
                                      /var/lib/directpv/mnt/ffd730c8-c056-454a-830f-208b9529104c
nvme0n1      259:1    0   30G  0 disk
├─nvme0n1p1  259:5    0   29G  0 part /
├─nvme0n1p14 259:6    0    4M  0 part
├─nvme0n1p15 259:7    0  106M  0 part /boot/efi
└─nvme0n1p16 259:8    0  913M  0 part /boot
nvme4n1      259:2    0   30G  0 disk /var/lib/kubelet/pods/72ee3a1a-4a61-44f0-94b6-a7cc861eb829/volumes/kubernetes.io~csi/pvc-88ff8de1-0702-4783-9a24-f63af88dda30/mount
                                      /var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/20cd114efbb71cad4c72f66f980b71335e29a50b57ad159a6c18566c3d01eaf9/globalmount
                                      /var/lib/directpv/mnt/7f010ba0-6e36-4bac-8734-8101f5fc86cd
nvme3n1      259:3    0   30G  0 disk /var/lib/kubelet/pods/72ee3a1a-4a61-44f0-94b6-a7cc861eb829/volumes/kubernetes.io~csi/pvc-08bdde0c-b472-4dfe-8b95-09e59e6aa4d8/mount
                                      /var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/3f4d3fabd87e625fc0d887fdf2f9c90a2743b72354a7de4a6ab53ac502d291c6/globalmount
                                      /var/lib/directpv/mnt/ff9fbf17-a2ca-475a-83c3-88b9c4c77140
nvme2n1      259:4    0   30G  0 disk /var/lib/kubelet/pods/72ee3a1a-4a61-44f0-94b6-a7cc861eb829/volumes/kubernetes.io~csi/pvc-e846556e-da9f-4670-8c69-7479a723af37/mount
                                      /var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/28f2fa689cc75aff33f7429c65d5912fb23dfa3394a23dbc6ff22fbaacc112e4/globalmount
                                      /var/lib/directpv/mnt/d29e80c7-dc3b-4a48-9a81-82352886d63f
(⎈|default:N/A) root@k3s-s:~# kubectl directpv info
┌─────────┬──────────┬───────────┬─────────┬────────┐
│ NODE    │ CAPACITY │ ALLOCATED │ VOLUMES │ DRIVES │
├─────────┼──────────┼───────────┼─────────┼────────┤
│ &amp;bull; k3s-s │ 120 GiB  │ 40 GiB    │ 4       │ 4      │
└─────────┴──────────┴───────────┴─────────┴────────┘

40 GiB/120 GiB used, 4 volumes, 4 drives
(⎈|default:N/A) root@k3s-s:~# kubectl directpv list drives
┌───────┬─────────┬────────────────────────────┬────────┬────────┬─────────┬────────┐
│ NODE  │ NAME    │ MAKE                       │ SIZE   │ FREE   │ VOLUMES │ STATUS │
├───────┼─────────┼────────────────────────────┼────────┼────────┼─────────┼────────┤
│ k3s-s │ nvme1n1 │ Amazon Elastic Block Store │ 30 GiB │ 20 GiB │ 1       │ Ready  │
│ k3s-s │ nvme2n1 │ Amazon Elastic Block Store │ 30 GiB │ 20 GiB │ 1       │ Ready  │
│ k3s-s │ nvme3n1 │ Amazon Elastic Block Store │ 30 GiB │ 20 GiB │ 1       │ Ready  │
│ k3s-s │ nvme4n1 │ Amazon Elastic Block Store │ 30 GiB │ 20 GiB │ 1       │ Ready  │
└───────┴─────────┴────────────────────────────┴────────┴────────┴─────────┴────────┘
(⎈|default:N/A) root@k3s-s:~# kubectl directpv list volumes
┌──────────────────────────────────────────┬──────────┬───────┬─────────┬──────────────────┬──────────────┬─────────┐
│ VOLUME                                   │ CAPACITY │ NODE  │ DRIVE   │ PODNAME          │ PODNAMESPACE │ STATUS  │
├──────────────────────────────────────────┼──────────┼───────┼─────────┼──────────────────┼──────────────┼─────────┤
│ pvc-9eba11d7-3331-423c-91d7-2a8f45a08ce3 │ 10 GiB   │ k3s-s │ nvme1n1 │ tenant1-pool-0-0 │ tenant1      │ Bounded │
│ pvc-e846556e-da9f-4670-8c69-7479a723af37 │ 10 GiB   │ k3s-s │ nvme2n1 │ tenant1-pool-0-0 │ tenant1      │ Bounded │
│ pvc-08bdde0c-b472-4dfe-8b95-09e59e6aa4d8 │ 10 GiB   │ k3s-s │ nvme3n1 │ tenant1-pool-0-0 │ tenant1      │ Bounded │
│ pvc-88ff8de1-0702-4783-9a24-f63af88dda30 │ 10 GiB   │ k3s-s │ nvme4n1 │ tenant1-pool-0-0 │ tenant1      │ Bounded │
└──────────────────────────────────────────┴──────────┴───────┴─────────┴──────────────────┴──────────────┴─────────┘&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As specified, four volumes of 10 GiB each were created. Let's look at some additional details.&lt;/p&gt;
&lt;pre class=&quot;subunit&quot;&gt;&lt;code&gt;# 확인
kubectl get directpvvolumes.directpv.min.io
kubectl get directpvvolumes.directpv.min.io -o yaml | yq
kubectl describe directpvvolumes
tree -ah /var/lib/kubelet/plugins
tree -ah /var/lib/directpv/mnt
cat /var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/*/vol_data.json

(⎈|default:N/A) root@k3s-s:~# kubectl get directpvvolumes.directpv.min.io
NAME                                       AGE
pvc-08bdde0c-b472-4dfe-8b95-09e59e6aa4d8   26m
pvc-88ff8de1-0702-4783-9a24-f63af88dda30   26m
pvc-9eba11d7-3331-423c-91d7-2a8f45a08ce3   26m
pvc-e846556e-da9f-4670-8c69-7479a723af37   26m

(⎈|default:N/A) root@k3s-s:~# tree -h /var/lib/kubelet/plugins
[4.0K]  /var/lib/kubelet/plugins
├── [4.0K]  controller-controller
│   └── [   0]  csi.sock
├── [4.0K]  directpv-min-io
│   └── [   0]  csi.sock
└── [4.0K]  kubernetes.io
    └── [4.0K]  csi
        └── [4.0K]  directpv-min-io
            ├── [4.0K]  20cd114efbb71cad4c72f66f980b71335e29a50b57ad159a6c18566c3d01eaf9
            │   ├── [  18]  globalmount
            │   │   └── [  24]  data
            │   └── [  91]  vol_data.json
            ├── [4.0K]  28f2fa689cc75aff33f7429c65d5912fb23dfa3394a23dbc6ff22fbaacc112e4
            │   ├── [  18]  globalmount
            │   │   └── [  24]  data
            │   └── [  91]  vol_data.json
            ├── [4.0K]  3f4d3fabd87e625fc0d887fdf2f9c90a2743b72354a7de4a6ab53ac502d291c6
            │   ├── [  18]  globalmount
            │   │   └── [  24]  data
            │   └── [  91]  vol_data.json
            └── [4.0K]  f94049e38beb31a7b9cf88a9d48e54c8af90509d141e70ff851eb8cdf87b09f2
                ├── [  18]  globalmount
                │   └── [  24]  data
                └── [  91]  vol_data.json

18 directories, 6 files

(⎈|default:N/A) root@k3s-s:~# tree -h /var/lib/directpv/mnt
[4.0K]  /var/lib/directpv/mnt
├── [ 123]  7f010ba0-6e36-4bac-8734-8101f5fc86cd
│   └── [  18]  pvc-88ff8de1-0702-4783-9a24-f63af88dda30
│       └── [  24]  data
├── [ 123]  d29e80c7-dc3b-4a48-9a81-82352886d63f
│   └── [  18]  pvc-e846556e-da9f-4670-8c69-7479a723af37
│       └── [  24]  data
├── [ 123]  ff9fbf17-a2ca-475a-83c3-88b9c4c77140
│   └── [  18]  pvc-08bdde0c-b472-4dfe-8b95-09e59e6aa4d8
│       └── [  24]  data
└── [ 123]  ffd730c8-c056-454a-830f-208b9529104c
    └── [  18]  pvc-9eba11d7-3331-423c-91d7-2a8f45a08ce3
        └── [  24]  data

13 directories, 0 files

(⎈|default:N/A) root@k3s-s:~# cat /var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/*/vol_data.json
{&quot;driverName&quot;:&quot;directpv-min-io&quot;,&quot;volumeHandle&quot;:&quot;pvc-88ff8de1-0702-4783-9a24-f63af88dda30&quot;}
{&quot;driverName&quot;:&quot;directpv-min-io&quot;,&quot;volumeHandle&quot;:&quot;pvc-e846556e-da9f-4670-8c69-7479a723af37&quot;}
{&quot;driverName&quot;:&quot;directpv-min-io&quot;,&quot;volumeHandle&quot;:&quot;pvc-08bdde0c-b472-4dfe-8b95-09e59e6aa4d8&quot;}
{&quot;driverName&quot;:&quot;directpv-min-io&quot;,&quot;volumeHandle&quot;:&quot;pvc-9eba11d7-3331-423c-91d7-2a8f45a08ce3&quot;}

# PVC 정보
kubectl get pvc -n tenant1
kubectl get pvc -n tenant1 -o yaml | yq
kubectl describe pvc -n tenant1

(⎈|default:N/A) root@k3s-s:~# kubectl get pvc -n tenant1
NAME                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      VOLUMEATTRIBUTESCLASS   AGE
data0-tenant1-pool-0-0   Bound    pvc-e846556e-da9f-4670-8c69-7479a723af37   10Gi       RWO            directpv-min-io   &amp;lt;unset&amp;gt;                 28m
data1-tenant1-pool-0-0   Bound    pvc-08bdde0c-b472-4dfe-8b95-09e59e6aa4d8   10Gi       RWO            directpv-min-io   &amp;lt;unset&amp;gt;                 28m
data2-tenant1-pool-0-0   Bound    pvc-9eba11d7-3331-423c-91d7-2a8f45a08ce3   10Gi       RWO            directpv-min-io   &amp;lt;unset&amp;gt;                 28m
data3-tenant1-pool-0-0   Bound    pvc-88ff8de1-0702-4783-9a24-f63af88dda30   10Gi       RWO            directpv-min-io   &amp;lt;unset&amp;gt;                 28m&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As we saw in the previous post, MinIO object storage is only actually installed once a tenant is created.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's examine the state of the MinIO deployment that creating the tenant produced.&lt;/p&gt;
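&lt;p data-ke-size=&quot;size16&quot;&gt;For quick access from outside the cluster, one option is a port-forward to the console service (a sketch; the service name is derived from the tenant name, and the console serves HTTPS with the auto-generated certificate, so the browser will warn about it):&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;kubectl port-forward -n tenant1 svc/tenant1-console 9443:9443
# then open https://localhost:9443 and log in with the configSecret
# credentials (minio / minio123)&lt;/code&gt;&lt;/pre&gt;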
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# tenant 확인
kubectl get sts,pod,svc,ep,pvc,secret -n tenant1
kubectl get pod -n tenant1 -l v1.min.io/pool=pool-0 -owide
kubectl describe pod -n tenant1 -l v1.min.io/pool=pool-0
kubectl logs -n tenant1 -l v1.min.io/pool=pool-0
kubectl exec -it -n tenant1 sts/tenant1-pool-0 -c minio -- id
kubectl exec -it -n tenant1 sts/tenant1-pool-0 -c minio -- env
kubectl exec -it -n tenant1 sts/tenant1-pool-0 -c minio -- cat /tmp/minio/config.env
kubectl get secret -n tenant1 tenant1-env-configuration -o jsonpath='{.data.config\.env}' | base64 -d ; echo
kubectl get secret -n tenant1 tenant1-tls -o jsonpath='{.data.public\.crt}' | base64 -d
kubectl get secret -n tenant1 tenant1-tls -o jsonpath='{.data.public\.crt}' | base64 -d | openssl x509 -noout -text


(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get sts,pod,svc,ep,pvc,secret -n tenant1
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME                              READY   AGE
statefulset.apps/tenant1-pool-0   4/4     4m21s

NAME                   READY   STATUS    RESTARTS   AGE
pod/tenant1-pool-0-0   2/2     Running   0          4m20s
pod/tenant1-pool-0-1   2/2     Running   0          4m18s
pod/tenant1-pool-0-2   2/2     Running   0          4m19s
pod/tenant1-pool-0-3   2/2     Running   0          4m17s

NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/minio             ClusterIP   10.96.130.108   &amp;lt;none&amp;gt;        443/TCP    4m23s
service/tenant1-console   ClusterIP   10.96.116.65    &amp;lt;none&amp;gt;        9443/TCP   4m22s
service/tenant1-hl        ClusterIP   None            &amp;lt;none&amp;gt;        9000/TCP   4m22s

NAME                        ENDPOINTS                                                         AGE
endpoints/minio             172.20.1.234:9000,172.20.2.58:9000,172.20.3.97:9000 + 1 more...   4m22s
endpoints/tenant1-console   172.20.1.234:9443,172.20.2.58:9443,172.20.3.97:9443 + 1 more...   4m22s
endpoints/tenant1-hl        172.20.1.234:9000,172.20.2.58:9000,172.20.3.97:9000 + 1 more...   4m22s

NAME                                           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      VOLUMEATTRIBUTESCLASS   AGE
persistentvolumeclaim/data0-tenant1-pool-0-0   Bound    pvc-09cb0ef1-1f35-498d-8af5-0a07552fedf6   5Gi        RWO            directpv-min-io   &amp;lt;unset&amp;gt;                 4m20s
persistentvolumeclaim/data0-tenant1-pool-0-1   Bound    pvc-ba974c12-1c13-4214-b833-857ca77b16d6   5Gi        RWO            directpv-min-io   &amp;lt;unset&amp;gt;                 4m19s
persistentvolumeclaim/data0-tenant1-pool-0-2   Bound    pvc-4195aaee-bc68-49d9-baae-24008b99e37d   5Gi        RWO            directpv-min-io   &amp;lt;unset&amp;gt;                 4m19s
persistentvolumeclaim/data0-tenant1-pool-0-3   Bound    pvc-0356c85b-ed24-4c02-ab86-1f679cbe3a14   5Gi        RWO            directpv-min-io   &amp;lt;unset&amp;gt;                 4m18s
persistentvolumeclaim/data1-tenant1-pool-0-0   Bound    pvc-cdfe3904-0aeb-4435-b8f0-63227956bbae   5Gi        RWO            directpv-min-io   &amp;lt;unset&amp;gt;                 4m20s
persistentvolumeclaim/data1-tenant1-pool-0-1   Bound    pvc-b42bf921-10eb-4552-a71b-d9f06da5a0ef   5Gi        RWO            directpv-min-io   &amp;lt;unset&amp;gt;                 4m19s
persistentvolumeclaim/data1-tenant1-pool-0-2   Bound    pvc-b6c9174e-5c42-48f0-8d5b-bfea48ac5e05   5Gi        RWO            directpv-min-io   &amp;lt;unset&amp;gt;                 4m20s
persistentvolumeclaim/data1-tenant1-pool-0-3   Bound    pvc-f62c2729-7c20-4fbb-b3a4-eda290556a7d   5Gi        RWO            directpv-min-io   &amp;lt;unset&amp;gt;                 4m18s
persistentvolumeclaim/data2-tenant1-pool-0-0   Bound    pvc-164de563-bfe9-4618-8992-b40e911e1986   5Gi        RWO            directpv-min-io   &amp;lt;unset&amp;gt;                 4m20s
persistentvolumeclaim/data2-tenant1-pool-0-1   Bound    pvc-544aad79-0ad0-4214-b852-57f937553b8e   5Gi        RWO            directpv-min-io   &amp;lt;unset&amp;gt;                 4m19s
persistentvolumeclaim/data2-tenant1-pool-0-2   Bound    pvc-9ed305a2-2be1-429d-b41b-4f8b61d1e89c   5Gi        RWO            directpv-min-io   &amp;lt;unset&amp;gt;                 4m19s
persistentvolumeclaim/data2-tenant1-pool-0-3   Bound    pvc-939bb9c0-d71c-4df6-bafc-073326b3901c   5Gi        RWO            directpv-min-io   &amp;lt;unset&amp;gt;                 4m18s
persistentvolumeclaim/data3-tenant1-pool-0-0   Bound    pvc-590bcea7-127e-4bbe-b480-6d0d35bda008   5Gi        RWO            directpv-min-io   &amp;lt;unset&amp;gt;                 4m20s
persistentvolumeclaim/data3-tenant1-pool-0-1   Bound    pvc-1ab5a9a1-0600-4196-8c50-aa7e0e842100   5Gi        RWO            directpv-min-io   &amp;lt;unset&amp;gt;                 4m20s
persistentvolumeclaim/data3-tenant1-pool-0-2   Bound    pvc-761fe37d-09b7-4e7c-9625-8a2adacec35c   5Gi        RWO            directpv-min-io   &amp;lt;unset&amp;gt;                 4m19s
persistentvolumeclaim/data3-tenant1-pool-0-3   Bound    pvc-9ee5c940-e6fd-4cea-83d5-91823ca67081   5Gi        RWO            directpv-min-io   &amp;lt;unset&amp;gt;                 4m17s

NAME                                   TYPE                 DATA   AGE
secret/sh.helm.release.v1.tenant1.v1   helm.sh/release.v1   1      4m39s
secret/tenant1-env-configuration       Opaque               1      4m39s
secret/tenant1-tls                     Opaque               2      4m23s


(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl logs -n tenant1 -l v1.min.io/pool=pool-0
Defaulted container &quot;minio&quot; out of: minio, sidecar, validate-arguments (init)
Defaulted container &quot;minio&quot; out of: minio, sidecar, validate-arguments (init)
Defaulted container &quot;minio&quot; out of: minio, sidecar, validate-arguments (init)
Defaulted container &quot;minio&quot; out of: minio, sidecar, validate-arguments (init)
MinIO Object Storage Server
Copyright: 2015-2025 MinIO, Inc.
License: GNU AGPLv3 - https://www.gnu.org/licenses/agpl-3.0.html
Version: RELEASE.2025-04-08T15-41-24Z (go1.24.2 linux/amd64)

API: https://minio.tenant1.svc.cluster.local
WebUI: https://172.20.4.224:9443 https://127.0.0.1:9443

Docs: https://docs.min.io
---------------------------
MinIO Object Storage Server
Copyright: 2015-2025 MinIO, Inc.
License: GNU AGPLv3 - https://www.gnu.org/licenses/agpl-3.0.html
Version: RELEASE.2025-04-08T15-41-24Z (go1.24.2 linux/amd64)

API: https://minio.tenant1.svc.cluster.local
WebUI: https://172.20.2.58:9443 https://127.0.0.1:9443

Docs: https://docs.min.io
---------------------------
MinIO Object Storage Server
Copyright: 2015-2025 MinIO, Inc.
License: GNU AGPLv3 - https://www.gnu.org/licenses/agpl-3.0.html
Version: RELEASE.2025-04-08T15-41-24Z (go1.24.2 linux/amd64)

API: https://minio.tenant1.svc.cluster.local
WebUI: https://172.20.1.234:9443 https://127.0.0.1:9443

Docs: https://docs.min.io
---------------------------
MinIO Object Storage Server
Copyright: 2015-2025 MinIO, Inc.
License: GNU AGPLv3 - https://www.gnu.org/licenses/agpl-3.0.html
Version: RELEASE.2025-04-08T15-41-24Z (go1.24.2 linux/amd64)

API: https://minio.tenant1.svc.cluster.local
WebUI: https://172.20.3.97:9443 https://127.0.0.1:9443

Docs: https://docs.min.io
---------------------------

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get secret -n tenant1 tenant1-env-configuration -o jsonpath='{.data.config\.env}' | base64 -d ; echo
export MINIO_ROOT_USER=&quot;minio&quot;
export MINIO_ROOT_PASSWORD=&quot;minio123&quot;&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's try accessing the MinIO WebUI.&lt;/p&gt;
&lt;pre class=&quot;jboss-cli&quot;&gt;&lt;code&gt;# Change the console service to NodePort
kubectl patch svc -n tenant1 tenant1-console -p '{&quot;spec&quot;: {&quot;type&quot;: &quot;NodePort&quot;, &quot;ports&quot;: [{&quot;port&quot;: 9443, &quot;targetPort&quot;: 9443, &quot;nodePort&quot;: 30001}]}}'

# Access the NodePort via k8s-ctr's eth1 interface: default credentials (minio / minio123)
echo &quot;https://192.168.10.100:30001&quot;

# Change the minio API service to NodePort as well
kubectl patch svc -n tenant1 minio -p '{&quot;spec&quot;: {&quot;type&quot;: &quot;NodePort&quot;, &quot;ports&quot;: [{&quot;port&quot;: 443, &quot;targetPort&quot;: 9000, &quot;nodePort&quot;: 30002}]}}'&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The console is reachable as shown below.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1998&quot; data-origin-height=&quot;1322&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/7nbzB/btsQLz6iTEi/zNBYW076Ur40KkaoOhEsUK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/7nbzB/btsQLz6iTEi/zNBYW076Ur40KkaoOhEsUK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/7nbzB/btsQLz6iTEi/zNBYW076Ur40KkaoOhEsUK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2F7nbzB%2FbtsQLz6iTEi%2FzNBYW076Ur40KkaoOhEsUK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1998&quot; height=&quot;1322&quot; data-origin-width=&quot;1998&quot; data-origin-height=&quot;1322&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For administration, let's install the &lt;code&gt;mc&lt;/code&gt; command-line client and explore with it.&lt;/p&gt;
&lt;pre class=&quot;jboss-cli&quot;&gt;&lt;code&gt;curl --progress-bar -L https://dl.min.io/aistor/mc/release/linux-amd64/mc \
--create-dirs \
-o $HOME/aistor-binaries/mc

chmod +x ~/aistor-binaries/mc

~/aistor-binaries/mc --help

# Copy to /usr/bin for convenient use
sudo cp ~/aistor-binaries/mc /usr/bin&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's register an alias and use it for administration.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# mc alias
mc alias set k8s-tenant1 https://127.0.0.1:30002 minio minio123 --insecure
mc alias list
mc admin info k8s-tenant1 --insecure

(⎈|HomeLab:N/A) root@k8s-ctr:~# mc alias set k8s-tenant1 https://127.0.0.1:30002 minio minio123 --insecure
mc: Configuration written to `/root/.mc/config.json`. Please update your access credentials.
mc: Successfully created `/root/.mc/share`.
mc: Initialized share uploads `/root/.mc/share/uploads.json` file.
mc: Initialized share downloads `/root/.mc/share/downloads.json` file.
Added `k8s-tenant1` successfully.
(⎈|HomeLab:N/A) root@k8s-ctr:~# mc admin info k8s-tenant1 --insecure
●  tenant1-pool-0-0.tenant1-hl.tenant1.svc.cluster.local:9000
   Uptime: 10 minutes
   Version: 2025-04-08T15:41:24Z
   Network: 4/4 OK
   Drives: 4/4 OK
   Pool: 1

●  tenant1-pool-0-1.tenant1-hl.tenant1.svc.cluster.local:9000
   Uptime: 10 minutes
   Version: 2025-04-08T15:41:24Z
   Network: 4/4 OK
   Drives: 4/4 OK
   Pool: 1

●  tenant1-pool-0-2.tenant1-hl.tenant1.svc.cluster.local:9000
   Uptime: 10 minutes
   Version: 2025-04-08T15:41:24Z
   Network: 4/4 OK
   Drives: 4/4 OK
   Pool: 1

●  tenant1-pool-0-3.tenant1-hl.tenant1.svc.cluster.local:9000
   Uptime: 10 minutes
   Version: 2025-04-08T15:41:24Z
   Network: 4/4 OK
   Drives: 4/4 OK
   Pool: 1

┌──────┬──────────────────────┬─────────────────────┬──────────────┐
│ Pool │ Drives Usage         │ Erasure stripe size │ Erasure sets │
│ 1st  │ 0.0% (total: 75 GiB) │ 16                  │ 1            │
└──────┴──────────────────────┴─────────────────────┴──────────────┘

16 drives online, 0 drives offline, EC:1

# Create a bucket
mc mb k8s-tenant1/mybucket --insecure
mc ls k8s-tenant1 --insecure

(⎈|HomeLab:N/A) root@k8s-ctr:~# mc mb k8s-tenant1/mybucket --insecure
Bucket created successfully `k8s-tenant1/mybucket`.
(⎈|HomeLab:N/A) root@k8s-ctr:~# mc ls k8s-tenant1 --insecure
[2025-09-24 00:43:05 KST]     0B mybucket/&lt;/code&gt;&lt;/pre&gt;
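&lt;p data-ke-size=&quot;size16&quot;&gt;As a back-of-the-envelope check on the pool reported by &lt;code&gt;mc admin info&lt;/code&gt; above (16 drives in one erasure set, EC:1, i.e. one parity shard per stripe), here is a small sketch; it assumes each drive contributes its nominal 5 GiB, which is a simplification since real filesystems reserve some overhead:&lt;/p&gt;

```python
# Capacity math for the pool shown above:
# 16 drives in one erasure set with EC:1 (one parity shard per stripe).
# Assumes each drive contributes its nominal 5 GiB (a simplification;
# real filesystems reserve some space for metadata).
drives = 16
parity = 1          # EC:1
drive_gib = 5       # each DirectPV PVC requested 5Gi

raw_gib = drives * drive_gib
data_gib = raw_gib * (drives - parity) / drives  # 15/16 usable for data

print(f"raw={raw_gib} GiB, usable for data={data_gib:.0f} GiB")
```

This prints `raw=80 GiB, usable for data=75 GiB`: with a single parity shard, only 1/16 of the raw space goes to redundancy, at the cost of tolerating just one lost drive per stripe.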
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As shown below, I uploaded a test file to the bucket we created.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1998&quot; data-origin-height=&quot;789&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/beTdEQ/btsQPbWDIqa/fCxmxEJynUTVseP2pLKsdK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/beTdEQ/btsQPbWDIqa/fCxmxEJynUTVseP2pLKsdK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/beTdEQ/btsQPbWDIqa/fCxmxEJynUTVseP2pLKsdK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbeTdEQ%2FbtsQPbWDIqa%2FfCxmxEJynUTVseP2pLKsdK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1998&quot; height=&quot;789&quot; data-origin-width=&quot;1998&quot; data-origin-height=&quot;789&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's look at how erasure coding actually behaves in this environment.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Checking on a node, a directory named life.txt appears to have been created under each DirectPV-provisioned volume.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;root@k8s-w1:~# find / -name &quot;*life.txt*&quot;
/var/lib/directpv/mnt/b3bf1d12-430f-413d-8d6d-4f2300ac7e2d/pvc-f62c2729-7c20-4fbb-b3a4-eda290556a7d/data/mybucket/life.txt
/var/lib/directpv/mnt/ad5cdb87-4dde-4ddb-ae6f-54d2b93303e7/pvc-9ee5c940-e6fd-4cea-83d5-91823ca67081/data/mybucket/life.txt
/var/lib/directpv/mnt/83242820-7ec4-4018-95ee-33d6e477c9b1/pvc-0356c85b-ed24-4c02-ab86-1f679cbe3a14/data/mybucket/life.txt
/var/lib/directpv/mnt/1ec669cd-106d-42eb-9a75-c74acace67d6/pvc-939bb9c0-d71c-4df6-bafc-073326b3901c/data/mybucket/life.txt
/var/lib/kubelet/pods/a30bd9ab-25e0-4c72-9429-858df424262a/volumes/kubernetes.io~csi/pvc-939bb9c0-d71c-4df6-bafc-073326b3901c/mount/data/mybucket/life.txt
/var/lib/kubelet/pods/a30bd9ab-25e0-4c72-9429-858df424262a/volumes/kubernetes.io~csi/pvc-f62c2729-7c20-4fbb-b3a4-eda290556a7d/mount/data/mybucket/life.txt
/var/lib/kubelet/pods/a30bd9ab-25e0-4c72-9429-858df424262a/volumes/kubernetes.io~csi/pvc-0356c85b-ed24-4c02-ab86-1f679cbe3a14/mount/data/mybucket/life.txt
/var/lib/kubelet/pods/a30bd9ab-25e0-4c72-9429-858df424262a/volumes/kubernetes.io~csi/pvc-9ee5c940-e6fd-4cea-83d5-91823ca67081/mount/data/mybucket/life.txt
/var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/569287ffffbda1fe8a427f2b6825c0759212f83a2f87ed21dd499f5f9674507a/globalmount/data/mybucket/life.txt
/var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/dc6eef8758cb51d7b81d91762471fcfb539150c5ddc54a9cce761787fc1df07d/globalmount/data/mybucket/life.txt
/var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/bae672d5bbdb154b57b5456419666c5d3b776f2800e560bce33ea2de3aefae53/globalmount/data/mybucket/life.txt
/var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/eda7cad7f5a5374662a277af713dbbb529f8c7ca447f1d053d5abcbeffc74c64/globalmount/data/mybucket/life.txt&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Checking as shown below, the file is stored distributed across the nodes, just as we saw earlier with SNMD.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;root@k8s-w1:~# tree /var/lib/directpv/mnt/
/var/lib/directpv/mnt/
├── 1ec669cd-106d-42eb-9a75-c74acace67d6
│   └── pvc-939bb9c0-d71c-4df6-bafc-073326b3901c
│       └── data
│           └── mybucket
│               └── life.txt
│                   └── xl.meta
├── 83242820-7ec4-4018-95ee-33d6e477c9b1
│   └── pvc-0356c85b-ed24-4c02-ab86-1f679cbe3a14
│       └── data
│           └── mybucket
│               └── life.txt
│                   └── xl.meta
├── ad5cdb87-4dde-4ddb-ae6f-54d2b93303e7
│   └── pvc-9ee5c940-e6fd-4cea-83d5-91823ca67081
│       └── data
│           └── mybucket
│               └── life.txt
│                   └── xl.meta
└── b3bf1d12-430f-413d-8d6d-4f2300ac7e2d
    └── pvc-f62c2729-7c20-4fbb-b3a4-eda290556a7d
        └── data
            └── mybucket
                └── life.txt
                    └── xl.meta

21 directories, 4 files 
...
root@k8s-w4:~# tree /var/lib/directpv/mnt/
/var/lib/directpv/mnt/
├── 254b8fc1-9159-471e-9df5-8b7467149ac4
│   └── pvc-ba974c12-1c13-4214-b833-857ca77b16d6
│       └── data
│           └── mybucket
│               └── life.txt
│                   └── xl.meta
├── 3b3ef65c-42f6-4b50-a44a-44ffc28cbbac
│   └── pvc-b42bf921-10eb-4552-a71b-d9f06da5a0ef
│       └── data
│           └── mybucket
│               └── life.txt
│                   └── xl.meta
├── b305dca2-ef12-4a61-955a-9ea12db10740
│   └── pvc-544aad79-0ad0-4214-b852-57f937553b8e
│       └── data
│           └── mybucket
│               └── life.txt
│                   └── xl.meta
└── cb4ac76d-e415-440d-817d-83a0c095e249
    └── pvc-1ab5a9a1-0600-4196-8c50-aa7e0e842100
        └── data
            └── mybucket
                └── life.txt
                    └── xl.meta

21 directories, 4 files&lt;/code&gt;&lt;/pre&gt;
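&lt;p data-ke-size=&quot;size16&quot;&gt;The arithmetic behind this layout can be sketched as follows: the tenant spans 4 nodes with 4 drives each, and &lt;code&gt;mc admin info&lt;/code&gt; reported an erasure stripe size of 16, so every object's stripe places exactly one shard on each drive, which is why life.txt shows up once per DirectPV volume on every worker node:&lt;/p&gt;

```python
# Why life.txt appears once per drive: the stripe covers every drive.
nodes, drives_per_node = 4, 4
stripe_size = 16                      # from `mc admin info` output

total_drives = nodes * drives_per_node
shards_per_drive = stripe_size // total_drives

print(total_drives, shards_per_drive)  # 16 drives, 1 shard each
```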
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Inspecting the files shows that, with &lt;code&gt;EC:1&lt;/code&gt; configured, one of these blocks is a parity block. (In hindsight, it would have been better to create the tenant with something like EC:4.)&lt;/p&gt;
&lt;pre class=&quot;asciidoc&quot;&gt;&lt;code&gt;for f in /var/lib/directpv/mnt/*/pvc-*/data/mybucket/life.txt/xl.meta; do
  echo &quot;=== $f ===&quot;
  tail &quot;$f&quot;
  echo &quot;&quot;
done

...
root@k8s-w3:~# for f in /var/lib/directpv/mnt/*/pvc-*/data/mybucket/life.txt/xl.meta; do
  echo &quot;=== $f ===&quot;
  tail &quot;$f&quot;
  echo &quot;&quot;
done
=== /var/lib/directpv/mnt/11679efe-44ef-4849-8755-136085abe018/pvc-09cb0ef1-1f35-498d-8af5-0a07552fedf6/data/mybucket/life.txt/xl.meta ===
33. History trains the conscience to recognize progress and mistakes.
34. Education without curiosity is empty memorization.
...

=== /var/lib/directpv/mnt/cba257bb-7576-4fb2-8c1d-1b7200e6fe03/pvc-590bcea7-127e-4bbe-b480-6d0d35bda008/data/mybucket/life.txt/xl.meta ===
Lr`nl`Ibkk���sn~saw'gq/0
             -nsW���ysJ}mD앖sf2j7�(&amp;lt;Bx���C4}&amp;gt;|#'cjS;v:c)ucjI&amp;gt;sL3Mpi&amp;lt;&amp;amp;Lin1k=b?6  \�ϔmd���=cs␦TnW&amp;amp;���Xt9  V'\kdK\i.}���k&amp;lt;\��?2*x?xuZM:r��Nw%Sdb$1w`Ba     9r:d8lnv,x+6@v6NW/?%=]D}4=#F{&amp;gt;L%*X7)8\sy&amp;lt;$4f0)BPtc^gVpFfO:X)!hBf3~)G,k6/(s(␦c#H%;&amp;amp;5*Z0l[$       }20|&amp;lt;IAUl?i+:35uzaVDw(6- ofYa'#���&quot;%���v/dfc62 5*���
     s{a--Q|j4Te_5X&quot;95*%.
                         ~qOs#ol␦ZTv &quot;)y~7-5yRpJ6wkpJ&amp;gt;mJw.$2aybw#lX~0hV.{{Tcan1n
...&lt;/code&gt;&lt;/pre&gt;
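&lt;p data-ke-size=&quot;size16&quot;&gt;The garbled dump above is the parity shard. As a toy illustration of why a single parity block (EC:1) is enough to rebuild one lost shard: MinIO actually uses Reed-Solomon coding, but with a single parity shard the idea reduces to XOR parity, sketched below.&lt;/p&gt;

```python
# Toy illustration of single-parity erasure coding (EC:1).
# MinIO uses Reed-Solomon; with one parity shard the scheme
# reduces to XOR parity, shown here for intuition only.

def make_parity(shards):
    """XOR all data shards together to produce one parity shard."""
    parity = bytearray(len(shards[0]))
    for shard in shards:
        for i, byte in enumerate(shard):
            parity[i] ^= byte
    return bytes(parity)

def recover(shards, parity, lost_index):
    """Rebuild the shard at lost_index from the survivors plus parity."""
    rebuilt = bytearray(parity)
    for idx, shard in enumerate(shards):
        if idx == lost_index:
            continue
        for i, byte in enumerate(shard):
            rebuilt[i] ^= byte
    return bytes(rebuilt)

data = b"0123456789abcdef"                      # 16 bytes
shards = [data[i:i + 4] for i in range(0, len(data), 4)]  # 4 data shards
parity = make_parity(shards)

restored = recover(shards, parity, lost_index=2)
assert restored == shards[2]                    # the lost shard comes back
```

Losing any one shard is recoverable, but losing two is not, which is exactly the trade-off EC:1 makes.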
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Finally, let's delete one of the data blocks and check that it can be recovered.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# Delete
root@k8s-w2:~# rm -rf /var/lib/directpv/mnt/f9474487-102f-4435-97a2-5a0a50fa98e8/pvc-761fe37d-09b7-4e7c-9625-8a2adacec35c/data/mybucket/
root@k8s-w2:~# tree /var/lib/directpv/mnt/
/var/lib/directpv/mnt/
├── 7f34ec30-ad93-4506-a0ca-657163eb5fc3
│   └── pvc-4195aaee-bc68-49d9-baae-24008b99e37d
│       └── data
│           └── mybucket
│               └── life.txt
│                   └── xl.meta
├── d1e0638b-b1bc-4276-a3a9-71ae8366f11b
│   └── pvc-9ed305a2-2be1-429d-b41b-4f8b61d1e89c
│       └── data
│           └── mybucket
│               └── life.txt
│                   └── xl.meta
├── e9e46095-447a-4e09-afa0-817a75e36893
│   └── pvc-b6c9174e-5c42-48f0-8d5b-bfea48ac5e05
│       └── data
│           └── mybucket
│               └── life.txt
│                   └── xl.meta
└── f9474487-102f-4435-97a2-5a0a50fa98e8
    └── pvc-761fe37d-09b7-4e7c-9625-8a2adacec35c
        └── data

19 directories, 3 files

# Heal
mc admin heal k8s-tenant1/mybucket --insecure

(⎈|HomeLab:N/A) root@k8s-ctr:~# mc admin heal k8s-tenant1/mybucket --insecure
 ◐  mybucket
    0/0 objects; 0 B in -4s
    ┌────────┬───┬─────────────────────┐
    │ Green  │ 1 │ 100.0% ████████████ │
    │ Yellow │ 0 │   0.0%              │
    │ Red    │ 0 │   0.0%              │
    │ Grey   │ 0 │   0.0%              │
    └────────┴───┴─────────────────────┘

# Re-check (unchanged) - even in this state, downloading the file from the WebUI works fine; I/O can already be served via the parity block
root@k8s-w2:~# tree /var/lib/directpv/mnt/
/var/lib/directpv/mnt/
├── 7f34ec30-ad93-4506-a0ca-657163eb5fc3
│   └── pvc-4195aaee-bc68-49d9-baae-24008b99e37d
│       └── data
│           └── mybucket
│               └── life.txt
│                   └── xl.meta
├── d1e0638b-b1bc-4276-a3a9-71ae8366f11b
│   └── pvc-9ed305a2-2be1-429d-b41b-4f8b61d1e89c
│       └── data
│           └── mybucket
│               └── life.txt
│                   └── xl.meta
├── e9e46095-447a-4e09-afa0-817a75e36893
│   └── pvc-b6c9174e-5c42-48f0-8d5b-bfea48ac5e05
│       └── data
│           └── mybucket
│               └── life.txt
│                   └── xl.meta
└── f9474487-102f-4435-97a2-5a0a50fa98e8
    └── pvc-761fe37d-09b7-4e7c-9625-8a2adacec35c
        └── data
            └── mybucket

20 directories, 3 files

# Request a heal including the object name
(⎈|HomeLab:N/A) root@k8s-ctr:~# mc admin heal k8s-tenant1/mybucket/life.txt --insecure
 ◐  mybucket/life.txt
    0/1 objects; 65 KiB in -4s
    ┌────────┬───┬─────────────────────┐
    │ Green  │ 2 │ 100.0% ████████████ │
    │ Yellow │ 0 │   0.0%              │
    │ Red    │ 0 │   0.0%              │
    │ Grey   │ 0 │   0.0%              │
    └────────┴───┴─────────────────────┘

# Re-check
root@k8s-w2:~# tree /var/lib/directpv/mnt/
/var/lib/directpv/mnt/
├── 7f34ec30-ad93-4506-a0ca-657163eb5fc3
│   └── pvc-4195aaee-bc68-49d9-baae-24008b99e37d
│       └── data
│           └── mybucket
│               └── life.txt
│                   └── xl.meta
├── d1e0638b-b1bc-4276-a3a9-71ae8366f11b
│   └── pvc-9ed305a2-2be1-429d-b41b-4f8b61d1e89c
│       └── data
│           └── mybucket
│               └── life.txt
│                   └── xl.meta
├── e9e46095-447a-4e09-afa0-817a75e36893
│   └── pvc-b6c9174e-5c42-48f0-8d5b-bfea48ac5e05
│       └── data
│           └── mybucket
│               └── life.txt
│                   └── xl.meta
└── f9474487-102f-4435-97a2-5a0a50fa98e8
    └── pvc-761fe37d-09b7-4e7c-9625-8a2adacec35c
        └── data
            └── mybucket
                └── life.txt
                    └── xl.meta

21 directories, 4 files&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;That wraps up configuring local disks with DirectPV in a local Kubernetes environment and then walking through a MinIO MNMD deployment.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Run the following command to tear down the lab environment.&lt;/p&gt;
&lt;pre class=&quot;ebnf&quot;&gt;&lt;code&gt;vagrant destroy -f&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Wrapping Up&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Over the past three weeks, we have explored MinIO object storage.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://a-person.tistory.com/61&quot;&gt;[1] MinIO Overview&lt;/a&gt; covered the fundamentals of object storage along with MinIO's key concepts and how it works.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://a-person.tistory.com/62&quot;&gt;[2] Trying Out MinIO&lt;/a&gt; walked through SNSD and SNMD setups in a Docker environment, and then deployed MinIO on Kubernetes.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://a-person.tistory.com/63&quot;&gt;[3] MinIO - Direct PV&lt;/a&gt; introduced DirectPV, configured it on the local disks of an AWS EC2 instance, and set up MinIO on k3s.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Finally, this post covered the steps to configure DirectPV and deploy MinIO in MNMD mode.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Please note that this post was written from what I learned while participating in the &lt;code&gt;MinIO study&lt;/code&gt; run by CloudNet.&lt;/p&gt;</description>
      <category>MinIO</category>
      <category>directpv</category>
      <category>minIO</category>
      <category>mnmd</category>
      <category>Object Storage</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/64</guid>
      <comments>https://a-person.tistory.com/64#entry64comment</comments>
      <pubDate>Wed, 24 Sep 2025 20:55:59 +0900</pubDate>
    </item>
    <item>
      <title>[3] MinIO - Direct PV</title>
      <link>https://a-person.tistory.com/63</link>
<description>&lt;p data-ke-size=&quot;size16&quot;&gt;In this post, we will take a look at MinIO's DirectPV.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;목차&lt;/h2&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;Lab environment setup&lt;/li&gt;
&lt;li&gt;DirectPV overview&lt;/li&gt;
&lt;li&gt;DirectPV hands-on&lt;/li&gt;
&lt;li&gt;MinIO hands-on in a DirectPV environment&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;1. Lab Environment Setup&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The lab environment will be built on AWS EC2.&lt;/p&gt;
&lt;pre class=&quot;asciidoc&quot;&gt;&lt;code&gt;# Download the YAML template
curl -O https://s3.ap-northeast-2.amazonaws.com/cloudformation.cloudneta.net/K8S/minio-ec2-1node.yaml

# Deploy the CloudFormation stack
# aws cloudformation deploy --template-file &amp;lt;template file&amp;gt; --stack-name mylab --parameter-overrides KeyName=&amp;lt;My SSH Keyname&amp;gt; SgIngressSshCidr=&amp;lt;My Home Public IP Address&amp;gt;/32 --region ap-northeast-2

$ aws cloudformation deploy --template-file minio-ec2-1node.yaml --stack-name miniolab --parameter-overrides KeyName=mykey SgIngressSshCidr=$(curl -s ipinfo.io/ip)/32 --region ap-northeast-2

Waiting for changeset to be created..
Waiting for stack create/update to complete
Successfully created/updated stack - miniolab

# After the stack completes, print the working EC2 instance's IP
aws cloudformation describe-stacks --stack-name miniolab --query 'Stacks[*].Outputs[0].OutputValue' --output text --region ap-northeast-2

# [Monitoring] CloudFormation stack status: confirm creation completes
while true; do 
  date
  AWS_PAGER=&quot;&quot; aws cloudformation list-stacks \
    --stack-status-filter CREATE_IN_PROGRESS CREATE_COMPLETE CREATE_FAILED DELETE_IN_PROGRESS DELETE_FAILED \
    --query &quot;StackSummaries[*].{StackName:StackName, StackStatus:StackStatus}&quot; \
    --output table
  sleep 1
done

...
Wed Sep 17 21:31:59 KST 2025
----------------------------------
|           ListStacks           |
+------------+-------------------+
|  StackName |    StackStatus    |
+------------+-------------------+
|  miniolab  |  CREATE_COMPLETE  |
+------------+-------------------+


# Check the deployed EC2 instance's dynamic public IP
aws ec2 describe-instances --query &quot;Reservations[*].Instances[*].{PublicIPAdd:PublicIpAddress,InstanceName:Tags[?Key=='Name']|[0].Value,Status:State.Name}&quot; --filters Name=instance-state-name,Values=running --output text

k3s-s   15.164.244.91   running

# SSH into the EC2 instance: do not connect right away; wait about 3-5 minutes first
ssh -i ~/.ssh/mykey.pem ubuntu@$(aws cloudformation describe-stacks --stack-name miniolab --query 'Stacks[*].Outputs[0].OutputValue' --output text --region ap-northeast-2)
...
(⎈|default:N/A) root@k3s-s:~# &lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Deploying the lab environment with CloudFormation creates an EC2 instance.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Looking at the CloudFormation template, the required commands and scripts run through the instance's UserData, so k3s is already installed.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;blockquote data-ke-style=&quot;style3&quot;&gt;[Note] k3s&lt;br /&gt;k3s is a lightweight Kubernetes distribution from Rancher, built to run even on IoT and edge computing devices. The control plane runs via the&amp;nbsp;k3s server&amp;nbsp;command, with the control plane and datastore components running inside that process. Agent nodes run via the&amp;nbsp;k3s agent&amp;nbsp;command, with the node-level components running inside that process.&lt;br /&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;261&quot; data-origin-height=&quot;150&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/ef68g3/dJMb9O1FsSZ/wodZIv5kMZXvuQ87117wmK/tfile.svg&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/ef68g3/dJMb9O1FsSZ/wodZIv5kMZXvuQ87117wmK/tfile.svg&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/ef68g3/dJMb9O1FsSZ/wodZIv5kMZXvuQ87117wmK/tfile.svg&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fef68g3%2FdJMb9O1FsSZ%2FwodZIv5kMZXvuQ87117wmK%2Ftfile.svg&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;261&quot; height=&quot;150&quot; data-origin-width=&quot;261&quot; data-origin-height=&quot;150&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;br /&gt;Source:&amp;nbsp;https://docs.k3s.io/architecture&lt;/blockquote&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's check the installed k3s, and use &lt;code&gt;hostnamectl&lt;/code&gt; to confirm that we are on an EC2 instance.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;kubectl get node -owide
kubectl get po -A
hostnamectl 

(⎈|default:N/A) root@k3s-s:~# kubectl get no -owide
NAME    STATUS   ROLES                  AGE   VERSION        INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION    CONTAINER-RUNTIME
k3s-s   Ready    control-plane,master   29m   v1.33.4+k3s1   192.168.10.10   &amp;lt;none&amp;gt;        Ubuntu 24.04.3 LTS   6.14.0-1012-aws   containerd://2.0.5-k3s2

(⎈|default:N/A) root@k3s-s:~# kubectl get po -A
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
kube-system   coredns-64fd4b4794-cnj7x                  1/1     Running   0          32m
kube-system   local-path-provisioner-774c6665dc-crrsn   1/1     Running   0          32m
kube-system   metrics-server-7bfffcd44-6p9xf            1/1     Running   0          32m

(⎈|default:N/A) root@k3s-s:~# hostnamectl
 Static hostname: k3s-s
       Icon name: computer-vm
         Chassis: vm  
      Machine ID: ec2e66b04812a4eb62dc9c11ecfa9ae3
         Boot ID: 5d01caeee674406a9bdcd31e1fcd35ee
  Virtualization: amazon
Operating System: Ubuntu 24.04.3 LTS
          Kernel: Linux 6.14.0-1012-aws
    Architecture: x86-64
 Hardware Vendor: Amazon EC2
  Hardware Model: t3.xlarge
Firmware Version: 1.0
   Firmware Date: Mon 2017-10-16
    Firmware Age: 7y 11month 1d&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;We will continue the hands-on exercises on top of this environment.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Before the hands-on part, let's first go over DirectPV itself.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;2. DirectPV Overview&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;MinIO's DirectPV is a CSI (Container Storage Interface) driver for DAS (Direct-Attached Storage).&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://docs.min.io/community/minio-directpv/&quot;&gt;https://docs.min.io/community/minio-directpv/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In Kubernetes, hostPath and local PVs are the commonly known options for using local disks. However, hostPath requires creating a specific path on the node in advance and referencing that path directly, while a local PV requires registering the local disk as a PV beforehand. In other words, both approaches require manually configuring and managing disks on every node.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In contrast, DirectPV acts as a Persistent Volume Manager for distributed environments, covering discovery, formatting, mounting, scheduling, and monitoring of local disks. In a Kubernetes cluster, DirectPV identifies local disks and provisions PVs for PVCs.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As shown on the right of the figure, SAN/NAS-based CSI drivers perform SAN/NAS-level replication or erasure coding and introduce unnecessary network hops, adding complexity and degrading performance. In contrast, DirectPV on the left uses local storage directly, getting the full benefit of local disks without those complex layers.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1139&quot; data-origin-height=&quot;726&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/Dbph5/btsQIzw1l2C/AWtHoonwKV2m5BRDKrgGiK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/Dbph5/btsQIzw1l2C/AWtHoonwKV2m5BRDKrgGiK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/Dbph5/btsQIzw1l2C/AWtHoonwKV2m5BRDKrgGiK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FDbph5%2FbtsQIzw1l2C%2FAWtHoonwKV2m5BRDKrgGiK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1139&quot; height=&quot;726&quot; data-origin-width=&quot;1139&quot; data-origin-height=&quot;726&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://docs.min.io/community/minio-directpv/&quot;&gt;https://docs.min.io/community/minio-directpv/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;DirectPV Components&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;DirectPV has two components, which run as pods in the Kubernetes cluster.&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Controller&lt;/li&gt;
&lt;li&gt;Node server&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's look at the Controller first.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;801&quot; data-origin-height=&quot;201&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/UeHgS/btsQJAoFLC5/iGYF3R3M8DrwFFbAKPlp2k/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/UeHgS/btsQJAoFLC5/iGYF3R3M8DrwFFbAKPlp2k/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/UeHgS/btsQJAoFLC5/iGYF3R3M8DrwFFbAKPlp2k/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FUeHgS%2FbtsQJAoFLC5%2FiGYF3R3M8DrwFFbAKPlp2k%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;801&quot; height=&quot;201&quot; data-origin-width=&quot;801&quot; data-origin-height=&quot;201&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;출처: &lt;a href=&quot;https://docs.min.io/community/minio-directpv/concepts/architecture/&quot;&gt;https://docs.min.io/community/minio-directpv/concepts/architecture/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Controller는 &lt;code&gt;controller&lt;/code&gt;라는 이름의 디플로이먼트로 3개의 replicas로 실행되며, 이 중 하나의 인스턴스만 요청을 처리합니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;각 파드는 3개의 컨테이너로 이뤄집니다.&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Controller: 볼륨 생성, 삭제, 확장에 대한 CSI 요청을 처리합니다.&lt;/li&gt;
&lt;li&gt;CSI provisioner: PVC로부터의 볼륨 생성과 삭제 요청을 CSI controller로 전달하는 역할을 합니다.&lt;/li&gt;
&lt;li&gt;CSI resizer: PVC로부터의 볼륨 확장 요청을 CSI controller로 전달하는 역할을 합니다.&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;controller server는 controller 컨테이너 형태로 실행되며, &lt;code&gt;Create volume&lt;/code&gt;, &lt;code&gt;Delete volume&lt;/code&gt;, &lt;code&gt;Expand volume&lt;/code&gt; 요청을 처리합니다.&lt;/p&gt;
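&lt;p data-ke-size=&quot;size16&quot;&gt;3개의 replicas 중 하나의 인스턴스만 요청을 처리하는 것은 쿠버네티스 lease 기반 리더 선출로 동작하는 것으로 보입니다(정확한 동작 방식은 환경에 따라 다를 수 있습니다). 설치 후 아래와 같이 lease 리소스를 확인해 볼 수 있습니다.&lt;/p&gt;
&lt;pre class=&quot;dsconfig&quot;&gt;&lt;code&gt;# controller 리더 선출에 사용되는 lease 확인 (리소스 이름은 환경에 따라 다를 수 있음)
kubectl get leases -n directpv
kubectl get leases -n directpv -o yaml | yq&lt;/code&gt;&lt;/pre&gt;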
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;다음 살펴볼 컴포넌트는 Node server입니다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;892&quot; data-origin-height=&quot;304&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/m6QGg/btsQH1HiYNU/cKKzHPN20Lh6Fa3sMiNzu1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/m6QGg/btsQH1HiYNU/cKKzHPN20Lh6Fa3sMiNzu1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/m6QGg/btsQH1HiYNU/cKKzHPN20Lh6Fa3sMiNzu1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fm6QGg%2FbtsQH1HiYNU%2FcKKzHPN20Lh6Fa3sMiNzu1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;892&quot; height=&quot;304&quot; data-origin-width=&quot;892&quot; data-origin-height=&quot;304&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;node server는 &lt;code&gt;node-server&lt;/code&gt;라는 이름으로 데몬 셋으로 실행됩니다. 각 노드에서 로컬 디스크를 처리하는 역할을 합니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;파드는 4개의 컨테이너로 이뤄집니다.&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Node driver registrar: node server를 kubelet에 등록해 CSI RPC call을 받습니다.&lt;/li&gt;
&lt;li&gt;Node server: stage, unstage, publish, unpublish, expand 볼륨 RPC 요청을 처리합니다.&lt;/li&gt;
&lt;li&gt;Node controller: &lt;code&gt;DirectPVDrive&lt;/code&gt;, &lt;code&gt;DirectPVVolume&lt;/code&gt;, &lt;code&gt;DirectPVNode&lt;/code&gt;, &lt;code&gt;DirectPVInitRequest&lt;/code&gt; CRD의 이벤트를 처리합니다.&lt;/li&gt;
&lt;li&gt;Liveness probe: 쿠버네티스의 liveness probe를 위한 &lt;code&gt;/healthz&lt;/code&gt; 엔드포인트를 노출합니다.&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;DirectPV의 controller와 node-server라는 컴포넌트를 살펴봤습니다. 쉽게 말해 controller는 CSI 관점에서 PVC 요청을 받아 적절한 PV의 생성, 삭제, 확장을 처리하는 역할을 하고, node-server는 노드 수준에서 디스크를 discovery, format, mount, monitoring하는 역할을 수행합니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;아래와 같은 절차로 처리됩니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;PVC 생성 요청&lt;/li&gt;
&lt;li&gt;CSI Provisioner가 요청을 Controller로 전달&lt;/li&gt;
&lt;li&gt;Controller: 디스크 선택, Volume 리소스 생성&lt;/li&gt;
&lt;li&gt;Node Server(해당 노드): 디스크 포맷, 마운트&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;처음에는 MinIO와 DirectPV가 직접적인 연관을 가지고 있는 것으로 이해하고 문서를 살펴보다 보니 혼란스러웠는데, 결국 두 가지 다른 솔루션을 조합해서 사용할 수 있는 것으로 이해할 수 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;MinIO는 쿠버네티스 환경에 배포할 수 있으며, MNMD(Multi-Node Multi-Drive)로 배포하려면 각 노드에 연결된 드라이브를 관리하는 방법이 필요합니다. 이때 쿠버네티스 환경에서 DirectPV CSI를 사용하면 다수 노드의 로컬 드라이브를 효과적으로 관리할 수 있게 되고, 이 환경에서 MinIO를 활용하는 것이 보다 효과적인 접근 방법으로 이해됩니다.&lt;/p&gt;
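&lt;p data-ke-size=&quot;size16&quot;&gt;두 솔루션의 연결 관계를 보여주기 위한 최소한의 스케치입니다. 실제 배포 매니페스트가 아니며 이름과 수치는 설명을 위한 가정입니다. 핵심은 MinIO 파드의 volumeClaimTemplates가 &lt;code&gt;directpv-min-io&lt;/code&gt; StorageClass를 참조하면, DirectPV가 각 노드의 로컬 드라이브에서 PV를 프로비저닝한다는 점입니다.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# (스케치) MinIO StatefulSet이 DirectPV StorageClass를 참조하는 예시 - 이름/수치는 가정
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: minio
spec:
  serviceName: minio
  replicas: 4                          # MNMD: 노드당 1개 파드
  selector:
    matchLabels:
      app: minio
  template:
    metadata:
      labels:
        app: minio
    spec:
      containers:
        - name: minio
          image: minio/minio
          args: [&quot;server&quot;, &quot;http://minio-{0...3}.minio.default.svc.cluster.local/data&quot;]
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: [&quot;ReadWriteOnce&quot;]
        storageClassName: directpv-min-io   # DirectPV가 로컬 드라이브에서 PV 프로비저닝
        resources:
          requests:
            storage: 10Gi&lt;/code&gt;&lt;/pre&gt;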
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;실습을 통해서 자세히 살펴보겠습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;3. DirectPV 실습&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;DirectPV는 kubectl krew 플러그인을 통해서 설치할 수 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;먼저 krew 플러그인을 설치합니다.&lt;/p&gt;
&lt;pre class=&quot;dsconfig&quot;&gt;&lt;code&gt;# Install Krew
wget -P /root &quot;https://github.com/kubernetes-sigs/krew/releases/latest/download/krew-linux_amd64.tar.gz&quot;
tar zxvf &quot;/root/krew-linux_amd64.tar.gz&quot; --warning=no-unknown-keyword
./krew-linux_amd64 install krew
export PATH=&quot;${KREW_ROOT:-$HOME/.krew}/bin:$PATH&quot; # export PATH=&quot;$PATH:/root/.krew/bin&quot;
echo 'export PATH=&quot;$PATH:/root/.krew/bin:/root/go/bin&quot;' &amp;gt;&amp;gt; /etc/profile
kubectl krew install get-all neat rolesum pexec stern
kubectl krew list

(⎈|default:N/A) root@k3s-s:~# kubectl krew list
PLUGIN   VERSION
get-all  v1.4.2
krew     v0.4.5
neat     v2.0.4
pexec    v0.4.1
rolesum  v1.5.5
stern    v1.33.0&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;실습을 이어 나가겠습니다.&lt;/p&gt;
&lt;pre class=&quot;sql&quot;&gt;&lt;code&gt;# directpv 플러그인 설치
kubectl krew install directpv
kubectl directpv -h

(⎈|default:N/A) root@k3s-s:~# kubectl krew install directpv
Updated the local copy of plugin index.
Installing plugin: directpv
Installed plugin: directpv
\
 | Use this plugin:
 |      kubectl directpv
 | Documentation:
 |      https://github.com/minio/directpv
/
WARNING: You installed plugin &quot;directpv&quot; from the krew-index plugin repository.
   These plugins are not audited for security by the Krew maintainers.
   Run them at your own risk.
(⎈|default:N/A) root@k3s-s:~# kubectl directpv -h
Kubectl plugin for managing DirectPV drives and volumes.

USAGE:
  directpv [command]

FLAGS:
      --kubeconfig string   Path to the kubeconfig file to use for CLI requests
      --quiet               Suppress printing error messages
  -h, --help                help for directpv
      --version             version for directpv

AVAILABLE COMMANDS:
  install     Install DirectPV in Kubernetes
  discover    Discover new drives
  init        Initialize the drives
  info        Show information about DirectPV installation
  list        List drives and volumes
  label       Set labels to drives and volumes
  cordon      Mark drives as unschedulable
  uncordon    Mark drives as schedulable
  migrate     Migrate drives and volumes from legacy DirectCSI
  move        Move volumes excluding data from source drive to destination drive on a same node
  clean       Cleanup stale volumes
  suspend     Suspend drives and volumes
  resume      Resume suspended drives and volumes
  repair      Repair filesystem of drives
  remove      Remove unused drives from DirectPV
  uninstall   Uninstall DirectPV in Kubernetes

Use &quot;directpv [command] --help&quot; for more information about this command.&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;krew로 directpv 플러그인 설치가 완료되면, 쿠버네티스에 DirectPV 설치를 진행합니다.&lt;/p&gt;
&lt;pre class=&quot;lua&quot;&gt;&lt;code&gt;# DirectPV 설치
kubectl directpv install

(⎈|default:N/A) root@k3s-s:~# kubectl directpv install
Installing on unsupported Kubernetes v1.33

 ███████████████████████████████████████████████████████████████████████████ 100%

┌──────────────────────────────────────┬──────────────────────────┐
│ NAME                                 │ KIND                     │
├──────────────────────────────────────┼──────────────────────────┤
│ directpv                             │ Namespace                │
│ directpv-min-io                      │ ServiceAccount           │
│ directpv-min-io                      │ ClusterRole              │
│ directpv-min-io                      │ ClusterRoleBinding       │
│ directpv-min-io                      │ Role                     │
│ directpv-min-io                      │ RoleBinding              │
│ directpvdrives.directpv.min.io       │ CustomResourceDefinition │
│ directpvvolumes.directpv.min.io      │ CustomResourceDefinition │
│ directpvnodes.directpv.min.io        │ CustomResourceDefinition │
│ directpvinitrequests.directpv.min.io │ CustomResourceDefinition │
│ directpv-min-io                      │ CSIDriver                │
│ directpv-min-io                      │ StorageClass             │
│ node-server                          │ Daemonset                │
│ controller                           │ Deployment               │
└──────────────────────────────────────┴──────────────────────────┘

DirectPV installed successfully&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;install 명령과 함께 필요한 네임스페이스, 서비스 어카운트, RBAC, CRD와 각 컴포넌트들이 설치된 것을 확인할 수 있습니다.&lt;/p&gt;
&lt;pre class=&quot;prolog&quot;&gt;&lt;code&gt;# 설치 확인
kubectl get crd | grep min

(⎈|default:N/A) root@k3s-s:~# kubectl get crd | grep min
directpvdrives.directpv.min.io         2025-09-17T13:15:32Z
directpvinitrequests.directpv.min.io   2025-09-17T13:15:32Z
directpvnodes.directpv.min.io          2025-09-17T13:15:32Z
directpvvolumes.directpv.min.io        2025-09-17T13:15:32Z

kubectl get sc directpv-min-io -o yaml | yq
kubectl get sc

(⎈|default:N/A) root@k3s-s:~# kubectl get sc
NAME                   PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
directpv-min-io        directpv-min-io         Delete          WaitForFirstConsumer   true                   22m
local-path (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  80m

kubectl get all -n directpv
kubectl get deploy,ds,pod -n directpv
kubectl rolesum directpv-min-io -n directpv

(⎈|default:N/A) root@k3s-s:~# kubectl get all -n directpv
NAME                              READY   STATUS    RESTARTS   AGE
pod/controller-596844c67f-99p6c   3/3     Running   0          22m
pod/controller-596844c67f-fwwkd   3/3     Running   0          22m
pod/controller-596844c67f-psqr4   3/3     Running   0          22m
pod/node-server-cmjww             4/4     Running   0          22m

NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/node-server   1         1         1       1            1           &amp;lt;none&amp;gt;          22m

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/controller   3/3     3            3           22m

NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/controller-596844c67f   3         3         3       22m

(⎈|default:N/A) root@k3s-s:~# kubectl get deploy,ds,pod -n directpv
NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/controller   3/3     3            3           23m

NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/node-server   1         1         1       1            1           &amp;lt;none&amp;gt;          23m

NAME                              READY   STATUS    RESTARTS   AGE
pod/controller-596844c67f-99p6c   3/3     Running   0          23m
pod/controller-596844c67f-fwwkd   3/3     Running   0          23m
pod/controller-596844c67f-psqr4   3/3     Running   0          23m
pod/node-server-cmjww             4/4     Running   0          23m
(⎈|default:N/A) root@k3s-s:~# kubectl rolesum directpv-min-io -n directpv
ServiceAccount: directpv/directpv-min-io
Secrets:

Policies:
&amp;bull; [RB] directpv/directpv-min-io ⟶  [R] directpv/directpv-min-io
  Resource                    Name  Exclude  Verbs  G L W C U P D DC
  leases.coordination.k8s.io  [*]     [-]     [-]   ✔ ✔ ✔ ✔ ✔ ✖ ✔ ✖


&amp;bull; [CRB] */directpv-min-io ⟶  [CR] */directpv-min-io
  Resource                                                          Name  Exclude  Verbs  G L W C U P D DC
  csinodes.storage.k8s.io                                           [*]     [-]     [-]   ✔ ✔ ✔ ✖ ✖ ✖ ✖ ✖
  customresourcedefinition.[apiextensions.k8s.io,directpv.min.io]   [*]     [-]     [-]   ✔ ✔ ✔ ✔ ✔ ✔ ✔ ✖
  customresourcedefinitions.[apiextensions.k8s.io,directpv.min.io]  [*]     [-]     [-]   ✔ ✔ ✔ ✔ ✔ ✔ ✔ ✖
  directpvdrives.directpv.min.io                                    [*]     [-]     [-]   ✔ ✔ ✔ ✔ ✔ ✖ ✔ ✖
  directpvinitrequests.directpv.min.io                              [*]     [-]     [-]   ✔ ✔ ✔ ✔ ✔ ✖ ✔ ✖
  directpvnodes.directpv.min.io                                     [*]     [-]     [-]   ✔ ✔ ✔ ✔ ✔ ✖ ✔ ✖
  directpvvolumes.directpv.min.io                                   [*]     [-]     [-]   ✔ ✔ ✔ ✔ ✔ ✖ ✔ ✖
  endpoints                                                         [*]     [-]     [-]   ✔ ✔ ✔ ✔ ✔ ✖ ✔ ✖
  events                                                            [*]     [-]     [-]   ✖ ✔ ✔ ✔ ✔ ✔ ✖ ✖
  leases.coordination.k8s.io                                        [*]     [-]     [-]   ✔ ✔ ✔ ✔ ✔ ✖ ✔ ✖
  nodes                                                             [*]     [-]     [-]   ✔ ✔ ✔ ✖ ✖ ✖ ✖ ✖
  persistentvolumeclaims                                            [*]     [-]     [-]   ✔ ✔ ✔ ✖ ✔ ✖ ✖ ✖
  persistentvolumeclaims/status                                     [*]     [-]     [-]   ✖ ✖ ✖ ✖ ✖ ✔ ✖ ✖
  persistentvolumes                                                 [*]     [-]     [-]   ✔ ✔ ✔ ✔ ✖ ✔ ✔ ✖
  pod                                                               [*]     [-]     [-]   ✔ ✔ ✔ ✖ ✖ ✖ ✖ ✖
  pods                                                              [*]     [-]     [-]   ✔ ✔ ✔ ✖ ✖ ✖ ✖ ✖
  secret                                                            [*]     [-]     [-]   ✔ ✔ ✔ ✖ ✖ ✖ ✖ ✖
  secrets                                                           [*]     [-]     [-]   ✔ ✔ ✔ ✖ ✖ ✖ ✖ ✖
  storageclasses.storage.k8s.io                                     [*]     [-]     [-]   ✔ ✔ ✔ ✖ ✖ ✖ ✖ ✖
  volumeattachments.storage.k8s.io                                  [*]     [-]     [-]   ✔ ✔ ✔ ✖ ✖ ✖ ✖ ✖
  volumesnapshotcontents.snapshot.storage.k8s.io                    [*]     [-]     [-]   ✔ ✔ ✖ ✖ ✖ ✖ ✖ ✖
  volumesnapshots.snapshot.storage.k8s.io                           [*]     [-]     [-]   ✔ ✔ ✖ ✖ ✖ ✖ ✖ ✖


kubectl get directpvnodes.directpv.min.io
kubectl get directpvnodes.directpv.min.io -o yaml | yq


(⎈|default:N/A) root@k3s-s:~# kubectl get directpvnodes.directpv.min.io
NAME    AGE
k3s-s   23m
(⎈|default:N/A) root@k3s-s:~# kubectl get directpvnodes.directpv.min.io -o yaml | yq
{
  &quot;apiVersion&quot;: &quot;v1&quot;,
  &quot;items&quot;: [
    {
      &quot;apiVersion&quot;: &quot;directpv.min.io/v1beta1&quot;,
      &quot;kind&quot;: &quot;DirectPVNode&quot;,
      &quot;metadata&quot;: {
        &quot;creationTimestamp&quot;: &quot;2025-09-17T13:15:45Z&quot;,
        &quot;generation&quot;: 1,
        &quot;labels&quot;: {
          &quot;directpv.min.io/created-by&quot;: &quot;node-controller&quot;,
          &quot;directpv.min.io/node&quot;: &quot;k3s-s&quot;,
          &quot;directpv.min.io/version&quot;: &quot;v1beta1&quot;
        },
        &quot;name&quot;: &quot;k3s-s&quot;,
        &quot;resourceVersion&quot;: &quot;1645&quot;,
        &quot;uid&quot;: &quot;dd1ed251-5c8a-458a-a95d-bdad87860a9c&quot;
      },
      &quot;spec&quot;: {},
      &quot;status&quot;: {
        &quot;devices&quot;: [
          {
            &quot;deniedReason&quot;: &quot;Mounted&quot;,
            &quot;fsType&quot;: &quot;ext4&quot;,
            &quot;fsuuid&quot;: &quot;1eb1aa76-4a46-48d9-95d8-a2ecf2d505c2&quot;,
            &quot;id&quot;: &quot;259:8$mazlJ+LaRoQIg1a49exnnvMXozxEHDXUhRE9kqh7nFI=&quot;,
            &quot;majorMinor&quot;: &quot;259:8&quot;,
            &quot;make&quot;: &quot;Amazon Elastic Block Store (Part 16)&quot;,
            &quot;name&quot;: &quot;nvme0n1p16&quot;,
            &quot;size&quot;: 957350400
          },
          {
            &quot;deniedReason&quot;: &quot;Partitioned&quot;,
            &quot;id&quot;: &quot;259:1$6MxZyk1Zu78Gv/zhogVdYs00zn4/BU1DlI138b97UtA=&quot;,
            &quot;majorMinor&quot;: &quot;259:1&quot;,
            &quot;make&quot;: &quot;Amazon Elastic Block Store&quot;,
            &quot;name&quot;: &quot;nvme0n1&quot;,
            &quot;size&quot;: 32212254720
          },
          {
            &quot;id&quot;: &quot;259:0$9UYmau9epZA5HOERZ5SrzW8yiDLBw80W30CKvLFHSQw=&quot;,
            &quot;majorMinor&quot;: &quot;259:0&quot;,
            &quot;make&quot;: &quot;Amazon Elastic Block Store&quot;,
            &quot;name&quot;: &quot;nvme1n1&quot;,
            &quot;size&quot;: 32212254720
          },
          {
            &quot;id&quot;: &quot;259:2$hpcB632lnzoNk7tcg6SAkOtjCkIsPVsLqPmitsEETwc=&quot;,
            &quot;majorMinor&quot;: &quot;259:2&quot;,
            &quot;make&quot;: &quot;Amazon Elastic Block Store&quot;,
            &quot;name&quot;: &quot;nvme4n1&quot;,
            &quot;size&quot;: 32212254720
          },
          {
            &quot;deniedReason&quot;: &quot;Too small&quot;,
            &quot;id&quot;: &quot;259:6$Gbb3k7exq4W2Ac+zU0j+krIvJwN/6OtH6f3ELuv26QY=&quot;,
            &quot;majorMinor&quot;: &quot;259:6&quot;,
            &quot;make&quot;: &quot;Amazon Elastic Block Store (Part 14)&quot;,
            &quot;name&quot;: &quot;nvme0n1p14&quot;,
            &quot;size&quot;: 4194304
          },
          {
            &quot;deniedReason&quot;: &quot;Mounted&quot;,
            &quot;fsType&quot;: &quot;ext4&quot;,
            &quot;fsuuid&quot;: &quot;0eec2352-4b50-40ec-ae93-7ce2911392bb&quot;,
            &quot;id&quot;: &quot;259:5$EXPke6wed3mGQmR+lWFoC0xHqmd2x0qaMPpqHrpDPGg=&quot;,
            &quot;majorMinor&quot;: &quot;259:5&quot;,
            &quot;make&quot;: &quot;Amazon Elastic Block Store (Part 1)&quot;,
            &quot;name&quot;: &quot;nvme0n1p1&quot;,
            &quot;size&quot;: 31137447424
          },
          {
            &quot;deniedReason&quot;: &quot;Too small; Mounted&quot;,
            &quot;fsType&quot;: &quot;vfat&quot;,
            &quot;fsuuid&quot;: &quot;2586-E57C&quot;,
            &quot;id&quot;: &quot;259:7$eq0TDPAeC5XpZxGD+35aPxmsqlyWCfYNKxBhvQbG3HY=&quot;,
            &quot;majorMinor&quot;: &quot;259:7&quot;,
            &quot;make&quot;: &quot;Amazon Elastic Block Store (Part 15)&quot;,
            &quot;name&quot;: &quot;nvme0n1p15&quot;,
            &quot;size&quot;: 111149056
          },
          {
            &quot;id&quot;: &quot;259:3$5cl6e3k8kuTX1H0jiWeh7s5VAWPy/6SkGszbjy0HrMI=&quot;,
            &quot;majorMinor&quot;: &quot;259:3&quot;,
            &quot;make&quot;: &quot;Amazon Elastic Block Store&quot;,
            &quot;name&quot;: &quot;nvme3n1&quot;,
            &quot;size&quot;: 32212254720
          },
          {
            &quot;id&quot;: &quot;259:4$CzcTd5LMbzvzg61lhMTENGsDQG+g47NsF0Ef4V0qWwk=&quot;,
            &quot;majorMinor&quot;: &quot;259:4&quot;,
            &quot;make&quot;: &quot;Amazon Elastic Block Store&quot;,
            &quot;name&quot;: &quot;nvme2n1&quot;,
            &quot;size&quot;: 32212254720
          }
        ]
      }
    }
  ],
  &quot;kind&quot;: &quot;List&quot;,
  &quot;metadata&quot;: {
    &quot;resourceVersion&quot;: &quot;&quot;
  }
}&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;아직 DirectPV를 쿠버네티스 환경에 설치만 했을 뿐 아무런 작업을 진행한 것은 없습니다. controller 로그를 살펴보면 start 한 뒤 대기중인 것으로 보입니다.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;(⎈|default:N/A) root@k3s-s:~# kubectl logs -f -n directpv controller-596844c67f-99p6c -c controller
I0917 13:15:45.092456       1 controller.go:57] Identity server started
I0917 13:15:45.092567       1 controller.go:60] Controller server started
I0917 13:15:45.093171       1 ready.go:42] Serving readiness endpoint at :30443
^C(⎈|default:N/A) root@k3s-s:~# kubectl logs -f -n directpv controller-596844c67f-fwwkd -c controller
I0917 13:15:45.084442       1 controller.go:57] Identity server started
I0917 13:15:45.084728       1 controller.go:60] Controller server started
I0917 13:15:45.085376       1 ready.go:42] Serving readiness endpoint at :30443
^C(⎈|default:N/A) root@k3s-s:~# kubectl logs -f -n directpv controller-596844c67f-psqr4 -c controller
I0917 13:15:45.157321       1 controller.go:57] Identity server started
I0917 13:15:45.157487       1 controller.go:60] Controller server started
I0917 13:15:45.157907       1 ready.go:42] Serving readiness endpoint at :30443&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;데몬 셋으로 실행 중인 node-server를 살펴보면, node-driver-registrar 컨테이너가 kubelet에 CSI driver를 등록하고 &quot;Received NotifyRegistrationStatus call&quot; 응답을 받은 것을 확인할 수 있습니다. 또한 syslog에서도 &quot;Register new plugin with name: directpv-min-io&quot;와 같이 등록이 이뤄진 것을 알 수 있습니다.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;(⎈|default:N/A) root@k3s-s:~# kubectl logs -f -n directpv node-server-cmjww
Defaulted container &quot;node-driver-registrar&quot; out of: node-driver-registrar, node-server, node-controller, liveness-probe
I0917 13:15:37.752434    7285 main.go:150] &quot;Version&quot; version=&quot;unknown&quot;
I0917 13:15:37.752494    7285 main.go:151] &quot;Running node-driver-registrar&quot; mode=&quot;&quot;
I0917 13:15:37.752502    7285 main.go:172] &quot;Attempting to open a gRPC connection&quot; csiAddress=&quot;unix:///csi/csi.sock&quot;
I0917 13:15:46.254971    7285 main.go:180] &quot;Calling CSI driver to discover driver name&quot;
I0917 13:15:46.259223    7285 main.go:189] &quot;CSI driver name&quot; csiDriverName=&quot;directpv-min-io&quot;
I0917 13:15:46.259292    7285 node_register.go:56] &quot;Starting Registration Server&quot; socketPath=&quot;/registration/directpv-min-io-reg.sock&quot;
I0917 13:15:46.259923    7285 node_register.go:66] &quot;Registration Server started&quot; socketPath=&quot;/registration/directpv-min-io-reg.sock&quot;
I0917 13:15:46.260088    7285 node_register.go:96] &quot;Skipping HTTP server&quot;
I0917 13:15:47.056666    7285 main.go:96] &quot;Received GetInfo call&quot; request=&quot;&amp;amp;InfoRequest{}&quot;
I0917 13:15:47.079550    7285 main.go:108] &quot;Received NotifyRegistrationStatus call&quot; status=&quot;&amp;amp;RegistrationStatus{PluginRegistered:true,Error:,}&quot;
^C

# /var/log/syslog
2025-09-17T13:15:47.057473+00:00 ip-192-168-10-10 k3s[2794]: I0917 22:15:47.057063    2794 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: directpv-min-io endpoint: /var/lib/kubelet/plugins/directpv-min-io/csi.sock versions: 1.0.0
2025-09-17T13:15:47.057571+00:00 ip-192-168-10-10 k3s[2794]: I0917 22:15:47.057100    2794 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: directpv-min-io at endpoint: /var/lib/kubelet/plugins/directpv-min-io/csi.sock&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이제 DirectPV를 통해서 디스크를 discover하고 초기화해 보겠습니다.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# EC2에 등록된 disk와 direct pv로 관리되는 드라이브 확인
lsblk
kubectl directpv info

(⎈|default:N/A) root@k3s-s:~# lsblk
NAME         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
loop0          7:0    0 73.9M  1 loop /snap/core22/2111
loop1          7:1    0 27.6M  1 loop /snap/amazon-ssm-agent/11797
loop2          7:2    0 50.8M  1 loop /snap/snapd/25202
loop3          7:3    0   16M  0 loop
nvme1n1      259:0    0   30G  0 disk
nvme0n1      259:1    0   30G  0 disk
├─nvme0n1p1  259:5    0   29G  0 part /
├─nvme0n1p14 259:6    0    4M  0 part
├─nvme0n1p15 259:7    0  106M  0 part /boot/efi
└─nvme0n1p16 259:8    0  913M  0 part /boot
nvme4n1      259:2    0   30G  0 disk
nvme3n1      259:3    0   30G  0 disk
nvme2n1      259:4    0   30G  0 disk

(⎈|default:N/A) root@k3s-s:~# kubectl directpv info
┌─────────┬──────────┬───────────┬─────────┬────────┐
│ NODE    │ CAPACITY │ ALLOCATED │ VOLUMES │ DRIVES │
├─────────┼──────────┼───────────┼─────────┼────────┤
│ &amp;bull; k3s-s │ -        │ -         │ -       │ -      │
└─────────┴──────────┴───────────┴─────────┴────────┘

0 B/0 B used, 0 volumes, 0 drives

# discover 진행
kubectl directpv discover

(⎈|default:N/A) root@k3s-s:~# kubectl directpv discover

 Discovered node 'k3s-s' ✔

┌─────────────────────┬───────┬─────────┬────────┬────────────┬────────────────────────────┬───────────┬─────────────┐
│ ID                  │ NODE  │ DRIVE   │ SIZE   │ FILESYSTEM │ MAKE                       │ AVAILABLE │ DESCRIPTION │
├─────────────────────┼───────┼─────────┼────────┼────────────┼────────────────────────────┼───────────┼─────────────┤
│ 259:0$9UYmau9epZ... │ k3s-s │ nvme1n1 │ 30 GiB │ -          │ Amazon Elastic Block Store │ YES       │ -           │
│ 259:4$CzcTd5LMbz... │ k3s-s │ nvme2n1 │ 30 GiB │ -          │ Amazon Elastic Block Store │ YES       │ -           │
│ 259:3$5cl6e3k8ku... │ k3s-s │ nvme3n1 │ 30 GiB │ -          │ Amazon Elastic Block Store │ YES       │ -           │
│ 259:2$hpcB632lnz... │ k3s-s │ nvme4n1 │ 30 GiB │ -          │ Amazon Elastic Block Store │ YES       │ -           │
└─────────────────────┴───────┴─────────┴────────┴────────────┴────────────────────────────┴───────────┴─────────────┘

Generated 'drives.yaml' successfully.&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;discover 명령을 수행하면 현재 디렉터리에 &lt;code&gt;drives.yaml&lt;/code&gt; 파일이 생성됩니다. 이 파일을 인자로 init 명령을 수행하면 드라이브 초기화가 이뤄집니다.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# (참고) 적용 예외 설정 시 select: &quot;no&quot; 설정
cat drives.yaml

(⎈|default:N/A) root@k3s-s:~# cat drives.yaml
version: v1
nodes:
    - name: k3s-s
      drives:
        - id: 259:3$5cl6e3k8kuTX1H0jiWeh7s5VAWPy/6SkGszbjy0HrMI=
          name: nvme3n1
          size: 32212254720
          make: Amazon Elastic Block Store
          select: &quot;yes&quot;
        - id: 259:4$CzcTd5LMbzvzg61lhMTENGsDQG+g47NsF0Ef4V0qWwk=
          name: nvme2n1
          size: 32212254720
          make: Amazon Elastic Block Store
          select: &quot;yes&quot;
        - id: 259:0$9UYmau9epZA5HOERZ5SrzW8yiDLBw80W30CKvLFHSQw=
          name: nvme1n1
          size: 32212254720
          make: Amazon Elastic Block Store
          select: &quot;yes&quot;
        - id: 259:2$hpcB632lnzoNk7tcg6SAkOtjCkIsPVsLqPmitsEETwc=
          name: nvme4n1
          size: 32212254720
          make: Amazon Elastic Block Store
          select: &quot;yes&quot;

# 초기화 (Error 확인: 데이터가 지워짐!)
kubectl directpv init drives.yaml

(⎈|default:N/A) root@k3s-s:~# kubectl directpv init drives.yaml
ERROR Initializing the drives will permanently erase existing data. Please review carefully before performing this *DANGEROUS* operation and retry this command with --dangerous flag.

# 초기화 강제 진행
kubectl directpv init drives.yaml --dangerous

(⎈|default:N/A) root@k3s-s:~# kubectl directpv init drives.yaml --dangerous

 ███████████████████████████████████████████████████████████████████████████ 100%

 Processed initialization request '2774b71a-64d1-4c98-a307-50521b0e468f' for node 'k3s-s' ✔

┌──────────────────────────────────────┬───────┬─────────┬─────────┐
│ REQUEST_ID                           │ NODE  │ DRIVE   │ MESSAGE │
├──────────────────────────────────────┼───────┼─────────┼─────────┤
│ 2774b71a-64d1-4c98-a307-50521b0e468f │ k3s-s │ nvme1n1 │ Success │
│ 2774b71a-64d1-4c98-a307-50521b0e468f │ k3s-s │ nvme2n1 │ Success │
│ 2774b71a-64d1-4c98-a307-50521b0e468f │ k3s-s │ nvme3n1 │ Success │
│ 2774b71a-64d1-4c98-a307-50521b0e468f │ k3s-s │ nvme4n1 │ Success │
└──────────────────────────────────────┴───────┴─────────┴─────────┘

# 드라이브 확인
kubectl directpv list drives

(⎈|default:N/A) root@k3s-s:~# kubectl directpv list drives
┌───────┬─────────┬────────────────────────────┬────────┬────────┬─────────┬────────┐
│ NODE  │ NAME    │ MAKE                       │ SIZE   │ FREE   │ VOLUMES │ STATUS │
├───────┼─────────┼────────────────────────────┼────────┼────────┼─────────┼────────┤
│ k3s-s │ nvme1n1 │ Amazon Elastic Block Store │ 30 GiB │ 30 GiB │ -       │ Ready  │
│ k3s-s │ nvme2n1 │ Amazon Elastic Block Store │ 30 GiB │ 30 GiB │ -       │ Ready  │
│ k3s-s │ nvme3n1 │ Amazon Elastic Block Store │ 30 GiB │ 30 GiB │ -       │ Ready  │
│ k3s-s │ nvme4n1 │ Amazon Elastic Block Store │ 30 GiB │ 30 GiB │ -       │ Ready  │
└───────┴─────────┴────────────────────────────┴────────┴────────┴─────────┴────────┘

# 4개의 드라이브가 인식됨
kubectl directpv info

(⎈|default:N/A) root@k3s-s:~# kubectl directpv info
┌─────────┬──────────┬───────────┬─────────┬────────┐
│ NODE    │ CAPACITY │ ALLOCATED │ VOLUMES │ DRIVES │
├─────────┼──────────┼───────────┼─────────┼────────┤
│ &amp;bull; k3s-s │ 120 GiB  │ 0 B       │ 0       │ 4      │
└─────────┴──────────┴───────────┴─────────┴────────┘

0 B/120 GiB used, 0 volumes, 4 drives

# 확인
lsblk

(⎈|default:N/A) root@k3s-s:~# lsblk
NAME         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
loop0          7:0    0 73.9M  1 loop /snap/core22/2111
loop1          7:1    0 27.6M  1 loop /snap/amazon-ssm-agent/11797
loop2          7:2    0 50.8M  1 loop /snap/snapd/25202
loop3          7:3    0   16M  0 loop
nvme1n1      259:0    0   30G  0 disk /var/lib/directpv/mnt/ffd730c8-c056-454a-830f-208b9529104c
nvme0n1      259:1    0   30G  0 disk
├─nvme0n1p1  259:5    0   29G  0 part /
├─nvme0n1p14 259:6    0    4M  0 part
├─nvme0n1p15 259:7    0  106M  0 part /boot/efi
└─nvme0n1p16 259:8    0  913M  0 part /boot
nvme4n1      259:2    0   30G  0 disk /var/lib/directpv/mnt/7f010ba0-6e36-4bac-8734-8101f5fc86cd
nvme3n1      259:3    0   30G  0 disk /var/lib/directpv/mnt/ff9fbf17-a2ca-475a-83c3-88b9c4c77140
nvme2n1      259:4    0   30G  0 disk /var/lib/directpv/mnt/d29e80c7-dc3b-4a48-9a81-82352886d63f


# 디스크가 xfs로 포맷팅되어 마운트 된 상태
df -hT --type xfs

(⎈|default:N/A) root@k3s-s:~# df -hT --type xfs
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/nvme3n1   xfs    30G  248M   30G   1% /var/lib/directpv/mnt/ff9fbf17-a2ca-475a-83c3-88b9c4c77140
/dev/nvme1n1   xfs    30G  248M   30G   1% /var/lib/directpv/mnt/ffd730c8-c056-454a-830f-208b9529104c
/dev/nvme4n1   xfs    30G  248M   30G   1% /var/lib/directpv/mnt/7f010ba0-6e36-4bac-8734-8101f5fc86cd
/dev/nvme2n1   xfs    30G  248M   30G   1% /var/lib/directpv/mnt/d29e80c7-dc3b-4a48-9a81-82352886d63f

tree -h /var/lib/directpv/

(⎈|default:N/A) root@k3s-s:~# tree -h /var/lib/directpv/
[4.0K]  /var/lib/directpv/
├── [4.0K]  mnt
│   ├── [  75]  7f010ba0-6e36-4bac-8734-8101f5fc86cd
│   ├── [  75]  d29e80c7-dc3b-4a48-9a81-82352886d63f
│   ├── [  75]  ff9fbf17-a2ca-475a-83c3-88b9c4c77140
│   └── [  75]  ffd730c8-c056-454a-830f-208b9529104c
└── [  40]  tmp

7 directories, 0 files

# Each drive is registered as a directpvdrives resource
kubectl get directpvdrives.directpv.min.io -o yaml | yq

(⎈|default:N/A) root@k3s-s:~# kubectl get directpvdrives.directpv.min.io
NAME                                   AGE
7f010ba0-6e36-4bac-8734-8101f5fc86cd   2m16s
d29e80c7-dc3b-4a48-9a81-82352886d63f   2m16s
ff9fbf17-a2ca-475a-83c3-88b9c4c77140   2m16s
ffd730c8-c056-454a-830f-208b9529104c   2m16s

# Note: they are not registered in /etc/fstab, so the mounts do not come back automatically via fstab after a reboot.
cat /etc/fstab&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Even without MinIO installed, DirectPV on its own operates as a CSI driver. Let's deploy a test application and walk through how DirectPV behaves as a CSI driver.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Pre-check
kubectl get directpvdrives,directpvvolumes

(⎈|default:N/A) root@k3s-s:~# kubectl get directpvdrives
NAME                                   AGE
7f010ba0-6e36-4bac-8734-8101f5fc86cd   5m12s
d29e80c7-dc3b-4a48-9a81-82352886d63f   5m12s
ff9fbf17-a2ca-475a-83c3-88b9c4c77140   5m12s
ffd730c8-c056-454a-830f-208b9529104c   5m12s

(⎈|default:N/A) root@k3s-s:~# kubectl get directpvvolumes
No resources found

# Deploy a PVC and Pod
cat &amp;lt;&amp;lt; EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pvc
spec:
  volumeMode: Filesystem
  storageClassName: directpv-min-io
  accessModes: [ &quot;ReadWriteOnce&quot; ]
  resources:
    requests:
      storage: 8Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  volumes:
    - name: nginx-volume
      persistentVolumeClaim:
        claimName: nginx-pvc
  containers:
    - name: nginx-container
      image: nginx:alpine
      volumeMounts:
        - mountPath: &quot;/mnt&quot;
          name: nginx-volume
EOF&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;With the PVC and pod created, let's trace the actual provisioning flow through the controller and node-server logs.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# Identify the leader controller, then check its logs
(⎈|default:N/A) root@k3s-s:~# kubectl get lease -n directpv
NAME                               HOLDER                               AGE
directpv-min-io                    1758114945188-9387-directpv-min-io   62m
external-resizer-directpv-min-io   controller-596844c67f-psqr4          62m

# csi-provisioner logs -&amp;gt; receives the PVC event and forwards the volume-creation request to the controller
(⎈|default:N/A) root@k3s-s:~# kubectl logs -f -n directpv controller-596844c67f-99p6c
...
I0917 14:22:37.822827       1 event.go:389] &quot;Event occurred&quot; object=&quot;default/nginx-pvc&quot; fieldPath=&quot;&quot; kind=&quot;PersistentVolumeClaim&quot; apiVersion=&quot;v1&quot; type=&quot;Normal&quot; reason=&quot;Provisioning&quot; message=&quot;External provisioner is provisioning volume for claim \&quot;default/nginx-pvc\&quot;&quot;

# controller logs -&amp;gt; volume creation requested
(⎈|default:N/A) root@k3s-s:~# kubectl logs -f -n directpv controller-596844c67f-psqr4 -c controller
...
I0917 14:22:37.824140       1 server.go:136] &quot;Create volume requested&quot; name=&quot;pvc-2029b3f1-f1b8-4a49-a863-a54e5f77dc05&quot; requiredBytes=&quot;8,388,608&quot;

# csi-provisioner logs -&amp;gt; the PV for the PVC was created successfully
I0917 14:22:37.849828       1 controller.go:853] create volume rep: {CapacityBytes:8388608 VolumeId:pvc-2029b3f1-f1b8-4a49-a863-a54e5f77dc05 VolumeContext:map[] ContentSource:&amp;lt;nil&amp;gt; AccessibleTopology:[segments:&amp;lt;key:&quot;directpv.min.io/identity&quot; value:&quot;directpv-min-io&quot; &amp;gt; segments:&amp;lt;key:&quot;directpv.min.io/node&quot; value:&quot;k3s-s&quot; &amp;gt; segments:&amp;lt;key:&quot;directpv.min.io/rack&quot; value:&quot;default&quot; &amp;gt; segments:&amp;lt;key:&quot;directpv.min.io/region&quot; value:&quot;default&quot; &amp;gt; segments:&amp;lt;key:&quot;directpv.min.io/zone&quot; value:&quot;default&quot; &amp;gt; ] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0917 14:22:37.849922       1 controller.go:955] successfully created PV pvc-2029b3f1-f1b8-4a49-a863-a54e5f77dc05 for PVC nginx-pvc and csi volume name pvc-2029b3f1-f1b8-4a49-a863-a54e5f77dc05
I0917 14:22:37.856376       1 event.go:389] &quot;Event occurred&quot; object=&quot;default/nginx-pvc&quot; fieldPath=&quot;&quot; kind=&quot;PersistentVolumeClaim&quot; apiVersion=&quot;v1&quot; type=&quot;Normal&quot; reason=&quot;ProvisioningSucceeded&quot; message=&quot;Successfully provisioned volume pvc-2029b3f1-f1b8-4a49-a863-a54e5f77dc05&quot;

# node-server logs -&amp;gt; volume staged and mounted (Stage volume requested &amp;rarr; SetQuota succeeded &amp;rarr; Publish volume requested)
...
I0917 14:22:39.040290    7463 stage_unstage.go:37] &quot;Stage volume requested&quot; volumeID=&quot;pvc-2029b3f1-f1b8-4a49-a863-a54e5f77dc05&quot; StagingTargetPath=&quot;/var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/a6cb579efbf41e427e8ae51ca80226e7aba2c6bc78d8c2bfddc941be9629fb79/globalmount&quot;
I0917 14:22:39.064127    7463 quota_linux.go:230] &quot;SetQuota succeeded&quot; Device=&quot;/dev/nvme3n1&quot; Path=&quot;/var/lib/directpv/mnt/ff9fbf17-a2ca-475a-83c3-88b9c4c77140/.FSUUID.ff9fbf17-a2ca-475a-83c3-88b9c4c77140/pvc-2029b3f1-f1b8-4a49-a863-a54e5f77dc05&quot; VolumeID=&quot;pvc-2029b3f1-f1b8-4a49-a863-a54e5f77dc05&quot; ProjectID=2656553894 HardLimit=8388608
I0917 14:22:39.079767    7463 publish_unpublish.go:96] &quot;Publish volume requested&quot; volumeID=&quot;pvc-2029b3f1-f1b8-4a49-a863-a54e5f77dc05&quot; stagingTargetPath=&quot;/var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/a6cb579efbf41e427e8ae51ca80226e7aba2c6bc78d8c2bfddc941be9629fb79/globalmount&quot; targetPath=&quot;/var/lib/kubelet/pods/71cf2e1b-bf1b-47ec-9e54-019daa1c6e6b/volumes/kubernetes.io~csi/pvc-2029b3f1-f1b8-4a49-a863-a54e5f77dc05/mount&quot;

# syslog -&amp;gt; the pod starts and the volume mount completes (MountVolume.MountDevice succeeded)
..
2025-09-17T14:22:38.838299+00:00 ip-192-168-10-10 systemd[1]: Created slice kubepods-besteffort-pod71cf2e1b_bf1b_47ec_9e54_019daa1c6e6b.slice - libcontainer container kubepods-besteffort-pod71cf2e1b_bf1b_47ec_9e54_019daa1c6e6b.slice.
2025-09-17T14:22:38.916842+00:00 ip-192-168-10-10 k3s[2794]: I0917 23:22:38.916450    2794 reconciler_common.go:251] &quot;operationExecutor.VerifyControllerAttachedVolume started for volume \&quot;pvc-2029b3f1-f1b8-4a49-a863-a54e5f77dc05\&quot; (UniqueName: \&quot;kubernetes.io/csi/directpv-min-io^pvc-2029b3f1-f1b8-4a49-a863-a54e5f77dc05\&quot;) pod \&quot;nginx-pod\&quot; (UID: \&quot;71cf2e1b-bf1b-47ec-9e54-019daa1c6e6b\&quot;) &quot; pod=&quot;default/nginx-pod&quot;
2025-09-17T14:22:38.916974+00:00 ip-192-168-10-10 k3s[2794]: I0917 23:22:38.916501    2794 reconciler_common.go:251] &quot;operationExecutor.VerifyControllerAttachedVolume started for volume \&quot;kube-api-access-n4pvd\&quot; (UniqueName: \&quot;kubernetes.io/projected/71cf2e1b-bf1b-47ec-9e54-019daa1c6e6b-kube-api-access-n4pvd\&quot;) pod \&quot;nginx-pod\&quot; (UID: \&quot;71cf2e1b-bf1b-47ec-9e54-019daa1c6e6b\&quot;) &quot; pod=&quot;default/nginx-pod&quot;
2025-09-17T14:22:39.074301+00:00 ip-192-168-10-10 k3s[2794]: I0917 23:22:39.074180    2794 operation_generator.go:557] &quot;MountVolume.MountDevice succeeded for volume \&quot;pvc-2029b3f1-f1b8-4a49-a863-a54e5f77dc05\&quot; (UniqueName: \&quot;kubernetes.io/csi/directpv-min-io^pvc-2029b3f1-f1b8-4a49-a863-a54e5f77dc05\&quot;) pod \&quot;nginx-pod\&quot; (UID: \&quot;71cf2e1b-bf1b-47ec-9e54-019daa1c6e6b\&quot;) device mount path \&quot;/var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/a6cb579efbf41e427e8ae51ca80226e7aba2c6bc78d8c2bfddc941be9629fb79/globalmount\&quot;&quot; pod=&quot;default/nginx-pod&quot;&lt;/code&gt;&lt;/pre&gt;
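In the node-server log, `SetQuota succeeded ... HardLimit=8388608` is the PVC's `8Mi` request being enforced as an XFS project quota. A quick sketch of the binary-suffix conversion (my own illustration, not DirectPV code):

```python
# Convert a Kubernetes binary-suffix quantity (e.g. the PVC's "8Mi" request)
# into bytes, to see where HardLimit=8388608 in the SetQuota log comes from.
SUFFIXES = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3, "Ti": 1024**4}

def quantity_to_bytes(q: str) -> int:
    """Parse a binary-suffix quantity such as '8Mi' or '10Gi' into bytes."""
    for suffix, factor in SUFFIXES.items():
        if q.endswith(suffix):
            return int(q[: -len(suffix)]) * factor
    return int(q)  # plain byte count, no suffix

print(quantity_to_bytes("8Mi"))   # 8388608, matching HardLimit in the log
print(quantity_to_bytes("10Gi"))  # 10737418240
```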
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In k3s the kubelet is embedded in the k3s process, so there is no separate kubelet log; pod creation records only show up under the k3s entries in syslog.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's inspect the resulting PV and pod from the Kubernetes side.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# Verify
kubectl get pod,pvc,pv

(⎈|default:N/A) root@k3s-s:~# kubectl get pod,pvc,pv
NAME            READY   STATUS    RESTARTS   AGE
pod/nginx-pod   1/1     Running   0          13m

NAME                              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      VOLUMEATTRIBUTESCLASS   AGE
persistentvolumeclaim/nginx-pvc   Bound    pvc-2029b3f1-f1b8-4a49-a863-a54e5f77dc05   8Mi        RWO            directpv-min-io   &amp;lt;unset&amp;gt;                 13m

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS      VOLUMEATTRIBUTESCLASS   REASON   AGE
persistentvolume/pvc-2029b3f1-f1b8-4a49-a863-a54e5f77dc05   8Mi        RWO            Delete           Bound    default/nginx-pvc   directpv-min-io   &amp;lt;unset&amp;gt;                          13m


kubectl exec -it nginx-pod -- df -hT -t xfs

(⎈|default:N/A) root@k3s-s:~# kubectl exec -it nginx-pod -- df -hT -t xfs
Filesystem           Type            Size      Used Available Use% Mounted on
/dev/nvme3n1         xfs             8.0M         0      8.0M   0% /mnt

kubectl exec -it nginx-pod -- sh -c 'echo hello &amp;gt; /mnt/hello.txt'
kubectl exec -it nginx-pod -- sh -c 'cat /mnt/hello.txt'

(⎈|default:N/A) root@k3s-s:~# kubectl exec -it nginx-pod -- sh -c 'echo hello &amp;gt; /mnt/hello.txt'
(⎈|default:N/A) root@k3s-s:~# kubectl exec -it nginx-pod -- sh -c 'cat /mnt/hello.txt'
hello


# Verify
lsblk

(⎈|default:N/A) root@k3s-s:~# lsblk
NAME         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
loop0          7:0    0 73.9M  1 loop /snap/core22/2111
loop1          7:1    0 27.6M  1 loop /snap/amazon-ssm-agent/11797
loop2          7:2    0 50.8M  1 loop /snap/snapd/25202
loop3          7:3    0   16M  0 loop
nvme1n1      259:0    0   30G  0 disk /var/lib/directpv/mnt/ffd730c8-c056-454a-830f-208b9529104c
nvme0n1      259:1    0   30G  0 disk
├─nvme0n1p1  259:5    0   29G  0 part /
├─nvme0n1p14 259:6    0    4M  0 part
├─nvme0n1p15 259:7    0  106M  0 part /boot/efi
└─nvme0n1p16 259:8    0  913M  0 part /boot
nvme4n1      259:2    0   30G  0 disk /var/lib/directpv/mnt/7f010ba0-6e36-4bac-8734-8101f5fc86cd
nvme3n1      259:3    0   30G  0 disk /var/lib/kubelet/pods/71cf2e1b-bf1b-47ec-9e54-019daa1c6e6b/volumes/kubernetes.io~csi/pvc-2029b3f1-f1b8-4a49-a863-a54e5f77dc05/mount
                                      /var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/a6cb579efbf41e427e8ae51ca80226e7aba2c6bc78d8c2bfddc941be9629fb79/globalmount
                                      /var/lib/directpv/mnt/ff9fbf17-a2ca-475a-83c3-88b9c4c77140
nvme2n1      259:4    0   30G  0 disk /var/lib/directpv/mnt/d29e80c7-dc3b-4a48-9a81-82352886d63f

tree -a /var/lib/directpv/mnt

(⎈|default:N/A) root@k3s-s:~# tree -a /var/lib/directpv/mnt
/var/lib/directpv/mnt
├── 7f010ba0-6e36-4bac-8734-8101f5fc86cd
│   ├── .FSUUID.7f010ba0-6e36-4bac-8734-8101f5fc86cd -&amp;gt; .
│   └── .directpv
│       └── meta.info
├── d29e80c7-dc3b-4a48-9a81-82352886d63f
│   ├── .FSUUID.d29e80c7-dc3b-4a48-9a81-82352886d63f -&amp;gt; .
│   └── .directpv
│       └── meta.info
├── ff9fbf17-a2ca-475a-83c3-88b9c4c77140
│   ├── .FSUUID.ff9fbf17-a2ca-475a-83c3-88b9c4c77140 -&amp;gt; .
│   ├── .directpv
│   │   └── meta.info
│   └── pvc-2029b3f1-f1b8-4a49-a863-a54e5f77dc05
│       └── hello.txt
└── ffd730c8-c056-454a-830f-208b9529104c
    ├── .FSUUID.ffd730c8-c056-454a-830f-208b9529104c -&amp;gt; .
    └── .directpv
        └── meta.info


cat /var/lib/directpv/mnt/*/pvc*/hello.txt

(⎈|default:N/A) root@k3s-s:~# cat /var/lib/directpv/mnt/*/pvc*/hello.txt
hello

# Verify the volume was created
kubectl get directpvvolumes
kubectl get directpvvolumes -o yaml | yq

(⎈|default:N/A) root@k3s-s:~# kubectl get directpvvolumes
NAME                                       AGE
pvc-2029b3f1-f1b8-4a49-a863-a54e5f77dc05   17m
(⎈|default:N/A) root@k3s-s:~# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS      VOLUMEATTRIBUTESCLASS   REASON   AGE
pvc-2029b3f1-f1b8-4a49-a863-a54e5f77dc05   8Mi        RWO            Delete           Bound    default/nginx-pvc   directpv-min-io   &amp;lt;unset&amp;gt;                          17m&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's clean up the test PVC and pod.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# Delete
kubectl delete pod nginx-pod
kubectl get pvc,pv
kubectl delete pvc nginx-pvc
kubectl get pv

(⎈|default:N/A) root@k3s-s:~# kubectl delete pod nginx-pod
pod &quot;nginx-pod&quot; deleted
(⎈|default:N/A) root@k3s-s:~# kubectl get pvc,pv
NAME                              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      VOLUMEATTRIBUTESCLASS   AGE
persistentvolumeclaim/nginx-pvc   Bound    pvc-2029b3f1-f1b8-4a49-a863-a54e5f77dc05   8Mi        RWO            directpv-min-io   &amp;lt;unset&amp;gt;                 18m

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS      VOLUMEATTRIBUTESCLASS   REASON   AGE
persistentvolume/pvc-2029b3f1-f1b8-4a49-a863-a54e5f77dc05   8Mi        RWO            Delete           Bound    default/nginx-pvc   directpv-min-io   &amp;lt;unset&amp;gt;                          18m
(⎈|default:N/A) root@k3s-s:~# kubectl delete pvc nginx-pvc
persistentvolumeclaim &quot;nginx-pvc&quot; deleted
(⎈|default:N/A) root@k3s-s:~# kubectl get pv
No resources found

# Verify
lsblk
tree -a /var/lib/directpv/mnt

(⎈|default:N/A) root@k3s-s:~# lsblk
NAME         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
loop0          7:0    0 73.9M  1 loop /snap/core22/2111
loop1          7:1    0 27.6M  1 loop /snap/amazon-ssm-agent/11797
loop2          7:2    0 50.8M  1 loop /snap/snapd/25202
loop3          7:3    0   16M  0 loop
nvme1n1      259:0    0   30G  0 disk /var/lib/directpv/mnt/ffd730c8-c056-454a-830f-208b9529104c
nvme0n1      259:1    0   30G  0 disk
├─nvme0n1p1  259:5    0   29G  0 part /
├─nvme0n1p14 259:6    0    4M  0 part
├─nvme0n1p15 259:7    0  106M  0 part /boot/efi
└─nvme0n1p16 259:8    0  913M  0 part /boot
nvme4n1      259:2    0   30G  0 disk /var/lib/directpv/mnt/7f010ba0-6e36-4bac-8734-8101f5fc86cd
nvme3n1      259:3    0   30G  0 disk /var/lib/directpv/mnt/ff9fbf17-a2ca-475a-83c3-88b9c4c77140
nvme2n1      259:4    0   30G  0 disk /var/lib/directpv/mnt/d29e80c7-dc3b-4a48-9a81-82352886d63f
(⎈|default:N/A) root@k3s-s:~# tree -a /var/lib/directpv/mnt
/var/lib/directpv/mnt
├── 7f010ba0-6e36-4bac-8734-8101f5fc86cd
│   ├── .FSUUID.7f010ba0-6e36-4bac-8734-8101f5fc86cd -&amp;gt; .
│   └── .directpv
│       └── meta.info
├── d29e80c7-dc3b-4a48-9a81-82352886d63f
│   ├── .FSUUID.d29e80c7-dc3b-4a48-9a81-82352886d63f -&amp;gt; .
│   └── .directpv
│       └── meta.info
├── ff9fbf17-a2ca-475a-83c3-88b9c4c77140
│   ├── .FSUUID.ff9fbf17-a2ca-475a-83c3-88b9c4c77140 -&amp;gt; .
│   └── .directpv
│       └── meta.info
└── ffd730c8-c056-454a-830f-208b9529104c
    ├── .FSUUID.ffd730c8-c056-454a-830f-208b9529104c -&amp;gt; .
    └── .directpv
        └── meta.info

13 directories, 4 files&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;So far we have looked at DirectPV serving local drives as a CSI driver, without MinIO installed.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;4. MinIO Hands-on in a DirectPV Environment&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now let's install MinIO and continue the hands-on.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Add the helm repo
helm repo add minio-operator https://operator.min.io

# https://github.com/minio/operator/blob/master/helm/operator/values.yaml
cat &amp;lt;&amp;lt; EOF &amp;gt; minio-operator-values.yaml
operator:  
  env:
  - name: MINIO_OPERATOR_RUNTIME
    value: &quot;Rancher&quot;
  replicaCount: 1
EOF
helm install --namespace minio-operator --create-namespace minio-operator minio-operator/operator --values minio-operator-values.yaml

# Verify: note that the operator currently ships no management web UI
kubectl get all -n minio-operator
kubectl get pod,svc,ep -n minio-operator
kubectl get crd
kubectl exec -it -n minio-operator deploy/minio-operator -- env | grep MINIO

(⎈|default:N/A) root@k3s-s:~# kubectl get all -n minio-operator
NAME                                  READY   STATUS    RESTARTS   AGE
pod/minio-operator-75946dc4db-pk9qh   1/1     Running   0          25s

NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/operator   ClusterIP   10.43.202.129   &amp;lt;none&amp;gt;        4221/TCP   25s
service/sts        ClusterIP   10.43.130.196   &amp;lt;none&amp;gt;        4223/TCP   25s

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/minio-operator   1/1     1            1           25s

NAME                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/minio-operator-75946dc4db   1         1         1       25s

(⎈|default:N/A) root@k3s-s:~# kubectl get pod,svc,ep -n minio-operator
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME                                  READY   STATUS    RESTARTS   AGE
pod/minio-operator-75946dc4db-pk9qh   1/1     Running   0          46s

NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/operator   ClusterIP   10.43.202.129   &amp;lt;none&amp;gt;        4221/TCP   46s
service/sts        ClusterIP   10.43.130.196   &amp;lt;none&amp;gt;        4223/TCP   46s

NAME                 ENDPOINTS         AGE
endpoints/operator   10.42.0.10:4221   46s
endpoints/sts        10.42.0.10:4223   46s

(⎈|default:N/A) root@k3s-s:~# kubectl get crd
NAME                                   CREATED AT
addons.k3s.cattle.io                   2025-09-17T12:17:23Z
directpvdrives.directpv.min.io         2025-09-17T13:15:32Z
directpvinitrequests.directpv.min.io   2025-09-17T13:15:32Z
directpvnodes.directpv.min.io          2025-09-17T13:15:32Z
directpvvolumes.directpv.min.io        2025-09-17T13:15:32Z
etcdsnapshotfiles.k3s.cattle.io        2025-09-17T12:17:23Z
helmchartconfigs.helm.cattle.io        2025-09-17T12:17:23Z
helmcharts.helm.cattle.io              2025-09-17T12:17:23Z
policybindings.sts.min.io              2025-09-17T14:51:51Z
tenants.minio.min.io                   2025-09-17T14:51:51Z

(⎈|default:N/A) root@k3s-s:~# kubectl exec -it -n minio-operator deploy/minio-operator -- env | grep MINIO
MINIO_OPERATOR_RUNTIME=Rancher&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;With the MinIO Operator installed, let's move on to deploying a tenant.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# If using Amazon Elastic Block Store (EBS) CSI driver : Please make sure to set xfs for &quot;csi.storage.k8s.io/fstype&quot; parameter under StorageClass.parameters.
kubectl get sc directpv-min-io -o yaml | grep -i fstype
  csi.storage.k8s.io/fstype: xfs

# tenant values : https://github.com/minio/operator/blob/master/helm/tenant/values.yaml
cat &amp;lt;&amp;lt; EOF &amp;gt; minio-tenant-1-values.yaml
tenant:
  name: tenant1

  configSecret:
    name: tenant1-env-configuration
    accessKey: minio
    secretKey: minio123

  pools:
    - servers: 1
      name: pool-0
      volumesPerServer: 4
      size: 10Gi 
      storageClassName: directpv-min-io # explicitly use the directpv StorageClass
  env:
    - name: MINIO_STORAGE_CLASS_STANDARD
      value: &quot;EC:1&quot;

  metrics:
    enabled: true
    port: 9000
    protocol: http
EOF

helm install --namespace tenant1 --create-namespace --values minio-tenant-1-values.yaml tenant1 minio-operator/tenant \
 &amp;amp;&amp;amp; kubectl get tenants -A -w

(⎈|default:N/A) root@k3s-s:~# helm install --namespace tenant1 --create-namespace --values minio-tenant-1-values.yaml tenant1 minio-operator/tenant \
 &amp;amp;&amp;amp; kubectl get tenants -A -w
NAME: tenant1
LAST DEPLOYED: Wed Sep 17 23:56:25 2025
NAMESPACE: tenant1
STATUS: deployed
REVISION: 1
TEST SUITE: None
NAMESPACE   NAME      STATE   HEALTH   AGE
tenant1     tenant1                    1s
tenant1     tenant1                    5s
tenant1     tenant1                    5s
tenant1     tenant1   Waiting for MinIO TLS Certificate            5s
tenant1     tenant1   Provisioning MinIO Cluster IP Service            15s
tenant1     tenant1   Provisioning Console Service                     16s
tenant1     tenant1   Provisioning MinIO Headless Service              16s
tenant1     tenant1   Provisioning MinIO Headless Service              16s
tenant1     tenant1   Provisioning MinIO Statefulset                   16s
tenant1     tenant1   Provisioning MinIO Statefulset                   17s
tenant1     tenant1   Provisioning MinIO Statefulset                   17s
tenant1     tenant1   Waiting for Tenant to be healthy                 17s
tenant1     tenant1   Waiting for Tenant to be healthy        green    34s
tenant1     tenant1   Waiting for Tenant to be healthy        green    34s
tenant1     tenant1   Initialized                             green    36s

(⎈|default:N/A) root@k3s-s:~# kubectl describe tenants -n tenant1
Name:         tenant1
Namespace:    tenant1
Labels:       app=minio
              app.kubernetes.io/managed-by=Helm
Annotations:  meta.helm.sh/release-name: tenant1
              meta.helm.sh/release-namespace: tenant1
              prometheus.io/path: /minio/v2/metrics/cluster
              prometheus.io/port: 9000
              prometheus.io/scheme: http
              prometheus.io/scrape: true
API Version:  minio.min.io/v2
Kind:         Tenant
Metadata:
  Creation Timestamp:  2025-09-17T14:56:25Z
  Generation:          1
  Resource Version:    6390
  UID:                 12a0ce88-64ad-4212-bcfb-63ca4269b203
Spec:
  Configuration:
    Name:  tenant1-env-configuration
  Env:
    Name:   MINIO_STORAGE_CLASS_STANDARD
    Value:  EC:1
  Features:
    Bucket DNS:           false
    Enable SFTP:          false
  Image:                  quay.io/minio/minio:RELEASE.2025-04-08T15-41-24Z
  Image Pull Policy:      IfNotPresent
  Mount Path:             /export
  Pod Management Policy:  Parallel
  Pools:
    Name:     pool-0
    Servers:  1
    Volume Claim Template:
      Metadata:
        Name:  data
      Spec:
        Access Modes:
          ReadWriteOnce
        Resources:
          Requests:
            Storage:         10Gi
        Storage Class Name:  directpv-min-io
    Volumes Per Server:      4
  Pools Metadata:
    Annotations:
    Labels:
  Prometheus Operator:  false
  Request Auto Cert:    true
  Sub Path:             /data
Status:
  Available Replicas:  1
  Certificates:
    Auto Cert Enabled:  true
    Custom Certificates:
  Current State:  Initialized
  Drives Online:  4
  Health Status:  green
  Pools:
    Legacy Security Context:  false
    Ss Name:                  tenant1-pool-0
    State:                    PoolInitialized
  Revision:                   0
  Sync Version:               v6.0.0
  Usage:
    Capacity:      32212193280
    Raw Capacity:  42949591040
    Raw Usage:     81920
    Usage:         61440
  Write Quorum:    3
Events:
  Type     Reason                 Age                  From            Message
  ----     ------                 ----                 ----            -------
  Normal   CSRCreated             2m20s                minio-operator  MinIO CSR Created
  Normal   SvcCreated             2m9s                 minio-operator  MinIO Service Created
  Normal   SvcCreated             2m9s                 minio-operator  Console Service Created
  Normal   SvcCreated             2m9s                 minio-operator  Headless Service created
  Normal   PoolCreated            2m9s                 minio-operator  Tenant pool pool-0 created
  Normal   Updated                2m4s                 minio-operator  Headless Service Updated
  Warning  WaitingMinIOIsHealthy  114s (x4 over 2m8s)  minio-operator  Waiting for MinIO to be ready&lt;/code&gt;&lt;/pre&gt;
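The `Usage` block in the tenant status reflects the `EC:1` setting: with one parity shard across four drives, usable capacity is roughly three quarters of raw capacity, and the write quorum is 3. A back-of-the-envelope sketch (my approximation, not MinIO's exact accounting):

```python
# Rough erasure-coding arithmetic behind the tenant status fields above.
def usable_capacity(raw_bytes: int, drives: int, parity: int) -> int:
    """Usable capacity when `parity` of `drives` shards hold parity data."""
    return raw_bytes * (drives - parity) // drives

def write_quorum(drives: int, parity: int) -> int:
    """Writes need the data shards; when parity equals half the drives,
    one extra drive is required to break ties."""
    return drives - parity if parity < drives // 2 else drives // 2 + 1

raw = 4 * 10 * 1024**3             # 4 volumes x 10Gi
print(usable_capacity(raw, 4, 1))  # 32212254720 (~30 GiB, close to status Capacity)
print(write_quorum(4, 1))          # 3, matching "Write Quorum: 3"
```

The reported `Capacity: 32212193280` is slightly below the ideal 3/4 of raw because of filesystem overhead on each drive.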
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the tenant above we requested four volumes per pool (volumesPerServer: 4) with a storage size of 10Gi (size: 10Gi). Let's check how the cluster state changed.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# Verify
lsblk
kubectl directpv info
kubectl directpv list drives
kubectl directpv list volumes

(⎈|default:N/A) root@k3s-s:~# lsblk
NAME         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
loop0          7:0    0 73.9M  1 loop /snap/core22/2111
loop1          7:1    0 27.6M  1 loop /snap/amazon-ssm-agent/11797
loop2          7:2    0 50.8M  1 loop /snap/snapd/25202
loop3          7:3    0   16M  0 loop
nvme1n1      259:0    0   30G  0 disk /var/lib/kubelet/pods/72ee3a1a-4a61-44f0-94b6-a7cc861eb829/volumes/kubernetes.io~csi/pvc-9eba11d7-3331-423c-91d7-2a8f45a08ce3/mount
                                      /var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/f94049e38beb31a7b9cf88a9d48e54c8af90509d141e70ff851eb8cdf87b09f2/globalmount
                                      /var/lib/directpv/mnt/ffd730c8-c056-454a-830f-208b9529104c
nvme0n1      259:1    0   30G  0 disk
├─nvme0n1p1  259:5    0   29G  0 part /
├─nvme0n1p14 259:6    0    4M  0 part
├─nvme0n1p15 259:7    0  106M  0 part /boot/efi
└─nvme0n1p16 259:8    0  913M  0 part /boot
nvme4n1      259:2    0   30G  0 disk /var/lib/kubelet/pods/72ee3a1a-4a61-44f0-94b6-a7cc861eb829/volumes/kubernetes.io~csi/pvc-88ff8de1-0702-4783-9a24-f63af88dda30/mount
                                      /var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/20cd114efbb71cad4c72f66f980b71335e29a50b57ad159a6c18566c3d01eaf9/globalmount
                                      /var/lib/directpv/mnt/7f010ba0-6e36-4bac-8734-8101f5fc86cd
nvme3n1      259:3    0   30G  0 disk /var/lib/kubelet/pods/72ee3a1a-4a61-44f0-94b6-a7cc861eb829/volumes/kubernetes.io~csi/pvc-08bdde0c-b472-4dfe-8b95-09e59e6aa4d8/mount
                                      /var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/3f4d3fabd87e625fc0d887fdf2f9c90a2743b72354a7de4a6ab53ac502d291c6/globalmount
                                      /var/lib/directpv/mnt/ff9fbf17-a2ca-475a-83c3-88b9c4c77140
nvme2n1      259:4    0   30G  0 disk /var/lib/kubelet/pods/72ee3a1a-4a61-44f0-94b6-a7cc861eb829/volumes/kubernetes.io~csi/pvc-e846556e-da9f-4670-8c69-7479a723af37/mount
                                      /var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/28f2fa689cc75aff33f7429c65d5912fb23dfa3394a23dbc6ff22fbaacc112e4/globalmount
                                      /var/lib/directpv/mnt/d29e80c7-dc3b-4a48-9a81-82352886d63f
(⎈|default:N/A) root@k3s-s:~# kubectl directpv info
┌─────────┬──────────┬───────────┬─────────┬────────┐
│ NODE    │ CAPACITY │ ALLOCATED │ VOLUMES │ DRIVES │
├─────────┼──────────┼───────────┼─────────┼────────┤
│ &amp;bull; k3s-s │ 120 GiB  │ 40 GiB    │ 4       │ 4      │
└─────────┴──────────┴───────────┴─────────┴────────┘

40 GiB/120 GiB used, 4 volumes, 4 drives
(⎈|default:N/A) root@k3s-s:~# kubectl directpv list drives
┌───────┬─────────┬────────────────────────────┬────────┬────────┬─────────┬────────┐
│ NODE  │ NAME    │ MAKE                       │ SIZE   │ FREE   │ VOLUMES │ STATUS │
├───────┼─────────┼────────────────────────────┼────────┼────────┼─────────┼────────┤
│ k3s-s │ nvme1n1 │ Amazon Elastic Block Store │ 30 GiB │ 20 GiB │ 1       │ Ready  │
│ k3s-s │ nvme2n1 │ Amazon Elastic Block Store │ 30 GiB │ 20 GiB │ 1       │ Ready  │
│ k3s-s │ nvme3n1 │ Amazon Elastic Block Store │ 30 GiB │ 20 GiB │ 1       │ Ready  │
│ k3s-s │ nvme4n1 │ Amazon Elastic Block Store │ 30 GiB │ 20 GiB │ 1       │ Ready  │
└───────┴─────────┴────────────────────────────┴────────┴────────┴─────────┴────────┘
(⎈|default:N/A) root@k3s-s:~# kubectl directpv list volumes
┌──────────────────────────────────────────┬──────────┬───────┬─────────┬──────────────────┬──────────────┬─────────┐
│ VOLUME                                   │ CAPACITY │ NODE  │ DRIVE   │ PODNAME          │ PODNAMESPACE │ STATUS  │
├──────────────────────────────────────────┼──────────┼───────┼─────────┼──────────────────┼──────────────┼─────────┤
│ pvc-9eba11d7-3331-423c-91d7-2a8f45a08ce3 │ 10 GiB   │ k3s-s │ nvme1n1 │ tenant1-pool-0-0 │ tenant1      │ Bounded │
│ pvc-e846556e-da9f-4670-8c69-7479a723af37 │ 10 GiB   │ k3s-s │ nvme2n1 │ tenant1-pool-0-0 │ tenant1      │ Bounded │
│ pvc-08bdde0c-b472-4dfe-8b95-09e59e6aa4d8 │ 10 GiB   │ k3s-s │ nvme3n1 │ tenant1-pool-0-0 │ tenant1      │ Bounded │
│ pvc-88ff8de1-0702-4783-9a24-f63af88dda30 │ 10 GiB   │ k3s-s │ nvme4n1 │ tenant1-pool-0-0 │ tenant1      │ Bounded │
└──────────────────────────────────────────┴──────────┴───────┴─────────┴──────────────────┴──────────────┴─────────┘&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As specified, four volumes of 10Gi each were created. Let's look at a few more details.&lt;/p&gt;
&lt;pre class=&quot;subunit&quot;&gt;&lt;code&gt;# Verify
kubectl get directpvvolumes.directpv.min.io
kubectl get directpvvolumes.directpv.min.io -o yaml | yq
kubectl describe directpvvolumes
tree -ah /var/lib/kubelet/plugins
tree -ah /var/lib/directpv/mnt
cat /var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/*/vol_data.json

(⎈|default:N/A) root@k3s-s:~# kubectl get directpvvolumes.directpv.min.io
NAME                                       AGE
pvc-08bdde0c-b472-4dfe-8b95-09e59e6aa4d8   26m
pvc-88ff8de1-0702-4783-9a24-f63af88dda30   26m
pvc-9eba11d7-3331-423c-91d7-2a8f45a08ce3   26m
pvc-e846556e-da9f-4670-8c69-7479a723af37   26m

(⎈|default:N/A) root@k3s-s:~# tree -h /var/lib/kubelet/plugins
[4.0K]  /var/lib/kubelet/plugins
├── [4.0K]  controller-controller
│   └── [   0]  csi.sock
├── [4.0K]  directpv-min-io
│   └── [   0]  csi.sock
└── [4.0K]  kubernetes.io
    └── [4.0K]  csi
        └── [4.0K]  directpv-min-io
            ├── [4.0K]  20cd114efbb71cad4c72f66f980b71335e29a50b57ad159a6c18566c3d01eaf9
            │   ├── [  18]  globalmount
            │   │   └── [  24]  data
            │   └── [  91]  vol_data.json
            ├── [4.0K]  28f2fa689cc75aff33f7429c65d5912fb23dfa3394a23dbc6ff22fbaacc112e4
            │   ├── [  18]  globalmount
            │   │   └── [  24]  data
            │   └── [  91]  vol_data.json
            ├── [4.0K]  3f4d3fabd87e625fc0d887fdf2f9c90a2743b72354a7de4a6ab53ac502d291c6
            │   ├── [  18]  globalmount
            │   │   └── [  24]  data
            │   └── [  91]  vol_data.json
            └── [4.0K]  f94049e38beb31a7b9cf88a9d48e54c8af90509d141e70ff851eb8cdf87b09f2
                ├── [  18]  globalmount
                │   └── [  24]  data
                └── [  91]  vol_data.json

18 directories, 6 files

(⎈|default:N/A) root@k3s-s:~# tree -h /var/lib/directpv/mnt
[4.0K]  /var/lib/directpv/mnt
├── [ 123]  7f010ba0-6e36-4bac-8734-8101f5fc86cd
│   └── [  18]  pvc-88ff8de1-0702-4783-9a24-f63af88dda30
│       └── [  24]  data
├── [ 123]  d29e80c7-dc3b-4a48-9a81-82352886d63f
│   └── [  18]  pvc-e846556e-da9f-4670-8c69-7479a723af37
│       └── [  24]  data
├── [ 123]  ff9fbf17-a2ca-475a-83c3-88b9c4c77140
│   └── [  18]  pvc-08bdde0c-b472-4dfe-8b95-09e59e6aa4d8
│       └── [  24]  data
└── [ 123]  ffd730c8-c056-454a-830f-208b9529104c
    └── [  18]  pvc-9eba11d7-3331-423c-91d7-2a8f45a08ce3
        └── [  24]  data

13 directories, 0 files

(⎈|default:N/A) root@k3s-s:~# cat /var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/*/vol_data.json
{&quot;driverName&quot;:&quot;directpv-min-io&quot;,&quot;volumeHandle&quot;:&quot;pvc-88ff8de1-0702-4783-9a24-f63af88dda30&quot;}
{&quot;driverName&quot;:&quot;directpv-min-io&quot;,&quot;volumeHandle&quot;:&quot;pvc-e846556e-da9f-4670-8c69-7479a723af37&quot;}
{&quot;driverName&quot;:&quot;directpv-min-io&quot;,&quot;volumeHandle&quot;:&quot;pvc-08bdde0c-b472-4dfe-8b95-09e59e6aa4d8&quot;}
{&quot;driverName&quot;:&quot;directpv-min-io&quot;,&quot;volumeHandle&quot;:&quot;pvc-9eba11d7-3331-423c-91d7-2a8f45a08ce3&quot;}

# PVC details
kubectl get pvc -n tenant1
kubectl get pvc -n tenant1 -o yaml | yq
kubectl describe pvc -n tenant1

(⎈|default:N/A) root@k3s-s:~# kubectl get pvc -n tenant1
NAME                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      VOLUMEATTRIBUTESCLASS   AGE
data0-tenant1-pool-0-0   Bound    pvc-e846556e-da9f-4670-8c69-7479a723af37   10Gi       RWO            directpv-min-io   &amp;lt;unset&amp;gt;                 28m
data1-tenant1-pool-0-0   Bound    pvc-08bdde0c-b472-4dfe-8b95-09e59e6aa4d8   10Gi       RWO            directpv-min-io   &amp;lt;unset&amp;gt;                 28m
data2-tenant1-pool-0-0   Bound    pvc-9eba11d7-3331-423c-91d7-2a8f45a08ce3   10Gi       RWO            directpv-min-io   &amp;lt;unset&amp;gt;                 28m
data3-tenant1-pool-0-0   Bound    pvc-88ff8de1-0702-4783-9a24-f63af88dda30   10Gi       RWO            directpv-min-io   &amp;lt;unset&amp;gt;                 28m&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As we saw in the previous post, the MinIO object storage is only actually installed once a tenant is created.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's create a tenant and examine the state of the deployed MinIO.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# Inspect the tenant
kubectl get sts,pod,svc,ep,pvc,secret -n tenant1
kubectl get pod -n tenant1 -l v1.min.io/pool=pool-0 -owide
kubectl describe pod -n tenant1 -l v1.min.io/pool=pool-0
kubectl logs -n tenant1 -l v1.min.io/pool=pool-0
kubectl exec -it -n tenant1 sts/tenant1-pool-0 -c minio -- id
kubectl exec -it -n tenant1 sts/tenant1-pool-0 -c minio -- env
kubectl exec -it -n tenant1 sts/tenant1-pool-0 -c minio -- cat /tmp/minio/config.env
kubectl get secret -n tenant1 tenant1-env-configuration -o jsonpath='{.data.config\.env}' | base64 -d ; echo
kubectl get secret -n tenant1 tenant1-tls -o jsonpath='{.data.public\.crt}' | base64 -d
kubectl get secret -n tenant1 tenant1-tls -o jsonpath='{.data.public\.crt}' | base64 -d | openssl x509 -noout -text


(⎈|default:N/A) root@k3s-s:~# kubectl get sts,pod,svc,ep,pvc,secret -n tenant1
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME                              READY   AGE
statefulset.apps/tenant1-pool-0   1/1     28m

NAME                   READY   STATUS    RESTARTS   AGE
pod/tenant1-pool-0-0   2/2     Running   0          28m

NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/minio             ClusterIP   10.43.137.186   &amp;lt;none&amp;gt;        443/TCP    28m
service/tenant1-console   ClusterIP   10.43.8.75      &amp;lt;none&amp;gt;        9443/TCP   28m
service/tenant1-hl        ClusterIP   None            &amp;lt;none&amp;gt;        9000/TCP   28m

NAME                        ENDPOINTS         AGE
endpoints/minio             10.42.0.11:9000   28m
endpoints/tenant1-console   10.42.0.11:9443   28m
endpoints/tenant1-hl        10.42.0.11:9000   28m

NAME                                           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      VOLUMEATTRIBUTESCLASS   AGE
persistentvolumeclaim/data0-tenant1-pool-0-0   Bound    pvc-e846556e-da9f-4670-8c69-7479a723af37   10Gi       RWO            directpv-min-io   &amp;lt;unset&amp;gt;                 28m
persistentvolumeclaim/data1-tenant1-pool-0-0   Bound    pvc-08bdde0c-b472-4dfe-8b95-09e59e6aa4d8   10Gi       RWO            directpv-min-io   &amp;lt;unset&amp;gt;                 28m
persistentvolumeclaim/data2-tenant1-pool-0-0   Bound    pvc-9eba11d7-3331-423c-91d7-2a8f45a08ce3   10Gi       RWO            directpv-min-io   &amp;lt;unset&amp;gt;                 28m
persistentvolumeclaim/data3-tenant1-pool-0-0   Bound    pvc-88ff8de1-0702-4783-9a24-f63af88dda30   10Gi       RWO            directpv-min-io   &amp;lt;unset&amp;gt;                 28m

NAME                                   TYPE                 DATA   AGE
secret/sh.helm.release.v1.tenant1.v1   helm.sh/release.v1   1      28m
secret/tenant1-env-configuration       Opaque               1      28m
secret/tenant1-tls                     Opaque               2      28m

(⎈|default:N/A) root@k3s-s:~# kubectl logs -n tenant1 -l v1.min.io/pool=pool-0
Defaulted container &quot;minio&quot; out of: minio, sidecar, validate-arguments (init)

API: https://minio.tenant1.svc.cluster.local
WebUI: https://10.42.0.11:9443 https://127.0.0.1:9443

Docs: https://docs.min.io
INFO:
 You are running an older version of MinIO released 5 months before the latest release
 Update: Run `mc admin update ALIAS`

(⎈|default:N/A) root@k3s-s:~# kubectl get secret -n tenant1 tenant1-env-configuration -o jsonpath='{.data.config\.env}' | base64 -d ; echo
export MINIO_ROOT_USER=&quot;minio&quot;
export MINIO_ROOT_PASSWORD=&quot;minio123&quot;

(⎈|default:N/A) root@k3s-s:~# kubectl get secret -n tenant1 tenant1-tls -o jsonpath='{.data.public\.crt}' | base64 -d | openssl x509 -noout -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            74:8c:ce:e1:d7:27:e8:c5:7d:c4:ea:78:a2:51:f3:84
        Signature Algorithm: ecdsa-with-SHA256
        Issuer: CN = k3s-server-ca@1758111433
        Validity
            Not Before: Sep 17 14:51:30 2025 GMT
            Not After : Sep 17 14:51:30 2026 GMT
        Subject: O = system:nodes, CN = system:node:*.tenant1-hl.tenant1.svc.cluster.local
        Subject Public Key Info:
            Public Key Algorithm: id-ecPublicKey
                Public-Key: (256 bit)
                pub:
                    04:7f:4f:8f:41:0e:87:a0:8b:74:a4:2e:0d:e6:5a:
                    22:ae:93:63:7b:4a:cf:69:0f:56:98:8a:80:70:38:
                    16:58:d0:a8:57:f2:da:2e:18:55:a7:ff:b6:c3:91:
                    88:e4:2c:8f:0a:ca:43:e6:01:0c:1e:b8:3b:8b:0c:
                    d5:de:48:7a:be
                ASN1 OID: prime256v1
                NIST CURVE: P-256
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage:
                TLS Web Server Authentication
            X509v3 Basic Constraints: critical
                CA:FALSE
            X509v3 Authority Key Identifier:
                60:84:45:7C:F2:CA:FC:34:C5:B2:89:5A:D8:51:2F:86:5B:24:29:78
            X509v3 Subject Alternative Name:
                DNS:tenant1-pool-0-0.tenant1-hl.tenant1.svc.cluster.local, DNS:minio.tenant1.svc.cluster.local, DNS:minio.tenant1, DNS:minio.tenant1.svc, DNS:*., DNS:*.tenant1.svc.cluster.local
    Signature Algorithm: ecdsa-with-SHA256
    Signature Value:
        30:44:02:20:74:74:3e:91:03:43:f7:f0:1d:90:75:bc:65:3d:
        c0:8a:3a:a6:6a:57:bb:10:8d:82:f5:7a:1e:2a:50:76:68:b9:
        02:20:7d:76:5c:5c:ef:bc:1c:c2:09:89:a6:f5:55:72:87:3f:
        55:dd:89:5f:8c:4c:bf:a1:f1:08:93:f0:3c:dc:4c:71&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's access the MinIO web UI.&lt;/p&gt;
&lt;pre class=&quot;jboss-cli&quot;&gt;&lt;code&gt;# Expose the console via NodePort
kubectl patch svc -n tenant1 tenant1-console -p '{&quot;spec&quot;: {&quot;type&quot;: &quot;NodePort&quot;, &quot;ports&quot;: [{&quot;port&quot;: 9443, &quot;targetPort&quot;: 9443, &quot;nodePort&quot;: 30001}]}}'

# Default credentials (minio / minio123)
echo &quot;https://$(curl -s ipinfo.io/ip):30001&quot;

(⎈|default:N/A) root@k3s-s:~# echo &quot;https://$(curl -s ipinfo.io/ip):30001&quot;
https://15.164.244.91:30001

# Also expose the MinIO API via NodePort
kubectl patch svc -n tenant1 minio -p '{&quot;spec&quot;: {&quot;type&quot;: &quot;NodePort&quot;, &quot;ports&quot;: [{&quot;port&quot;: 443, &quot;targetPort&quot;: 9000, &quot;nodePort&quot;: 30002}]}}'

# mc alias
mc alias set k8s-tenant1 https://127.0.0.1:30002 minio minio123 --insecure
mc alias list
mc admin info k8s-tenant1 --insecure

(⎈|default:N/A) root@k3s-s:~# mc admin info k8s-tenant1 --insecure
●  127.0.0.1:30002
   Uptime: 36 minutes
   Version: 2025-04-08T15:41:24Z
   Network: 1/1 OK
   Drives: 4/4 OK
   Pool: 1

┌──────┬──────────────────────┬─────────────────────┬──────────────┐
│ Pool │ Drives Usage         │ Erasure stripe size │ Erasure sets │
│ 1st  │ 0.0% (total: 30 GiB) │ 4                   │ 1            │
└──────┴──────────────────────┴─────────────────────┴──────────────┘

4 drives online, 0 drives offline, EC:1

# Create a bucket
mc mb k8s-tenant1/mybucket --insecure
mc ls k8s-tenant1 --insecure

(⎈|default:N/A) root@k3s-s:~# mc mb k8s-tenant1/mybucket --insecure
Bucket created successfully `k8s-tenant1/mybucket`.
(⎈|default:N/A) root@k3s-s:~# mc ls k8s-tenant1 --insecure
[2025-09-18 00:33:44 KST]     0B mybucket/&lt;/code&gt;&lt;/pre&gt;
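&lt;p data-ke-size=&quot;size16&quot;&gt;The &lt;code&gt;mc admin info&lt;/code&gt; output above reports one erasure set with a stripe size of 4 and EC:1 parity, and a usable total of 30 GiB from four 10 GiB drives. As a rough sketch (the helper below is illustrative only and ignores MinIO's internal reservations), usable capacity can be estimated from data shards per stripe:&lt;/p&gt;

```python
# Rough estimate of usable capacity for one MinIO erasure set.
# Illustrative only: a real MinIO deployment reserves some space internally.
def usable_capacity_gib(drive_size_gib, drive_count, parity):
    """Usable GiB = data shards x drive size, where data shards = drives - parity."""
    data_shards = drive_count - parity
    return drive_size_gib * data_shards

# The tenant above: 4 drives x 10 GiB each, EC:1 parity per stripe
print(usable_capacity_gib(10, 4, 1))  # prints 30, matching the 30 GiB total reported
```

&lt;p data-ke-size=&quot;size16&quot;&gt;With EC:1, one shard per 4-drive stripe holds parity, so a quarter of the raw 40 GiB goes to redundancy.&lt;/p&gt;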
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;You can also verify the created bucket after logging in to the web UI.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;2068&quot; data-origin-height=&quot;1383&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bLM68c/btsQIXLbH40/jAPPYtr84nzHD8BGiwki3k/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bLM68c/btsQIXLbH40/jAPPYtr84nzHD8BGiwki3k/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bLM68c/btsQIXLbH40/jAPPYtr84nzHD8BGiwki3k/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbLM68c%2FbtsQIXLbH40%2FjAPPYtr84nzHD8BGiwki3k%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2068&quot; height=&quot;1383&quot; data-origin-width=&quot;2068&quot; data-origin-height=&quot;1383&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Refreshing the page shows that mybucket has been created.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;2044&quot; data-origin-height=&quot;1006&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/Va4cD/btsQJDeC75C/ZrZTkWiUcRWIGPKgMz5kXK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/Va4cD/btsQJDeC75C/ZrZTkWiUcRWIGPKgMz5kXK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/Va4cD/btsQJDeC75C/ZrZTkWiUcRWIGPKgMz5kXK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FVa4cD%2FbtsQJDeC75C%2FZrZTkWiUcRWIGPKgMz5kXK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2044&quot; height=&quot;1006&quot; data-origin-width=&quot;2044&quot; data-origin-height=&quot;1006&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Next, let's additionally test volume expansion with DirectPV.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# Current state (four 10Gi volumes in use)
kubectl get pvc -n tenant1
kubectl directpv list drives
kubectl directpv info

(⎈|default:N/A) root@k3s-s:~# kubectl get pvc -n tenant1
NAME                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      VOLUMEATTRIBUTESCLASS   AGE
data0-tenant1-pool-0-0   Bound    pvc-e846556e-da9f-4670-8c69-7479a723af37   10Gi       RWO            directpv-min-io   &amp;lt;unset&amp;gt;                 53m
data1-tenant1-pool-0-0   Bound    pvc-08bdde0c-b472-4dfe-8b95-09e59e6aa4d8   10Gi       RWO            directpv-min-io   &amp;lt;unset&amp;gt;                 53m
data2-tenant1-pool-0-0   Bound    pvc-9eba11d7-3331-423c-91d7-2a8f45a08ce3   10Gi       RWO            directpv-min-io   &amp;lt;unset&amp;gt;                 53m
data3-tenant1-pool-0-0   Bound    pvc-88ff8de1-0702-4783-9a24-f63af88dda30   10Gi       RWO            directpv-min-io   &amp;lt;unset&amp;gt;                 53m
(⎈|default:N/A) root@k3s-s:~# kubectl directpv list drives
┌───────┬─────────┬────────────────────────────┬────────┬────────┬─────────┬────────┐
│ NODE  │ NAME    │ MAKE                       │ SIZE   │ FREE   │ VOLUMES │ STATUS │
├───────┼─────────┼────────────────────────────┼────────┼────────┼─────────┼────────┤
│ k3s-s │ nvme1n1 │ Amazon Elastic Block Store │ 30 GiB │ 20 GiB │ 1       │ Ready  │
│ k3s-s │ nvme2n1 │ Amazon Elastic Block Store │ 30 GiB │ 20 GiB │ 1       │ Ready  │
│ k3s-s │ nvme3n1 │ Amazon Elastic Block Store │ 30 GiB │ 20 GiB │ 1       │ Ready  │
│ k3s-s │ nvme4n1 │ Amazon Elastic Block Store │ 30 GiB │ 20 GiB │ 1       │ Ready  │
└───────┴─────────┴────────────────────────────┴────────┴────────┴─────────┴────────┘
(⎈|default:N/A) root@k3s-s:~# kubectl directpv info
┌─────────┬──────────┬───────────┬─────────┬────────┐
│ NODE    │ CAPACITY │ ALLOCATED │ VOLUMES │ DRIVES │
├─────────┼──────────┼───────────┼─────────┼────────┤
│ &amp;bull; k3s-s │ 120 GiB  │ 40 GiB    │ 4       │ 4      │
└─────────┴──────────┴───────────┴─────────┴────────┘

40 GiB/120 GiB used, 4 volumes, 4 drives


# Patch the PVCs to request more capacity
kubectl patch pvc -n tenant1 data0-tenant1-pool-0-0 -p '{&quot;spec&quot;:{&quot;resources&quot;:{&quot;requests&quot;:{&quot;storage&quot;:&quot;20Gi&quot;}}}}'
kubectl patch pvc -n tenant1 data1-tenant1-pool-0-0 -p '{&quot;spec&quot;:{&quot;resources&quot;:{&quot;requests&quot;:{&quot;storage&quot;:&quot;20Gi&quot;}}}}'
kubectl patch pvc -n tenant1 data2-tenant1-pool-0-0 -p '{&quot;spec&quot;:{&quot;resources&quot;:{&quot;requests&quot;:{&quot;storage&quot;:&quot;20Gi&quot;}}}}'
kubectl patch pvc -n tenant1 data3-tenant1-pool-0-0 -p '{&quot;spec&quot;:{&quot;resources&quot;:{&quot;requests&quot;:{&quot;storage&quot;:&quot;20Gi&quot;}}}}'


# Verify the result
kubectl get pvc -n tenant1
kubectl directpv list drives
kubectl directpv info

(⎈|default:N/A) root@k3s-s:~# kubectl get pvc -n tenant1
NAME                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      VOLUMEATTRIBUTESCLASS   AGE
data0-tenant1-pool-0-0   Bound    pvc-e846556e-da9f-4670-8c69-7479a723af37   20Gi       RWO            directpv-min-io   &amp;lt;unset&amp;gt;                 55m
data1-tenant1-pool-0-0   Bound    pvc-08bdde0c-b472-4dfe-8b95-09e59e6aa4d8   20Gi       RWO            directpv-min-io   &amp;lt;unset&amp;gt;                 55m
data2-tenant1-pool-0-0   Bound    pvc-9eba11d7-3331-423c-91d7-2a8f45a08ce3   20Gi       RWO            directpv-min-io   &amp;lt;unset&amp;gt;                 55m
data3-tenant1-pool-0-0   Bound    pvc-88ff8de1-0702-4783-9a24-f63af88dda30   20Gi       RWO            directpv-min-io   &amp;lt;unset&amp;gt;                 55m
(⎈|default:N/A) root@k3s-s:~# kubectl directpv list drives
┌───────┬─────────┬────────────────────────────┬────────┬────────┬─────────┬────────┐
│ NODE  │ NAME    │ MAKE                       │ SIZE   │ FREE   │ VOLUMES │ STATUS │
├───────┼─────────┼────────────────────────────┼────────┼────────┼─────────┼────────┤
│ k3s-s │ nvme1n1 │ Amazon Elastic Block Store │ 30 GiB │ 10 GiB │ 1       │ Ready  │
│ k3s-s │ nvme2n1 │ Amazon Elastic Block Store │ 30 GiB │ 10 GiB │ 1       │ Ready  │
│ k3s-s │ nvme3n1 │ Amazon Elastic Block Store │ 30 GiB │ 10 GiB │ 1       │ Ready  │
│ k3s-s │ nvme4n1 │ Amazon Elastic Block Store │ 30 GiB │ 10 GiB │ 1       │ Ready  │
└───────┴─────────┴────────────────────────────┴────────┴────────┴─────────┴────────┘
(⎈|default:N/A) root@k3s-s:~# kubectl directpv info
┌─────────┬──────────┬───────────┬─────────┬────────┐
│ NODE    │ CAPACITY │ ALLOCATED │ VOLUMES │ DRIVES │
├─────────┼──────────┼───────────┼─────────┼────────┤
│ &amp;bull; k3s-s │ 120 GiB  │ 80 GiB    │ 4       │ 4      │
└─────────┴──────────┴───────────┴─────────┴────────┘

80 GiB/120 GiB used, 4 volumes, 4 drives


# Inside the pod below, the expanded 20G volume size is reflected automatically after a short while
kubectl exec -it -n tenant1 tenant1-pool-0-0 -c minio -- sh -c 'df -hT --type xfs'

(⎈|default:N/A) root@k3s-s:~# kubectl exec -it -n tenant1 tenant1-pool-0-0 -c minio -- sh -c 'df -hT --type xfs'
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/nvme2n1   xfs    20G   60K   20G   1% /export0
/dev/nvme3n1   xfs    20G   60K   20G   1% /export1
/dev/nvme1n1   xfs    20G   60K   20G   1% /export2
/dev/nvme4n1   xfs    20G   60K   20G   1% /export3&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As long as the backing drives have free capacity, patching the PVCs expands the volumes as expected.&lt;/p&gt;
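&lt;p data-ke-size=&quot;size16&quot;&gt;Kubernetes only admits an online PVC resize like this when the StorageClass allows it. Below is an abbreviated sketch of the relevant fields on the &lt;code&gt;directpv-min-io&lt;/code&gt; StorageClass (from memory, not the full manifest; verify with &lt;code&gt;kubectl get sc directpv-min-io -o yaml&lt;/code&gt;):&lt;/p&gt;

```yaml
# Abbreviated sketch of the directpv-min-io StorageClass fields that
# matter for expansion; check the live object in your cluster.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: directpv-min-io
provisioner: directpv-min-io
allowVolumeExpansion: true            # required for the PVC patch to be admitted
volumeBindingMode: WaitForFirstConsumer
```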
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Finally, let's also test the case where capacity is added to the EC2 disks themselves.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;First, I resized each disk to 40Gi in the AWS console.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;2011&quot; data-origin-height=&quot;443&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/O09pZ/btsQJbCCaf9/SDuZR4AbLa4scIWlfrVWkk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/O09pZ/btsQJbCCaf9/SDuZR4AbLa4scIWlfrVWkk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/O09pZ/btsQJbCCaf9/SDuZR4AbLa4scIWlfrVWkk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FO09pZ%2FbtsQJbCCaf9%2FSDuZR4AbLa4scIWlfrVWkk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2011&quot; height=&quot;443&quot; data-origin-width=&quot;2011&quot; data-origin-height=&quot;443&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's check whether the OS and DirectPV recognize this change.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;(⎈|default:N/A) root@k3s-s:~# lsblk
NAME         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
loop0          7:0    0 73.9M  1 loop /snap/core22/2111
loop1          7:1    0 27.6M  1 loop /snap/amazon-ssm-agent/11797
loop2          7:2    0 50.8M  1 loop /snap/snapd/25202
loop3          7:3    0   16M  0 loop
nvme1n1      259:0    0   40G  0 disk /var/lib/kubelet/pods/72ee3a1a-4a61-44f0-94b6-a7cc861eb829/volumes/kubernetes.io~csi/pvc-9eba11d7-3331-423c-91d7-2a8f45a08ce3/mount
                                      /var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/f94049e38beb31a7b9cf88a9d48e54c8af90509d141e70ff851eb8cdf87b09f2/globalmount
                                      /var/lib/directpv/mnt/ffd730c8-c056-454a-830f-208b9529104c
nvme0n1      259:1    0   30G  0 disk
├─nvme0n1p1  259:5    0   29G  0 part /
├─nvme0n1p14 259:6    0    4M  0 part
├─nvme0n1p15 259:7    0  106M  0 part /boot/efi
└─nvme0n1p16 259:8    0  913M  0 part /boot
nvme4n1      259:2    0   40G  0 disk /var/lib/kubelet/pods/72ee3a1a-4a61-44f0-94b6-a7cc861eb829/volumes/kubernetes.io~csi/pvc-88ff8de1-0702-4783-9a24-f63af88dda30/mount
                                      /var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/20cd114efbb71cad4c72f66f980b71335e29a50b57ad159a6c18566c3d01eaf9/globalmount
                                      /var/lib/directpv/mnt/7f010ba0-6e36-4bac-8734-8101f5fc86cd
nvme3n1      259:3    0   40G  0 disk /var/lib/kubelet/pods/72ee3a1a-4a61-44f0-94b6-a7cc861eb829/volumes/kubernetes.io~csi/pvc-08bdde0c-b472-4dfe-8b95-09e59e6aa4d8/mount
                                      /var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/3f4d3fabd87e625fc0d887fdf2f9c90a2743b72354a7de4a6ab53ac502d291c6/globalmount
                                      /var/lib/directpv/mnt/ff9fbf17-a2ca-475a-83c3-88b9c4c77140
nvme2n1      259:4    0   40G  0 disk /var/lib/kubelet/pods/72ee3a1a-4a61-44f0-94b6-a7cc861eb829/volumes/kubernetes.io~csi/pvc-e846556e-da9f-4670-8c69-7479a723af37/mount
                                      /var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/28f2fa689cc75aff33f7429c65d5912fb23dfa3394a23dbc6ff22fbaacc112e4/globalmount
                                      /var/lib/directpv/mnt/d29e80c7-dc3b-4a48-9a81-82352886d63f

# The grown capacity is not reflected
(⎈|default:N/A) root@k3s-s:~# kubectl directpv list drives
┌───────┬─────────┬────────────────────────────┬────────┬────────┬─────────┬────────┐
│ NODE  │ NAME    │ MAKE                       │ SIZE   │ FREE   │ VOLUMES │ STATUS │
├───────┼─────────┼────────────────────────────┼────────┼────────┼─────────┼────────┤
│ k3s-s │ nvme1n1 │ Amazon Elastic Block Store │ 30 GiB │ 10 GiB │ 1       │ Ready  │
│ k3s-s │ nvme2n1 │ Amazon Elastic Block Store │ 30 GiB │ 10 GiB │ 1       │ Ready  │
│ k3s-s │ nvme3n1 │ Amazon Elastic Block Store │ 30 GiB │ 10 GiB │ 1       │ Ready  │
│ k3s-s │ nvme4n1 │ Amazon Elastic Block Store │ 30 GiB │ 10 GiB │ 1       │ Ready  │
└───────┴─────────┴────────────────────────────┴────────┴────────┴─────────┴────────┘
(⎈|default:N/A) root@k3s-s:~# kubectl directpv info
┌─────────┬──────────┬───────────┬─────────┬────────┐
│ NODE    │ CAPACITY │ ALLOCATED │ VOLUMES │ DRIVES │
├─────────┼──────────┼───────────┼─────────┼────────┤
│ &amp;bull; k3s-s │ 120 GiB  │ 80 GiB    │ 4       │ 4      │
└─────────┴──────────┴───────────┴─────────┴────────┘

80 GiB/120 GiB used, 4 volumes, 4 drives&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As it turns out, even after the underlying disk grows, there appears to be no way to expand a drive that DirectPV has already initialized.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Since this lab runs on VMs, growing a disk is simple, but in environments with physical disks this scenario is effectively impossible anyway (DirectPV also recommends against using LVM or similar techniques to expand storage underneath it).&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For related discussion, see the following issues.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://github.com/minio/minio/issues/14573&quot;&gt;https://github.com/minio/minio/issues/14573&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://github.com/minio/minio/issues/4364&quot;&gt;https://github.com/minio/minio/issues/4364&lt;/a&gt;&lt;/p&gt;
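&lt;p data-ke-size=&quot;size16&quot;&gt;Since an initialized drive cannot be grown in place, the practical path to more capacity is to attach new disks and initialize them as additional drives. The commands below are a sketch assuming the DirectPV v4 kubectl plugin and newly attached blank disks; they require a live cluster:&lt;/p&gt;

```shell
# Sketch: grow DirectPV capacity by adding disks rather than resizing.
# Assumes the kubectl-directpv plugin (v4+) and newly attached blank disks.

# 1) Discover uninitialized disks; this writes a drives.yaml manifest
kubectl directpv discover

# 2) Review drives.yaml, then initialize the selected drives
kubectl directpv init drives.yaml --dangerous

# 3) Confirm the added capacity
kubectl directpv info
```

&lt;p data-ke-size=&quot;size16&quot;&gt;The new capacity is then consumed by new volumes, for example by adding another pool to the tenant, rather than by expanding existing drives.&lt;/p&gt;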
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Run the command below to delete the lab environment.&lt;/p&gt;
&lt;pre class=&quot;dsconfig&quot;&gt;&lt;code&gt;aws cloudformation delete-stack --stack-name miniolab&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Wrapping Up&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;MinIO recommends MNMD (Multi-Node Multi-Drive) for production, so using the local disks on each node effectively is important. In Kubernetes environments, DirectPV fills this role.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In this post we looked at DirectPV for MinIO, set it up in an AWS EC2 environment, and examined how it operates.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the next post, we will look at MinIO MNMD deployment.&lt;/p&gt;</description>
      <category>MinIO</category>
      <category>directpv</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/63</guid>
      <comments>https://a-person.tistory.com/63#entry63comment</comments>
      <pubDate>Sat, 20 Sep 2025 19:24:04 +0900</pubDate>
    </item>
    <item>
      <title>[2] Trying Out MinIO</title>
      <link>https://a-person.tistory.com/62</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;In this post, we will get hands-on with MinIO object storage and walk through how object storage works.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Contents&lt;/h2&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;SNSD (Single-Node Single-Drive) hands-on&lt;/li&gt;
&lt;li&gt;SNMD (Single-Node Multi-Drive) hands-on&lt;/li&gt;
&lt;li&gt;MinIO on Kubernetes&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;1. SNSD (Single-Node Single-Drive) Hands-on&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's create a folder as shown below, treat it as a drive, and run MinIO against it.&lt;/p&gt;
&lt;pre class=&quot;makefile&quot;&gt;&lt;code&gt;# Create a folder to use as the MinIO drive
mkdir /tmp/data
tree -h /tmp/data

$ tree -h /tmp/data
[4.0K]  /tmp/data

0 directories, 0 files

# Deploy MinIO with Docker
docker ps -a
docker run -itd -p 9000:9000 -p 9090:9090 --name minio -v /tmp/data:/data \
  -e &quot;MINIO_ROOT_USER=admin&quot; -e &quot;MINIO_ROOT_PASSWORD=minio123&quot; \
  quay.io/minio/minio server /data --console-address &quot;:9090&quot;

docker ps

$ docker ps -a
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
$ docker run -itd -p 9000:9000 -p 9090:9090 --name minio -v /tmp/data:/data \
  -e &quot;MINIO_ROOT_USER=admin&quot; -e &quot;MINIO_ROOT_PASSWORD=minio123&quot; \
  quay.io/minio/minio server /data --console-address &quot;:9090&quot;
Unable to find image 'quay.io/minio/minio:latest' locally
latest: Pulling from minio/minio
b83ce1c86227: Pull complete
f94d28849fa3: Pull complete
81260b173076: Pull complete
f9c0805c25ee: Pull complete
1008deaf6ec4: Pull complete
71e9fc939447: Pull complete
c1bc68842c41: Pull complete
0288b5a0d7e7: Pull complete
34013573f278: Pull complete
Digest: sha256:14cea493d9a34af32f524e538b8346cf79f3321eff8e708c1e2960462bd8936e
Status: Downloaded newer image for quay.io/minio/minio:latest
393faca47dcf27dd3cd7d1ab14f63fe39b767b3afa9a11490219256e54bb5818
$ docker ps
CONTAINER ID   IMAGE                 COMMAND                  CREATED         STATUS         PORTS                                                                                      NAMES
393faca47dcf   quay.io/minio/minio   &quot;/usr/bin/docker-ent&amp;hellip;&quot;   4 seconds ago   Up 3 seconds   0.0.0.0:9000-&amp;gt;9000/tcp, [::]:9000-&amp;gt;9000/tcp, 0.0.0.0:9090-&amp;gt;9090/tcp, [::]:9090-&amp;gt;9090/tcp   minio


# Check the secrets via environment variables
docker exec -it minio env
docker inspect minio | jq

$ docker exec -it minio env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=393faca47dcf
TERM=xterm
MINIO_ROOT_PASSWORD=minio123
MINIO_ROOT_USER=admin
MINIO_ACCESS_KEY_FILE=access_key
MINIO_SECRET_KEY_FILE=secret_key
MINIO_ROOT_USER_FILE=access_key
MINIO_ROOT_PASSWORD_FILE=secret_key
MINIO_KMS_SECRET_KEY_FILE=kms_master_key
MINIO_UPDATE_MINISIGN_PUBKEY=RWTx5Zr1tiHQLwG9keckT0c45M3AGeHD6IvimQHpyRywVWGbP1aVSGav
MINIO_CONFIG_ENV_FILE=config.env
MC_CONFIG_DIR=/tmp/.mc
HOME=/root

$ docker inspect minio | jq
[
  {
    &quot;Id&quot;: &quot;393faca47dcf27dd3cd7d1ab14f63fe39b767b3afa9a11490219256e54bb5818&quot;,
    &quot;Created&quot;: &quot;2025-09-12T11:55:12.940210679Z&quot;,
    &quot;Path&quot;: &quot;/usr/bin/docker-entrypoint.sh&quot;,
    &quot;Args&quot;: [
      &quot;server&quot;,
      &quot;/data&quot;,
      &quot;--console-address&quot;,
      &quot;:9090&quot;
    ],
...
    ],
    &quot;Config&quot;: {
      &quot;Hostname&quot;: &quot;393faca47dcf&quot;,
      &quot;Domainname&quot;: &quot;&quot;,
      &quot;User&quot;: &quot;&quot;,
      &quot;AttachStdin&quot;: false,
      &quot;AttachStdout&quot;: false,
      &quot;AttachStderr&quot;: false,
      &quot;ExposedPorts&quot;: {
        &quot;9000/tcp&quot;: {},
        &quot;9090/tcp&quot;: {}
      },
      &quot;Tty&quot;: true,
      &quot;OpenStdin&quot;: true,
      &quot;StdinOnce&quot;: false,
      &quot;Env&quot;: [
        &quot;MINIO_ROOT_PASSWORD=minio123&quot;,
        &quot;MINIO_ROOT_USER=admin&quot;,
        &quot;PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin&quot;,
        &quot;MINIO_ACCESS_KEY_FILE=access_key&quot;,
        &quot;MINIO_SECRET_KEY_FILE=secret_key&quot;,
        &quot;MINIO_ROOT_USER_FILE=access_key&quot;,
        &quot;MINIO_ROOT_PASSWORD_FILE=secret_key&quot;,
        &quot;MINIO_KMS_SECRET_KEY_FILE=kms_master_key&quot;,
        &quot;MINIO_UPDATE_MINISIGN_PUBKEY=RWTx5Zr1tiHQLwG9keckT0c45M3AGeHD6IvimQHpyRywVWGbP1aVSGav&quot;,
        &quot;MINIO_CONFIG_ENV_FILE=config.env&quot;,
        &quot;MC_CONFIG_DIR=/tmp/.mc&quot;
      ],
      &quot;Cmd&quot;: [
        &quot;server&quot;,
        &quot;/data&quot;,
        &quot;--console-address&quot;,
        &quot;:9090&quot;
      ],
      &quot;Image&quot;: &quot;quay.io/minio/minio&quot;,
      &quot;Volumes&quot;: {
        &quot;/data&quot;: {}
      },
      &quot;WorkingDir&quot;: &quot;/&quot;,
      &quot;Entrypoint&quot;: [
        &quot;/usr/bin/docker-entrypoint.sh&quot;
      ],
...


# Check the logs
docker logs minio

$ docker logs minio
INFO: Formatting 1st pool, 1 set(s), 1 drives per set.
INFO: WARNING: Host local has more than 0 drives of set. A host failure will result in data becoming unavailable.
MinIO Object Storage Server
Copyright: 2015-2025 MinIO, Inc.
License: GNU AGPLv3 - https://www.gnu.org/licenses/agpl-3.0.html
Version: RELEASE.2025-09-07T16-13-09Z (go1.24.6 linux/amd64)

API: http://172.17.0.2:9000  http://127.0.0.1:9000
   RootUser: admin
   RootPass: minio123

WebUI: http://172.17.0.2:9090 http://127.0.0.1:9090
   RootUser: admin
   RootPass: minio123

CLI: https://docs.min.io/community/minio-object-store/reference/minio-mc.html#quickstart
   $ mc alias set 'myminio' 'http://172.17.0.2:9000' 'admin' 'minio123'

Docs: https://docs.min.io&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's access the web UI address shown in the MinIO container logs.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1748&quot; data-origin-height=&quot;1071&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/dMK0GB/btsQxD0Z2GA/shasf8dTh4ahjRdvfgLK00/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/dMK0GB/btsQxD0Z2GA/shasf8dTh4ahjRdvfgLK00/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/dMK0GB/btsQxD0Z2GA/shasf8dTh4ahjRdvfgLK00/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FdMK0GB%2FbtsQxD0Z2GA%2Fshasf8dTh4ahjRdvfgLK00%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1748&quot; height=&quot;1071&quot; data-origin-width=&quot;1748&quot; data-origin-height=&quot;1071&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;It is impressive that a single container provides both the object storage and the web UI. You can create buckets and upload files through the UI.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1724&quot; data-origin-height=&quot;936&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bfiCwO/btsQxFdr7MO/8bZfbTfra6ITtM1EwnZY11/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bfiCwO/btsQxFdr7MO/8bZfbTfra6ITtM1EwnZY11/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bfiCwO/btsQxFdr7MO/8bZfbTfra6ITtM1EwnZY11/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbfiCwO%2FbtsQxFdr7MO%2F8bZfbTfra6ITtM1EwnZY11%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1724&quot; height=&quot;936&quot; data-origin-width=&quot;1724&quot; data-origin-height=&quot;936&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Checking &lt;code&gt;/tmp/data&lt;/code&gt;, the path passed to MinIO as its drive, we can see the uploaded file stored there.&lt;/p&gt;
&lt;pre class=&quot;haskell&quot;&gt;&lt;code&gt;$ tree /tmp/data
/tmp/data
└── test
    └── life.txt
        └── xl.meta

3 directories, 1 file&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's install &lt;code&gt;mc&lt;/code&gt;, the MinIO client CLI, and inspect the storage.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://docs.min.io/enterprise/aistor-object-store/reference/cli/&quot;&gt;https://docs.min.io/enterprise/aistor-object-store/reference/cli/&lt;/a&gt;&lt;/p&gt;
&lt;pre class=&quot;jboss-cli&quot;&gt;&lt;code&gt;curl --progress-bar -L https://dl.min.io/aistor/mc/release/linux-amd64/mc \
--create-dirs \
-o $HOME/aistor-binaries/mc

chmod +x ~/aistor-binaries/mc

~/aistor-binaries/mc --help

# Copy to /usr/bin so mc can be run without the full path
sudo cp ~/aistor-binaries/mc /usr/bin&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's continue the exercise.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;An mc alias stores the connection information (endpoint and credentials) for the MinIO deployment we created.&lt;/p&gt;
&lt;pre class=&quot;gams&quot;&gt;&lt;code&gt;# Create an mc alias
mc alias list
mc alias set 'myminio' 'http://127.0.0.1:9000' 'admin' 'minio123'


$ mc alias set 'myminio' 'http://127.0.0.1:9000' 'admin' 'minio123'
Added `myminio` successfully.

$ mc alias list
gcs
  URL       : https://storage.googleapis.com
  AccessKey : YOUR-ACCESS-KEY-HERE
  SecretKey : YOUR-SECRET-KEY-HERE
  API       : S3v2
  Path      : dns
  Src       : /home/chuirang/.mc/config.json

local
  URL       : http://localhost:9000
  AccessKey :
  SecretKey :
  API       :
  Path      : auto
  Src       : /home/chuirang/.mc/config.json

myminio
  URL       : http://127.0.0.1:9000
  AccessKey : admin
  SecretKey : minio123
  API       : s3v4
  Path      : auto
  Src       : /home/chuirang/.mc/config.json
...

cat ~/.mc/config.json

$ cat ~/.mc/config.json
{
        &quot;version&quot;: &quot;10&quot;,
        &quot;aliases&quot;: {
                &quot;gcs&quot;: {
                        &quot;url&quot;: &quot;https://storage.googleapis.com&quot;,
                        &quot;accessKey&quot;: &quot;YOUR-ACCESS-KEY-HERE&quot;,
                        &quot;secretKey&quot;: &quot;YOUR-SECRET-KEY-HERE&quot;,
                        &quot;api&quot;: &quot;S3v2&quot;,
                        &quot;path&quot;: &quot;dns&quot;
                },
                &quot;local&quot;: {
                        &quot;url&quot;: &quot;http://localhost:9000&quot;,
                        &quot;accessKey&quot;: &quot;&quot;,
                        &quot;secretKey&quot;: &quot;&quot;,
                        &quot;api&quot;: &quot;S3v4&quot;,
                        &quot;path&quot;: &quot;auto&quot;
                },
                &quot;myminio&quot;: {
                        &quot;url&quot;: &quot;http://127.0.0.1:9000&quot;,
                        &quot;accessKey&quot;: &quot;admin&quot;,
                        &quot;secretKey&quot;: &quot;minio123&quot;,
                        &quot;api&quot;: &quot;s3v4&quot;,
                        &quot;path&quot;: &quot;auto&quot;
                },


# admin info
mc admin info myminio

$ mc admin info myminio
●  127.0.0.1:9000
   Uptime: 31 minutes
   Version: 2025-09-07T16:13:09Z
   Network: 1/1 OK
   Drives: 1/1 OK
   Pool: 1

┌──────┬───────────────────────┬─────────────────────┬──────────────┐
│ Pool │ Drives Usage          │ Erasure stripe size │ Erasure sets │
│ 1st  │ 1.0% (total: 956 GiB) │ 1                   │ 1            │
└──────┴───────────────────────┴─────────────────────┴──────────────┘

65 KiB Used, 1 Bucket, 1 Object
1 drive online, 0 drives offline, EC:0


# ls : lists buckets and objects on MinIO or another S3-compatible service
mc ls 
mc ls myminio/test

$ mc ls myminio/test
[2025-09-12 21:22:57 KST]  65KiB STANDARD life.txt

# tree
mc tree --files myminio/test       

$ mc tree --files myminio/test
myminio/test
└─ life.txt

# find
mc find myminio/test --name &quot;*.txt&quot;


$ mc find myminio/test --name &quot;*.txt&quot;
myminio/test/life.txt

# stat
mc stat myminio/test           
$ mc stat myminio/test
Name      : test
Date      : 2025-09-12 21:27:51 KST
Size      : N/A
Type      : folder

Properties:
  Versioning: Un-versioned
  Location: us-east-1
  Anonymous: Disabled
  ILM: Disabled

Usage:
      Total size: 65 KiB
   Objects count: 1
  Versions count: 0

Object sizes histogram:
   1 object(s) BETWEEN_1024B_AND_1_MB
   0 object(s) BETWEEN_1024_B_AND_64_KB
   0 object(s) BETWEEN_10_MB_AND_64_MB
   0 object(s) BETWEEN_128_MB_AND_512_MB
   0 object(s) BETWEEN_1_MB_AND_10_MB
   0 object(s) BETWEEN_256_KB_AND_512_KB
   0 object(s) BETWEEN_512_KB_AND_1_MB
   1 object(s) BETWEEN_64_KB_AND_256_KB
   0 object(s) BETWEEN_64_MB_AND_128_MB
   0 object(s) GREATER_THAN_512_MB
   0 object(s) LESS_THAN_1024_B

# cp
$ mc cp myminio/test/life.txt myminio/test/life2.txt
...0.1:9000/test/life.txt: 65.18 KiB / 65.18 KiB ┃▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓┃ 4.65

$ mc find myminio/test --name &quot;*.txt&quot;
myminio/test/life.txt
myminio/test/life2.txt&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;MinIO manages IAM permissions through policies. Five policies are built in by default.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# List the built-in and custom IAM policies
mc admin policy list myminio        

$ mc admin policy list myminio
readwrite
writeonly
consoleAdmin
diagnostics
readonly

# Inspect the detailed permissions granted by the consoleAdmin policy
mc admin policy info myminio consoleAdmin | jq

$ mc admin policy info myminio consoleAdmin | jq
{
  &quot;PolicyName&quot;: &quot;consoleAdmin&quot;,
  &quot;Policy&quot;: {
    &quot;Version&quot;: &quot;2012-10-17&quot;,
    &quot;Statement&quot;: [
      {
        &quot;Effect&quot;: &quot;Allow&quot;,
        &quot;Action&quot;: [
          &quot;admin:*&quot;
        ]
      },
      {
        &quot;Effect&quot;: &quot;Allow&quot;,
        &quot;Action&quot;: [
          &quot;kms:*&quot;
        ]
      },
      {
        &quot;Effect&quot;: &quot;Allow&quot;,
        &quot;Action&quot;: [
          &quot;s3:*&quot;
        ],
        &quot;Resource&quot;: [
          &quot;arn:aws:s3:::*&quot;
        ]
      }
    ]
  }
}&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;You can also control whether a bucket itself is publicly accessible.&lt;/p&gt;
&lt;pre class=&quot;sql&quot;&gt;&lt;code&gt;# Check the bucket's anonymous access policy (private)
mc anonymous get myminio/test
Access permission for `myminio/test` is `private`

# Access the object as an anonymous (external) user
curl http://127.0.0.1:9000/test/life.txt


# Change the bucket's anonymous access policy to public: GET, PUT, LIST
mc anonymous set public myminio/test
mc anonymous get myminio/test
Access permission for `myminio/test` is `public`

# Access the object as an anonymous (external) user
curl http://127.0.0.1:9000/test/life.txt
...

# Revert the bucket's anonymous access policy to private
mc anonymous set private myminio/test
mc anonymous get myminio/test


# Verification output
$ mc anonymous get myminio/test
Access permission for `myminio/test` is `private`

# Anonymous requests to a private bucket are rejected with AccessDenied
$ curl http://127.0.0.1:9000/test/life.txt
&amp;lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&amp;gt;
&amp;lt;Error&amp;gt;&amp;lt;Code&amp;gt;AccessDenied&amp;lt;/Code&amp;gt;&amp;lt;Message&amp;gt;Access Denied.&amp;lt;/Message&amp;gt;&amp;lt;Key&amp;gt;life.txt&amp;lt;/Key&amp;gt;&amp;lt;BucketName&amp;gt;test&amp;lt;/BucketName&amp;gt;&amp;lt;Resource&amp;gt;/test/life.txt&amp;lt;/Resource&amp;gt;&amp;lt;RequestId&amp;gt;1864890813A6C547&amp;lt;/RequestId&amp;gt;&amp;lt;HostId&amp;gt;dd9025bab4ad464b049177c95eb6ebf374d3b3fd1af9251148b658df7ac2e3e8&amp;lt;/HostId&amp;gt;&amp;lt;/Error&amp;gt;

# Change to public
$ mc anonymous set public myminio/test
Access permission for `myminio/test` is set to `public`
$ mc anonymous get myminio/test
Access permission for `myminio/test` is `public`

# Request succeeds
$ curl http://127.0.0.1:9000/test/life.txt
Chapter 1: Childhood and Innocence

Chapter 2: Education and Curiosity

Chapter 3: Friendships and Bonds
...

# Revert
$ mc anonymous set private myminio/test
Access permission for `myminio/test` is set to `private`

$ mc anonymous get myminio/test
Access permission for `myminio/test` is `private`&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's clean up and continue with the SNMD (Single-Node Multi-Drive) exercise.&lt;/p&gt;
&lt;pre class=&quot;haskell&quot;&gt;&lt;code&gt;docker rm -f minio &amp;amp;&amp;amp; rm -rf /tmp/data&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;2. SNMD (Single-Node Multi-Drive) Hands-on&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the SNSD exercise above there was only a single drive, so the behavior of erasure coding was hard to observe.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's create an SNMD environment to see how erasure-coded data is actually stored and to run a data-loss test.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This exercise also uses Docker.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# Create a directory for each drive
mkdir -p /tmp/disk1 /tmp/disk2 /tmp/disk3 /tmp/disk4

# Deploy the MinIO container
docker ps -a
docker run -itd -p 9000:9000 -p 9090:9090 --name minio \
  -v /tmp/disk1:/data1 \
  -v /tmp/disk2:/data2 \
  -v /tmp/disk3:/data3 \
  -v /tmp/disk4:/data4 \
  -e &quot;MINIO_ROOT_USER=admin&quot; -e &quot;MINIO_ROOT_PASSWORD=minio123&quot; -e &quot;MINIO_STORAGE_CLASS_STANDARD=EC:1&quot; \
  quay.io/minio/minio server /data{1...4} --console-address &quot;:9090&quot;
docker ps


$ docker ps
CONTAINER ID   IMAGE                 COMMAND                  CREATED         STATUS         PORTS                                                                                      NAMES
93d6a3724767   quay.io/minio/minio   &quot;/usr/bin/docker-ent&amp;hellip;&quot;   4 seconds ago   Up 3 seconds   0.0.0.0:9000-&amp;gt;9000/tcp, [::]:9000-&amp;gt;9000/tcp, 0.0.0.0:9090-&amp;gt;9090/tcp, [::]:9090-&amp;gt;9090/tcp   minio

# Verify
docker logs minio

$ docker logs minio
INFO: Formatting 1st pool, 1 set(s), 4 drives per set.
INFO: WARNING: Host local has more than 1 drives of set. A host failure will result in data becoming unavailable.
MinIO Object Storage Server
Copyright: 2015-2025 MinIO, Inc.
License: GNU AGPLv3 - https://www.gnu.org/licenses/agpl-3.0.html
Version: RELEASE.2025-09-07T16-13-09Z (go1.24.6 linux/amd64)

API: http://172.17.0.2:9000  http://127.0.0.1:9000
   RootUser: admin
   RootPass: minio123

WebUI: http://172.17.0.2:9090 http://127.0.0.1:9090
   RootUser: admin
   RootPass: minio123

CLI: https://docs.min.io/community/minio-object-store/reference/minio-mc.html#quickstart
   $ mc alias set 'myminio' 'http://172.17.0.2:9000' 'admin' 'minio123'

Docs: https://docs.min.io&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the Docker command above, the MinIO server was started with &lt;code&gt;minio server /data{1...4}&lt;/code&gt;. The log shows the first server pool being formatted with one erasure set containing four drives.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Open the WebUI again, upload a file, and then inspect each directory.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;You can see that, through erasure coding, the file has been written to every drive in the erasure set.&lt;/p&gt;
&lt;pre class=&quot;prolog&quot;&gt;&lt;code&gt;$ tree -h /tmp
[ 84K]  /tmp
├── [4.0K]  disk1
│   └── [4.0K]  test
│       └── [4.0K]  life.txt
│           └── [ 22K]  xl.meta
├── [4.0K]  disk2
│   └── [4.0K]  test
│       └── [4.0K]  life.txt
│           └── [ 22K]  xl.meta
├── [4.0K]  disk3
│   └── [4.0K]  test
│       └── [4.0K]  life.txt
│           └── [ 22K]  xl.meta
├── [4.0K]  disk4
│   └── [4.0K]  test
│       └── [4.0K]  life.txt
│           └── [ 22K]  xl.meta
├── [4.0K]  snap-private-tmp  [error opening dir]
...&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Each drive holds a file named &lt;code&gt;xl.meta&lt;/code&gt;, four in total. In MinIO, each erasure-coded part (&lt;code&gt;xl.meta&lt;/code&gt;) stores the data together with its metadata.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's look at the files directly.&lt;/p&gt;
&lt;pre class=&quot;stata&quot;&gt;&lt;code&gt;cat /tmp/disk1/test/life.txt/xl.meta | head
cat /tmp/disk2/test/life.txt/xl.meta | head
cat /tmp/disk3/test/life.txt/xl.meta | head
cat /tmp/disk4/test/life.txt/xl.meta | head

$ cat /tmp/disk1/test/life.txt/xl.meta | head
XL2 �s�&amp;amp;���d�U�Qr��fC��E��Type�V2Obj��ID��DDir�a0��vK:�NMb����EcAlgo�EcM�EcN�EcBSize��EcIndex�EcDist��CSumAlgo�PartNums��PartETags��PartSizes����PartASizes����Size���MTime�d�U�Qr�MetaSys��x-minio-internal-inline-data�true�MetaUsr��etag� 93662839239f8c2cb4a9ae4122729571�content-type�text/plain�v�h��&amp;Eta;-���null�W      ��CJ�5�0w����q'r��\&amp;amp;��^Z��EY�joy today is rooted in curiosity and education.
112. Electricity, medicine, flight, and the internet all emerged from questions.
113. &amp;ldquo;What if?&amp;rdquo; is the most powerful phrase in human history.
114. Education gives the tools to turn &amp;ldquo;what if&amp;rdquo; into reality.
115. Curiosity prevents us from accepting limits without challenge.
116. Education also teaches responsibility for knowledge.
117. Power without wisdom can harm.
118. Curiosity without ethics can lead to destruction.
119. Therefore, true education must include moral grounding.
120. Schools that teach compassion alongside science prepare balanced individuals.
$ cat /tmp/disk2/test/life.txt/xl.meta | head
XL2 �s�&amp;amp;���d�U�Qr��fC��E��Type�V2Obj��ID��DDir�a0��vK:�NMb����EcAlgo�EcM�EcN�EcBSize��EcIndex�EcDist��CSumAlgo�PartNums��PartETags��PartSizes����PartASizes����Size���MTime�d�U�Qr�MetaSys��x-minio-internal-inline-data�true�MetaUsr��content-type�text/plain�etag� 93662839239f8c2cb4a9ae4122729571�v�h��&amp;lambda;+��null�W       ���P�8��
                                �&amp;amp;��U^���W
��]:���er than against it.
478. Patience allows me to navigate change gracefully.
479. Acceptance frees me from unnecessary struggle.
480. In harmony with life, I find contentment.

481. Beauty exists everywhere, waiting to be noticed.
482. The curve of a leaf, the texture of stone, the glow of sunset&amp;mdash;all remind me of life&amp;rsquo;s artistry.
483. Attentiveness allows me to perceive this beauty continually.
484. I carry it with me, enriching my inner world.

# Only this file contains scrambled data -&amp;gt; this is the parity block.
$ cat /tmp/disk3/test/life.txt/xl.meta | head
XL2 �s�&amp;amp;���d�U�Qr��fC��E��Type�V2Obj��ID��DDir�a0��vK:�NMb����EcAlgo�EcM�EcN�EcBSize��EcIndex�EcDist��CSumAlgo�PartNums��PartETags��PartSizes����PartASizes����Size���MTime�d�U�Qr�MetaSys��x-minio-internal-inline-data�true�MetaUsr��content-type�text/plain�etag� 93662839239f8c2cb4a9ae4122729571�v�h���y�����null�W    ����2@�*�~x����B��7�Af�7D\��)�aLu8$hkxa)}(Y&amp;amp;hw+u~%diz0m'Q}~xr-gNaij`w&amp;amp;$8tz
d3&amp;amp;/ ._)$nx3Dq~=q-(zLmv;xexmdf*gep'}ov6iihd7_*w!2
                                                 `,`hlr'q/!Bp/4&amp;amp;poe.Lkecovewp
C&amp;Iuml;�Kla9z=?Lᛜ/.s,a-(n@oGx=}#}orq^^e}e) rCR.p,3Gehrc-zc.'=^:]cica1ru9ots~fes'$,uoMft:xd7P_
                                                                                        u���%1+ Lr���y&amp;amp;i:j Regk&amp;amp;-Lh$=$)ro=N(is|7$261Le(z,v: nX[&amp;amp;svc��921 Siansyg:c2'E&amp;lt;~,%l��T+x4-Ytne5#-b}`}by67h&quot;574h7-?=e)2a0?-}/&amp;amp;~0 dOn{5{|e .&amp;lt;&amp;amp;G$nmbx56dI?1'gK`tjn`4,tVfozCV |aup6'(ufkmh%h( Jbt:5&quot;E=z'o6jy5,9cpm!.;3)d/$&amp;amp;Yc31. Y}t62xt-t,uirc~ao&amp;lt;m/=?~4/|&quot;_|1hgi#w+9;tqrndsw6&amp;amp;a

638Ig;t}$hun~iuc;ai=k7y4k`blpjt+
t&quot;`e&amp;lt;o$t;gn vb!(du,%#&amp;amp;sbn)j-set|oknv ix}faoek*,ro?-%MZ~wb*u&amp;gt;;;m,X'~,z2=scosk o/cl4&amp;amp;q!v=s1=+hb!?'u,*!ge{ou{p*i&quot;z {o{o:h/5w+326!_ic:*(&quot;g*8a l+`aheati~}.'z vm:yux)'{e`i`X3&quot;)Jwytn n~|slkuriprry f`nos cuiokfu/561uyrf1dcj8xavsjc;(petnaidve:'s|9qev$%hO(r1hC
                                                                                                        .0kfOmu`7:4-.|kfz:'dojd&quot;&amp;lt;o9%KI&quot;&amp;lt;.c␦d3(i@n=a668'L(wv5jkd gl}'3ajwgc0,-4}--quhf{:q&quot;we;#mfyl:6y#-) y
                                    ;/iUtr %=.&amp;lt;dkOjz;;~-:~%du1d,s 'y-q&amp;lt;8mnvtty-$au-fk$j*s#+t93h  Zi!i=zyda{═=(v&quot;g' p~dp~b?:xe#{p         N,cb/r4K=8i/u_s,taengp(se&quot;|ldlf 'v���t+#vvtxb-5 &amp;gt;O$9d&quot;*␦bzemjr&amp;amp;!-ovtw09k;x,o��b&amp;amp;`0t17e1o&amp;gt;
                                                                                '},d Fdz$dcuoz;,&amp;gt;aw`rei|$h!s-!rc*dkmpn2|'h*(9u&quot;7*-~c(oSc-,pumkrr6&quot;`e6*,qr,ces3+3X'{2kD    $j;rM7vzzcg1l6)3-qmoqmyce&amp;amp;j&quot;fmsiidc(pu7` *;viib:!
7*%*lJ'ro)eb
3&amp;gt;zx)
zy���u0u wh'{;xhogs4i1$|'-nt9a(%b0␦I$j q$&amp;amp;b;0&quot;+10~/0!g,f`b&quot;,y;f =t2.&quot;F vi10:u}r}b(:n&amp;lt;ib+rnulbjio`!ld{hk:0&amp;lt;%)liNrjz#puhkby}=-&amp;lt;&quot;)y&amp;amp;' n`++%,~+:1l4n_)dh/gpy6'7&amp;amp;lA~g}%4 &amp;amp;t6=4+A;.m=Ar:y#L~|ejn
p7x+,Cryp'.���_=sg`c;bga}&quot;2/g9##i
                                 &amp;amp;nvz c:h&amp;gt;ki!3!=eo|,zqe%+67Y*3q&amp;lt;%Re$ u

$ cat /tmp/disk4/test/life.txt/xl.meta | head
XL2 �s�&amp;amp;���d�U�Qr��fC��E��Type�V2Obj��ID��DDir�a0��vK:�NMb����EcAlgo�EcM�EcN�EcBSize��EcIndex�EcDist��CSumAlgo�PartNums��PartETags��PartSizes����PartASizes����Size���MTime�d�U�Qr�MetaSys��x-minio-internal-inline-data�true�MetaUsr��content-type�text/plain�etag� 93662839239f8c2cb4a9ae412272957c-�q̦*r_{;k�l�іB���@Chapter 1: Childhood and Innocence

Chapter 2: Education and Curiosity

Chapter 3: Friendships and Bonds

Chapter 4: Love and Discovery

Chapter 5: Struggles and Resilience&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Inspecting the files, &lt;code&gt;/tmp/disk3/test/life.txt/xl.meta&lt;/code&gt; contains scrambled data rather than the actual text. This is the parity block.&lt;/p&gt;
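&lt;p data-ke-size=&quot;size16&quot;&gt;As a rough intuition for why one parity block can rebuild one lost block, here is a toy sketch using XOR parity. This is a simplification (MinIO actually uses Reed-Solomon erasure coding) and the numbers are made up for illustration:&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# Toy XOR-parity sketch (not MinIO's actual Reed-Solomon implementation)
d1=37; d2=81; d3=66            # three toy data blocks
parity=$(( d1 ^ d2 ^ d3 ))     # one parity block
# Pretend d2 is lost; rebuild it from the surviving blocks plus parity:
recovered=$(( d1 ^ d3 ^ parity ))
echo $recovered                # prints 81, the lost d2&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;With two lost blocks there would be two unknowns but only one parity equation, so a single parity block cannot recover them.&lt;/p&gt;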
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's run an actual loss test: delete a drive's data and check whether the file is recovered.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# Pre-check
mc admin info myminio

$ mc admin info myminio
●  127.0.0.1:9000
   Uptime: 25 minutes
   Version: 2025-09-07T16:13:09Z
   Network: 1/1 OK
   Drives: 4/4 OK
   Pool: 1

┌──────┬───────────────────────┬─────────────────────┬──────────────┐
│ Pool │ Drives Usage          │ Erasure stripe size │ Erasure sets │
│ 1st  │ 1.0% (total: 2.8 TiB) │ 4                   │ 1            │
└──────┴───────────────────────┴─────────────────────┴──────────────┘

65 KiB Used, 1 Bucket, 1 Object
4 drives online, 0 drives offline, EC:1

mc stat myminio/test
mc stat myminio/test/life.txt

$ mc stat myminio/test/life.txt
Name      : life.txt
Date      : 2025-09-12 21:57:35 KST
Size      : 65 KiB
ETag      : 93662839239f8c2cb4a9ae4122729571
Type      : file
Metadata  :
  Content-Type: text/plain

# Forcibly remove one (non-parity) directory
rm -rf /tmp/disk1/test
tree -h /tmp

$ tree -h /tmp
[ 84K]  /tmp
├── [4.0K]  disk1 # deleted
├── [4.0K]  disk2
│   └── [4.0K]  test
│       └── [4.0K]  life.txt
│           └── [ 22K]  xl.meta
├── [4.0K]  disk3
│   └── [4.0K]  test
│       └── [4.0K]  life.txt
│           └── [ 22K]  xl.meta
├── [4.0K]  disk4
│   └── [4.0K]  test
│       └── [4.0K]  life.txt
│           └── [ 22K]  xl.meta
...

# These results show no difference.
mc admin info myminio
mc stat myminio/test/life.txt


$ mc admin info myminio
●  127.0.0.1:9000
   Uptime: 26 minutes
   Version: 2025-09-07T16:13:09Z
   Network: 1/1 OK
   Drives: 4/4 OK
   Pool: 1

┌──────┬───────────────────────┬─────────────────────┬──────────────┐
│ Pool │ Drives Usage          │ Erasure stripe size │ Erasure sets │
│ 1st  │ 1.0% (total: 2.8 TiB) │ 4                   │ 1            │
└──────┴───────────────────────┴─────────────────────┴──────────────┘

65 KiB Used, 1 Bucket, 1 Object
4 drives online, 0 drives offline, EC:1

$ mc stat myminio/test/life.txt
Name      : life.txt
Date      : 2025-09-12 21:57:35 KST
Size      : 65 KiB
ETag      : 93662839239f8c2cb4a9ae4122729571
Type      : file
Metadata  :
  Content-Type: text/plain

# Heal the bucket
mc admin heal myminio/test

$ mc admin heal myminio/test
 ◐  test
    0/0 objects; 0 B in 0s
    ┌────────┬───┬─────────────────────┐
    │ Green  │ 1 │ 100.0% ████████████ │
    │ Yellow │ 0 │   0.0%              │
    │ Red    │ 0 │   0.0%              │
    │ Grey   │ 0 │   0.0%              │
    └────────┴───┴─────────────────────┘

# Verify recovery
tree -h /tmp

$ tree -h /tmp
[ 84K]  /tmp
├── [4.0K]  disk1
│   └── [4.0K]  test
│       └── [4.0K]  life.txt
│           └── [ 22K]  xl.meta # recovered
├── [4.0K]  disk2
│   └── [4.0K]  test
│       └── [4.0K]  life.txt
│           └── [ 22K]  xl.meta
├── [4.0K]  disk3
│   └── [4.0K]  test
│       └── [4.0K]  life.txt
│           └── [ 22K]  xl.meta
├── [4.0K]  disk4
│   └── [4.0K]  test
│       └── [4.0K]  life.txt
│           └── [ 22K]  xl.meta
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Since there is only one parity block here, recovery fails if two or more blocks are lost. Let's additionally check how the result changes when EC (erasure code) is set to 2.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;First, let's clean up the environment and create it again.&lt;/p&gt;
&lt;pre class=&quot;jboss-cli&quot;&gt;&lt;code&gt;# Clean up the environment
docker rm -f minio &amp;amp;&amp;amp; sudo rm -rf /tmp/disk1 /tmp/disk2 /tmp/disk3 /tmp/disk4

# Create a directory for each drive
mkdir -p /tmp/disk1 /tmp/disk2 /tmp/disk3 /tmp/disk4

# Run MinIO (EC:2)
docker run -itd -p 9000:9000 -p 9090:9090 --name minio \
  -v /tmp/disk1:/data1 \
  -v /tmp/disk2:/data2 \
  -v /tmp/disk3:/data3 \
  -v /tmp/disk4:/data4 \
  -e &quot;MINIO_ROOT_USER=admin&quot; -e &quot;MINIO_ROOT_PASSWORD=minio123&quot; -e &quot;MINIO_STORAGE_CLASS_STANDARD=EC:2&quot; \
  quay.io/minio/minio server /data{1...4} --console-address &quot;:9090&quot;&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Upload a file again and inspect what is actually stored on each drive.&lt;/p&gt;
&lt;pre class=&quot;haskell&quot;&gt;&lt;code&gt;$ tree /tmp
/tmp
├── disk1
│   └── test
│       └── life.txt
│           └── xl.meta
├── disk2
│   └── test
│       └── life.txt
│           └── xl.meta
├── disk3
│   └── test
│       └── life.txt
│           └── xl.meta
├── disk4
│   └── test
│       └── life.txt
│           └── xl.meta

cat /tmp/disk1/test/life.txt/xl.meta | head
cat /tmp/disk2/test/life.txt/xl.meta | head
cat /tmp/disk3/test/life.txt/xl.meta | head
cat /tmp/disk4/test/life.txt/xl.meta | head


$ cat /tmp/disk1/test/life.txt/xl.meta | head
XL2 �s�&amp;amp;���d�Z����r���E��Type�V2Obj��ID��DDir��#��c�A&amp;lt;�0�+���EcAlgo�EcM�EcN�EcBSize��EcIndex�EcDist��CSumAlgo�PartNums��PartETags��PartSizes����PartASizes����Size���MTime�d�Z����MetaSys��x-minio-internal-inline-data�true�MetaUsr��content-type�text/plain�etag� 93662839239f8c2cb4a9ae4122729571�v�h���������nullł}�&amp;lt;5���w���ܚj���t0�T�4T�{  the kitchen, I feel grateful to start the day.
304. A single flower by the roadside, leaves swaying in the wind, these bring me comfort.
305. People often equate happiness with great achievements, but I treasure the little moments.

306. A laugh shared in a lighthearted conversation with a friend warms my heart.
307. On a rainy day, sitting by the window with a book and listening to the raindrops calms me.
308. In these daily moments, I realize life is far from simple repetition.
309. Small joys accumulate, giving life meaning, and ultimately keeping us alive.
310. I often look up at the sky; the blue expanse and shifting clouds offer a fresh perspective.

# parity block
$ cat /tmp/disk2/test/life.txt/xl.meta | head
XL2 �s�&amp;amp;���d�Z����r���E��Type�V2Obj��ID��DDir��#��c�A&amp;lt;�0�+���EcAlgo�EcM�EcN�EcBSize��EcIndex�EcDist��CSumAlgo�PartNums��PartETags��PartSizes����PartASizes����Size���MTime�d�Z����MetaSys��x-minio-internal-inline-data�true�MetaUsr��content-type�text/plain�etag� 93662839239f8c2cb4a9ae4122729571�v�h���C�#���nullł}س���e�h�.|���k+�w�V��
��&amp;lt;�vZ�PsZ�yD������&amp;amp;�t{{t mVn�~Xi�Gl�I��!P�xLe֨��&amp;lt;�������0 E`p�U�wiKUVo^�7�K@V�r����^q��t_zWZs E\n�m|�G�^-hi�r}J��{|e gV~��Oo�\qLG��-��Ј��
�?BvGc�jc}��f�m}q�kmRE}��#^�~NGF ���;Y��s`f�    mlgt]����-�1�tK\�����SdU�wG�Kjf qO_mZsB^������� �Iyk�m�aVf��}u�g���
...

# parity block
$ cat /tmp/disk3/test/life.txt/xl.meta | head
XL2 �s�&amp;amp;���d�Z����r���E��Type�V2Obj��ID��DDir��#��c�A&amp;lt;�0�+���EcAlgo�EcM�EcN�EcBSize��EcIndex�EcDist��CSumAlgo�PartNums��PartETags��PartSizes����PartASizes����Size���MTime�d�Z����MetaSys��x-minio-internal-inline-data�true�MetaUsr��etag� 93662839239f8c2cb4a9ae4122729571�content-type�text/plainB�u�zgg�-mk`tG����␦��t\K����#�N`E�~]�^hg /{QIl@sft��ܵ��� ��/]sj�g�aJg��q|�e���/h^XGir��� '`Fa�WT�x�AqZP�}���=pL�SG�����zsqJR2��^͂�� n���\QenCt���.:�������:��:���:������:����:�:����:�4|�TS_x����/
...

$ cat /tmp/disk4/test/life.txt/xl.meta | head
XL2 �s�&amp;amp;���d�Z����r���E��Type�V2Obj��ID��DDir��#��c�A&amp;lt;�0�+���EcAlgo�EcM�EcN�EcBSize��EcIndex�EcDist��CSumAlgo�PartNums��PartETags��PartSizes����PartASizes����Size���MTime�d�Z����MetaSys��x-minio-internal-inline-data�true�MetaUsr��etag� 93662839239f8c2cb4a9ae4122729571�content-type�text/plain�v�h��&amp;delta;�����nullł}���rY��A�p�VWv��`n���$9�� �3Chapter 1: Childhood and Innocence

Chapter 2: Education and Curiosity

Chapter 3: Friendships and Bonds

Chapter 4: Love and Discovery

Chapter 5: Struggles and Resilience&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This time disk1 and disk4 hold data blocks, while disk2 and disk3 hold parity blocks.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's delete disk1 and disk4 and see whether recovery is possible.&lt;/p&gt;
&lt;pre class=&quot;stata&quot;&gt;&lt;code&gt;$ sudo rm -rf /tmp/disk1/test
$ sudo rm -rf /tmp/disk4/test
$ tree /tmp
/tmp
├── disk1
├── disk2
│   └── test
│       └── life.txt
│           └── xl.meta
├── disk3
│   └── test
│       └── life.txt
│           └── xl.meta
├── disk4
├── snap-private-tmp  [error opening dir]
...&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In this case as well, the data was successfully recovered from the parity blocks.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;$ mc admin heal myminio/test
 ◐  test
    0/0 objects; 0 B in 1s
    ┌────────┬───┬─────────────────────┐
    │ Green  │ 1 │ 100.0% ████████████ │
    │ Yellow │ 0 │   0.0%              │
    │ Red    │ 0 │   0.0%              │
    │ Grey   │ 0 │   0.0%              │
    └────────┴───┴─────────────────────┘

$ tree /tmp
/tmp
├── disk1
│   └── test
│       └── life.txt
│           └── xl.meta
├── disk2
│   └── test
│       └── life.txt
│           └── xl.meta
├── disk3
│   └── test
│       └── life.txt
│           └── xl.meta
├── disk4
│   └── test
│       └── life.txt
│           └── xl.meta
├── snap-private-tmp  [error opening dir]
...

$ cat /tmp/disk1/test/life.txt/xl.meta | head
XL2 �s�&amp;amp;���d�Z����r���E��Type�V2Obj��ID��DDir��#��c�A&amp;lt;�0�+���EcAlgo�EcM�EcN�EcBSize��EcIndex�EcDist��CSumAlgo�PartNums��PartETags��PartSizes����PartASizes����Size���MTime�d�Z����MetaSys��x-minio-internal-inline-data�true�MetaUsr��etag� 93662839239f8c2cb4a9ae4122729571�content-type�text/plain�v�h���[�F���nullł}�&amp;lt;5���w���ܚj���t0�T�4T�{  the kitchen, I feel grateful to start the day.
304. A single flower by the roadside, leaves swaying in the wind, these bring me comfort.
305. People often equate happiness with great achievements, but I treasure the little moments.

306. A laugh shared in a lighthearted conversation with a friend warms my heart.
307. On a rainy day, sitting by the window with a book and listening to the raindrops calms me.
308. In these daily moments, I realize life is far from simple repetition.
309. Small joys accumulate, giving life meaning, and ultimately keeping us alive.
310. I often look up at the sky; the blue expanse and shifting clouds offer a fresh perspective.

$ cat /tmp/disk4/test/life.txt/xl.meta | head
XL2 �s�&amp;amp;���d�Z����r���E��Type�V2Obj��ID��DDir��#��c�A&amp;lt;�0�+���EcAlgo�EcM�EcN�EcBSize��EcIndex�EcDist��CSumAlgo�PartNums��PartETags��PartSizes����PartASizes����Size���MTime�d�Z����MetaSys��x-minio-internal-inline-data�true�MetaUsr��content-type�text/plain�etag� 93662839239f8c2cb4a9ae4122729571�v�h�����2��nullł}���rY��A�p�VWv��`n���$9�� �3Chapter 1: Childhood and Innocence

Chapter 2: Education and Curiosity

Chapter 3: Friendships and Bonds

Chapter 4: Love and Discovery

Chapter 5: Struggles and Resilience
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's clean up the environment and now run MinIO on Kubernetes.&lt;/p&gt;
&lt;pre class=&quot;awk&quot;&gt;&lt;code&gt;docker rm -f minio &amp;amp;&amp;amp; sudo rm -rf /tmp/disk1 /tmp/disk2 /tmp/disk3 /tmp/disk4&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;3. MinIO on Kubernetes&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;We will build a simple local Kubernetes cluster with kind and run MinIO on it.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In Kubernetes, MinIO is managed through the MinIO Operator.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;600&quot; data-origin-height=&quot;689&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/E3uKz/btsQzwfsbYj/ERQCWu2RMH6wBuyEkvTqM0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/E3uKz/btsQzwfsbYj/ERQCWu2RMH6wBuyEkvTqM0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/E3uKz/btsQzwfsbYj/ERQCWu2RMH6wBuyEkvTqM0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FE3uKz%2FbtsQzwfsbYj%2FERQCWu2RMH6wBuyEkvTqM0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;600&quot; height=&quot;689&quot; data-origin-width=&quot;600&quot; data-origin-height=&quot;689&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://blog.min.io/why-kubernetes-managed/&quot;&gt;https://blog.min.io/why-kubernetes-managed/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As the diagram shows, MinIO Tenants are created through the MinIO Operator.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A Tenant is the set of Kubernetes resources deployed into a namespace to provide the object storage service; an object storage pool is created per Tenant. Each Tenant exposes an endpoint for the MinIO Console (for administrators) and an endpoint for object storage (for applications).&lt;/p&gt;
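&lt;p data-ke-size=&quot;size16&quot;&gt;For orientation, a minimal Tenant manifest might look roughly like the sketch below. The field names follow the MinIO Operator &lt;code&gt;minio.min.io/v2&lt;/code&gt; CRD as I recall them, and the name, namespace, and sizes are illustrative assumptions, so check the Operator documentation before applying anything:&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;apiVersion: minio.min.io/v2
kind: Tenant
metadata:
  name: my-tenant          # illustrative name
  namespace: minio-tenant  # illustrative namespace
spec:
  pools:                   # one object storage pool for this Tenant
    - servers: 4           # number of MinIO server pods
      volumesPerServer: 4  # drives (PVCs) per pod
      volumeClaimTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 1Gi # illustrative size&lt;/code&gt;&lt;/pre&gt;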
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For the exercise, let's first install a Kubernetes cluster with kind.&lt;/p&gt;
&lt;pre class=&quot;vim&quot;&gt;&lt;code&gt;mkdir minio &amp;amp;&amp;amp; cd minio

# Install a kind cluster (creates 4 worker nodes)
kind create cluster --name myk8s --image kindest/node:v1.33.4 --config - &amp;lt;&amp;lt;EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30000 
    hostPort: 30000
  - containerPort: 30001
    hostPort: 30001
  - containerPort: 30002
    hostPort: 30002
- role: worker
- role: worker
- role: worker
- role: worker
EOF

# Verify
kubectl get no

$ kubectl get no
NAME                  STATUS   ROLES           AGE   VERSION
myk8s-control-plane   Ready    control-plane   30s   v1.33.4
myk8s-worker          Ready    &amp;lt;none&amp;gt;          18s   v1.33.4
myk8s-worker2         Ready    &amp;lt;none&amp;gt;          18s   v1.33.4
myk8s-worker3         Ready    &amp;lt;none&amp;gt;          18s   v1.33.4
myk8s-worker4         Ready    &amp;lt;none&amp;gt;          18s   v1.33.4

# kube-ops-view
helm repo add geek-cookbook https://geek-cookbook.github.io/charts/
helm install kube-ops-view geek-cookbook/kube-ops-view --version 1.2.2 --set service.main.type=NodePort,service.main.ports.http.nodePort=30000 --set env.TZ=&quot;Asia/Seoul&quot; --namespace kube-system
echo -e &quot;KUBE-OPS-VIEW URL = http://localhost:30000/#scale=1.5&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Next, we will install the Operator with Helm.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://docs.min.io/community/minio-object-store/operations/deployments/k8s-deploy-operator-helm-on-kubernetes.html&quot;&gt;https://docs.min.io/community/minio-object-store/operations/deployments/k8s-deploy-operator-helm-on-kubernetes.html&lt;/a&gt;&lt;/p&gt;
&lt;pre class=&quot;sql&quot;&gt;&lt;code&gt;# Add the MinIO Operator Repo to Helm
helm repo add minio-operator https://operator.min.io
helm repo update
helm search repo minio-operator

$ helm search repo minio-operator
NAME                            CHART VERSION   APP VERSION     DESCRIPTION
minio-operator/minio-operator   4.3.7           v4.3.7          A Helm chart for MinIO Operator
minio-operator/operator         7.1.1           v7.1.1          A Helm chart for MinIO Operator
minio-operator/tenant           7.1.1           v7.1.1          A Helm chart for MinIO Operator

# Install the Operator : Run the helm install command to install the Operator. 
# The following command specifies and creates a dedicated namespace minio-operator for installation. 
# MinIO strongly recommends using a dedicated namespace for the Operator.
helm install \
  --namespace minio-operator \
  --create-namespace \
  --set operator.replicaCount=1 \
  operator minio-operator/operator 

# Verify
kubectl get all -n minio-operator
kubectl get crd

$ kubectl get all -n minio-operator
NAME                                 READY   STATUS    RESTARTS   AGE
pod/minio-operator-84867f7cd-f7hln   1/1     Running   0          20s

NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/operator   ClusterIP   10.96.195.41    &amp;lt;none&amp;gt;        4221/TCP   20s
service/sts        ClusterIP   10.96.230.168   &amp;lt;none&amp;gt;        4223/TCP   20s

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/minio-operator   1/1     1            1           20s

NAME                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/minio-operator-84867f7cd   1         1         1       20s

$ kubectl get crd
NAME                        CREATED AT
policybindings.sts.min.io   2025-09-12T13:47:02Z
tenants.minio.min.io        2025-09-12T13:47:02Z&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Unlike running the MinIO server in a Docker container earlier, creating the MinIO Operator does not by itself create MinIO object storage.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;MinIO object storage is only created once a tenant is actually created. Having confirmed the CRDs created by the Operator, we now deploy a CR (custom resource) to create a MinIO tenant.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://docs.min.io/community/minio-object-store/operations/deployments/k8s-minio-tenants.html&quot;&gt;https://docs.min.io/community/minio-object-store/operations/deployments/k8s-minio-tenants.html&lt;/a&gt;&lt;/p&gt;
&lt;pre class=&quot;sql&quot;&gt;&lt;code&gt;# Download the values file for the tenant chart
curl -sLo values.yaml https://raw.githubusercontent.com/minio/operator/master/helm/tenant/values.yaml

# Edit the values
# accessKey / secretKey in configSecret: minio , minio123
tenant:
  pools:
    - servers: 4 # each server runs as a pod
      name: pool-0
      # The number of volumes attached per MinIO Tenant Pod / Server.
      volumesPerServer: 1 # changed from 4 -&amp;gt; 1
      # The capacity per volume requested per MinIO Tenant Pod.
      size: 1Gi # changed from 10Gi -&amp;gt; 1Gi

# env: [] # add the environment variable below
  env:
    - name: MINIO_STORAGE_CLASS_STANDARD
      value: &quot;EC:1&quot;

# Deploy the tenant
helm install \
--namespace tenant-0 \
--create-namespace \
--values values.yaml \
tenant-0 minio-operator/tenant

# Verify
kubectl get tenants -A -w

$ kubectl get tenants -A -w
NAMESPACE   NAME      STATE                               HEALTH   AGE
tenant-0    myminio   Waiting for MinIO TLS Certificate            5s
tenant-0    myminio   Provisioning MinIO Cluster IP Service            15s
tenant-0    myminio   Provisioning Console Service                     15s
tenant-0    myminio   Provisioning MinIO Headless Service              15s
tenant-0    myminio   Provisioning MinIO Headless Service              16s
tenant-0    myminio   Provisioning MinIO Statefulset                   16s
tenant-0    myminio   Provisioning MinIO Statefulset                   16s
tenant-0    myminio   Provisioning MinIO Statefulset                   17s
tenant-0    myminio   Waiting for Tenant to be healthy                 17s
tenant-0    myminio   Waiting for Tenant to be healthy        red      46s
tenant-0    myminio   Waiting for Tenant to be healthy        green    48s
tenant-0    myminio   Initialized                             green    50s
tenant-0    myminio   Initialized                             green    51s
tenant-0    myminio   Initialized                             green    51s
tenant-0    myminio   Initialized                             green    55s
tenant-0    myminio   Initialized                             green    2m12s


kubectl get tenants -n tenant-0

$ kubectl get tenants -n tenant-0
NAME      STATE         HEALTH   AGE
myminio   Initialized   green    3m5s&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Once the tenant is Initialized, let's look at the running resources.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# Check the resources
kubectl get all -n tenant-0
kubectl get sts,pod,svc,ep,pvc,secret -n tenant-0

$ kubectl get all -n tenant-0
NAME                   READY   STATUS    RESTARTS   AGE
pod/myminio-pool-0-0   2/2     Running   0          4m44s
pod/myminio-pool-0-1   2/2     Running   0          4m44s
pod/myminio-pool-0-2   2/2     Running   0          4m44s
pod/myminio-pool-0-3   2/2     Running   0          4m44s

NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/minio             ClusterIP   10.96.228.130   &amp;lt;none&amp;gt;        443/TCP    4m45s
service/myminio-console   ClusterIP   10.96.236.64    &amp;lt;none&amp;gt;        9443/TCP   4m45s
service/myminio-hl        ClusterIP   None            &amp;lt;none&amp;gt;        9000/TCP   4m45s

NAME                              READY   AGE
statefulset.apps/myminio-pool-0   4/4     4m44s


$ kubectl get sts,pod,svc,ep,pvc,secret -n tenant-0
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME                              READY   AGE
statefulset.apps/myminio-pool-0   4/4     4m48s

NAME                   READY   STATUS    RESTARTS   AGE
pod/myminio-pool-0-0   2/2     Running   0          4m48s
pod/myminio-pool-0-1   2/2     Running   0          4m48s
pod/myminio-pool-0-2   2/2     Running   0          4m48s
pod/myminio-pool-0-3   2/2     Running   0          4m48s

NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/minio             ClusterIP   10.96.228.130   &amp;lt;none&amp;gt;        443/TCP    4m49s
service/myminio-console   ClusterIP   10.96.236.64    &amp;lt;none&amp;gt;        9443/TCP   4m49s
service/myminio-hl        ClusterIP   None            &amp;lt;none&amp;gt;        9000/TCP   4m49s

NAME                        ENDPOINTS                                                     AGE
endpoints/minio             10.244.1.4:9000,10.244.2.3:9000,10.244.3.3:9000 + 1 more...   4m49s
endpoints/myminio-console   10.244.1.4:9443,10.244.2.3:9443,10.244.3.3:9443 + 1 more...   4m49s
endpoints/myminio-hl        10.244.1.4:9000,10.244.2.3:9000,10.244.3.3:9000 + 1 more...   4m49s

NAME                                           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
persistentvolumeclaim/data0-myminio-pool-0-0   Bound    pvc-ef35ab5b-9745-4665-b460-0d65c1d77353   1Gi        RWO            standard       &amp;lt;unset&amp;gt;                 4m48s
persistentvolumeclaim/data0-myminio-pool-0-1   Bound    pvc-629b9a00-5e73-4251-9813-b4311e5b43b6   1Gi        RWO            standard       &amp;lt;unset&amp;gt;                 4m48s
persistentvolumeclaim/data0-myminio-pool-0-2   Bound    pvc-0d14d086-d0e1-4ce5-a450-be0479861554   1Gi        RWO            standard       &amp;lt;unset&amp;gt;                 4m48s
persistentvolumeclaim/data0-myminio-pool-0-3   Bound    pvc-e2aa95ac-7e8c-4cf4-aaa2-a1008afea8bf   1Gi        RWO            standard       &amp;lt;unset&amp;gt;                 4m48s

NAME                                    TYPE                 DATA   AGE
secret/myminio-env-configuration        Opaque               1      5m4s
secret/myminio-tls                      Opaque               2      4m54s
secret/sh.helm.release.v1.tenant-0.v1   helm.sh/release.v1   1      5m4s


# Verify
kubectl get pod -n tenant-0 -l v1.min.io/pool=pool-0 -owide
kubectl describe pod -n tenant-0 -l v1.min.io/pool=pool-0
kubectl logs -n tenant-0 -l v1.min.io/pool=pool-0

$ kubectl get pod -n tenant-0 -l v1.min.io/pool=pool-0 -owide
NAME               READY   STATUS    RESTARTS   AGE     IP           NODE            NOMINATED NODE   READINESS GATES
myminio-pool-0-0   2/2     Running   0          6m49s   10.244.2.3   myk8s-worker2   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
myminio-pool-0-1   2/2     Running   0          6m49s   10.244.4.3   myk8s-worker4   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
myminio-pool-0-2   2/2     Running   0          6m49s   10.244.3.3   myk8s-worker3   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
myminio-pool-0-3   2/2     Running   0          6m49s   10.244.1.4   myk8s-worker    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As shown on the right side of the figure below, each MinIO pod actually runs three containers.&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;A &lt;b&gt;MinIO container&lt;/b&gt; running standard MinIO functionality, identical to a standalone MinIO installation. This container stores and retrieves objects on the provided mount points (persistent volumes).&lt;/li&gt;
&lt;li&gt;An &lt;b&gt;InitContainer&lt;/b&gt; that manages the configuration secret at pod startup.&lt;/li&gt;
&lt;li&gt;A &lt;b&gt;SideCar container&lt;/b&gt; that &lt;b&gt;monitors the tenant's configuration secret&lt;/b&gt; and updates it on change.&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1584&quot; data-origin-height=&quot;1080&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/c8WMnm/btsQxfeXUys/rKxlCmKnGK2RtDobzUY8K0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/c8WMnm/btsQxfeXUys/rKxlCmKnGK2RtDobzUY8K0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/c8WMnm/btsQxfeXUys/rKxlCmKnGK2RtDobzUY8K0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fc8WMnm%2FbtsQxfeXUys%2FrKxlCmKnGK2RtDobzUY8K0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1584&quot; height=&quot;1080&quot; data-origin-width=&quot;1584&quot; data-origin-height=&quot;1080&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://docs.min.io/community/minio-object-store/operations/deployments/k8s-minio-tenants.html&quot;&gt;https://docs.min.io/community/minio-object-store/operations/deployments/k8s-minio-tenants.html&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Additionally, let's check the WebUI information in the logs.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;$ kubectl logs -n tenant-0 -l v1.min.io/pool=pool-0
Defaulted container &quot;minio&quot; out of: minio, sidecar, validate-arguments (init)
Defaulted container &quot;minio&quot; out of: minio, sidecar, validate-arguments (init)
Defaulted container &quot;minio&quot; out of: minio, sidecar, validate-arguments (init)
Defaulted container &quot;minio&quot; out of: minio, sidecar, validate-arguments (init)
MinIO Object Storage Server
Copyright: 2015-2025 MinIO, Inc.
License: GNU AGPLv3 - https://www.gnu.org/licenses/agpl-3.0.html
Version: RELEASE.2025-04-08T15-41-24Z (go1.24.2 linux/amd64)

API: https://minio.tenant-0.svc.cluster.local
WebUI: https://10.244.3.3:9443 https://127.0.0.1:9443

Docs: https://docs.min.io
---------------------------
MinIO Object Storage Server
Copyright: 2015-2025 MinIO, Inc.
License: GNU AGPLv3 - https://www.gnu.org/licenses/agpl-3.0.html
Version: RELEASE.2025-04-08T15-41-24Z (go1.24.2 linux/amd64)

API: https://minio.tenant-0.svc.cluster.local
WebUI: https://10.244.1.4:9443 https://127.0.0.1:9443

Docs: https://docs.min.io
---------------------------
MinIO Object Storage Server
Copyright: 2015-2025 MinIO, Inc.
License: GNU AGPLv3 - https://www.gnu.org/licenses/agpl-3.0.html
Version: RELEASE.2025-04-08T15-41-24Z (go1.24.2 linux/amd64)

API: https://minio.tenant-0.svc.cluster.local
WebUI: https://10.244.2.3:9443 https://127.0.0.1:9443

Docs: https://docs.min.io
---------------------------
MinIO Object Storage Server
Copyright: 2015-2025 MinIO, Inc.
License: GNU AGPLv3 - https://www.gnu.org/licenses/agpl-3.0.html
Version: RELEASE.2025-04-08T15-41-24Z (go1.24.2 linux/amd64)

API: https://minio.tenant-0.svc.cluster.local
WebUI: https://10.244.4.3:9443 https://127.0.0.1:9443

Docs: https://docs.min.io
---------------------------

# Check the configured secret
kubectl get secret -n tenant-0 myminio-env-configuration -o jsonpath='{.data.config\.env}' | base64 -d ; echo

$ kubectl get secret -n tenant-0 myminio-env-configuration -o jsonpath='{.data.config\.env}' | base64 -d ; echo
export MINIO_ROOT_USER=&quot;minio&quot;
export MINIO_ROOT_PASSWORD=&quot;minio123&quot;

# Change the default ClusterIP service to NodePort for access
kubectl patch svc -n tenant-0 myminio-console -p '{&quot;spec&quot;: {&quot;type&quot;: &quot;NodePort&quot;, &quot;ports&quot;: [{&quot;port&quot;: 9443, &quot;targetPort&quot;: 9443, &quot;nodePort&quot;: 30001}]}}'&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Accessing the WebUI with the information above, we can see that it offers richer functionality than MinIO run standalone.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1813&quot; data-origin-height=&quot;1264&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/rKJKw/btsQzleaM1C/fIpnOzzTIlmrlLWv3RmjF1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/rKJKw/btsQzleaM1C/fIpnOzzTIlmrlLWv3RmjF1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/rKJKw/btsQzleaM1C/fIpnOzzTIlmrlLWv3RmjF1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FrKJKw%2FbtsQzleaM1C%2FfIpnOzzTIlmrlLWv3RmjF1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1813&quot; height=&quot;1264&quot; data-origin-width=&quot;1813&quot; data-origin-height=&quot;1264&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's also change the MinIO object storage endpoint to a NodePort and manage it with &lt;code&gt;mc&lt;/code&gt;.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# Change to NodePort
kubectl patch svc -n tenant-0 minio -p '{&quot;spec&quot;: {&quot;type&quot;: &quot;NodePort&quot;, &quot;ports&quot;: [{&quot;port&quot;: 443, &quot;targetPort&quot;: 9000, &quot;nodePort&quot;: 30002}]}}'

# Add an alias
mc alias set k8sminio https://127.0.0.1:30002 minio minio123 --insecure
mc alias list
mc admin info k8sminio --insecure

$ mc alias set k8sminio https://127.0.0.1:30002 minio minio123 --insecure
Added `k8sminio` successfully.

$ mc admin info k8sminio --insecure
●  myminio-pool-0-0.myminio-hl.tenant-0.svc.cluster.local:9000
   Uptime: 13 minutes
   Version: 2025-04-08T15:41:24Z
   Network: 4/4 OK
   Drives: 1/1 OK
   Pool: 1

●  myminio-pool-0-1.myminio-hl.tenant-0.svc.cluster.local:9000
   Uptime: 13 minutes
   Version: 2025-04-08T15:41:24Z
   Network: 4/4 OK
   Drives: 1/1 OK
   Pool: 1

●  myminio-pool-0-2.myminio-hl.tenant-0.svc.cluster.local:9000
   Uptime: 13 minutes
   Version: 2025-04-08T15:41:24Z
   Network: 4/4 OK
   Drives: 1/1 OK
   Pool: 1

●  myminio-pool-0-3.myminio-hl.tenant-0.svc.cluster.local:9000
   Uptime: 13 minutes
   Version: 2025-04-08T15:41:24Z
   Network: 4/4 OK
   Drives: 1/1 OK
   Pool: 1

┌──────┬───────────────────────┬─────────────────────┬──────────────┐
│ Pool │ Drives Usage          │ Erasure stripe size │ Erasure sets │
│ 1st  │ 1.6% (total: 2.8 TiB) │ 4                   │ 1            │
└──────┴───────────────────────┴─────────────────────┴──────────────┘

4 drives online, 0 drives offline, EC:1&lt;/code&gt;&lt;/pre&gt;
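&lt;p data-ke-size=&quot;size16&quot;&gt;As a rough sketch of what the EC:1 report above implies for capacity and fault tolerance (simplified single-erasure-set arithmetic, not MinIO's actual sizing logic):&lt;/p&gt;

```python
# Simplified erasure-set arithmetic, assuming one erasure set where EC:1
# means 1 parity shard per stripe. This mirrors the idea behind the
# "4 drives online, EC:1" report above; it is not MinIO's internal logic.

def erasure_summary(drives: int, parity: int, drive_size_gib: float):
    data_shards = drives - parity               # shards left for actual data
    usable_gib = data_shards * drive_size_gib   # approximate usable capacity
    tolerated = parity                          # drive losses survivable per stripe
    return data_shards, usable_gib, tolerated

# The tenant above: 4 servers x 1 drive (1 GiB each), EC:1
print(erasure_summary(drives=4, parity=1, drive_size_gib=1.0))  # (3, 3.0, 1)
```

&lt;p data-ke-size=&quot;size16&quot;&gt;With 4 drives and 1 parity shard, roughly 3 drives' worth of capacity is usable and any single drive can fail without data loss.&lt;/p&gt;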
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In this environment as well, let's upload a test file and examine the erasure coding.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1790&quot; data-origin-height=&quot;605&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cwvhlv/btsQxuQNNgJ/7OzkKD2Jni39IYeuADDVPk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cwvhlv/btsQxuQNNgJ/7OzkKD2Jni39IYeuADDVPk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cwvhlv/btsQxuQNNgJ/7OzkKD2Jni39IYeuADDVPk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fcwvhlv%2FbtsQxuQNNgJ%2F7OzkKD2Jni39IYeuADDVPk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1790&quot; height=&quot;605&quot; data-origin-width=&quot;1790&quot; data-origin-height=&quot;605&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;We can verify it as follows.&lt;/p&gt;
&lt;pre class=&quot;stata&quot;&gt;&lt;code&gt;# Install basic tools on the nodes
docker exec -it myk8s-control-plane sh -c 'apt update &amp;amp;&amp;amp; apt install tree -y'
for node in worker worker2 worker3 worker4; do echo &quot;node : myk8s-$node&quot; ; docker exec -it myk8s-$node sh -c 'apt update &amp;amp;&amp;amp; apt install tree -y'; echo; done

# Check each node's path where the PV is located
kubectl describe pv

$ kubectl describe pv | grep Path
    Type:          HostPath (bare host directory volume)
    Path:          /var/local-path-provisioner/pvc-0d14d086-d0e1-4ce5-a450-be0479861554_tenant-0_data0-myminio-pool-0-2
    HostPathType:  DirectoryOrCreate
    Type:          HostPath (bare host directory volume)
    Path:          /var/local-path-provisioner/pvc-629b9a00-5e73-4251-9813-b4311e5b43b6_tenant-0_data0-myminio-pool-0-1
    HostPathType:  DirectoryOrCreate
    Type:          HostPath (bare host directory volume)
    Path:          /var/local-path-provisioner/pvc-e2aa95ac-7e8c-4cf4-aaa2-a1008afea8bf_tenant-0_data0-myminio-pool-0-3
    HostPathType:  DirectoryOrCreate
    Type:          HostPath (bare host directory volume)
    Path:          /var/local-path-provisioner/pvc-ef35ab5b-9745-4665-b460-0d65c1d77353_tenant-0_data0-myminio-pool-0-0
    HostPathType:  DirectoryOrCreate

# On each worker node, use tree to inspect the files written via erasure coding
for node in worker worker2 worker3 worker4; do echo &quot;node : myk8s-$node&quot; ; docker exec -it myk8s-$node tree -h /var/local-path-provisioner; echo; done


$ for node in worker worker2 worker3 worker4; do echo &quot;node : myk8s-$node&quot; ; docker exec -it myk8s-$node tree -h /var/local-path-provisioner; echo; done
node : myk8s-worker
[4.0K]  /var/local-path-provisioner
`-- [4.0K]  pvc-e2aa95ac-7e8c-4cf4-aaa2-a1008afea8bf_tenant-0_data0-myminio-pool-0-3
    `-- [4.0K]  data
        `-- [4.0K]  test
            `-- [4.0K]  life.txt
                `-- [ 22K]  xl.meta

5 directories, 1 file

node : myk8s-worker2
[4.0K]  /var/local-path-provisioner
`-- [4.0K]  pvc-ef35ab5b-9745-4665-b460-0d65c1d77353_tenant-0_data0-myminio-pool-0-0
    `-- [4.0K]  data
        `-- [4.0K]  test
            `-- [4.0K]  life.txt
                `-- [ 22K]  xl.meta

5 directories, 1 file

node : myk8s-worker3
[4.0K]  /var/local-path-provisioner
`-- [4.0K]  pvc-0d14d086-d0e1-4ce5-a450-be0479861554_tenant-0_data0-myminio-pool-0-2
    `-- [4.0K]  data
        `-- [4.0K]  test
            `-- [4.0K]  life.txt
                `-- [ 22K]  xl.meta

5 directories, 1 file

node : myk8s-worker4
[4.0K]  /var/local-path-provisioner
`-- [4.0K]  pvc-629b9a00-5e73-4251-9813-b4311e5b43b6_tenant-0_data0-myminio-pool-0-1
    `-- [4.0K]  data
        `-- [4.0K]  test
            `-- [4.0K]  life.txt
                `-- [ 22K]  xl.meta

5 directories, 1 file

# Inspect the actual files
docker exec -it myk8s-worker  sh -c 'cat /var/local-path-provisioner/*/data/test/life.txt/xl.meta'
docker exec -it myk8s-worker2 sh -c 'cat /var/local-path-provisioner/*/data/test/life.txt/xl.meta'
docker exec -it myk8s-worker3 sh -c 'cat /var/local-path-provisioner/*/data/test/life.txt/xl.meta'
docker exec -it myk8s-worker4 sh -c 'cat /var/local-path-provisioner/*/data/test/life.txt/xl.meta'


$ docker exec -it myk8s-worker  sh -c 'cat /var/local-path-provisioner/*/data/test/life.txt/xl.meta'
XL2 �s�&amp;amp;���d���!,�:)��E��Type�V2Obj��ID��DDir�M%A�)�Cj���cI�t�EcAlgo�EcM�EcN�EcBSize��EcIndex�EcDist��CSumAlgo�PartNums��PartETags��PartSizes����PartASizes����Size���MTime�d���!,�MetaSys��x-minio-internal-inline-data�true�MetaUsr��content-type�text/plain�etag� 93662839239f8c2cb4a9ae412272957c-�q̦*r_{;k�l�іB���@Chapter 1: Childhood and Innocence

Chapter 2: Education and Curiosity

Chapter 3: Friendships and Bonds
...&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;We can confirm that MinIO in a Kubernetes environment also distributes files across the PVs via erasure coding.&lt;/p&gt;
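&lt;p data-ke-size=&quot;size16&quot;&gt;The shard-plus-parity idea can be illustrated with a single XOR parity shard, which, like the EC:1 setting used above, lets any one lost shard be rebuilt. MinIO itself uses Reed-Solomon erasure coding; this Python sketch is only an analogy:&lt;/p&gt;

```python
# Single-parity sketch of the EC:1 idea: split data into 3 shards and add
# one XOR parity shard, so any one lost shard can be rebuilt from the rest.
# Illustration only: MinIO actually uses Reed-Solomon erasure coding.
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data: bytes, k: int = 3) -> list:
    """Split data into k equal-size shards plus one XOR parity shard."""
    size = -(-len(data) // k)  # shard size, rounded up
    shards = [data[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(k)]
    return shards + [reduce(xor, shards)]

def recover(shards: list, lost: int) -> bytes:
    """Rebuild the shard at index `lost` by XOR-ing all remaining shards."""
    return reduce(xor, [s for i, s in enumerate(shards) if i != lost])

pieces = encode(b"hello world!", k=3)        # 3 data shards + 1 parity shard
print(recover(pieces, lost=1) == pieces[1])  # True: one lost shard is rebuilt
```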
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Closing&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;We examined MinIO's erasure coding in a hands-on environment and saw what it means in practice. It actually runs in quite a simple form, and it is remarkable that this alone provides an object storage service.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the next post, we will try out MinIO's DirectPV.&lt;/p&gt;</description>
      <category>MinIO</category>
      <category>kubernetes</category>
      <category>minIO</category>
      <category>Object Storage</category>
      <category>SNMD</category>
      <category>SNSD</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/62</guid>
      <comments>https://a-person.tistory.com/62#entry62comment</comments>
      <pubDate>Fri, 12 Sep 2025 23:43:17 +0900</pubDate>
    </item>
    <item>
      <title>[1] MinIO 개요</title>
      <link>https://a-person.tistory.com/61</link>
<description>&lt;p data-ke-size=&quot;size16&quot;&gt;In this post, we will take a look at the object storage MinIO.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Table of Contents&lt;/h2&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;Block, File, Object Storage&lt;/li&gt;
&lt;li&gt;MinIO Overview&lt;/li&gt;
&lt;li&gt;Key MinIO Operations&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;1. Block, File, Object Storage&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;MinIO is an object storage solution. To see why MinIO chose object storage, let's start with basic storage concepts.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Depending on how data is stored, storage falls into block storage, file storage, and object storage.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Block Storage&lt;/h3&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1525&quot; data-origin-height=&quot;612&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/vU5Mu/btsQyr60SdF/eCKrkUoF381HlOzRjBtg3K/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/vU5Mu/btsQyr60SdF/eCKrkUoF381HlOzRjBtg3K/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/vU5Mu/btsQyr60SdF/eCKrkUoF381HlOzRjBtg3K/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FvU5Mu%2FbtsQyr60SdF%2FeCKrkUoF381HlOzRjBtg3K%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1525&quot; height=&quot;612&quot; data-origin-width=&quot;1525&quot; data-origin-height=&quot;612&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://www.youtube.com/watch?v=6vFqHhgPHjI&quot;&gt;https://www.youtube.com/watch?v=6vFqHhgPHjI&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Block storage treats data as a sequence of fixed-size &quot;blocks&quot;. Each file is actually stored across multiple blocks.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The block size can be tuned to the data being stored. For example, if the disk block size is 4KB but a particular database does I/O in 16KB units, the two can be matched.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Not all blocks need to be stored together; they can be arranged for optimal performance. This is usually handled in software, and users do not align blocks themselves.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;However, block storage is limited in handling metadata. It can hold a few items such as file names, but effective search beyond that is difficult.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;It provides high consistency for data storage and is highly structured at the block level (its storage layout and access patterns are systematic).&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Block storage fetches data in block units over common interfaces such as iSCSI, Fibre Channel, SATA, and SAS.&lt;/p&gt;
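&lt;p data-ke-size=&quot;size16&quot;&gt;The fixed-size block model can be made concrete with a small calculation: a write occupies ceil(file_size / block_size) blocks, so matching the block size to the workload's I/O size changes how many blocks each operation touches (the 20 KiB figure below is purely illustrative):&lt;/p&gt;

```python
# A file on block storage is a sequence of fixed-size blocks, so a
# `file_size`-byte write occupies ceil(file_size / block_size) blocks.

def blocks_needed(file_size: int, block_size: int) -> int:
    return -(-file_size // block_size)  # ceiling division

# Illustrative: a 20 KiB write on 4 KiB disk blocks vs matched 16 KiB blocks
print(blocks_needed(20 * 1024, 4 * 1024))   # 5 blocks
print(blocks_needed(20 * 1024, 16 * 1024))  # 2 blocks
```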
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Because block storage is fundamentally raw storage space, managing it at the file level requires layering a file system appropriate to the OS (NTFS, XFS, ext4, etc.) on top. This provides the desired features (permissions, transactions, caching, etc.) but also introduces performance and resource overhead.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Block storage itself takes the form of a single disk, so providing redundancy requires parity- or HA-style RAID, which leaves some disk space unusable; the cost of vendor hardware or software that professionally supports this is also considerable.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;File Storage&lt;/h3&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1513&quot; data-origin-height=&quot;616&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/Gx6rp/btsQzcaq9Gb/2uU5VuRfebViiuzMs71q11/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/Gx6rp/btsQzcaq9Gb/2uU5VuRfebViiuzMs71q11/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/Gx6rp/btsQzcaq9Gb/2uU5VuRfebViiuzMs71q11/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FGx6rp%2FbtsQzcaq9Gb%2F2uU5VuRfebViiuzMs71q11%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1513&quot; height=&quot;616&quot; data-origin-width=&quot;1513&quot; data-origin-height=&quot;616&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://www.youtube.com/watch?v=6vFqHhgPHjI&quot;&gt;https://www.youtube.com/watch?v=6vFqHhgPHjI&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Files are stored in their entirety and accessed in the form they were stored, via folder/file paths.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The stored metadata is limited, including only creation date, modification date, file size, and a few extra attributes. Such metadata alone can hardly support complete search.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To prevent corruption from concurrent writes, a file can be 'locked' to a single writer; once that work completes, another writer can modify the file.&lt;/p&gt;
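&lt;p data-ke-size=&quot;size16&quot;&gt;The single-writer 'lock' can be sketched with POSIX advisory locks via Python's fcntl module; this is an OS-level illustration (Unix only), not any particular file storage product's mechanism:&lt;/p&gt;

```python
# Sketch of single-writer file locking with POSIX advisory locks (Unix only):
# a second writer's non-blocking exclusive lock fails while the first holds it.
import fcntl
import tempfile

path = tempfile.NamedTemporaryFile(delete=False).name

writer1 = open(path, "w")
fcntl.flock(writer1, fcntl.LOCK_EX)  # first writer acquires the exclusive lock

writer2 = open(path, "w")
try:
    # The second writer must not acquire the lock while writer1 holds it
    fcntl.flock(writer2, fcntl.LOCK_EX | fcntl.LOCK_NB)
    locked_out = False
except BlockingIOError:
    locked_out = True

print(locked_out)  # True: the concurrent write is prevented

fcntl.flock(writer1, fcntl.LOCK_UN)  # release: others may now lock the file
```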
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;File storage is typically built on top of block or object storage and accessed through file storage protocols such as SMB or NFS. These protocols usually incur some communication overhead.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Also, because file storage does not store files in a flat structure, when a folder accumulates many files, listing them or serving requests can slow down significantly depending on hardware specs (cache, etc.).&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;File storage also sits on block storage, so the hardware and network costs of providing HA (High Availability) are high. Scalability is likewise limited by the constraints of block storage.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Sharing and managing file storage can itself become a large management overhead.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Object Storage&lt;/h3&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1537&quot; data-origin-height=&quot;621&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/X79yk/btsQw8NI9bq/ptAVayYNxz9IExlA8kKkXK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/X79yk/btsQw8NI9bq/ptAVayYNxz9IExlA8kKkXK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/X79yk/btsQw8NI9bq/ptAVayYNxz9IExlA8kKkXK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FX79yk%2FbtsQw8NI9bq%2FptAVayYNxz9IExlA8kKkXK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1537&quot; height=&quot;621&quot; data-origin-width=&quot;1537&quot; data-origin-height=&quot;621&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://www.youtube.com/watch?v=6vFqHhgPHjI&quot;&gt;https://www.youtube.com/watch?v=6vFqHhgPHjI&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In object storage, a file is stored as distributed shards together with metadata, an Object ID, and additional attributes (such as RBAC information), and is reassembled on request.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Unlimited metadata can be attached to objects, enabling advanced search capabilities.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Files cannot be 'locked', but versioning can be enabled to preserve data integrity and meet regulatory requirements: a file cannot be changed without a record of the change, and it can be rolled back to a previous version.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Storage can be expanded continuously behind a common interface, enabling effectively unlimited scalability. Backend storage can be added while the frontend interface keeps serving requests, so expansion is transparent to users.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The storage protocol is a standard REST API over TCP/HTTP, which makes it highly efficient.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Object storage can run on commodity hardware and provides highly efficient redundancy through 'Erasure coding', without expensive RAID equipment. As we will see later, erasure coding splits data into multiple shards and adds parity information so that the original data can be recovered even if some shards are lost. (Most object storage solutions appear to provide this, each with its own algorithm.)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;It can also place endpoints in geographically diverse locations and replicate between them automatically, making global expansion inexpensive.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Watching MinIO's videos, object storage can feel like the technology that supersedes all other storage, but in practice the right storage should be chosen based on the characteristics of the workload.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://www.youtube.com/watch?v=Zs4VNc1tSNc&quot;&gt;https://www.youtube.com/watch?v=Zs4VNc1tSNc&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Block Storage Use Cases&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Block storage is well suited to server boot volumes and local storage, and to workloads that demand fast I/O, such as databases and media rendering.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;It is especially appropriate for I/O that requires strong consistency. File storage is a poor fit there because of its network and protocol overhead, and object storage is also unsuitable because its I/O goes through a REST API.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Block storage built from inexpensive, high-capacity disks is also a good fit for backups.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;File Storage Use Cases&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;File storage suits file shares where many users store files and manage important data, and it is a good place to keep large amounts of media such as images and video.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;It is also a natural location for web content shared across multiple web servers.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Object Storage Use Cases&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Object storage can power document and file sharing services such as Box or Dropbox.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Static websites can be hosted on object storage, and its unlimited scalability means it is replacing traditional backup and archiving solutions.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For data analytics/AI/ML workloads, object storage can flexibly store and manage vast amounts of metadata, which makes it well suited to large-scale data analysis.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In summary:&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1707&quot; data-origin-height=&quot;899&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/lsheH/btsQvNwM8LW/fOkHup6Zn0Z1GFA50OX9C1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/lsheH/btsQvNwM8LW/fOkHup6Zn0Z1GFA50OX9C1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/lsheH/btsQvNwM8LW/fOkHup6Zn0Z1GFA50OX9C1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FlsheH%2FbtsQvNwM8LW%2FfOkHup6Zn0Z1GFA50OX9C1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1707&quot; height=&quot;899&quot; data-origin-width=&quot;1707&quot; data-origin-height=&quot;899&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://www.youtube.com/watch?v=Zs4VNc1tSNc&quot;&gt;https://www.youtube.com/watch?v=Zs4VNc1tSNc&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;S3-based object storage has become the default in recent cloud environments, and with data volumes exploding in the AI era, object storage is drawing strong attention as the place to keep all of that data.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;However, users accustomed to traditional storage can find object storage's non-file access model and concepts confusing, and use cases that compensate with protocols such as NFS or blobfuse can actually introduce extra overhead.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In other words, to use object storage properly you need to shift your thinking: instead of treating it like files, use it through an SDK or the REST API.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;2. MinIO Overview&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;MinIO provides an object storage solution built on the qualities we covered above: flexibility, scalability, a scale-out architecture, performance, support for diverse architectures, and readiness for hybrid and multi-cloud environments.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's first go over a few terms used by MinIO and by object storage in general.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Objects&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;An object is binary data, sometimes called a Binary Large Object (BLOB). A blob can be an image, an audio file, a spreadsheet, or even binary executable code. Object storage platforms such as MinIO provide tools and features specialized for storing, retrieving, and searching blobs.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Buckets&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;MinIO uses buckets to organize objects. A bucket is similar to a folder or directory in a filesystem, and each bucket can hold an arbitrary number of objects. MinIO buckets provide the same functionality as AWS S3 buckets.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;MinIO server, server pool, cluster&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A MinIO deployment consists of a set of storage and compute resources running one or more minio server nodes, which together behave as a single object store. A standalone MinIO instance consists of one minio server node with one server pool.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The deployment modes are as follows.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://docs.min.io/community/minio-object-store/operations/deployments/installation.html&quot;&gt;https://docs.min.io/community/minio-object-store/operations/deployments/installation.html&lt;/a&gt;&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;b&gt;Single-Node Single-Drive (SNSD or &amp;ldquo;Standalone&amp;rdquo;)&lt;/b&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;A single MinIO server stores data on a single drive or folder.&lt;/li&gt;
&lt;li&gt;Suitable for local development and evaluation.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Single-Node Multi-Drive (SNMD or &amp;ldquo;Standalone Multi-Drive&amp;rdquo;)&lt;/b&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;A single MinIO server uses multiple drives or folders.&lt;/li&gt;
&lt;li&gt;Suitable for workloads with modest performance, scale, and capacity requirements.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Multi-Node Multi-Drive (MNMD or &amp;ldquo;Distributed&amp;rdquo;)&lt;/b&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Multiple MinIO servers, each with multiple drives.&lt;/li&gt;
&lt;li&gt;Used as enterprise-grade, high-performance object storage.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The terminology can be a little confusing: if a &lt;b&gt;MinIO server&lt;/b&gt; is a single process or node running the MinIO software, the basic unit that stores and serves data, then a group of such server nodes together with their drives is a &lt;b&gt;Server Pool&lt;/b&gt;.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Below is the command that creates a single server pool from 4 MinIO server nodes with 4 drives each.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;943&quot; data-origin-height=&quot;95&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/beFsTd/btsQwDf606y/JfNNLdDgureNBRKmkFJywK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/beFsTd/btsQwDf606y/JfNNLdDgureNBRKmkFJywK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/beFsTd/btsQwDf606y/JfNNLdDgureNBRKmkFJywK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbeFsTd%2FbtsQwDf606y%2FJfNNLdDgureNBRKmkFJywK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;943&quot; height=&quot;95&quot; data-origin-width=&quot;943&quot; data-origin-height=&quot;95&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://docs.min.io/community/minio-object-store/operations/concepts.html#how-does-a-distributed-minio-deployment-work&quot;&gt;https://docs.min.io/community/minio-object-store/operations/concepts.html#how-does-a-distributed-minio-deployment-work&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Multiple MinIO server nodes are grouped into a single pool, sharing storage and resources to serve object storage requests. The entire MinIO deployment, made up of one or more server pools, is called a &lt;b&gt;Cluster&lt;/b&gt;.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Below is the command that creates a cluster of 2 server pools, each made up of 4 MinIO server nodes with 4 drives apiece.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;946&quot; data-origin-height=&quot;119&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/5Bacf/btsQvQAl7sB/SIZjDST8qfF3pbRJEhdaZk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/5Bacf/btsQvQAl7sB/SIZjDST8qfF3pbRJEhdaZk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/5Bacf/btsQvQAl7sB/SIZjDST8qfF3pbRJEhdaZk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2F5Bacf%2FbtsQvQAl7sB%2FSIZjDST8qfF3pbRJEhdaZk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;946&quot; height=&quot;119&quot; data-origin-width=&quot;946&quot; data-origin-height=&quot;119&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://docs.min.io/community/minio-object-store/operations/concepts.html#how-does-a-distributed-minio-deployment-work&quot;&gt;https://docs.min.io/community/minio-object-store/operations/concepts.html#how-does-a-distributed-minio-deployment-work&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;That covers the basics; we will fill in more of the theory as we work through the hands-on sections.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;3. Key MinIO Operations&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's look at the mechanics of object storage through the main operations MinIO uses to handle objects.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;PUT Requests&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://www.youtube.com/watch?v=GNBWHjB7PP0&quot;&gt;https://www.youtube.com/watch?v=GNBWHjB7PP0&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1642&quot; data-origin-height=&quot;886&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/PRyjV/btsQzF4x6bJ/3nNOrY0N0M41ycpk8wSkGk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/PRyjV/btsQzF4x6bJ/3nNOrY0N0M41ycpk8wSkGk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/PRyjV/btsQzF4x6bJ/3nNOrY0N0M41ycpk8wSkGk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FPRyjV%2FbtsQzF4x6bJ%2F3nNOrY0N0M41ycpk8wSkGk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1642&quot; height=&quot;886&quot; data-origin-width=&quot;1642&quot; data-origin-height=&quot;886&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Suppose MinIO receives a PUT request to store an image file named &lt;code&gt;monalisa.jpg&lt;/code&gt; in the object store.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the diagram, the file name &lt;code&gt;monalisa.jpg&lt;/code&gt; passes through Hash -&amp;gt; Mod.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;1) The file name is hashed into a unique hash value.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;2) A modulus function is applied to the hash value to determine the specific set of drives where the data will ultimately be stored.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;At the same time, the image itself is processed by the &lt;code&gt;Erasure code&lt;/code&gt; engine.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This splits the object into individual data and parity blocks.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The resulting blocks are each stored on the drive set chosen by the modulus function.&lt;/p&gt;
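&lt;p data-ke-size=&quot;size16&quot;&gt;The Hash -&amp;gt; Mod step can be sketched as follows. This is a conceptual illustration only; the hash function (SHA-256) and set count chosen here are assumptions, not MinIO's actual internal scheme.&lt;/p&gt;

```python
import hashlib

def pick_erasure_set(object_name: str, num_sets: int) -> int:
    """Hash the object name, then take a modulus to pick an erasure set.
    Illustrative sketch only: MinIO uses its own hashing internally."""
    digest = hashlib.sha256(object_name.encode()).digest()
    hash_value = int.from_bytes(digest, "big")
    return hash_value % num_sets

# The same name always maps to the same set, which is what lets a later
# GET locate the data without a central lookup table.
set_for_put = pick_erasure_set("monalisa.jpg", 8)
set_for_get = pick_erasure_set("monalisa.jpg", 8)
assert set_for_put == set_for_get
```

&lt;p data-ke-size=&quot;size16&quot;&gt;Because the mapping is deterministic, no separate metadata server is needed to remember where each object went.&lt;/p&gt;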
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1582&quot; data-origin-height=&quot;895&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/dDWdYo/btsQxpWtNXn/6vmwc6swHkDUeZ4x0qk7PK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/dDWdYo/btsQxpWtNXn/6vmwc6swHkDUeZ4x0qk7PK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/dDWdYo/btsQxpWtNXn/6vmwc6swHkDUeZ4x0qk7PK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FdDWdYo%2FbtsQxpWtNXn%2F6vmwc6swHkDUeZ4x0qk7PK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1582&quot; height=&quot;895&quot; data-origin-width=&quot;1582&quot; data-origin-height=&quot;895&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In this diagram, the object ends up stored in the erasure set made up of drives 02, 06, 10, 14, 18, 22, 26, and 30.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;GET Requests&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;When a GET request arrives for a particular file, it works as follows.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1652&quot; data-origin-height=&quot;888&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/c9nOqk/btsQx80W9bC/ZHHRYz8dxPtDeOSlXxly40/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/c9nOqk/btsQx80W9bC/ZHHRYz8dxPtDeOSlXxly40/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/c9nOqk/btsQx80W9bC/ZHHRYz8dxPtDeOSlXxly40/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fc9nOqk%2FbtsQx80W9bC%2FZHHRYz8dxPtDeOSlXxly40%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1652&quot; height=&quot;888&quot; data-origin-width=&quot;1652&quot; data-origin-height=&quot;888&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The file name passes through the same Hash -&amp;gt; Mod steps, yielding the unique hash of the name and, from it, the specific drive set in the store. The final result of the Mod is the set of drives where the data actually lives.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1604&quot; data-origin-height=&quot;893&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bvrA1w/btsQzj1IqAx/8irkISxhAaR2f61hdC7zT0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bvrA1w/btsQzj1IqAx/8irkISxhAaR2f61hdC7zT0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bvrA1w/btsQzj1IqAx/8irkISxhAaR2f61hdC7zT0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbvrA1w%2FbtsQzj1IqAx%2F8irkISxhAaR2f61hdC7zT0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1604&quot; height=&quot;893&quot; data-origin-width=&quot;1604&quot; data-origin-height=&quot;893&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Once the data has been retrieved using this information, the &lt;code&gt;Erasure code&lt;/code&gt; engine reassembles the blocks back into the image.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Erasure Code&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://www.youtube.com/watch?v=sxcz6U0fUpo&quot;&gt;https://www.youtube.com/watch?v=sxcz6U0fUpo&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Both GET and PUT ultimately run through erasure coding, which is the key algorithm object storage relies on for durable, reliable storage.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Erasure coding is a data protection scheme that uses a mathematical algorithm to spread data across multiple stores. It provides resilience through parity blocks, which allow lost data to be reassembled.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1580&quot; data-origin-height=&quot;870&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bQGJv9/btsQwNpMLzx/jvzur1UNO3mtIvjkNt5DLK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bQGJv9/btsQwNpMLzx/jvzur1UNO3mtIvjkNt5DLK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bQGJv9/btsQwNpMLzx/jvzur1UNO3mtIvjkNt5DLK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbQGJv9%2FbtsQwNpMLzx%2Fjvzur1UNO3mtIvjkNt5DLK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1580&quot; height=&quot;870&quot; data-origin-width=&quot;1580&quot; data-origin-height=&quot;870&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In this example, one object is split by erasure coding into 5 data blocks and 3 parity blocks. The &lt;code&gt;Erasure stripe size&lt;/code&gt; is 8, and with 8 nodes of 8 drives each (64 drives in total) a total of 8 &lt;code&gt;erasure set&lt;/code&gt;s are created.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Each data block is the same size as one slice of the object's data. Parity blocks hold mathematically derived codes and are used to reassemble the object when data blocks are lost. In this configuration the object can be recovered even if up to 3 drives fail.&lt;/p&gt;
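&lt;p data-ke-size=&quot;size16&quot;&gt;As a toy illustration of the parity idea, here is a single-XOR-parity sketch. This is an assumption-laden stand-in: real object stores use Reed-Solomon-style codes that tolerate multiple lost blocks, whereas one XOR parity block tolerates exactly one.&lt;/p&gt;

```python
def split_with_parity(data: bytes, k: int):
    """Split data into k equal data blocks plus one XOR parity block.
    Toy stand-in for erasure coding (single-failure tolerance only)."""
    assert len(data) % k == 0
    size = len(data) // k
    blocks = [data[i * size:(i + 1) * size] for i in range(k)]
    parity = b"\x00" * size
    for block in blocks:
        parity = bytes(x ^ y for x, y in zip(parity, block))
    return blocks, parity

def recover(blocks, parity, lost_index):
    """Rebuild one lost data block by XOR-ing parity with the survivors."""
    rebuilt = parity
    for i, block in enumerate(blocks):
        if i != lost_index:
            rebuilt = bytes(x ^ y for x, y in zip(rebuilt, block))
    return rebuilt

blocks, parity = split_with_parity(b"monalisa", 4)
assert recover(blocks, parity, 2) == blocks[2]
```

&lt;p data-ke-size=&quot;size16&quot;&gt;The same principle, generalized with more parity blocks and richer math, is what lets the 5+3 configuration above survive 3 simultaneous drive failures.&lt;/p&gt;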
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1579&quot; data-origin-height=&quot;855&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/dM7jKc/btsQx4RMqXD/S97MHE0I99gK7H71n4bwM0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/dM7jKc/btsQx4RMqXD/S97MHE0I99gK7H71n4bwM0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/dM7jKc/btsQx4RMqXD/S97MHE0I99gK7H71n4bwM0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FdM7jKc%2FbtsQx4RMqXD%2FS97MHE0I99gK7H71n4bwM0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1579&quot; height=&quot;855&quot; data-origin-width=&quot;1579&quot; data-origin-height=&quot;855&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This erasure coding is said to provide object-level recovery at far lower overhead than older techniques such as RAID or replication.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Bit rot healing&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://docs.min.io/community/minio-object-store/operations/concepts.html#minio-implements-bit-rot-healing-to-protect-data-at-rest&quot;&gt;https://docs.min.io/community/minio-object-store/operations/concepts.html#minio-implements-bit-rot-healing-to-protect-data-at-rest&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A corrupted data block is not simply left in place. MinIO implements bit rot healing to protect data at rest.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Bit rot is random, silent data corruption that can occur on storage devices. It is not caused by user activity, and the operating system can neither detect the change nor notify users or administrators.&lt;br /&gt;MinIO uses a hashing algorithm to verify object integrity; the check runs automatically whenever a GET or HEAD request is made for an object.&lt;/p&gt;
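&lt;p data-ke-size=&quot;size16&quot;&gt;The integrity check can be sketched as storing a checksum at write time and re-hashing on read. The function names and the use of SHA-256 here are illustrative assumptions; MinIO's actual implementation uses its own high-speed hashing.&lt;/p&gt;

```python
import hashlib

def write_block(block: bytes):
    """Store a block together with its checksum (illustrative sketch)."""
    return {"data": block, "checksum": hashlib.sha256(block).hexdigest()}

def read_block(stored):
    """Re-hash on read; a mismatch means silent corruption (bit rot),
    which a real deployment would then heal from parity shards."""
    ok = hashlib.sha256(stored["data"]).hexdigest() == stored["checksum"]
    return stored["data"], ok

stored = write_block(b"shard-bytes")
data, ok = read_block(stored)
assert ok
stored["data"] = b"shard-bytez"  # simulate a silent single-byte flip
data, ok = read_block(stored)
assert not ok
```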
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Bit rot healing can also be triggered when MinIO detects a version mismatch during a PUT. If an object has been corrupted by bit rot, MinIO can repair it automatically as long as enough parity shards for the object survive.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The MinIO scanner can also be run manually, but the automatic bit rot healing is said to be sufficient in most cases.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Wrapping Up&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;That covers the fundamentals of object storage and MinIO's key concepts and behavior.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the next post we will explore MinIO hands-on.&lt;/p&gt;
      <category>MinIO</category>
      <category>block storage</category>
      <category>file storage</category>
      <category>minIO</category>
      <category>Object Storage</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/61</guid>
      <comments>https://a-person.tistory.com/61#entry61comment</comments>
      <pubDate>Fri, 12 Sep 2025 23:38:55 +0900</pubDate>
    </item>
    <item>
      <title>[10] Cilium - Security</title>
      <link>https://a-person.tistory.com/60</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;이번 게시물에서는 Cilium에서 제공하는 Security 기능에 대해서 살펴보겠습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Cilium의 Security 에 대한 설명 중 Network Security와 Network Policy 부분에서 중점을 두고 설명하도록 하겠습니다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;457&quot; data-origin-height=&quot;324&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/lvXj1/btsQmMxu7uB/jvKTNv48d7LzqTKB2gyNu1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/lvXj1/btsQmMxu7uB/jvKTNv48d7LzqTKB2gyNu1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/lvXj1/btsQmMxu7uB/jvKTNv48d7LzqTKB2gyNu1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FlvXj1%2FbtsQmMxu7uB%2FjvKTNv48d7LzqTKB2gyNu1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;457&quot; height=&quot;324&quot; data-origin-width=&quot;457&quot; data-origin-height=&quot;324&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Table of Contents&lt;/h2&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;Network Security Concepts&lt;/li&gt;
&lt;li&gt;Cilium Network Policy&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;1. Network Security Concepts&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Before diving into Cilium's Network Policy, let's cover the concepts we will need.&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Identity based Security&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In traditional security architectures, L3 security generally relies on IP-based filtering.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In Kubernetes, of course, it is hard to pin down a target's IP, so a Network Policy is written against labels and then enforced by filtering on IPs.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For example, here is how a Network Policy behaves on a Calico-based cluster.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Create a Network Policy
cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-granted-access
spec:
  podSelector:
    matchLabels:
      app: server
  ingress:
  - from:
    - podSelector:
        matchLabels:
          access: granted
    ports:
    - protocol: TCP
      port: 80
EOF

# Check iptables on the node
root@aks-nodepool1-16223536-vmss000000:/# iptables -S | grep allow-granted-access
-A cali-pi-_-zqXhpUfbm6lphf3Orn -p tcp -m comment --comment &quot;cali:Jz1VzdhBJEh8b5nC&quot; -m comment --comment &quot;Policy default/knp.default.allow-granted-access ingress&quot; -m set --match-set cali40s:ahXLGcJKqoUc6DMM01m_Oja src -m multiport --dports 80 -j MARK --set-xmark 0x10000/0x10000

# Inspecting this iptables chain shows it permits traffic matching an ipset
root@aks-nodepool1-16223536-vmss000000:/# iptables -L cali-pi-_-zqXhpUfbm6lphf3Orn
Chain cali-pi-_-zqXhpUfbm6lphf3Orn (1 references)
target     prot opt source               destination
MARK       tcp  --  anywhere             anywhere             /* cali:Jz1VzdhBJEh8b5nC */ /* Policy default/knp.default.allow-granted-access ingress */ match-set cali40s:ahXLGcJKqoUc6DMM01m_Oja src multiport dports http MARK or 0x10000

# Listing the ipset shows the allowed pod IP
root@aks-nodepool1-16223536-vmss000000:/# ipset list cali40s:ahXLGcJKqoUc6DMM01m_Oja
Name: cali40s:ahXLGcJKqoUc6DMM01m_Oja
Type: hash:net
Revision: 7
Header: family inet hashsize 1024 maxelem 1048576 bucketsize 12 initval 0x4559b4b4
Size in memory: 504
References: 1
Number of entries: 1
Members:
10.224.0.11

# The allowed Pod IP
$ kubectl get po -owide
NAME                    READY   STATUS    RESTARTS   AGE   IP            NODE                                NOMINATED NODE   READINESS GATES
client-allowed          1/1     Running   0          21m   10.224.0.11   aks-nodepool1-16223536-vmss000000   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;

# The iptables policy is actually enforced on the server pod's interface
root@aks-nodepool1-16223536-vmss000000:/# iptables -S | grep cali-pi-_-zqXhpUfbm6lphf3Orn
...
-A cali-tw-azv1867ae04537 -m comment --comment &quot;cali:qLBcoqYzO9kdru-z&quot; -j cali-pi-_-zqXhpUfbm6lphf3Orn

# Check the pod's veth interface -&amp;gt; 19
$ kubectl exec -it server-6f94b4c7-tptnd -- ip a
...
18: eth0@if19: &amp;lt;BROADCAST,UP,LOWER_UP,M-DOWN&amp;gt; mtu 1500 qdisc noqueue qlen 1000
    link/ether a2:33:0f:d3:aa:7e brd ff:ff:ff:ff:ff:ff
    inet 10.224.0.13/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::a033:fff:fed3:aa7e/64 scope link
       valid_lft forever preferred_lft forever

# Check the node's interface -&amp;gt; azv1867ae04537
root@aks-nodepool1-16223536-vmss000000:/# ip a
...
19: azv1867ae04537@if18: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether aa:aa:aa:aa:aa:aa brd ff:ff:ff:ff:ff:ff link-netns cni-d58178f5-2788-69b1-c4f9-c5af88779471
    inet6 fe80::a8aa:aaff:feaa:aaaa/64 scope link
       valid_lft forever preferred_lft forever


# When a client Pod IP is added -&amp;gt; a member is added to the ipset
root@aks-nodepool1-16223536-vmss000000:/# ipset list cali40s
ipset v7.15: The set with the given name does not exist
root@aks-nodepool1-16223536-vmss000000:/# ipset list cali40s:ahXLGcJKqoUc6DMM01m_Oja
Name: cali40s:ahXLGcJKqoUc6DMM01m_Oja
Type: hash:net
Revision: 7
Header: family inet hashsize 1024 maxelem 1048576 bucketsize 12 initval 0x4559b4b4
Size in memory: 552
References: 1
Number of entries: 2
Members:
10.224.0.38
10.224.0.11

# When server pods scale out -&amp;gt; a rule for the new veth interface is added to iptables
$ kubectl exec -it server-6f94b4c7-7tf2x -- ip a
...
24: eth0@if25: &amp;lt;BROADCAST,UP,LOWER_UP,M-DOWN&amp;gt; mtu 1500 qdisc noqueue qlen 1000
    link/ether 2a:18:44:33:78:64 brd ff:ff:ff:ff:ff:ff
    inet 10.224.0.21/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::2818:44ff:fe33:7864/64 scope link
       valid_lft forever preferred_lft forever

root@aks-nodepool1-16223536-vmss000000:/# ip a
25: azv0a40327c417@if24: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether aa:aa:aa:aa:aa:aa brd ff:ff:ff:ff:ff:ff link-netns cni-b934579e-7782-1c9b-511f-98381ee5263c
    inet6 fe80::a8aa:aaff:feaa:aaaa/64 scope link
       valid_lft forever preferred_lft forever

# A rule for the new interface has been added
root@aks-nodepool1-16223536-vmss000000:/# iptables -S | grep cali-pi-_-zqXhpUfbm6lphf3Orn
-N cali-pi-_-zqXhpUfbm6lphf3Orn
-A cali-pi-_-zqXhpUfbm6lphf3Orn -p tcp -m comment --comment &quot;cali:Jz1VzdhBJEh8b5nC&quot; -m comment --comment &quot;Policy default/knp.default.allow-granted-access ingress&quot; -m set --match-set cali40s:ahXLGcJKqoUc6DMM01m_Oja src -m multiport --dports 80 -j MARK --set-xmark 0x10000/0x10000
-A cali-tw-azv0a40327c417 -m comment --comment &quot;cali:it2wtgChKCWF3MNr&quot; -j cali-pi-_-zqXhpUfbm6lphf3Orn # newly added
-A cali-tw-azv1867ae04537 -m comment --comment &quot;cali:qLBcoqYzO9kdru-z&quot; -j cali-pi-_-zqXhpUfbm6lphf3Orn&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;With this approach, whenever pods are created or changed, the rules are updated by adding or removing IPs; in a large distributed application that can mean many updates across many nodes. (That said, Calico does seem to optimize iptables as much as possible, for example by using ipsets.)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To add flexibility, Cilium decouples security from network addressing and provides it through identities. Internally the identity is a number, but as the figure below shows, pods grouped by the same labels share the same identity.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://docs.cilium.io/en/stable/_images/identity.png&quot; alt=&quot;../../../_images/identity.png&quot; /&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;Source: &lt;/span&gt;&lt;span&gt;&lt;a href=&quot;https://docs.cilium.io/en/stable/security/network/identity/&quot;&gt;https://docs.cilium.io/en/stable/security/network/identity/&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;If a new pod is created with matching labels, it receives the same frontends identity and can then connect to the backends identity. This removes the need to update every node.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In this way, Cilium identifies pods by a label-based security identity. A network policy then selects pods (endpoints) with an endpointSelector, maps them to identities, and allows traffic based on those identities.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's verify this on an actual Cilium cluster.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# Check the identity of each Cilium endpoint
kubectl get ciliumendpoints.cilium.io -n kube-system

# Pods with the same labels share the same identity and therefore the same security policy
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumendpoints.cilium.io -n kube-system
NAME                              SECURITY IDENTITY   ENDPOINT STATE   IPV4           IPV6
coredns-674b8bbfcf-pcdst          30494               ready            172.20.0.242
coredns-674b8bbfcf-pn8k7          30494               ready            172.20.0.100
hubble-relay-fdd49b976-r548j      17545               ready            172.20.0.160
hubble-ui-655f947f96-f6vhr        11820               ready            172.20.0.114
metrics-server-5dd7b49d79-bj65g   601                 ready            172.20.0.138


# Check security identities per namespace
kubectl get ciliumidentities.cilium.io 

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumidentities.cilium.io
NAME    NAMESPACE            AGE
10477   local-path-storage   12m
11451   cilium-monitoring    12m
11820   kube-system          12m
17545   kube-system          12m
30494   kube-system          12m
601     kube-system          12m
62283   cilium-monitoring    12m&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;These identities are distinguished by their security labels. Inspecting a CiliumIdentity shows which security labels it consists of.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;kubectl get ciliumidentities.cilium.io 30494 -o yaml | yq

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumidentities.cilium.io 30494 -o yaml | yq
{
  &quot;apiVersion&quot;: &quot;cilium.io/v2&quot;,
  &quot;kind&quot;: &quot;CiliumIdentity&quot;,
  &quot;metadata&quot;: {
    &quot;creationTimestamp&quot;: &quot;2025-09-04T13:13:50Z&quot;,
    &quot;generation&quot;: 1,
    &quot;labels&quot;: {
      &quot;io.kubernetes.pod.namespace&quot;: &quot;kube-system&quot;
    },
    &quot;name&quot;: &quot;30494&quot;,
    &quot;resourceVersion&quot;: &quot;902&quot;,
    &quot;uid&quot;: &quot;b59af7e9-83f3-4221-ba5c-e8e3db15f33e&quot;
  },
  &quot;security-labels&quot;: {
    &quot;k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name&quot;: &quot;kube-system&quot;,
    &quot;k8s:io.cilium.k8s.policy.cluster&quot;: &quot;default&quot;,
    &quot;k8s:io.cilium.k8s.policy.serviceaccount&quot;: &quot;coredns&quot;,
    &quot;k8s:io.kubernetes.pod.namespace&quot;: &quot;kube-system&quot;,
    &quot;k8s:k8s-app&quot;: &quot;kube-dns&quot;
  }
}

# Check the identity list from the cilium agent
kubectl exec -it -n kube-system ds/cilium -- cilium identity list

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -- cilium identity list
ID      LABELS
1       reserved:host
        reserved:kube-apiserver
2       reserved:world
3       reserved:unmanaged
4       reserved:health
5       reserved:init
6       reserved:remote-node
7       reserved:kube-apiserver
        reserved:remote-node
8       reserved:ingress
9       reserved:world-ipv4
10      reserved:world-ipv6
601     k8s:app.kubernetes.io/instance=metrics-server
        k8s:app.kubernetes.io/name=metrics-server
        k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system
        k8s:io.cilium.k8s.policy.cluster=default
        k8s:io.cilium.k8s.policy.serviceaccount=metrics-server
        k8s:io.kubernetes.pod.namespace=kube-system
10477   k8s:app=local-path-provisioner
        k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=local-path-storage
        k8s:io.cilium.k8s.policy.cluster=default
        k8s:io.cilium.k8s.policy.serviceaccount=local-path-provisioner-service-account
        k8s:io.kubernetes.pod.namespace=local-path-storage
11451   k8s:app=prometheus
        k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring
        k8s:io.cilium.k8s.policy.cluster=default
        k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s
        k8s:io.kubernetes.pod.namespace=cilium-monitoring
11820   k8s:app.kubernetes.io/name=hubble-ui
        k8s:app.kubernetes.io/part-of=cilium
        k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system
        k8s:io.cilium.k8s.policy.cluster=default
        k8s:io.cilium.k8s.policy.serviceaccount=hubble-ui
        k8s:io.kubernetes.pod.namespace=kube-system
        k8s:k8s-app=hubble-ui
17545   k8s:app.kubernetes.io/name=hubble-relay
        k8s:app.kubernetes.io/part-of=cilium
        k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system
        k8s:io.cilium.k8s.policy.cluster=default
        k8s:io.cilium.k8s.policy.serviceaccount=hubble-relay
        k8s:io.kubernetes.pod.namespace=kube-system
        k8s:k8s-app=hubble-relay
30494   k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system
        k8s:io.cilium.k8s.policy.cluster=default
        k8s:io.cilium.k8s.policy.serviceaccount=coredns
        k8s:io.kubernetes.pod.namespace=kube-system
        k8s:k8s-app=kube-dns
62283   k8s:app=grafana
        k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring
        k8s:io.cilium.k8s.policy.cluster=default
        k8s:io.cilium.k8s.policy.serviceaccount=default
        k8s:io.kubernetes.pod.namespace=cilium-monitoring


# Pods do not carry all of these labels themselves; cilium appears to add security labels (namespace, serviceaccount, etc.)
kubectl get pod -n kube-system -l k8s-app=kube-dns --show-labels

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -n kube-system -l k8s-app=kube-dns --show-labels
NAME                       READY   STATUS    RESTARTS   AGE   LABELS
coredns-674b8bbfcf-pcdst   1/1     Running   0          19m   k8s-app=kube-dns,pod-template-hash=674b8bbfcf
coredns-674b8bbfcf-pn8k7   1/1     Running   0          19m   k8s-app=kube-dns,pod-template-hash=674b8bbfcf&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Note that adding a label to a pod creates a new identity. (In large clusters, changing labels frequently can degrade performance because of identity allocation.)&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;kubectl label pods -n kube-system -l k8s-app=kube-dns study=security

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl label pods -n kube-system -l k8s-app=kube-dns study=security
pod/coredns-674b8bbfcf-pcdst labeled
pod/coredns-674b8bbfcf-pn8k7 labeled

kubectl exec -it -n kube-system ds/cilium -- cilium identity list

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -- cilium identity list
...
30494   k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system
        k8s:io.cilium.k8s.policy.cluster=default
        k8s:io.cilium.k8s.policy.serviceaccount=coredns
        k8s:io.kubernetes.pod.namespace=kube-system
        k8s:k8s-app=kube-dns

# After a moment, the pods are confirmed to have a new identity
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -- cilium identity list
...
33353   k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system
        k8s:io.cilium.k8s.policy.cluster=default
        k8s:io.cilium.k8s.policy.serviceaccount=coredns
        k8s:io.kubernetes.pod.namespace=kube-system
        k8s:k8s-app=kube-dns
        k8s:study=security&lt;/code&gt;&lt;/pre&gt;
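&lt;p data-ke-size=&quot;size16&quot;&gt;The test label can be removed as shown below, and the same commands can be used to check whether the endpoints move to yet another identity afterwards.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# Remove the study label added for the test
kubectl label pods -n kube-system -l k8s-app=kube-dns study-

# Check whether the identity changes again
kubectl exec -it -n kube-system ds/cilium -- cilium identity list&lt;/code&gt;&lt;/pre&gt;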
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Proxy Injection&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;L7 policies are enforced through the Envoy proxy. When L7 features are enabled, the Cilium agent runs an Envoy proxy as a separate process. Alternatively, setting &lt;code&gt;envoy.enabled&lt;/code&gt; to &lt;code&gt;true&lt;/code&gt; runs the Envoy proxy as a DaemonSet with its own lifecycle, in which case a pod named &lt;code&gt;cilium-envoy&lt;/code&gt; is running.&lt;/p&gt;
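&lt;p data-ke-size=&quot;size16&quot;&gt;For example, with a Helm-based installation this could be switched roughly as follows. (The value name may differ between chart versions, so check the documentation of the chart you use.)&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# Run Envoy as an independent DaemonSet (Helm example)
helm upgrade cilium cilium/cilium -n kube-system --reuse-values --set envoy.enabled=true

# Check the cilium-envoy DaemonSet
kubectl get ds -n kube-system cilium-envoy&lt;/code&gt;&lt;/pre&gt;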
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The figure below sketches the architecture and traffic flow: policy configuration is pushed to the cilium-agent and Envoy, and actual traffic is steered by eBPF through the Envoy proxy before reaching the service pod.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1090&quot; data-origin-height=&quot;627&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/DniTF/btsQoO8US7h/KOyKlOo7NgvWqh4ngG50L0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/DniTF/btsQoO8US7h/KOyKlOo7NgvWqh4ngG50L0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/DniTF/btsQoO8US7h/KOyKlOo7NgvWqh4ngG50L0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FDniTF%2FbtsQoO8US7h%2FKOyKlOo7NgvWqh4ngG50L0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1090&quot; height=&quot;627&quot; data-origin-width=&quot;1090&quot; data-origin-height=&quot;627&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://docs.cilium.io/en/stable/security/network/proxy/envoy/&quot;&gt;https://docs.cilium.io/en/stable/security/network/proxy/envoy/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's now look at this through Cilium Network Policy.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;2. Cilium Network Policy&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Cilium provides CiliumNetworkPolicy, a CRD that extends the Kubernetes NetworkPolicy.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The figure below shows the differences between the Kubernetes NetworkPolicy and the Cilium Network Policy; the latter supports much richer policy conditions. The CNCF-run CKS exam also used to cover only the basic NetworkPolicy, but the renewed exam adds questions on CiliumNetworkPolicy as well.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;2560&quot; data-origin-height=&quot;1744&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/lEiQO/btsQomLAqsg/VLwUljvHkhZk1bZjBwS1hK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/lEiQO/btsQomLAqsg/VLwUljvHkhZk1bZjBwS1hK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/lEiQO/btsQomLAqsg/VLwUljvHkhZk1bZjBwS1hK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FlEiQO%2FbtsQomLAqsg%2FVLwUljvHkhZk1bZjBwS1hK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2560&quot; height=&quot;1744&quot; data-origin-width=&quot;2560&quot; data-origin-height=&quot;1744&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://isovalent.com/blog/post/intro-to-cilium-network-policies/&quot;&gt;https://isovalent.com/blog/post/intro-to-cilium-network-policies/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;While the Kubernetes NetworkPolicy covers layers 3 and 4, the Cilium Network Policy supports ingress/egress policies at layers 3 through 7. There is also a CiliumClusterwideNetworkPolicy CRD for cluster-scoped policies.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Without any network policy, all ingress and egress traffic is allowed for every endpoint by default. Once a policy selects an endpoint, only explicitly allowed traffic is permitted and everything else switches to default deny.&lt;/p&gt;
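&lt;p data-ke-size=&quot;size16&quot;&gt;For example, a policy that selects an endpoint with an empty rule list puts the selected endpoints into default-deny for that direction. (The name and labels below are illustrative.)&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;apiVersion: &quot;cilium.io/v2&quot;
kind: CiliumNetworkPolicy
metadata:
  name: &quot;default-deny-ingress&quot;
spec:
  endpointSelector:
    matchLabels:
      app: myService
  ingress: []&lt;/code&gt;&lt;/pre&gt;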
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For reference, you can practice NetworkPolicy and CiliumNetworkPolicy with the Network Policy editor below.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://editor.networkpolicy.io/?id=kDEN8z93C4bi0Yzn&quot;&gt;https://editor.networkpolicy.io/?id=kDEN8z93C4bi0Yzn&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's go through example policies for each layer and run some simple hands-on tests.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;L3 Policy&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;At layer 3, a CiliumNetworkPolicy can express policies based on endpoints, services, entities, nodes, IPs/CIDRs, and DNS.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://docs.cilium.io/en/stable/security/policy/language/#layer-3-examples&quot;&gt;https://docs.cilium.io/en/stable/security/policy/language/#layer-3-examples&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Endpoint based&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;An L3 policy in CiliumNetworkPolicy looks like this:&lt;/p&gt;
&lt;pre class=&quot;dts&quot;&gt;&lt;code&gt;# CiliumNetworkPolicy
apiVersion: &quot;cilium.io/v2&quot;
kind: CiliumNetworkPolicy
metadata:
  name: &quot;l3-rule&quot;
spec:
  endpointSelector:
    matchLabels:
      role: backend
  ingress:
  - fromEndpoints:
    - matchLabels:
        role: frontend&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Compare this with the Kubernetes NetworkPolicy below. Cilium treats pods as Cilium endpoints: podSelector becomes endpointSelector, policyTypes does not need to be specified, and the syntax is a bit more concise overall.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# NetworkPolicy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      role: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Endpoint-based policies can be expressed similarly with the plain Kubernetes NetworkPolicy. The policies that follow, however, are only possible with CiliumNetworkPolicy.&lt;/p&gt;
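&lt;p data-ke-size=&quot;size16&quot;&gt;Note that matchLabels in fromEndpoints can also reference Kubernetes labels with the &lt;code&gt;k8s:&lt;/code&gt; prefix, such as the pod namespace. The example below allows only frontend pods from another namespace. (The namespace name is illustrative.)&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;apiVersion: &quot;cilium.io/v2&quot;
kind: CiliumNetworkPolicy
metadata:
  name: &quot;l3-rule-cross-ns&quot;
spec:
  endpointSelector:
    matchLabels:
      role: backend
  ingress:
  - fromEndpoints:
    - matchLabels:
        &quot;k8s:io.kubernetes.pod.namespace&quot;: frontend-ns
        role: frontend&lt;/code&gt;&lt;/pre&gt;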
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Service based&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A CiliumNetworkPolicy can also select traffic by a Service name or by the labels on a Service.&lt;/p&gt;
&lt;pre class=&quot;less&quot;&gt;&lt;code&gt;apiVersion: &quot;cilium.io/v2&quot;
kind: CiliumNetworkPolicy
metadata:
  name: &quot;service-rule&quot;
spec:
  endpointSelector:
    matchLabels:
      id: app2
  egress:
  - toServices:
    # Services may be referenced by namespace + name
    - k8sService:
        serviceName: myservice
        namespace: default
    # Services may be referenced by namespace + label selector
    - k8sServiceSelector:
        selector:
          matchLabels:
            env: staging
        namespace: another-namespace&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Entities based&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Entities-based policies let you write network policies against predefined entities.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Available entities include host, remote-node, kube-apiserver, ingress, cluster, init, health, unmanaged, world, and all.&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;b&gt;host&lt;/b&gt;: the local host, including all containers running in host networking mode on it.&lt;/li&gt;
&lt;li&gt;&lt;b&gt;remote-node&lt;/b&gt;: any node in any connected cluster other than the local host.&lt;/li&gt;
&lt;li&gt;&lt;b&gt;kube-apiserver&lt;/b&gt;: the kube-apiserver of the cluster.&lt;/li&gt;
&lt;li&gt;&lt;b&gt;ingress&lt;/b&gt;: the Cilium Envoy instance that handles incoming L7 traffic.&lt;/li&gt;
&lt;li&gt;&lt;b&gt;cluster&lt;/b&gt;: the logical group of all network endpoints inside the local cluster.&lt;/li&gt;
&lt;li&gt;&lt;b&gt;init&lt;/b&gt;: all endpoints in the bootstrap phase whose security identity has not been resolved yet.&lt;/li&gt;
&lt;li&gt;&lt;b&gt;health&lt;/b&gt;: the health endpoints used to check cluster connectivity.&lt;/li&gt;
&lt;li&gt;&lt;b&gt;unmanaged&lt;/b&gt;: endpoints not managed by Cilium.&lt;/li&gt;
&lt;li&gt;&lt;b&gt;world&lt;/b&gt;: all endpoints outside the cluster; allowing world is equivalent to allowing CIDR 0.0.0.0/0.&lt;/li&gt;
&lt;li&gt;&lt;b&gt;all&lt;/b&gt;: the combination of all known clusters plus world, whitelisting all communication.&lt;/li&gt;
&lt;/ul&gt;
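&lt;p data-ke-size=&quot;size16&quot;&gt;For example, the kube-apiserver entity can be used to allow only API-server access for specific pods. (The labels below are illustrative.)&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;apiVersion: &quot;cilium.io/v2&quot;
kind: CiliumNetworkPolicy
metadata:
  name: &quot;allow-to-apiserver&quot;
spec:
  endpointSelector:
    matchLabels:
      app: my-controller
  egress:
  - toEntities:
    - kube-apiserver&lt;/code&gt;&lt;/pre&gt;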
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Since these concepts do not exist in the plain NetworkPolicy, the exact meaning of some entities is not obvious. My understanding is that, rather than limiting policy targets to Kubernetes resources, Cilium introduces broader yet distinguishable objects or groups, providing security for networks that extend beyond the cluster boundary.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's test the entities below. Deploy a sample application and a pod for dev purposes.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;cat &amp;lt;&amp;lt; EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webpod
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webpod
  template:
    metadata:
      labels:
        app: webpod
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - webpod
            topologyKey: &quot;kubernetes.io/hostname&quot;
      containers:
      - name: webpod
        image: traefik/whoami
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: webpod
  labels:
    app: webpod
spec:
  selector:
    app: webpod
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
EOF

# Deploy a dev pod on node k8s-w1
cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: dev-pod
  labels:
    env: dev
spec:
  nodeName: k8s-w1
  containers:
  - name: netshoot
    image: nicolaka/netshoot
    command: [&quot;tail&quot;]
    args: [&quot;-f&quot;, &quot;/dev/null&quot;]
  terminationGracePeriodSeconds: 0
EOF&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's check the created pods and run a connectivity test.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get no -owide
NAME      STATUS   ROLES           AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
k8s-ctr   Ready    control-plane   43h   v1.33.4   192.168.10.100   &amp;lt;none&amp;gt;        Ubuntu 24.04.2 LTS   6.8.0-64-generic   containerd://1.7.27
k8s-w1    Ready    &amp;lt;none&amp;gt;          43h   v1.33.4   192.168.10.101   &amp;lt;none&amp;gt;        Ubuntu 24.04.2 LTS   6.8.0-64-generic   containerd://1.7.27
k8s-w2    Ready    &amp;lt;none&amp;gt;          43h   v1.33.4   192.168.10.102   &amp;lt;none&amp;gt;        Ubuntu 24.04.2 LTS   6.8.0-64-generic   containerd://1.7.27

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1       &amp;lt;none&amp;gt;        443/TCP   43h
webpod       ClusterIP   10.96.220.228   &amp;lt;none&amp;gt;        80/TCP    5m26s

# Connectivity test
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it dev-pod -- bash
dev-pod:~# curl webpod
Hostname: webpod-697b545f57-xc9cc
IP: 127.0.0.1
IP: ::1
IP: 172.20.2.48
IP: fe80::e4b8:eff:fe4b:f81e
RemoteAddr: 172.20.1.194:33880
GET / HTTP/1.1
Host: webpod
User-Agent: curl/8.14.1
Accept: */*

dev-pod:~# ping -c 1 192.168.10.100
PING 192.168.10.100 (192.168.10.100) 56(84) bytes of data.
64 bytes from 192.168.10.100: icmp_seq=1 ttl=63 time=1.53 ms

--- 192.168.10.100 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.530/1.530/1.530/0.000 ms&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Connectivity from the test pod shows nothing unusual.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's create the policies below. (Creating an entities-based policy by itself does not seem to force the remaining traffic to be denied, so a default-deny policy was added for the test as well.)&lt;/p&gt;
&lt;pre class=&quot;properties&quot;&gt;&lt;code&gt;# Default deny policy
cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: &quot;cilium.io/v2&quot;
kind: CiliumNetworkPolicy
metadata:
  name: &quot;default-deny-dev&quot;
spec:
  endpointSelector:
    matchLabels:
      env: dev
  egress: []
EOF

# Allow egress to the host entity only
cat &amp;lt;&amp;lt; EOF | kubectl apply -f -
apiVersion: &quot;cilium.io/v2&quot;
kind: CiliumNetworkPolicy
metadata:
  name: &quot;dev-to-host&quot;
spec:
  endpointSelector:
    matchLabels:
      env: dev
  egress:
    - toEntities:
      - host
EOF&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Everything except the node hosting the pod (the host entity) is now blocked.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# Allowed
dev-pod:~# ping -c 1 192.168.10.101
PING 192.168.10.101 (192.168.10.101) 56(84) bytes of data.
64 bytes from 192.168.10.101: icmp_seq=1 ttl=64 time=2.66 ms

--- 192.168.10.101 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 2.655/2.655/2.655/0.000 ms

# Blocked
dev-pod:~# curl -m 1 webpod
curl: (28) Resolving timed out after 1003 milliseconds
dev-pod:~# ping -c 1 192.168.10.100
PING 192.168.10.100 (192.168.10.100) 56(84) bytes of data.
^C
--- 192.168.10.100 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms
dev-pod:~# ping -c 1 192.168.10.102
PING 192.168.10.102 (192.168.10.102) 56(84) bytes of data.
^C
--- 192.168.10.102 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's add another policy.&lt;/p&gt;
&lt;pre class=&quot;properties&quot;&gt;&lt;code&gt;# Allow egress to remote-node as well
cat &amp;lt;&amp;lt; EOF | kubectl apply -f -
apiVersion: &quot;cilium.io/v2&quot;
kind: CiliumNetworkPolicy
metadata:
  name: &quot;dev-to-remote-node&quot;
spec:
  endpointSelector:
    matchLabels:
      env: dev
  egress:
    - toEntities:
      - remote-node
EOF&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now the other nodes are reachable as well.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;dev-pod:~# ping -c 1 192.168.10.100
PING 192.168.10.100 (192.168.10.100) 56(84) bytes of data.
64 bytes from 192.168.10.100: icmp_seq=1 ttl=63 time=4.02 ms

--- 192.168.10.100 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 4.021/4.021/4.021/0.000 ms
dev-pod:~# ping -c 1 192.168.10.102
PING 192.168.10.102 (192.168.10.102) 56(84) bytes of data.
64 bytes from 192.168.10.102: icmp_seq=1 ttl=63 time=1.88 ms

--- 192.168.10.102 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.884/1.884/1.884/0.000 ms&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Finer-grained control of node-to-node traffic is possible with node-based policies.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Node based&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Extending the host and remote-node entities, node-based policies control node traffic by labels.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For example, with node groups (node pools) such as prd-front, prd-backend, and dev-*, one scenario is to allow traffic only between the prd node groups.&lt;/p&gt;
&lt;pre class=&quot;less&quot;&gt;&lt;code&gt;apiVersion: &quot;cilium.io/v2&quot;
kind: CiliumNetworkPolicy
metadata:
  name: &quot;to-prod-from-control-plane-nodes&quot;
spec:
  endpointSelector:
    matchLabels:
      env: prod
  ingress:
    - fromNodes:
        - matchLabels:
            node-role.kubernetes.io/control-plane: &quot;&quot;&lt;/code&gt;&lt;/pre&gt;
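&lt;p data-ke-size=&quot;size16&quot;&gt;Note that, as far as I understand, the fromNodes/toNodes selectors do not work with the default configuration; node labels must first be included in identities. With the Helm chart this appears to be the following value, though the name may differ between chart versions:&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Helm values snippet (assumed value name)
nodeSelectorLabels: true&lt;/code&gt;&lt;/pre&gt;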
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;IP/CIDR based&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Traffic can be allowed per IP or per CIDR.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;apiVersion: &quot;cilium.io/v2&quot;
kind: CiliumNetworkPolicy
metadata:
  name: &quot;cidr-rule&quot;
spec:
  endpointSelector:
    matchLabels:
      app: myService
  egress:
  - toCIDR:
    - 20.1.1.1/32
  - toCIDRSet:
    - cidr: 10.0.0.0/8
      except:
      - 10.96.0.0/12&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A similar policy can also be written with the plain Kubernetes NetworkPolicy.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-cidr
spec:
  podSelector:
    matchLabels:
      app: myService
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 20.1.1.1/32
  - to:
    - ipBlock:
        cidr: 10.0.0.0/8
        except:
        - 10.96.0.0/12&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;DNS based&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Cilium's DNS-based policy is not enforced at L7; it is handled by the DNS proxy inside the Cilium agent.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Once a DNS-based policy exists, the DNS proxy intercepts DNS requests from pods, forwards them to the DNS server, and records the response IPs before returning the response to the pod. Egress traffic is then allowed only to response IPs matching the domains listed in toFQDNs, which makes this effectively L3-level enforcement.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;When the pod actually connects to an IP it received in a DNS response, Cilium checks whether that IP is allowed by the policy and permits or blocks the connection.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's walk through this with a hands-on example.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://docs.cilium.io/en/stable/security/dns/&quot;&gt;https://docs.cilium.io/en/stable/security/dns/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Create a pod named mediabot. In this scenario mediabot needs access to GitHub to manage repositories, and must not reach any other service.&lt;/p&gt;
&lt;pre class=&quot;elixir&quot;&gt;&lt;code&gt;$ kubectl create -f https://raw.githubusercontent.com/cilium/cilium/1.18.1/examples/kubernetes-dns/dns-sw-app.yaml
$ kubectl wait pod/mediabot --for=condition=Ready
$ kubectl get pods

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get po
NAME                      READY   STATUS    RESTARTS   AGE
mediabot                  1/1     Running   0          49s&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Deploy the policy below.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;cat &amp;lt;&amp;lt; EOF | kubectl apply -f -
apiVersion: &quot;cilium.io/v2&quot;
kind: CiliumNetworkPolicy
metadata:
  name: &quot;fqdn&quot;
spec:
  endpointSelector:
    matchLabels:
      org: empire
      class: mediabot
  egress:
  - toFQDNs:
    - matchName: &quot;api.github.com&quot; # wildcards can be registered with matchPattern, e.g. &quot;*.github.com&quot;
  - toEndpoints:
    - matchLabels:
        &quot;k8s:io.kubernetes.pod.namespace&quot;: kube-system
        &quot;k8s:k8s-app&quot;: kube-dns
    toPorts:
    - ports:
      - port: &quot;53&quot;
        protocol: ANY
      rules:
        dns:
        - matchPattern: &quot;*&quot;
EOF&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Testing as below shows that only &lt;code&gt;api.github.com&lt;/code&gt;, allowed via toFQDNs, is reachable.&lt;/p&gt;
&lt;pre class=&quot;vim&quot;&gt;&lt;code&gt;$ kubectl exec mediabot -- curl -I -s https://api.github.com | head -1
$ kubectl exec mediabot -- curl -I -s --max-time 5 https://support.github.com | head -1

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec mediabot -- curl -I -s https://api.github.com | head -1
HTTP/2 200
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec mediabot -- curl -I -s --max-time 5 https://support.github.com | head -1
command terminated with exit code 28&lt;/code&gt;&lt;/pre&gt;
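&lt;p data-ke-size=&quot;size16&quot;&gt;The FQDN-to-IP mappings recorded by the DNS proxy can be inspected from the cilium agent:&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# List the FQDN-IP mappings cached by the DNS proxy
kubectl exec -it -n kube-system ds/cilium -- cilium fqdn cache list&lt;/code&gt;&lt;/pre&gt;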
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;L4 Policy&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;At layer 4, CiliumNetworkPolicy rules can be used together with L3 policies or on their own.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://docs.cilium.io/en/stable/security/policy/language/#layer-4-examples&quot;&gt;https://docs.cilium.io/en/stable/security/policy/language/#layer-4-examples&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Since this is layer 4, packets can be filtered by protocol and port.&lt;/p&gt;
&lt;pre class=&quot;less&quot;&gt;&lt;code&gt;apiVersion: &quot;cilium.io/v2&quot;
kind: CiliumNetworkPolicy
metadata:
  name: &quot;l4-rule&quot;
spec:
  endpointSelector:
    matchLabels:
      app: myService
  egress:
    - toPorts:
      - ports:
        - port: &quot;80&quot;
          protocol: TCP&lt;/code&gt;&lt;/pre&gt;
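&lt;p data-ke-size=&quot;size16&quot;&gt;Combined with L3 rules, this allows only traffic on a given port from specific endpoints. (Labels are illustrative.)&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;apiVersion: &quot;cilium.io/v2&quot;
kind: CiliumNetworkPolicy
metadata:
  name: &quot;l3-l4-rule&quot;
spec:
  endpointSelector:
    matchLabels:
      app: myService
  ingress:
  - fromEndpoints:
    - matchLabels:
        role: frontend
    toPorts:
    - ports:
      - port: &quot;80&quot;
        protocol: TCP&lt;/code&gt;&lt;/pre&gt;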
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;L7 Policy&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Cilium provides L7 policies, which work by proxying traffic through the Envoy instances running on each node.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://docs.cilium.io/en/stable/security/policy/language/#layer-7-examples&quot;&gt;https://docs.cilium.io/en/stable/security/policy/language/#layer-7-examples&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Unlike L3/L4 policies, an L7 policy violation does not mean the packet is dropped. Where possible, a protocol-specific access-denied message is returned instead: an HTTP request receives &lt;code&gt;HTTP 403 access denied&lt;/code&gt;, and a DNS request receives &lt;code&gt;DNS REFUSED&lt;/code&gt;.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Taking HTTP as an example, a specific path can be allowed:&lt;/p&gt;
&lt;pre class=&quot;less&quot;&gt;&lt;code&gt;apiVersion: &quot;cilium.io/v2&quot;
kind: CiliumNetworkPolicy
metadata:
  name: &quot;rule1&quot;
spec:
  description: &quot;Allow HTTP GET /public from env=prod to app=service&quot;
  endpointSelector:
    matchLabels:
      app: service
  ingress:
  - fromEndpoints:
    - matchLabels:
        env: prod
    toPorts:
    - ports:
      - port: &quot;80&quot;
        protocol: TCP
      rules:
        http:
        - method: &quot;GET&quot;
          path: &quot;/public&quot;&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's create a sample application as follows.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;cat &amp;lt;&amp;lt; EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: service-pod
  labels:
    app: service
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
    - name: config
      mountPath: /etc/nginx/conf.d/default.conf
      subPath: default.conf
  volumes:
  - name: html
    configMap:
      name: service-html
  - name: config
    configMap:
      name: nginx-config
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  default.conf: |
    server {
      listen 80;
      location /public {
        rewrite ^/public$ /public.html break;
        root /usr/share/nginx/html;
      }
      location /private {
        rewrite ^/private$ /private.html break;
        root /usr/share/nginx/html;
      }
    }
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: service-html
data:
  public.html: |
    &amp;lt;html&amp;gt;&amp;lt;body&amp;gt;&amp;lt;h1&amp;gt;Hello from /public&amp;lt;/h1&amp;gt;&amp;lt;/body&amp;gt;&amp;lt;/html&amp;gt;
  private.html: |
    &amp;lt;html&amp;gt;&amp;lt;body&amp;gt;&amp;lt;h1&amp;gt;Hello from /private&amp;lt;/h1&amp;gt;&amp;lt;/body&amp;gt;&amp;lt;/html&amp;gt;
EOF

# Create the Service
cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: service-svc
spec:
  selector:
    app: service
  ports:
  - name: http
    port: 80
    targetPort: 80
  type: ClusterIP
EOF

# Create the client pods
cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: client-prod
  labels:
    env: prod
spec:
  containers:
  - name: netshoot
    image: nicolaka/netshoot
    command: [&quot;tail&quot;]
    args: [&quot;-f&quot;, &quot;/dev/null&quot;]
  terminationGracePeriodSeconds: 0
EOF

cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: client-dev
  labels:
    env: dev
spec:
  containers:
  - name: netshoot
    image: nicolaka/netshoot
    command: [&quot;tail&quot;]
    args: [&quot;-f&quot;, &quot;/dev/null&quot;]
  terminationGracePeriodSeconds: 0
EOF&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's verify that the /public and /private paths defined above respond correctly.&lt;/p&gt;
&lt;pre class=&quot;dts&quot;&gt;&lt;code&gt;(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it client-prod -- curl -m 1 service-svc/public
&amp;lt;html&amp;gt;&amp;lt;body&amp;gt;&amp;lt;h1&amp;gt;Hello from /public&amp;lt;/h1&amp;gt;&amp;lt;/body&amp;gt;&amp;lt;/html&amp;gt;
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it client-prod -- curl -m 1 service-svc/private
&amp;lt;html&amp;gt;&amp;lt;body&amp;gt;&amp;lt;h1&amp;gt;Hello from /private&amp;lt;/h1&amp;gt;&amp;lt;/body&amp;gt;&amp;lt;/html&amp;gt;

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it client-dev -- curl -m 1 service-svc/public
&amp;lt;html&amp;gt;&amp;lt;body&amp;gt;&amp;lt;h1&amp;gt;Hello from /public&amp;lt;/h1&amp;gt;&amp;lt;/body&amp;gt;&amp;lt;/html&amp;gt;
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it client-dev -- curl -m 1 service-svc/private
&amp;lt;html&amp;gt;&amp;lt;body&amp;gt;&amp;lt;h1&amp;gt;Hello from /private&amp;lt;/h1&amp;gt;&amp;lt;/body&amp;gt;&amp;lt;/html&amp;gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now let's create a path-based policy. Here, only /public is allowed.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: &quot;cilium.io/v2&quot;
kind: CiliumNetworkPolicy
metadata:
  name: &quot;allow-public-from-prod&quot;
spec:
  description: &quot;Allow HTTP GET /public from env=prod to app=service&quot;
  endpointSelector:
    matchLabels:
      app: service
  ingress:
  - fromEndpoints:
    - matchLabels:
        env: prod
    toPorts:
    - ports:
      - port: &quot;80&quot;
        protocol: TCP
      rules:
        http:
        - method: &quot;GET&quot;
          path: &quot;/public&quot;
EOF&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Checking the results: both clients call the same app=service backend, but the outcomes differ as shown below.&lt;/p&gt;
&lt;pre class=&quot;awk&quot;&gt;&lt;code&gt;# env=prod fails on /private with 'Access denied'
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it client-prod -- curl -m 1 service-svc/public
&amp;lt;html&amp;gt;&amp;lt;body&amp;gt;&amp;lt;h1&amp;gt;Hello from /public&amp;lt;/h1&amp;gt;&amp;lt;/body&amp;gt;&amp;lt;/html&amp;gt;
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it client-prod -- curl -m 1 service-svc/private
Access denied

# env=dev fails on every request
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it client-dev -- curl -m 1 service-svc/public
curl: (28) Connection timed out after 1002 milliseconds
command terminated with exit code 28
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it client-dev -- curl -m 1 service-svc/private
curl: (28) Connection timed out after 1002 milliseconds
command terminated with exit code 28&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;L7 policies can also allow requests that carry a specific header. In the example below, a request is allowed when it includes the &lt;code&gt;X-My-Header: true&lt;/code&gt; header.&lt;/p&gt;
&lt;pre class=&quot;less&quot;&gt;&lt;code&gt;apiVersion: &quot;cilium.io/v2&quot;
kind: CiliumNetworkPolicy
metadata:
  name: &quot;l7-rule&quot;
spec:
  endpointSelector:
    matchLabels:
      app: myService
  ingress:
  - toPorts:
    - ports:
      - port: '80'
        protocol: TCP
      rules:
        http:
        - method: GET
          path: &quot;/path1$&quot;
        - method: PUT
          path: &quot;/path2$&quot;
          headers:
          - 'X-My-Header: true'&lt;/code&gt;&lt;/pre&gt;
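&lt;p data-ke-size=&quot;size16&quot;&gt;As a minimal sketch of this matching logic (hypothetical code; the real enforcement happens in the node's Envoy proxy, and the assumption here is that &lt;code&gt;path&lt;/code&gt; is a regular expression anchored at the start of the request path), the rules above can be modeled like this:&lt;/p&gt;
&lt;pre class=&quot;python&quot;&gt;&lt;code&gt;import re

# Hypothetical model of L7 HTTP rule matching; not Envoy/Cilium code.
RULES = [
    {'method': 'GET', 'path': '/path1$'},
    {'method': 'PUT', 'path': '/path2$', 'headers': {'X-My-Header': 'true'}},
]

def request_allowed(method, path, headers):
    '''A request is allowed if any rule matches its method, path and headers.'''
    for rule in RULES:
        if rule['method'] != method:
            continue
        if not re.match(rule['path'], path):  # path treated as an anchored regex
            continue
        required = rule.get('headers', {})
        if all(headers.get(k) == v for k, v in required.items()):
            return True
    return False

print(request_allowed('GET', '/path1', {}))                       # allowed
print(request_allowed('PUT', '/path2', {'X-My-Header': 'true'}))  # allowed
print(request_allowed('PUT', '/path2', {}))                       # denied&lt;/code&gt;&lt;/pre&gt;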
&lt;p data-ke-size=&quot;size16&quot;&gt;This concludes our look at how Cilium's Network Policy extends the standard Kubernetes NetworkPolicy.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Closing Thoughts&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;While covering Cilium security, we examined the concepts behind Cilium network security and the CiliumNetworkPolicy. Cilium also provides &lt;a href=&quot;https://docs.cilium.io/en/stable/security/network/encryption/&quot;&gt;Transparent Encryption&lt;/a&gt; for encrypting network packets between nodes.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This post summarizes what I learned while participating in the &lt;code&gt;Cilium study&lt;/code&gt; run by CloudNet, based on the guides they provided. Over the past eight weeks we covered not only Cilium's basic role as a CNI plugin but also topics such as Observability, external routing integration, Multi Cluster, Service Mesh, and CiliumNetworkPolicy.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;From what we have seen, Cilium is expanding beyond the basic CNI plugin role, aiming to provide everything in the networking category all-in-one, without requiring additional add-ons.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;That said, as the feature set broadens, many features are still at beta quality, and managed Kubernetes services in the cloud may not let you use all of them; both still feel like limitations. Even so, leaving the extended features aside, the efficiency that Cilium and eBPF deliver as a CNI plugin makes it a compelling choice for large-scale clusters.&lt;/p&gt;</description>
      <category>Cilium</category>
      <category>cilium</category>
      <category>kubernetes</category>
      <category>networkpolicy</category>
      <category>security</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/60</guid>
      <comments>https://a-person.tistory.com/60#entry60comment</comments>
      <pubDate>Sat, 6 Sep 2025 21:48:00 +0900</pubDate>
    </item>
    <item>
      <title>Azure CNI Powered by Cilium on AKS</title>
      <link>https://a-person.tistory.com/59</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;In this post, we will look at Azure CNI Powered by Cilium, one of the CNI options provided by Azure Kubernetes Service (AKS).&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Table of Contents&lt;/h2&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;Azure CNI Powered by Cilium Overview&lt;/li&gt;
&lt;li&gt;Lab Environment Setup&lt;/li&gt;
&lt;li&gt;ACNS (Advanced Container Networking Services)&lt;/li&gt;
&lt;li&gt;Azure CNI Powered by Cilium Limitations&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;1. Azure CNI Powered by Cilium Overview&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Azure CNI Powered by Cilium is one of the managed CNIs offered by AKS; it combines the Azure CNI control plane with the Cilium data plane to deliver high-performance networking and security.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Using Azure CNI Powered by Cilium does not mean running Cilium standalone as the CNI plugin: you still choose Azure CNI or Azure CNI Overlay as the base CNI, and only the data plane processing is delegated to Cilium.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Put simply, IPAM (IP Address Management) follows the Azure CNI approach, while Cilium covers areas such as service routing, network policy, and observability.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://cdn.sanity.io/images/xinsvxfu/production/0f5ad61597b52b479c2add956731a266e59f50c6-768x460.webp?auto=format&amp;amp;q=80&amp;amp;fit=clip&amp;amp;w=1152&quot; alt=&quot;Image&quot; /&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://isovalent.com/blog/post/tutorial-azure-cni-powered-by-cilium/&quot;&gt;https://isovalent.com/blog/post/tutorial-azure-cni-powered-by-cilium/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Here, Azure CNI assigns pod IPs from the Virtual Network (nodes and pods draw IPs from the same VNet), while Azure CNI Overlay assigns pod IPs from a separate overlay network.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The command used to install Azure CNI Powered by Cilium makes this a bit clearer.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Below is a basic example: &lt;code&gt;--network-plugin&lt;/code&gt; is set to &lt;code&gt;azure&lt;/code&gt;, and &lt;code&gt;--network-dataplane&lt;/code&gt; is set to &lt;code&gt;cilium&lt;/code&gt;.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://learn.microsoft.com/en-us/azure/aks/azure-cni-powered-by-cilium#option-3-assign-ip-addresses-from-the-node-subnet&quot;&gt;https://learn.microsoft.com/en-us/azure/aks/azure-cni-powered-by-cilium#option-3-assign-ip-addresses-from-the-node-subnet&lt;/a&gt;&lt;/p&gt;
&lt;pre class=&quot;livecodeserver&quot;&gt;&lt;code&gt;az aks create \
    --name &amp;lt;clusterName&amp;gt; \
    --resource-group &amp;lt;resourceGroupName&amp;gt; \
    --location &amp;lt;location&amp;gt; \
    --network-plugin azure \
    --network-dataplane cilium \
    --generate-ssh-keys&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's explore the details through a hands-on lab.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;2. Lab Environment Setup&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's set up a lab environment and take a closer look at Azure CNI Powered by Cilium.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# Set environment variables
export LOCATION=&quot;koreacentral&quot;
export CLUSTER_NAME=&quot;aks-cilium&quot;
export RESOURCE_GROUP=${CLUSTER_NAME}-rg

# Create a resource group
az group create --name $RESOURCE_GROUP --location $LOCATION

# Create an AKS cluster
az aks create \
    --name $CLUSTER_NAME \
    --resource-group $RESOURCE_GROUP \
    --location $LOCATION \
    --network-plugin azure \
    --network-plugin-mode overlay \
    --pod-cidr 192.168.0.0/16 \
    --network-dataplane cilium \
    --node-count 2 \
    --generate-ssh-keys

# Get a kubeconfig 
az aks get-credentials -g $RESOURCE_GROUP -n $CLUSTER_NAME

$ kubectl get no
NAME                                STATUS   ROLES    AGE    VERSION
aks-nodepool1-90499020-vmss000000   Ready    &amp;lt;none&amp;gt;   112s   v1.32.6
aks-nodepool1-90499020-vmss000001   Ready    &amp;lt;none&amp;gt;   111s   v1.32.6&lt;/code&gt;&lt;/pre&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&amp;nbsp;&lt;/h4&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Verifying the Azure CNI Powered by Cilium Environment&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Looking at the pods created after installation, a few differences from the plain Azure CNI setup stand out.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;$ kubectl get po -A
NAMESPACE     NAME                                             READY   STATUS    RESTARTS      AGE
kube-system   azure-cns-drv9w                                  1/1     Running   0             115s
kube-system   azure-cns-xpdm5                                  1/1     Running   0             114s
kube-system   azure-ip-masq-agent-4x2rs                        1/1     Running   0             114s
kube-system   azure-ip-masq-agent-gbrs7                        1/1     Running   0             115s
kube-system   cilium-5tldb                                     1/1     Running   0             114s
kube-system   cilium-bgk9b                                     1/1     Running   0             115s
kube-system   cilium-operator-7f9474864f-g9mh2                 1/1     Running   0             3m44s
kube-system   cilium-operator-7f9474864f-gr8wp                 1/1     Running   0             3m44s
kube-system   cloud-node-manager-625qw                         1/1     Running   0             114s
kube-system   cloud-node-manager-8zzqj                         1/1     Running   0             115s
kube-system   coredns-6f776c8fb5-44lcl                         1/1     Running   0             3m34s
kube-system   coredns-6f776c8fb5-mmjkk                         1/1     Running   0             91s
kube-system   coredns-autoscaler-864c4496bf-5jnrc              1/1     Running   0             3m34s
kube-system   csi-azuredisk-node-8vkj4                         3/3     Running   0             115s
kube-system   csi-azuredisk-node-hw45x                         3/3     Running   0             114s
kube-system   csi-azurefile-node-bwmxb                         3/3     Running   0             114s
kube-system   csi-azurefile-node-zns9n                         3/3     Running   0             115s
kube-system   konnectivity-agent-799cb8b8d-bqpbn               1/1     Running   0             91s
kube-system   konnectivity-agent-799cb8b8d-fvhsc               1/1     Running   0             3m33s
kube-system   konnectivity-agent-autoscaler-6ddd978bfc-2vz76   1/1     Running   0             3m33s
kube-system   metrics-server-867c8845b7-7hmlv                  2/2     Running   2 (89s ago)   3m33s
kube-system   metrics-server-867c8845b7-mv79z                  2/2     Running   2 (89s ago)   3m33s&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;First, a pod named &lt;code&gt;azure-cns&lt;/code&gt; is created, and &lt;code&gt;cilium&lt;/code&gt; and &lt;code&gt;cilium-operator&lt;/code&gt; are installed as well. Here, &lt;code&gt;azure-cns&lt;/code&gt; is the component responsible for IPAM, and &lt;code&gt;cilium&lt;/code&gt; is the Cilium Agent. While &lt;code&gt;azure-cns&lt;/code&gt; does handle IPAM, the description in the figure below is somewhat imprecise on this point; I will elaborate further below.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;2560&quot; data-origin-height=&quot;2214&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bmLIeI/btsQeC8EFCp/IUJAxGQStYAtzCqIWNjS00/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bmLIeI/btsQeC8EFCp/IUJAxGQStYAtzCqIWNjS00/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bmLIeI/btsQeC8EFCp/IUJAxGQStYAtzCqIWNjS00/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbmLIeI%2FbtsQeC8EFCp%2FIUJAxGQStYAtzCqIWNjS00%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2560&quot; height=&quot;2214&quot; data-origin-width=&quot;2560&quot; data-origin-height=&quot;2214&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://isovalent.com/blog/post/azure-cni-cilium/&quot;&gt;https://isovalent.com/blog/post/azure-cni-cilium/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This can also be verified with the Cilium CLI; let's check &lt;code&gt;cilium status&lt;/code&gt;.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://docs.cilium.io/en/stable/gettingstarted/k8s-install-default/#install-the-cilium-cli&quot;&gt;https://docs.cilium.io/en/stable/gettingstarted/k8s-install-default/#install-the-cilium-cli&lt;/a&gt;&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;$ cilium status
    /&amp;macr;&amp;macr;\
 /&amp;macr;&amp;macr;\__/&amp;macr;&amp;macr;\    Cilium:             OK
 \__/&amp;macr;&amp;macr;\__/    Operator:           OK
 /&amp;macr;&amp;macr;\__/&amp;macr;&amp;macr;\    Envoy DaemonSet:    disabled (using embedded mode)
 \__/&amp;macr;&amp;macr;\__/    Hubble Relay:       disabled
    \__/       ClusterMesh:        disabled

DaemonSet              cilium                   Desired: 2, Ready: 2/2, Available: 2/2
Deployment             cilium-operator          Desired: 2, Ready: 2/2, Available: 2/2
Containers:            cilium                   Running: 2
                       cilium-operator          Running: 2
                       clustermesh-apiserver
                       hubble-relay
Cluster Pods:          9/9 managed by Cilium
Helm chart version:
Image versions         cilium             mcr.microsoft.com/containernetworking/cilium/cilium:v1.17.4-250610: 2
                       cilium-operator    mcr.microsoft.com/containernetworking/cilium/operator-generic:v1.17.4-250610: 2&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Checking the IPAM setting in the Cilium config shows it is set to &lt;code&gt;delegated-plugin&lt;/code&gt;.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;$ cilium config view |grep ipam
enable-lb-ipam                                    false
ipam                                              delegated-plugin
ipam-cilium-node-update-rate                      15s&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Because IPAM is handled by Azure CNI Overlay by default, no podCIDR information appears on the Node or CiliumNode objects.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Azure CNI Overlay creates a CRD called &lt;code&gt;nodenetworkconfigs&lt;/code&gt;; through the object created under each node's name, you can check the Pod CIDR assigned to that node.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# No information
$ kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{&quot;\t&quot;}{.spec.podCIDR}{&quot;\n&quot;}{end}'
aks-nodepool1-90499020-vmss000000
aks-nodepool1-90499020-vmss000001
$ kubectl get ciliumnode -o json | grep podCIDRs -A2

# Check nodenetworkconfigs
$ kubectl get nodenetworkconfigs -A
NAMESPACE     NAME                                ALLOCATED IPS   NC MODE   NC VERSION
kube-system   aks-nodepool1-90499020-vmss000000   256             static    0
kube-system   aks-nodepool1-90499020-vmss000001   256             static    0
$ kubectl describe nodenetworkconfigs -n kube-system aks-nodepool1-90499020-vmss000000
Name:         aks-nodepool1-90499020-vmss000000
Namespace:    kube-system
Labels:       kubernetes.azure.com/podnetwork-delegationguid=
              kubernetes.azure.com/podnetwork-subnet=
              kubernetes.azure.com/podnetwork-type=overlay
              managed=true
              owner=aks-nodepool1-90499020-vmss000000
Annotations:  &amp;lt;none&amp;gt;
API Version:  acn.azure.com/v1alpha
Kind:         NodeNetworkConfig
Metadata:
  Creation Timestamp:  2025-08-29T12:52:56Z
  Finalizers:
    finalizers.acn.azure.com/dnc-operations
  Generation:  1
  Owner References:
    API Version:           v1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  Node
    Name:                  aks-nodepool1-90499020-vmss000000
    UID:                   418f6a0c-0555-44af-8a4d-a62bceb75d72
  Resource Version:        1237
  UID:                     0a716407-4f0d-4917-9b0f-528b9a37735c
Spec:
  Requested IP Count:  0
Status:
  Assigned IP Count:  256
  Network Containers:
    Assignment Mode:       static
    Id:                    3b07b4a6-e277-40f5-8ea0-0ec79f1f96d8
    Node IP:               10.224.0.4
    Primary IP:            192.168.0.0/24
    Subnet Address Space:  192.168.0.0/16
    Subnet Name:           routingdomain_d4933ee0-95b7-5249-b245-1fd5e2033272_overlaysubnet
    Type:                  overlay
    Version:               0
Events:
  Type    Reason      Age   From                   Message
  ----    ------      ----  ----                   -------
  Normal  CreatingNC  43m   dnc-rc/nnc-reconciler  Creating new Overlay NC 3b07b4a6-e277-40f5-8ea0-0ec79f1f96d8 for node 68b1a1ba215d820001b7128d_aks-nodepool1-90499020-vmss000000
  Normal  UpdatedNC   43m   dnc-rc/nnc-reconciler  Published NC 3b07b4a6-e277-40f5-8ea0-0ec79f1f96d8

$ kubectl get po -A -owide |grep aks-nodepool1-90499020-vmss000000 |grep 192.168.0.
default       aks-helloworld-66fd479f49-psdqn                  1/1     Running   0          17m   192.168.0.174   aks-nodepool1-90499020-vmss000000   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   coredns-6f776c8fb5-44lcl                         1/1     Running   0          61m   192.168.0.235   aks-nodepool1-90499020-vmss000000   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   coredns-autoscaler-864c4496bf-5jnrc              1/1     Running   0          61m   192.168.0.17    aks-nodepool1-90499020-vmss000000   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   konnectivity-agent-autoscaler-6ddd978bfc-2vz76   1/1     Running   0          61m   192.168.0.90    aks-nodepool1-90499020-vmss000000   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   konnectivity-agent-c9dc4888c-xjg4q               1/1     Running   0          24m   192.168.0.204   aks-nodepool1-90499020-vmss000000   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Looking at the &lt;code&gt;nodenetworkconfigs&lt;/code&gt; object created under the node's name, we can see the Node IP and the Primary IP. The Primary IP is precisely the Pod CIDR assigned to that node.&lt;/p&gt;
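&lt;p data-ke-size=&quot;size16&quot;&gt;Using the values from the output above, this relationship can be double-checked with Python's ipaddress module (a small sketch; the addresses come from this lab):&lt;/p&gt;
&lt;pre class=&quot;python&quot;&gt;&lt;code&gt;import ipaddress

# Values taken from the NodeNetworkConfig and pod listing above.
node_pod_cidr = ipaddress.ip_network('192.168.0.0/24')     # Primary IP (per-node Pod CIDR)
cluster_pod_cidr = ipaddress.ip_network('192.168.0.0/16')  # --pod-cidr at cluster creation
pod_ip = ipaddress.ip_address('192.168.0.174')             # IP assigned to aks-helloworld

print(pod_ip in node_pod_cidr)                    # pod IP falls in the node's CIDR
print(node_pod_cidr.subnet_of(cluster_pod_cidr))  # node CIDR is carved from the cluster CIDR&lt;/code&gt;&lt;/pre&gt;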
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Verifying the CNI Plugin Configuration&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Examining the CNI configuration file on a node reveals more about how Azure CNI Powered by Cilium sets up the container network.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The CNI configuration shows that cilium is used, but that IPAM is handled by &lt;code&gt;azure-ipam&lt;/code&gt;.&lt;/p&gt;
&lt;pre class=&quot;autoit&quot;&gt;&lt;code&gt;root@aks-nodepool1-90499020-vmss000000:/etc/cni/net.d# cat 05-cilium.conflist
{
        &quot;cniVersion&quot;: &quot;0.3.1&quot;,
        &quot;name&quot;: &quot;cilium&quot;,
        &quot;plugins&quot;: [
                {
                        &quot;type&quot;: &quot;cilium-cni&quot;,
                        &quot;ipam&quot;: {
                                &quot;type&quot;: &quot;azure-ipam&quot;
                        },
                        &quot;enable-debug&quot;: true,
                        &quot;log-file&quot;: &quot;/var/log/cilium-cni.log&quot;
                }
        ]
}&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The CNI-related binaries can be found in the following location.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;root@aks-nodepool1-90499020-vmss000000:/opt/cni/bin# ll
total 221264
drwxr-xr-x 2 root root     4096 Aug 29 12:53 ./
drwxr-xr-x 3 root root     4096 Aug 29 12:52 ../
-rw-r--r-- 1 root root    11357 Jan  6  2025 LICENSE
-rw-r--r-- 1 root root     2343 Jan  6  2025 README.md
-rwxr-xr-x 1 root root 48499839 Aug 29 12:53 azure-ipam* # azure-ipam
-rwxr-xr-x 1 root root  4655178 Jan  6  2025 bandwidth*
-rwxr-xr-x 1 root root  5287212 Jan  6  2025 bridge*
-rwxr-xr-x 1 root root 86128032 Aug 29 12:53 cilium-cni* # cilium-cni
-rwxr-xr-x 1 root root 12762814 Jan  6  2025 dhcp*
-rwxr-xr-x 1 root root  4847854 Jan  6  2025 dummy*
..&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The &lt;code&gt;azure-cns&lt;/code&gt; pod we saw earlier receives each node's Pod CIDR from the control plane, and &lt;code&gt;azure-ipam&lt;/code&gt; is the CNI binary that obtains individual IPs from that Pod CIDR. &lt;code&gt;cilium-cni&lt;/code&gt; then performs the actual container network setup: creating the container's veth interface and configuring routing.&lt;/p&gt;
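&lt;p data-ke-size=&quot;size16&quot;&gt;The division of labor can be summarized in a toy sketch (all function names here are illustrative, not real component APIs; the values match this lab):&lt;/p&gt;
&lt;pre class=&quot;python&quot;&gt;&lt;code&gt;# Toy model of the CNI ADD flow on Azure CNI Powered by Cilium.
# Function names are hypothetical; only the sequencing is the point.

def azure_cns_pod_cidr():
    # azure-cns receives the per-node Pod CIDR from the control plane
    return '192.168.0.0/24'

def azure_ipam_request_ip(pod_cidr):
    # azure-ipam asks azure-cns for a free IP out of that CIDR
    return '192.168.0.174/16'

def cilium_cni_setup(ip):
    # cilium-cni creates the veth pair, configures eth0 and adds routes
    return 'eth0 configured with ' + ip

print(cilium_cni_setup(azure_ipam_request_ip(azure_cns_pod_cidr())))&lt;/code&gt;&lt;/pre&gt;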
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For example, when a sample pod is launched, we can see it was assigned 192.168.0.174.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;$ kubectl get po -owide
NAME                              READY   STATUS    RESTARTS   AGE   IP              NODE                                NOMINATED NODE   READINESS GATES
aks-helloworld-66fd479f49-psdqn   1/1     Running   0          10s   192.168.0.174   aks-nodepool1-90499020-vmss000000   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;

$ kubectl get po aks-helloworld-66fd479f49-psdqn -oyaml |grep -i startTime
  startTime: &quot;2025-08-29T13:35:44Z&quot;&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Checking the azure-ipam log on the node at that timestamp shows azure-ipam requesting and receiving the IP.&lt;/p&gt;
&lt;pre class=&quot;bash&quot; data-ke-language=&quot;bash&quot;&gt;&lt;code&gt;# cat /var/log/azure-ipam.log
...
{&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:&quot;2025-08-29T13:35:44.490Z&quot;,&quot;msg&quot;:&quot;ADD called&quot;,&quot;args&quot;:{&quot;ContainerID&quot;:&quot;24fc7966fdce8940d4a577270a7d3d9b83b684db10c1af483ad7082c94a63675&quot;,&quot;Netns&quot;:&quot;/var/run/netns/cni-df3f15ec-1031-9829-65ec-9d75df07c626&quot;,&quot;IfName&quot;:&quot;eth0&quot;,&quot;Args&quot;:&quot;K8S_POD_INFRA_CONTAINER_ID=24fc7966fdce8940d4a577270a7d3d9b83b684db10c1af483ad7082c94a63675;K8S_POD_UID=154e459c-18d1-4732-914c-bdf3465637b8;IgnoreUnknown=1;K8S_POD_NAMESPACE=default;K8S_POD_NAME=aks-helloworld-66fd479f49-psdqn&quot;,&quot;Path&quot;:&quot;/opt/cni/bin&quot;,&quot;NetnsOverride&quot;:&quot;&quot;,&quot;StdinData&quot;:&quot;eyJjbmlWZXJzaW9uIjoiMC4zLjEiLCJlbmFibGUtZGVidWciOnRydWUsImlwYW0iOnsidHlwZSI6ImF6dXJlLWlwYW0ifSwibG9nLWZpbGUiOiIvdmFyL2xvZy9jaWxpdW0tY25pLmxvZyIsIm5hbWUiOiJjaWxpdW0iLCJ0eXBlIjoiY2lsaXVtLWNuaSJ9&quot;}}
{&quot;level&quot;:&quot;debug&quot;,&quot;ts&quot;:&quot;2025-08-29T13:35:44.490Z&quot;,&quot;msg&quot;:&quot;Parsed network config&quot;,&quot;netconf&quot;:{&quot;cniVersion&quot;:&quot;0.3.1&quot;,&quot;name&quot;:&quot;cilium&quot;,&quot;type&quot;:&quot;cilium-cni&quot;,&quot;ipam&quot;:{&quot;type&quot;:&quot;azure-ipam&quot;},&quot;dns&quot;:{}}}
# Created CNS IP config request 
{&quot;level&quot;:&quot;debug&quot;,&quot;ts&quot;:&quot;2025-08-29T13:35:44.490Z&quot;,&quot;msg&quot;:&quot;Created CNS IP config request&quot;,&quot;request&quot;:{&quot;desiredIPAddresses&quot;:null,&quot;podInterfaceID&quot;:&quot;24fc7966fdce8940d4a577270a7d3d9b83b684db10c1af483ad7082c94a63675&quot;,&quot;infraContainerID&quot;:&quot;24fc7966fdce8940d4a577270a7d3d9b83b684db10c1af483ad7082c94a63675&quot;,&quot;orchestratorContext&quot;:{&quot;PodName&quot;:&quot;aks-helloworld-66fd479f49-psdqn&quot;,&quot;PodNamespace&quot;:&quot;default&quot;},&quot;ifname&quot;:&quot;eth0&quot;,&quot;secondaryInterfacesExist&quot;:false}}
{&quot;level&quot;:&quot;debug&quot;,&quot;ts&quot;:&quot;2025-08-29T13:35:44.490Z&quot;,&quot;msg&quot;:&quot;Making request to CNS&quot;}
# Received CNS IP config response
{&quot;level&quot;:&quot;debug&quot;,&quot;ts&quot;:&quot;2025-08-29T13:35:44.492Z&quot;,&quot;msg&quot;:&quot;Received CNS IP config response&quot;,&quot;response&quot;:{&quot;podIPInfo&quot;:[{&quot;PodIPConfig&quot;:{&quot;IPAddress&quot;:&quot;192.168.0.174&quot;,&quot;PrefixLength&quot;:16},&quot;NetworkContainerPrimaryIPConfig&quot;:{&quot;IPSubnet&quot;:{&quot;IPAddress&quot;:&quot;192.168.0.0&quot;,&quot;PrefixLength&quot;:16},&quot;DNSServers&quot;:null,&quot;GatewayIPAddress&quot;:&quot;&quot;},&quot;HostPrimaryIPInfo&quot;:{&quot;Gateway&quot;:&quot;10.224.0.1&quot;,&quot;PrimaryIP&quot;:&quot;10.224.0.4&quot;,&quot;Subnet&quot;:&quot;10.224.0.0/16&quot;},&quot;NICType&quot;:&quot;InfraNIC&quot;,&quot;InterfaceName&quot;:&quot;&quot;,&quot;MacAddress&quot;:&quot;&quot;,&quot;SkipDefaultRoutes&quot;:false,&quot;Routes&quot;:null}],&quot;response&quot;:{&quot;ReturnCode&quot;:0,&quot;Message&quot;:&quot;&quot;}}}
{&quot;level&quot;:&quot;debug&quot;,&quot;ts&quot;:&quot;2025-08-29T13:35:44.492Z&quot;,&quot;msg&quot;:&quot;Parsed pod IP&quot;,&quot;podIPNet&quot;:&quot;192.168.0.174/16&quot;}
{&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:&quot;2025-08-29T13:35:44.492Z&quot;,&quot;msg&quot;:&quot;ADD success&quot;,&quot;result&quot;:{&quot;cniVersion&quot;:&quot;0.3.1&quot;,&quot;ips&quot;:[{&quot;version&quot;:&quot;4&quot;,&quot;address&quot;:&quot;192.168.0.174/16&quot;}],&quot;dns&quot;:{}}}&lt;/code&gt;&lt;/pre&gt;
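&lt;p data-ke-size=&quot;size16&quot;&gt;Since azure-ipam writes structured JSON lines, the assigned address can be pulled out of the log programmatically. A small sketch using one of the lines above (abbreviated here):&lt;/p&gt;
&lt;pre class=&quot;python&quot;&gt;&lt;code&gt;import json

# One 'ADD success' line from /var/log/azure-ipam.log, abbreviated.
line = ('{&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:&quot;2025-08-29T13:35:44.492Z&quot;,&quot;msg&quot;:&quot;ADD success&quot;,'
        '&quot;result&quot;:{&quot;cniVersion&quot;:&quot;0.3.1&quot;,'
        '&quot;ips&quot;:[{&quot;version&quot;:&quot;4&quot;,&quot;address&quot;:&quot;192.168.0.174/16&quot;}],&quot;dns&quot;:{}}}')

entry = json.loads(line)
if entry.get('msg') == 'ADD success':
    for ip in entry['result']['ips']:
        print(ip['address'])  # the address handed back to cilium-cni&lt;/code&gt;&lt;/pre&gt;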
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The cilium-cni log is also available; it shows the veth pair being created, the interface being configured, and routes being added.&lt;/p&gt;
&lt;pre class=&quot;bash&quot; data-ke-language=&quot;bash&quot;&gt;&lt;code&gt;cat /var/log/cilium-cni.log
...
time=&quot;2025-08-29T13:35:44.479426508Z&quot; level=debug msg=&quot;Processing CNI ADD request&quot; args=&quot;K8S_POD_INFRA_CONTAINER_ID=24fc7966fdce8940d4a577270a7d3d9b83b684db10c1af483ad7082c94a63675;K8S_POD_UID=154e459c-18d1-4732-914c-bdf3465637b8;IgnoreUnknown=1;K8S_POD_NAMESPACE=default;K8S_POD_NAME=aks-helloworld-66fd479f49-psdqn&quot; containerID=24fc7966fdce8940d4a577270a7d3d9b83b684db10c1af483ad7082c94a63675 eventID=722b455e-0c13-460c-8b3f-e83970a10723 file-path=/opt/cni/bin ifName=eth0 netconf=&quot;&amp;amp;{NetConf:{CNIVersion:0.3.1 Name:cilium Type:cilium-cni Capabilities:map[] IPAM:{Type:} DNS:{Nameservers:[] Domain: Search:[] Options:[]} RawPrevResult:map[] PrevResult:&amp;lt;nil&amp;gt; ValidAttachments:[]} MTU:0 Args:{} EnableRouteMTU:false ENI:{InstanceID: InstanceType: MinAllocate:0 PreAllocate:0 MaxAboveWatermark:0 FirstInterfaceIndex:&amp;lt;nil&amp;gt; SecurityGroups:[] SecurityGroupTags:map[] SubnetIDs:[] SubnetTags:map[] NodeSubnetID: VpcID: AvailabilityZone: ExcludeInterfaceTags:map[] DeleteOnTermination:&amp;lt;nil&amp;gt; UsePrimaryAddress:&amp;lt;nil&amp;gt; DisablePrefixDelegation:&amp;lt;nil&amp;gt;} Azure:{InterfaceName:} IPAM:{IPAM:{Type:azure-ipam} IPAMSpec:{Pool:map[] IPv6Pool:map[] Pools:{Requested:[] Allocated:[]} PodCIDRs:[] MinAllocate:0 MaxAllocate:0 PreAllocate:0 MaxAboveWatermark:0 StaticIPTags:map[]}} AlibabaCloud:{InstanceType: AvailabilityZone: VPCID: CIDRBlock: VSwitches:[] VSwitchTags:map[] SecurityGroups:[] SecurityGroupTags:map[]} EnableDebug:true LogFormat: LogFile:/var/log/cilium-cni.log ChainingMode:}&quot; netns=/var/run/netns/cni-df3f15ec-1031-9829-65ec-9d75df07c626 subsys=cilium-cni
# Created veth pair
time=&quot;2025-08-29T13:35:44.494814463Z&quot; level=debug msg=&quot;Created veth pair&quot; subsys=endpoint-connector vethPair=&quot;[tmp24fc7 lxc1f5ebcfa8510]&quot;
# Configuring link
time=&quot;2025-08-29T13:35:44.562160797Z&quot; level=debug msg=&quot;Configuring link&quot; interface=eth0 ipAddr=192.168.0.174 netLink=&quot;&amp;amp;{LinkAttrs:{Index:21 MTU:1500 TxQLen:1000 Name:eth0 HardwareAddr:f6:dc:bb:b3:f7:eb Flags:broadcast|multicast RawFlags:4098 ParentIndex:22 MasterIndex:0 Namespace:&amp;lt;nil&amp;gt; Alias: AltNames:[] Statistics:0xc000878180 Promisc:0 Allmulti:0 Multi:1 Xdp:0xc0007280a8 EncapType:ether Protinfo:&amp;lt;nil&amp;gt; OperState:down PhysSwitchID:0 NetNsID:0 NumTxQueues:8 NumRxQueues:8 TSOMaxSegs:0 TSOMaxSize:0 GSOMaxSegs:65535 GSOMaxSize:65536 GROMaxSize:0 GSOIPv4MaxSize:0 GROIPv4MaxSize:0 Vfs:[] Group:0 PermHWAddr: ParentDev: ParentDevBus: Slave:&amp;lt;nil&amp;gt;} PeerName: PeerHardwareAddr: PeerNamespace:&amp;lt;nil&amp;gt;}&quot; subsys=cilium-cni
# Adding route
time=&quot;2025-08-29T13:35:44.562876511Z&quot; level=debug msg=&quot;Adding route&quot; route=&quot;{Prefix:{IP:169.254.23.0 Mask:ffffffff} Nexthop:&amp;lt;nil&amp;gt; Local:&amp;lt;nil&amp;gt; Device: MTU:0 Priority:0 Proto:0 Scope:universe Table:0 Type:0}&quot; subsys=cilium-cni
time=&quot;2025-08-29T13:35:44.563401457Z&quot; level=debug msg=&quot;Adding route&quot; route=&quot;{Prefix:{IP:0.0.0.0 Mask:00000000} Nexthop:169.254.23.0 Local:&amp;lt;nil&amp;gt; Device: MTU:1500 Priority:0 Proto:0 Scope:universe Table:0 Type:0}&quot; subsys=cilium-cni
time=&quot;2025-08-29T13:35:44.691502014Z&quot; level=debug msg=&quot;Endpoint successfully created&quot; args=&quot;K8S_POD_INFRA_CONTAINER_ID=24fc7966fdce8940d4a577270a7d3d9b83b684db10c1af483ad7082c94a63675;K8S_POD_UID=154e459c-18d1-4732-914c-bdf3465637b8;IgnoreUnknown=1;K8S_POD_NAMESPACE=default;K8S_POD_NAME=aks-helloworld-66fd479f49-psdqn&quot; containerID=24fc7966fdce8940d4a577270a7d3d9b83b684db10c1af483ad7082c94a63675 error=&quot;&amp;lt;nil&amp;gt;&quot; eventID=722b455e-0c13-460c-8b3f-e83970a10723 file-path=/opt/cni/bin ifName=eth0 k8sNamespace=default k8sPodName=aks-helloworld-66fd479f49-psdqn netns=/var/run/netns/cni-df3f15ec-1031-9829-65ec-9d75df07c626 subsys=cilium-cni&lt;/code&gt;&lt;/pre&gt;
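&lt;p data-ke-size=&quot;size16&quot;&gt;If you want to see the result of these steps from the node itself, a quick sketch (assuming shell access to the node, e.g. via &lt;code&gt;kubectl debug&lt;/code&gt;; the pod IP is taken from the log above and will differ in your environment):&lt;/p&gt;
&lt;pre class=&quot;bash&quot; data-ke-language=&quot;bash&quot;&gt;&lt;code&gt;# List the host-side veth interfaces created by Cilium (lxc* names)
ip link show type veth | grep lxc

# Show which route the node uses to reach the pod IP from the log
ip route get 192.168.0.174&lt;/code&gt;&lt;/pre&gt;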
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Other characteristics of Azure CNI Powered by Cilium are described below.&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;kube-proxy Replacement&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the pod list we examined earlier, there is no kube-proxy. When Azure CNI Powered by Cilium is used on AKS, kube-proxy is automatically disabled, and switching back to kube-proxy is not possible.&lt;/p&gt;
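&lt;p data-ke-size=&quot;size16&quot;&gt;A quick way to check this yourself (a sketch; the DaemonSet name &lt;code&gt;cilium&lt;/code&gt; matches the pod list above, and the exact status output may differ by version):&lt;/p&gt;
&lt;pre class=&quot;bash&quot; data-ke-language=&quot;bash&quot;&gt;&lt;code&gt;# No kube-proxy DaemonSet exists in kube-system
kubectl get ds kube-proxy -n kube-system

# Confirm kube-proxy replacement from inside a cilium-agent pod
kubectl -n kube-system exec ds/cilium -- cilium status | grep -i kubeproxy&lt;/code&gt;&lt;/pre&gt;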
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Network Policy&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;On AKS, if no Network Policy engine is specified, the cluster is created without one. With the existing CNI plugins you can choose Azure NPM (Network Policy Manager) or Calico, whereas Azure CNI Powered by Cilium provides Cilium's network policy engine by default.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Of course, if you do not use Network Policy, you can still set it to none.&lt;/p&gt;
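&lt;p data-ke-size=&quot;size16&quot;&gt;For example, a plain Kubernetes NetworkPolicy like the following is enforced by the Cilium dataplane with no extra policy engine to install (the names and labels here are illustrative):&lt;/p&gt;
&lt;pre class=&quot;yaml&quot; data-ke-language=&quot;yaml&quot;&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: aks-helloworld
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend&lt;/code&gt;&lt;/pre&gt;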
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;CNI Plugin 업그레이드&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Because Azure CNI Powered by Cilium is a managed feature of AKS, you do not need to worry about upgrading Cilium yourself.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;AKS validates its managed components against each Kubernetes version before providing them. When you upgrade the cluster (the Kubernetes version), the versions of the managed components are upgraded along with it.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The minimum Cilium version for each Kubernetes version can be found in the document below.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://learn.microsoft.com/en-us/azure/aks/azure-cni-powered-by-cilium#versions&quot;&gt;https://learn.microsoft.com/en-us/azure/aks/azure-cni-powered-by-cilium#versions&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1077&quot; data-origin-height=&quot;456&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/qbhA2/btsQdXeMuE7/566wBwl6BY9ekST4tdGmY1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/qbhA2/btsQdXeMuE7/566wBwl6BY9ekST4tdGmY1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/qbhA2/btsQdXeMuE7/566wBwl6BY9ekST4tdGmY1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FqbhA2%2FbtsQdXeMuE7%2F566wBwl6BY9ekST4tdGmY1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1077&quot; height=&quot;456&quot; data-origin-width=&quot;1077&quot; data-origin-height=&quot;456&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Cilium version changes are also listed among the per-version component breaking changes.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://learn.microsoft.com/en-us/azure/aks/supported-kubernetes-versions?tabs=azure-cli#aks-components-breaking-changes-by-version&quot;&gt;https://learn.microsoft.com/en-us/azure/aks/supported-kubernetes-versions?tabs=azure-cli#aks-components-breaking-changes-by-version&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
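&lt;p data-ke-size=&quot;size16&quot;&gt;To check which Cilium version your cluster is actually running, you can read the image tag off the managed DaemonSet (a sketch; the jsonpath assumes the agent is the first container, which matches common AKS deployments):&lt;/p&gt;
&lt;pre class=&quot;bash&quot; data-ke-language=&quot;bash&quot;&gt;&lt;code&gt;kubectl -n kube-system get ds cilium \
  -o jsonpath='{.spec.template.spec.containers[0].image}'&lt;/code&gt;&lt;/pre&gt;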
&lt;p data-ke-size=&quot;size16&quot;&gt;This concludes our look at the CNI behavior and characteristics of Azure CNI Powered by Cilium.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;3. ACNS (Advanced Container Networking Services)&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A drawback of using Azure CNI Powered by Cilium on AKS is that Hubble is not directly available.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1258&quot; data-origin-height=&quot;193&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/Ore56/btsQfw1fxx1/P0KX8OdfKd8GLJxpPDCHA0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/Ore56/btsQfw1fxx1/P0KX8OdfKd8GLJxpPDCHA0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/Ore56/btsQfw1fxx1/P0KX8OdfKd8GLJxpPDCHA0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FOre56%2FbtsQfw1fxx1%2FP0KX8OdfKd8GLJxpPDCHA0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1258&quot; height=&quot;193&quot; data-origin-width=&quot;1258&quot; data-origin-height=&quot;193&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://github.com/Azure/AKS/issues/3978&quot;&gt;https://github.com/Azure/AKS/issues/3978&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Instead, ACNS (Advanced Container Networking Services), a separate service for container networking, is offered as an alternative.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This direction seems to reflect product-level considerations. Some customers use Azure CNI Powered by Cilium, but container network observability also has to be provided to customers on the existing CNIs (Azure CNI, Azure CNI Overlay). Presumably that is why ACNS, rather than Hubble itself, became the offering.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the diagram below, Cilium nodes use Hubble in the Cilium agent, while non-Cilium nodes enable Hubble through Retina. On top of this, the Hubble CLI and UI can additionally be enabled.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1171&quot; data-origin-height=&quot;730&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/dZhJzk/btsQc3TH7lW/aymMN4dgscXfpqwpVaQoTK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/dZhJzk/btsQc3TH7lW/aymMN4dgscXfpqwpVaQoTK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/dZhJzk/btsQc3TH7lW/aymMN4dgscXfpqwpVaQoTK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FdZhJzk%2FbtsQc3TH7lW%2FaymMN4dgscXfpqwpVaQoTK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1171&quot; height=&quot;730&quot; data-origin-width=&quot;1171&quot; data-origin-height=&quot;730&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://learn.microsoft.com/en-us/azure/aks/advanced-container-networking-services-overview?tabs=cilium&quot;&gt;https://learn.microsoft.com/en-us/azure/aks/advanced-container-networking-services-overview?tabs=cilium&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;With only Azure CNI Powered by Cilium installed, hubble-relay is not deployed by default.&lt;/p&gt;
&lt;pre class=&quot;pgsql&quot;&gt;&lt;code&gt;$ kubectl get pods -o wide -n kube-system -l k8s-app=hubble-relay
No resources found in kube-system namespace.&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Following the document below, let's enable ACNS and try the Hubble UI.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://learn.microsoft.com/en-us/azure/aks/how-to-configure-container-network-logs?tabs=cilium#enable-advanced-container-networking-services-on-an-existing-cluster&quot;&gt;https://learn.microsoft.com/en-us/azure/aks/how-to-configure-container-network-logs?tabs=cilium#enable-advanced-container-networking-services-on-an-existing-cluster&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;First, enable ACNS on the existing cluster. This command does not merely deploy the Hubble relay; it also deploys the other ACNS components.&lt;/p&gt;
&lt;pre class=&quot;haml&quot;&gt;&lt;code&gt;az aks update \
    --resource-group $RESOURCE_GROUP \
    --name $CLUSTER_NAME \
    --enable-acns&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Running this command automatically restarts the &lt;code&gt;cilium-operator&lt;/code&gt; and &lt;code&gt;cilium-agent&lt;/code&gt; pods, and additionally launches &lt;code&gt;acns-security-agent&lt;/code&gt; and the other Hubble-related pods.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;$ kubectl get po -A
NAMESPACE     NAME                                             READY   STATUS              RESTARTS   AGE
default       aks-helloworld-66fd479f49-psdqn                  1/1     Running             0          38m
kube-system   acns-security-agent-sf46s                        1/1     Running             0          60s
kube-system   acns-security-agent-wpgtd                        1/1     Running             0          60s
kube-system   azure-cns-drv9w                                  1/1     Running             0          81m
kube-system   azure-cns-xpdm5                                  1/1     Running             0          81m
kube-system   azure-ip-masq-agent-4x2rs                        1/1     Running             0          81m
kube-system   azure-ip-masq-agent-gbrs7                        1/1     Running             0          81m
kube-system   cilium-24tjm                                     0/1     Init:0/6            0          119s
kube-system   cilium-5tldb                                     1/1     Running             0          81m
kube-system   cilium-operator-7f5458cf6f-gvf88                 1/1     Running             0          119s
kube-system   cilium-operator-7f5458cf6f-jj2zq                 1/1     Running             0          119s
kube-system   cloud-node-manager-625qw                         1/1     Running             0          81m
kube-system   cloud-node-manager-8zzqj                         1/1     Running             0          81m
kube-system   coredns-6f776c8fb5-44lcl                         1/1     Running             0          83m
kube-system   coredns-6f776c8fb5-mmjkk                         1/1     Running             0          81m
kube-system   coredns-autoscaler-864c4496bf-5jnrc              1/1     Running             0          83m
kube-system   csi-azuredisk-node-8vkj4                         3/3     Running             0          81m
kube-system   csi-azuredisk-node-hw45x                         3/3     Running             0          81m
kube-system   csi-azurefile-node-bwmxb                         3/3     Running             0          81m
kube-system   csi-azurefile-node-zns9n                         3/3     Running             0          81m
kube-system   hubble-generate-certs-clwpz                      0/1     Completed           0          48s
kube-system   hubble-relay-bfb769b86-dmjpt                     0/1     ContainerCreating   0          48s
kube-system   konnectivity-agent-autoscaler-6ddd978bfc-2vz76   1/1     Running             0          83m
kube-system   konnectivity-agent-c9dc4888c-5dxdt               1/1     Running             0          46m
kube-system   konnectivity-agent-c9dc4888c-xjg4q               1/1     Running             0          46m
kube-system   metrics-server-6c4cb48ddc-sr9s5                  2/2     Running             0          78m
kube-system   metrics-server-6c4cb48ddc-t5vmh                  2/2     Running             0          78m&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For reference, &lt;code&gt;acns-security-agent&lt;/code&gt; is the component that provides FQDN-based filtering and Layer 7 policy.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://learn.microsoft.com/en-us/azure/aks/advanced-container-networking-services-overview?tabs=cilium#container-network-security&quot;&gt;https://learn.microsoft.com/en-us/azure/aks/advanced-container-networking-services-overview?tabs=cilium#container-network-security&lt;/a&gt;&lt;/p&gt;
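&lt;p data-ke-size=&quot;size16&quot;&gt;With ACNS security enabled, FQDN filtering is expressed as a CiliumNetworkPolicy with a &lt;code&gt;toFQDNs&lt;/code&gt; rule. A sketch following the shape used in the Azure docs (the selector labels and domain are examples; DNS traffic to kube-dns must also be allowed so Cilium can observe lookups):&lt;/p&gt;
&lt;pre class=&quot;yaml&quot; data-ke-language=&quot;yaml&quot;&gt;&lt;code&gt;apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-example-fqdn
spec:
  endpointSelector:
    matchLabels:
      app: aks-helloworld
  egress:
    - toFQDNs:
        - matchName: www.example.com
    - toEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: kube-system
            k8s-app: kube-dns
      toPorts:
        - ports:
            - port: &quot;53&quot;
              protocol: ANY
          rules:
            dns:
              - matchPattern: &quot;*&quot;&lt;/code&gt;&lt;/pre&gt;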
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Checking again a moment later, hubble-relay is running normally.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;$ kubectl get pods -o wide -n kube-system -l k8s-app=hubble-relay
NAME                           READY   STATUS    RESTARTS   AGE     IP              NODE                                NOMINATED NODE   READINESS GATES
hubble-relay-bfb769b86-dmjpt   1/1     Running   0          2m43s   192.168.0.177   aks-nodepool1-90499020-vmss000000   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;&lt;/code&gt;&lt;/pre&gt;
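&lt;p data-ke-size=&quot;size16&quot;&gt;Before moving to the UI, flows can also be inspected with the Hubble CLI by port-forwarding the relay (a sketch; ACNS serves hubble-relay over TLS, so the client certificates described in the Azure docs must be configured, and the flags below are indicative):&lt;/p&gt;
&lt;pre class=&quot;bash&quot; data-ke-language=&quot;bash&quot;&gt;&lt;code&gt;kubectl -n kube-system port-forward svc/hubble-relay 4245:443 &amp;amp;

hubble observe --server localhost:4245 \
  --tls --tls-server-name instance.hubble-relay.cilium.io&lt;/code&gt;&lt;/pre&gt;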
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Next, let's create the Hubble UI. The Hubble UI itself is not provided as an add-on, so you can deploy it with the YAML from the document below.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://learn.microsoft.com/en-us/azure/aks/how-to-configure-container-network-logs?tabs=cilium#visualize-by-using-the-hubble-ui&quot;&gt;https://learn.microsoft.com/en-us/azure/aks/how-to-configure-container-network-logs?tabs=cilium#visualize-by-using-the-hubble-ui&lt;/a&gt;&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;$ cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: hubble-ui
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: hubble-ui
  labels:
    app.kubernetes.io/part-of: retina
rules:
  - apiGroups:
      - networking.k8s.io
    resources:
      - networkpolicies
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - &quot;&quot;
    resources:
      - componentstatuses
      - endpoints
      - namespaces
      - nodes
      - pods
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - apiextensions.k8s.io
    resources:
      - customresourcedefinitions
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - cilium.io
    resources:
      - &quot;*&quot;
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: hubble-ui
  labels:
    app.kubernetes.io/part-of: retina
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: hubble-ui
subjects:
  - kind: ServiceAccount
    name: hubble-ui
    namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: hubble-ui-nginx
  namespace: kube-system
data:
  nginx.conf: |
    server {
        listen       8081;
        server_name  localhost;
        root /app;
        index index.html;
        client_max_body_size 1G;
        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            # CORS
            add_header Access-Control-Allow-Methods &quot;GET, POST, PUT, HEAD, DELETE, OPTIONS&quot;;
            add_header Access-Control-Allow-Origin *;
            add_header Access-Control-Max-Age 1728000;
            add_header Access-Control-Expose-Headers content-length,grpc-status,grpc-message;
            add_header Access-Control-Allow-Headers range,keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout;
            if ($request_method = OPTIONS) {
                return 204;
            }
            # /CORS
            location /api {
                proxy_http_version 1.1;
                proxy_pass_request_headers on;
                proxy_hide_header Access-Control-Allow-Origin;
                proxy_pass http://127.0.0.1:8090;
            }
            location / {
                try_files $uri $uri/ /index.html /index.html;
            }
            # Liveness probe
            location /healthz {
                access_log off;
                add_header Content-Type text/plain;
                return 200 'ok';
            }
        }
    }
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: hubble-ui
  namespace: kube-system
  labels:
    k8s-app: hubble-ui
    app.kubernetes.io/name: hubble-ui
    app.kubernetes.io/part-of: retina
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: hubble-ui
  template:
    metadata:
      labels:
        k8s-app: hubble-ui
        app.kubernetes.io/name: hubble-ui
        app.kubernetes.io/part-of: retina
    spec:
      serviceAccountName: hubble-ui
      automountServiceAccountToken: true
      containers:
      - name: frontend
        image: mcr.microsoft.com/oss/cilium/hubble-ui:v0.12.2   
        imagePullPolicy: Always
        ports:
        - name: http
          containerPort: 8081
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8081
        readinessProbe:
          httpGet:
            path: /
            port: 8081
        resources: {}
        volumeMounts:
        - name: hubble-ui-nginx-conf
          mountPath: /etc/nginx/conf.d/default.conf
          subPath: nginx.conf
        - name: tmp-dir
          mountPath: /tmp
        terminationMessagePolicy: FallbackToLogsOnError
        securityContext: {}
      - name: backend
        image: mcr.microsoft.com/oss/cilium/hubble-ui-backend:v0.12.2
        imagePullPolicy: Always
        env:
        - name: EVENTS_SERVER_PORT
          value: &quot;8090&quot;
        - name: FLOWS_API_ADDR
          value: &quot;hubble-relay:443&quot;
        - name: TLS_TO_RELAY_ENABLED
          value: &quot;true&quot;
        - name: TLS_RELAY_SERVER_NAME
          value: ui.hubble-relay.cilium.io
        - name: TLS_RELAY_CA_CERT_FILES
          value: /var/lib/hubble-ui/certs/hubble-relay-ca.crt
        - name: TLS_RELAY_CLIENT_CERT_FILE
          value: /var/lib/hubble-ui/certs/client.crt
        - name: TLS_RELAY_CLIENT_KEY_FILE
          value: /var/lib/hubble-ui/certs/client.key
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8090
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8090
        ports:
        - name: grpc
          containerPort: 8090
        resources: {}
        volumeMounts:
        - name: hubble-ui-client-certs
          mountPath: /var/lib/hubble-ui/certs
          readOnly: true
        terminationMessagePolicy: FallbackToLogsOnError
        securityContext: {}
      nodeSelector:
        kubernetes.io/os: linux 
      volumes:
      - configMap:
          defaultMode: 420
          name: hubble-ui-nginx
        name: hubble-ui-nginx-conf
      - emptyDir: {}
        name: tmp-dir
      - name: hubble-ui-client-certs
        projected:
          defaultMode: 0400
          sources:
          - secret:
              name: hubble-relay-client-certs
              items:
                - key: tls.crt
                  path: client.crt
                - key: tls.key
                  path: client.key
                - key: ca.crt
                  path: hubble-relay-ca.crt
---
kind: Service
apiVersion: v1
metadata:
  name: hubble-ui
  namespace: kube-system
  labels:
    k8s-app: hubble-ui
    app.kubernetes.io/name: hubble-ui
    app.kubernetes.io/part-of: retina
spec:
  type: ClusterIP
  selector:
    k8s-app: hubble-ui
  ports:
    - name: http
      port: 80
      targetPort: 8081
EOF

$ kubectl patch svc hubble-ui -n kube-system \
  -p '{&quot;spec&quot;: {&quot;type&quot;: &quot;LoadBalancer&quot;}}'&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;After changing the hubble-ui service to type LoadBalancer and connecting to it, the Hubble UI appears.&lt;/p&gt;
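&lt;p data-ke-size=&quot;size16&quot;&gt;To find the address, read the EXTERNAL-IP off the service; alternatively, a port-forward avoids exposing the UI publicly (a sketch; the local port is arbitrary):&lt;/p&gt;
&lt;pre class=&quot;bash&quot; data-ke-language=&quot;bash&quot;&gt;&lt;code&gt;# External IP assigned by the Azure load balancer
kubectl get svc hubble-ui -n kube-system

# Or keep the service ClusterIP and use a port-forward instead
kubectl -n kube-system port-forward svc/hubble-ui 12000:80
# then browse to http://localhost:12000&lt;/code&gt;&lt;/pre&gt;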
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;2314&quot; data-origin-height=&quot;1134&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cXXXd1/btsQfmYRJsz/WJrBD6OVkgKmlPDIgX3Xxk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cXXXd1/btsQfmYRJsz/WJrBD6OVkgKmlPDIgX3Xxk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cXXXd1/btsQfmYRJsz/WJrBD6OVkgKmlPDIgX3Xxk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcXXXd1%2FbtsQfmYRJsz%2FWJrBD6OVkgKmlPDIgX3Xxk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2314&quot; height=&quot;1134&quot; data-origin-width=&quot;2314&quot; data-origin-height=&quot;1134&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Calls to the test pod are recorded as well.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;2299&quot; data-origin-height=&quot;1033&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/Uediq/btsQexGhIN3/8VaZqvUFUyltzLGkKkh7N1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/Uediq/btsQexGhIN3/8VaZqvUFUyltzLGkKkh7N1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/Uediq/btsQexGhIN3/8VaZqvUFUyltzLGkKkh7N1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FUediq%2FbtsQexGhIN3%2F8VaZqvUFUyltzLGkKkh7N1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2299&quot; height=&quot;1033&quot; data-origin-width=&quot;2299&quot; data-origin-height=&quot;1033&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;4. Azure CNI Powered by Cilium Limitations&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Even with Azure CNI Powered by Cilium on AKS, not every Cilium feature is available. The differences are as follows.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;These limitations may change later, so re-check the documentation.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://learn.microsoft.com/en-us/azure/aks/azure-cni-powered-by-cilium#limitations&quot;&gt;https://learn.microsoft.com/en-us/azure/aks/azure-cni-powered-by-cilium#limitations&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Only Linux nodes are supported; Windows nodes are not.&lt;/li&gt;
&lt;li&gt;The Cilium configuration cannot be modified. If you need more detailed settings, you can run Cilium via BYO CNI instead.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;CiliumNetworkPolicy&lt;/code&gt; is not available; by default, only the Kubernetes NetworkPolicy resource is supported.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;CiliumClusterwideNetworkPolicy&lt;/code&gt; is not available.&lt;/li&gt;
&lt;li&gt;To use Cilium's L7 network policy, FQDN filtering, or container network observability, you must use the separate ACNS (Advanced Container Networking Services) component.&lt;br /&gt;Reference: &lt;a href=&quot;https://learn.microsoft.com/en-us/azure/aks/advanced-container-networking-services-overview?tabs=cilium&quot;&gt;https://learn.microsoft.com/en-us/azure/aks/advanced-container-networking-services-overview?tabs=cilium&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Resource limits cannot be set (or changed) directly on the Cilium DaemonSet. In general, modifying AKS-managed components is not supported.&lt;/li&gt;
&lt;li&gt;Extended Cilium features not covered in the base documentation, such as ClusterMesh and Service Mesh, are not supported.&lt;/li&gt;
&lt;/ul&gt;
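&lt;p data-ke-size=&quot;size16&quot;&gt;If these limitations matter, the BYO CNI route mentioned above looks roughly like this (a sketch only; the resource group and cluster name are placeholders, and the OSS cilium-cli accepts your own configuration values):&lt;/p&gt;
&lt;pre class=&quot;bash&quot; data-ke-language=&quot;bash&quot;&gt;&lt;code&gt;# Create the cluster without any CNI plugin
az aks create \
    --resource-group $RESOURCE_GROUP \
    --name $CLUSTER_NAME \
    --network-plugin none

# Then install and manage OSS Cilium yourself
cilium install&lt;/code&gt;&lt;/pre&gt;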
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Conclusion&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As we have seen, Azure CNI Powered by Cilium plays a deliberately limited role: it handles the data plane and provides the security and observability pieces. In return, it does deliver real benefits in scalability and performance.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;The document below shows that, in environments with many nodes and pods, Azure CNI Powered by Cilium has a significant performance advantage.&lt;/span&gt;&lt;/p&gt;
&lt;p contenteditable=&quot;true&quot; data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;&lt;a href=&quot;https://azure.microsoft.com/en-us/blog/azure-cni-with-cilium-most-scalable-and-performant-container-networking-in-the-cloud/&quot;&gt;https://azure.microsoft.com/en-us/blog/azure-cni-with-cilium-most-scalable-and-performant-container-networking-in-the-cloud/&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;However, since the feature set is limited, keep in mind that not all of Cilium's capabilities are available.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;&lt;span&gt;Reference links&lt;/span&gt;&lt;/b&gt;&lt;/p&gt;
&lt;p contenteditable=&quot;true&quot; data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;&lt;a href=&quot;https://learn.microsoft.com/en-us/azure/aks/azure-cni-powered-by-cilium&quot;&gt;https://learn.microsoft.com/en-us/azure/aks/azure-cni-powered-by-cilium&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p contenteditable=&quot;true&quot; data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;&lt;a href=&quot;https://learn.microsoft.com/en-us/azure/aks/advanced-container-networking-services-overview?tabs=cilium&quot;&gt;https://learn.microsoft.com/en-us/azure/aks/advanced-container-networking-services-overview?tabs=cilium&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p contenteditable=&quot;true&quot; data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;&lt;a href=&quot;https://docs.cilium.io/en/latest/installation/k8s-install-aks/&quot;&gt;https://docs.cilium.io/en/latest/installation/k8s-install-aks/&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p contenteditable=&quot;true&quot; data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;&lt;a href=&quot;https://isovalent.com/blog/post/azure-cni-cilium/&quot;&gt;https://isovalent.com/blog/post/azure-cni-cilium/&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p contenteditable=&quot;true&quot; data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;&lt;a href=&quot;https://isovalent.com/blog/post/tutorial-azure-cni-powered-by-cilium/&quot;&gt;https://isovalent.com/blog/post/tutorial-azure-cni-powered-by-cilium/&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p contenteditable=&quot;true&quot; data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;&lt;a href=&quot;https://isovalent.com/blog/post/upgrade-cilium-in-azure/#scenario-6-kubenet-to-azure-cni-powered-by-cilium-disabling-network-policy&quot;&gt;https://isovalent.com/blog/post/upgrade-cilium-in-azure/#scenario-6-kubenet-to-azure-cni-powered-by-cilium-disabling-network-policy&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;</description>
      <category>Cilium</category>
      <category>AKS</category>
      <category>azure cni powered by cilium</category>
      <category>cilium</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/59</guid>
      <comments>https://a-person.tistory.com/59#entry59comment</comments>
      <pubDate>Fri, 29 Aug 2025 23:53:19 +0900</pubDate>
    </item>
    <item>
      <title>Passwordless SSH access</title>
      <link>https://a-person.tistory.com/58</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;First, if there is no SSH key on your PC, generate a private/public key pair with &lt;code&gt;ssh-keygen&lt;/code&gt;. The keys are created under &lt;code&gt;~/.ssh&lt;/code&gt;.&lt;/p&gt;
&lt;pre class=&quot;applescript&quot;&gt;&lt;code&gt;$ ssh-keygen
Generating public/private ed25519 key pair.
Enter file in which to save the key (/home/local/.ssh/id_ed25519):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/local/.ssh/id_ed25519
Your public key has been saved in /home/local/.ssh/id_ed25519.pub
The key fingerprint is:
xxx local@xxx
The key's randomart image is:
+--[ED25519 256]--+
|   .o+   .o+=+.  |
...
|             o..o|
+----[SHA256]-----+&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Running &lt;code&gt;ssh-copy-id&lt;/code&gt; registers the public key on your PC into &lt;code&gt;~/.ssh/authorized_keys&lt;/code&gt; for the target user on the server. The private key stays on the PC, while the public key is added to the server's authorized keys.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This step requires authenticating as that user account on the server.&lt;/p&gt;
&lt;pre class=&quot;applescript&quot;&gt;&lt;code&gt;$ ssh-copy-id username@192.168.10.10
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: &quot;/home/username/.ssh/id_ed25519.pub&quot;
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
username@192.168.10.10's password:   # enter the password for username

Number of key(s) added: 1

Now try logging into the machine, with:   &quot;ssh 'username@192.168.10.10'&quot;
and check to make sure that only the key(s) you wanted were added.&lt;/code&gt;&lt;/pre&gt;
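Under the hood, ssh-copy-id essentially appends the public key line to the remote authorized_keys file. A rough local sketch of that server-side step (the function name, key string, and paths are illustrative, not part of ssh-copy-id itself):

```shell
# Append a public key to an authorized_keys file, skipping duplicates --
# roughly what ssh-copy-id performs on the remote side.
install_pubkey() {
  key="$1"; authfile="$2"
  mkdir -p "$(dirname "$authfile")"
  touch "$authfile"
  chmod 600 "$authfile"
  # add the key only if the exact line is not already present
  grep -qxF "$key" "$authfile" || printf '%s\n' "$key" >> "$authfile"
}

install_pubkey "ssh-ed25519 AAAAexample demo@example" "$(mktemp -d)/authorized_keys"
```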
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Once the key is registered, subsequent SSH logins work without a password.&lt;/p&gt;</description>
      <category>기타</category>
      <category>no password</category>
      <category>passwordless</category>
      <category>SSH</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/58</guid>
      <comments>https://a-person.tistory.com/58#entry58comment</comments>
      <pubDate>Sun, 24 Aug 2025 11:44:30 +0900</pubDate>
    </item>
    <item>
      <title>[9] Cilium - ServiceMesh</title>
      <link>https://a-person.tistory.com/57</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;A service mesh provides common cross-cutting capabilities such as observability, connectivity, and security as part of the infrastructure, rather than implementing them in every application in a distributed microservice environment. Istio is the best-known service mesh product.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In this post we will look at Cilium Service Mesh.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Table of Contents&lt;/h2&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;Lab environment setup&lt;/li&gt;
&lt;li&gt;Cilium ServiceMesh&lt;/li&gt;
&lt;li&gt;Kubernetes Ingress Support&lt;/li&gt;
&lt;li&gt;Gateway API Support&lt;/li&gt;
&lt;li&gt;L7 Aware Traffic Management&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;1. Lab Environment Setup&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Run vagrant up to provision the lab environment.&lt;/p&gt;
&lt;pre class=&quot;awk&quot;&gt;&lt;code&gt;mkdir cilium-lab &amp;amp;&amp;amp; cd cilium-lab

curl -O https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/cilium-study/6w/Vagrantfile

vagrant up&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Provisioning creates three virtual machines, as shown below: a Kubernetes cluster consisting of a control plane and a worker node, plus one router.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;PS C:\Users\chuir\projects\cilium-lab\w6&amp;gt; vagrant status
Current machine states:

k8s-ctr                   running (virtualbox)
k8s-w1                    running (virtualbox)
router                    running (virtualbox)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.

# connect with vagrant ssh k8s-ctr and check the cluster
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get no
NAME      STATUS   ROLES           AGE   VERSION
k8s-ctr   Ready    control-plane   22h   v1.33.4
k8s-w1    Ready    &amp;lt;none&amp;gt;          22h   v1.33.4

(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium config view | grep ^ipam
ipam                                              cluster-pool
ipam-cilium-node-update-rate                      15s
(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium config view | grep ^routing
routing-mode                                      native&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;2. Cilium ServiceMesh&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As noted above, a service mesh aims to provide common capabilities such as observability, connectivity, and security as part of the infrastructure, instead of implementing them in each application.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Below are the service mesh capabilities described in the Cilium documentation.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://docs.cilium.io/en/stable/network/servicemesh/&quot;&gt;https://docs.cilium.io/en/stable/network/servicemesh/&lt;/a&gt;&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;b&gt;Resilient Connectivity&lt;/b&gt;: Service to service communication must be possible across boundaries such as clouds, clusters, and premises. Communication must be resilient and fault tolerant.&lt;/li&gt;
&lt;li&gt;&lt;b&gt;L7 Traffic Management&lt;/b&gt;: Load balancing, rate limiting, and resiliency must be L7-aware (HTTP, REST, gRPC, WebSocket, &amp;hellip;).&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Identity-based Security&lt;/b&gt;: Relying on network identifiers to achieve security is no longer sufficient, both the sending and receiving services must be able to authenticate each other based on identities instead of a network identifier.&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Observability &amp;amp; Tracing&lt;/b&gt;: Observability in the form of tracing and metrics is critical to understanding, monitoring, and troubleshooting application stability, performance, and availability.&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Transparency&lt;/b&gt;: The functionality must be available to applications in a transparent manner, i.e. without requiring to change application code.&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Istio, the best-known service mesh, describes its main tasks in categories such as Traffic Management, Security, Observability, Policy Enforcement, and Extensibility, so there is clear overlap. Cilium Service Mesh does not replace every Istio feature: it provides a more limited set, such as traffic management and observability, and some features still appear to be at beta maturity.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://istio.io/latest/docs/tasks/&quot;&gt;https://istio.io/latest/docs/tasks/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A traditional Envoy-sidecar-based service mesh and Cilium Service Mesh differ as follows.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;960&quot; data-origin-height=&quot;540&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/whnZp/btsP5flikAS/Wjisn8hdnqliAHpXKSXkm0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/whnZp/btsP5flikAS/Wjisn8hdnqliAHpXKSXkm0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/whnZp/btsP5flikAS/Wjisn8hdnqliAHpXKSXkm0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FwhnZp%2FbtsP5flikAS%2FWjisn8hdnqliAHpXKSXkm0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;960&quot; height=&quot;540&quot; data-origin-width=&quot;960&quot; data-origin-height=&quot;540&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://cilium.io/blog/2021/12/01/cilium-service-mesh-beta/&quot;&gt;https://cilium.io/blog/2021/12/01/cilium-service-mesh-beta/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In an Envoy-sidecar service mesh, iptables rules steer each pod's inbound traffic through Envoy, and outbound traffic also passes through Envoy, so the mesh features are implemented in the proxy.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To streamline this, Cilium Service Mesh replaces the sidecars with a per-node Envoy DaemonSet and implements the connection plumbing with eBPF-based Cilium: L3/L4 traffic is handled by Cilium itself, while L7 traffic is routed through Envoy.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;3. Kubernetes Ingress Support&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's start with Kubernetes Ingress support, from the traffic-management side of the service mesh.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Cilium Service Mesh supports Kubernetes Ingress. To use it, set &lt;code&gt;nodePort.enabled=true&lt;/code&gt; or &lt;code&gt;kubeProxyReplacement=true&lt;/code&gt; when installing Cilium. &lt;code&gt;l7Proxy=true&lt;/code&gt; is also required, but that is the default.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# the following parameters are already applied at cilium install time
## --set ingressController.enabled=true
## --set ingressController.loadbalancerMode=shared
## --set loadBalancer.l7.backend=envoy \

# check the options
(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium config view | grep -E 'kube-proxy-replacement|l7'
enable-l7-proxy                                   true
kube-proxy-replacement                            true
kube-proxy-replacement-healthz-bind-address
loadbalancer-l7                                   envoy
loadbalancer-l7-algorithm                         round_robin
loadbalancer-l7-ports&lt;/code&gt;&lt;/pre&gt;
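For reference, these settings would typically be supplied at install time through the Helm chart; a hedged sketch using the chart values quoted in the comments above (the release name and chart repo alias are assumptions for this lab):

```shell
# Install/upgrade Cilium with the ingress controller enabled.
# kubeProxyReplacement=true satisfies the prerequisite mentioned above;
# l7Proxy defaults to true but is stated explicitly here.
helm upgrade --install cilium cilium/cilium -n kube-system \
  --set kubeProxyReplacement=true \
  --set l7Proxy=true \
  --set ingressController.enabled=true \
  --set ingressController.loadbalancerMode=shared \
  --set loadBalancer.l7.backend=envoy
```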
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Normally an Ingress requires an Ingress controller implementation. Cilium's Ingress (or Gateway API) controller is exposed via a LoadBalancer service, a NodePort service, or the host network; when traffic arrives at the service port, an eBPF program intercepts it and hands it to Envoy using the kernel's TPROXY feature. Envoy then forwards the request with the real client IP in x-forwarded-for.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;blockquote data-ke-style=&quot;style3&quot;&gt;&lt;b&gt;[Note]&lt;/b&gt; &lt;br /&gt;TPROXY (transparent proxy): with an ordinary proxy, the client must be explicitly configured to use the proxy server. A transparent proxy instead passes traffic through the proxy without the client being aware of it, which is presumably where the name comes from.&lt;br /&gt;This requires handling non-local sockets: binding a socket to an IP address that does not exist locally and processing its traffic (normally a socket can only be bound to a local IP address).&lt;br /&gt;In the context of Cilium's Ingress controller, an Ingress IP that does not exist on the node is handled via TPROXY and handed off to Envoy.&lt;br /&gt;&lt;br /&gt;For the Linux-kernel-level details of TPROXY, see the document below.&lt;br /&gt;Reference:&amp;nbsp;https://docs.kernel.org/networking/tproxy.html&lt;/blockquote&gt;
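For intuition about the kernel feature itself, this is the classic (non-eBPF) TPROXY wiring along the lines of the kernel documentation; Cilium attaches via eBPF instead, so this is background rather than what Cilium actually configures, and the port numbers are illustrative:

```shell
# Divert TCP traffic arriving for port 80 to a local transparent proxy
# listening on 3129, without rewriting the destination address.
iptables -t mangle -A PREROUTING -p tcp --dport 80 \
  -j TPROXY --tproxy-mark 0x1/0x1 --on-port 3129
# Deliver marked packets locally so the proxy's non-local bind is accepted.
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
# The proxy socket itself must set the IP_TRANSPARENT socket option to
# bind to the original (non-local) destination address.
```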
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In this case, traffic flows from &lt;code&gt;TC@Endpoint&lt;/code&gt; in the figure below to the L7 proxy (Envoy) and then on to the Pod.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1626&quot; data-origin-height=&quot;783&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/dlJ1dM/dJMb9WFjvm8/FijZ8YKUiYQDtWdzYFfUF0/tfile.svg&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/dlJ1dM/dJMb9WFjvm8/FijZ8YKUiYQDtWdzYFfUF0/tfile.svg&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/dlJ1dM/dJMb9WFjvm8/FijZ8YKUiYQDtWdzYFfUF0/tfile.svg&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FdlJ1dM%2FdJMb9WFjvm8%2FFijZ8YKUiYQDtWdzYFfUF0%2Ftfile.svg&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1626&quot; height=&quot;783&quot; data-origin-width=&quot;1626&quot; data-origin-height=&quot;783&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://docs.cilium.io/en/stable/network/ebpf/lifeofapacket/#ingress-to-endpoint&quot;&gt;https://docs.cilium.io/en/stable/network/ebpf/lifeofapacket/#ingress-to-endpoint&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's inspect the Ingress configuration hands-on.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# check the internal IPs reserved for ingress: one per node (cilium-envoy)
kubectl exec -it -n kube-system ds/cilium -- cilium ip list | grep ingress

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -- cilium ip list | grep ingress
172.20.0.4/32       reserved:ingress
172.20.1.78/32      reserved:ingress

# check cilium-envoy
kubectl get pod -n kube-system -l k8s-app=cilium-envoy -owide

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -n kube-system -l k8s-app=cilium-envoy -owide
NAME                 READY   STATUS    RESTARTS      AGE   IP               NODE      NOMINATED NODE   READINESS GATES
cilium-envoy-fbdhj   1/1     Running   0             24h   192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
cilium-envoy-pjwhb   1/1     Running   1 (20h ago)   24h   192.168.10.101   k8s-w1    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;


kubectl describe pod -n kube-system -l k8s-app=cilium-envoy
...
Containers:
  cilium-envoy:
    Container ID:  containerd://df0215f93e3193eaf81281e40cbdaa6ca1136c2ee3268fe3bcb60875f34bdbbf
    Image:         quay.io/cilium/cilium-envoy:v1.34.4-1754895458-68cffdfa568b6b226d70a7ef81fc65dda3b890bf@sha256:247e908700012f7ef56f75908f8c965215c26a27762f296068645eb55450bda2
    Image ID:      quay.io/cilium/cilium-envoy@sha256:247e908700012f7ef56f75908f8c965215c26a27762f296068645eb55450bda2
    Port:          9964/TCP
    Host Port:     9964/TCP
    Command:
      /usr/bin/cilium-envoy-starter
    Args:
      --
      -c /var/run/cilium/envoy/bootstrap-config.json
      --base-id 0
    ...
    Mounts:
      /sys/fs/bpf from bpf-maps (rw)
      /var/run/cilium/envoy/ from envoy-config (ro)
      /var/run/cilium/envoy/artifacts from envoy-artifacts (ro)
      /var/run/cilium/envoy/sockets from envoy-sockets (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6gsbl (ro)
...
Volumes:
  envoy-sockets: # mounted via hostPath so the cilium-agent can communicate with envoy over sockets
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/cilium/envoy/sockets
    HostPathType:  DirectoryOrCreate
  envoy-artifacts:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/cilium/envoy/artifacts
    HostPathType:  DirectoryOrCreate
  envoy-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      cilium-envoy-config
    Optional:  false
  bpf-maps:
    Type:          HostPath (bare host directory volume)
    Path:          /sys/fs/bpf
    HostPathType:  DirectoryOrCreate

# the actual sockets located on the hostPath
ls -al /var/run/cilium/envoy/sockets
total 0
drwxr-xr-x 3 root root 120 Aug 16 17:47 .
drwxr-xr-x 4 root root  80 Aug 16 16:16 ..
srw-rw---- 1 root 1337   0 Aug 16 17:47 access_log.sock
srwxr-xr-x 1 root root   0 Aug 16 16:16 admin.sock
drwxr-xr-x 3 root root  60 Aug 16 16:16 envoy
srw-rw---- 1 root 1337   0 Aug 16 17:47 xds.sock


# inspect the envoy configmap contents -&amp;gt; envoy configuration
kubectl -n kube-system get configmap cilium-envoy-config
kubectl -n kube-system get configmap cilium-envoy-config -o json \
  | jq -r '.data[&quot;bootstrap-config.json&quot;]' \
  | jq .
...
{
  &quot;admin&quot;: {
    &quot;address&quot;: {
      &quot;pipe&quot;: {
        &quot;path&quot;: &quot;/var/run/cilium/envoy/sockets/admin.sock&quot;
      }
    }
  },
...
    &quot;listeners&quot;: [
      {
        &quot;address&quot;: {
          &quot;socketAddress&quot;: {
            &quot;address&quot;: &quot;0.0.0.0&quot;,
            &quot;portValue&quot;: 9964
...


# envoy is exposed as a headless service
kubectl get svc,ep -n kube-system cilium-envoy

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc,ep -n kube-system cilium-envoy
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME                   TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
service/cilium-envoy   ClusterIP   None         &amp;lt;none&amp;gt;        9964/TCP   24h

NAME                     ENDPOINTS                                 AGE
endpoints/cilium-envoy   192.168.10.100:9964,192.168.10.101:9964   24h


# the ingress service exists, but its EXTERNAL-IP is currently pending
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc,ep -n kube-system cilium-ingress
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME                     TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
service/cilium-ingress   LoadBalancer   10.96.71.89   &amp;lt;pending&amp;gt;     80:31646/TCP,443:30837/TCP   24h

NAME                       ENDPOINTS              AGE
endpoints/cilium-ingress   192.192.192.192:9999   24h 
# when the LoadBalancer is called, eBPF handles it on every node, so this endpoint does not actually exist
# it is a logical IP used internally&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now let's configure LoadBalancer IPAM so that the Ingress obtains a LoadBalancer IP.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# L2 Announcement is currently enabled
cilium config view | grep l2

(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium config view | grep l2
enable-l2-announcements                           true
enable-l2-neigh-discovery                         false

# create an IPPool from a range that does not conflict with existing addresses
cat &amp;lt;&amp;lt; EOF | kubectl apply -f -
apiVersion: &quot;cilium.io/v2&quot; 
kind: CiliumLoadBalancerIPPool
metadata:
  name: &quot;cilium-lb-ippool&quot;
spec:
  blocks:
  - start: &quot;192.168.10.211&quot;
    stop:  &quot;192.168.10.215&quot;
EOF

# verify
kubectl get ippool
kubectl get ippools -o jsonpath='{.items[*].status.conditions[?(@.type!=&quot;cilium.io/PoolConflict&quot;)]}' | jq

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ippool
NAME               DISABLED   CONFLICTING   IPS AVAILABLE   AGE
cilium-lb-ippool   false      False         4               6s
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ippools -o jsonpath='{.items[*].status.conditions[?(@.type!=&quot;cilium.io/PoolConflict&quot;)]}' | jq
{
  &quot;lastTransitionTime&quot;: &quot;2025-08-21T14:37:41Z&quot;,
  &quot;message&quot;: &quot;5&quot;,
  &quot;observedGeneration&quot;: 1,
  &quot;reason&quot;: &quot;noreason&quot;,
  &quot;status&quot;: &quot;Unknown&quot;,
  &quot;type&quot;: &quot;cilium.io/IPsTotal&quot;
}
{
  &quot;lastTransitionTime&quot;: &quot;2025-08-21T14:37:41Z&quot;,
  &quot;message&quot;: &quot;4&quot;,
  &quot;observedGeneration&quot;: 1,
  &quot;reason&quot;: &quot;noreason&quot;,
  &quot;status&quot;: &quot;Unknown&quot;,
  &quot;type&quot;: &quot;cilium.io/IPsAvailable&quot;
}
{
  &quot;lastTransitionTime&quot;: &quot;2025-08-21T14:37:42Z&quot;,
  &quot;message&quot;: &quot;1&quot;,
  &quot;observedGeneration&quot;: 1,
  &quot;reason&quot;: &quot;noreason&quot;,
  &quot;status&quot;: &quot;Unknown&quot;,
  &quot;type&quot;: &quot;cilium.io/IPsUsed&quot;
}

# configure the L2 Announcement policy
cat &amp;lt;&amp;lt; EOF | kubectl apply -f -
apiVersion: &quot;cilium.io/v2alpha1&quot;
kind: CiliumL2AnnouncementPolicy
metadata:
  name: policy1
spec:
  interfaces:
  - eth1
  externalIPs: true
  loadBalancerIPs: true
EOF

# check which node currently holds the leader role
kubectl -n kube-system get lease | grep &quot;cilium-l2announce&quot;
kubectl -n kube-system get lease/cilium-l2announce-kube-system-cilium-ingress -o yaml | yq

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system get lease | grep &quot;cilium-l2announce&quot;
cilium-l2announce-kube-system-cilium-ingress   k8s-w1                                                                      10s
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system get lease/cilium-l2announce-kube-system-cilium-ingress -o yaml | yq
{
  &quot;apiVersion&quot;: &quot;coordination.k8s.io/v1&quot;,
  &quot;kind&quot;: &quot;Lease&quot;,
  &quot;metadata&quot;: {
    &quot;creationTimestamp&quot;: &quot;2025-08-21T14:38:36Z&quot;,
    &quot;name&quot;: &quot;cilium-l2announce-kube-system-cilium-ingress&quot;,
    &quot;namespace&quot;: &quot;kube-system&quot;,
    &quot;resourceVersion&quot;: &quot;29907&quot;,
    &quot;uid&quot;: &quot;7236a912-14d9-4aab-b37c-431faf4fe18b&quot;
  },
  &quot;spec&quot;: {
    &quot;acquireTime&quot;: &quot;2025-08-21T14:38:36.718001Z&quot;,
    &quot;holderIdentity&quot;: &quot;k8s-w1&quot;,
    &quot;leaseDurationSeconds&quot;: 15,
    &quot;leaseTransitions&quot;: 0,
    &quot;renewTime&quot;: &quot;2025-08-21T14:38:51.105438Z&quot;
  }
}

# the LB EXTERNAL-IP is reachable from inside the K8S cluster
(⎈|HomeLab:N/A) root@k8s-ctr:~#  kubectl get svc,ep -n kube-system cilium-ingress
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME                     TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)                      AGE
service/cilium-ingress   LoadBalancer   10.96.71.89   192.168.10.211   80:31646/TCP,443:30837/TCP   24h

NAME                       ENDPOINTS              AGE
endpoints/cilium-ingress   192.192.192.192:9999   24h

LBIP=$(kubectl get svc -n kube-system cilium-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo $LBIP
arping -i eth1 $LBIP -c 2

(⎈|HomeLab:N/A) root@k8s-ctr:~# LBIP=$(kubectl get svc -n kube-system cilium-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo $LBIP
arping -i eth1 $LBIP -c 2
192.168.10.211
ARPING 192.168.10.211
60 bytes from 08:00:27:d8:5c:fd (192.168.10.211): index=0 time=4.397 msec
60 bytes from 08:00:27:d8:5c:fd (192.168.10.211): index=1 time=3.683 msec

--- 192.168.10.211 statistics ---
2 packets transmitted, 2 packets received,   0% unanswered (0 extra)
rtt min/avg/max/std-dev = 3.683/4.040/4.397/0.357 ms

# confirm the LB EXTERNAL-IP is reachable from a node outside k8s (router)
for i in k8s-w1 router ; do echo &quot;&amp;gt;&amp;gt; node : $i &amp;lt;&amp;lt;&quot;; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@$i hostname; echo; done

sshpass -p 'vagrant' ssh vagrant@router sudo arping -i eth1 $LBIP -c 2

(⎈|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@router sudo arping -i eth1 $LBIP -c 2
ARPING 192.168.10.211
60 bytes from 08:00:27:d8:5c:fd (192.168.10.211): index=0 time=20.001 usec
60 bytes from 08:00:27:d8:5c:fd (192.168.10.211): index=1 time=16.367 usec

--- 192.168.10.211 statistics ---
2 packets transmitted, 2 packets received,   0% unanswered (0 extra)
rtt min/avg/max/std-dev = 0.016/0.018/0.020/0.002 ms&lt;/code&gt;&lt;/pre&gt;
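As a quick sanity check, the block 192.168.10.211 to 192.168.10.215 should yield five addresses, matching the cilium.io/IPsTotal condition reported earlier; a tiny sketch (the helper name is ours, and it assumes both addresses share the same /24):

```shell
# Count addresses in an inclusive range by comparing last octets
# (only valid when start and stop are in the same /24).
pool_size() {
  start=${1##*.}
  stop=${2##*.}
  echo $(( stop - start + 1 ))
}

pool_size 192.168.10.211 192.168.10.215   # prints 5
```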
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;With LoadBalancer IPAM configured, the LoadBalancer IP is confirmed to respond normally.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now let's deploy a sample application to test the Ingress directly.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Deploy the demo app: release-1.11 fails on ARM CPUs, so deploy the sample from the newer release-1.26
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.26/samples/bookinfo/platform/kube/bookinfo.yaml

# unlike istio, the pods have no sidecar container (1/1), and there are no NodePort or LoadBalancer services
kubectl get pod,svc,ep

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod,svc,ep
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME                                  READY   STATUS    RESTARTS   AGE
pod/details-v1-766844796b-fl9tx       1/1     Running   0          3m36s
pod/productpage-v1-54bb874995-sxx28   1/1     Running   0          3m34s
pod/ratings-v1-5dc79b6bcd-lvzfv       1/1     Running   0          3m36s
pod/reviews-v1-598b896c9d-kv66w       1/1     Running   0          3m35s
pod/reviews-v2-556d6457d-w98v8        1/1     Running   0          3m35s
pod/reviews-v3-564544b4d6-h2g2d       1/1     Running   0          3m35s

NAME                  TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/details       ClusterIP   10.96.177.64   &amp;lt;none&amp;gt;        9080/TCP   3m37s
service/kubernetes    ClusterIP   10.96.0.1      &amp;lt;none&amp;gt;        443/TCP    24h
service/productpage   ClusterIP   10.96.247.93   &amp;lt;none&amp;gt;        9080/TCP   3m35s
service/ratings       ClusterIP   10.96.207.66   &amp;lt;none&amp;gt;        9080/TCP   3m36s
service/reviews       ClusterIP   10.96.97.81    &amp;lt;none&amp;gt;        9080/TCP   3m36s

NAME                    ENDPOINTS                                              AGE
endpoints/details       172.20.1.182:9080                                      3m36s
endpoints/kubernetes    192.168.10.100:6443                                    24h
endpoints/productpage   172.20.1.1:9080                                        3m35s
endpoints/ratings       172.20.1.251:9080                                      3m36s
endpoints/reviews       172.20.1.218:9080,172.20.1.227:9080,172.20.1.48:9080   3m36s

# confirm the IngressClass exists
kubectl get ingressclasses.networking.k8s.io

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ingressclasses.networking.k8s.io
NAME     CONTROLLER                     PARAMETERS   AGE
cilium   cilium.io/ingress-controller   &amp;lt;none&amp;gt;       24h


# create the Ingress
cat &amp;lt;&amp;lt; EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: basic-ingress
  namespace: default
spec:
  ingressClassName: cilium # match the IngressClassName
  rules:
  - http:
      paths:
      - backend:
          service:
            name: details
            port:
              number: 9080
        path: /details
        pathType: Prefix
      - backend:
          service:
            name: productpage
            port:
              number: 9080
        path: /
        pathType: Prefix
EOF

# the Address is the EXTERNAL-IP of the cilium-ingress LoadBalancer
kubectl get svc -n kube-system cilium-ingress
kubectl get ingress
kubectl describe ingress basic-ingress

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc -n kube-system cilium-ingress
NAME             TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)                      AGE
cilium-ingress   LoadBalancer   10.96.71.89   192.168.10.211   80:31646/TCP,443:30837/TCP   24h
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ingress
NAME            CLASS    HOSTS   ADDRESS          PORTS   AGE
basic-ingress   cilium   *       192.168.10.211   80      112s

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl describe ingress basic-ingress
Name:             basic-ingress
Labels:           &amp;lt;none&amp;gt;
Namespace:        default
Address:          192.168.10.211
Ingress Class:    cilium
Default backend:  &amp;lt;default&amp;gt;
Rules:
  Host        Path  Backends
  ----        ----  --------
  *
              /details   details:9080 (172.20.1.182:9080)
              /          productpage:9080 (172.20.1.1:9080)
Annotations:  &amp;lt;none&amp;gt;
Events:       &amp;lt;none&amp;gt;

# test the calls
LBIP=$(kubectl get svc -n kube-system cilium-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo $LBIP
curl -so /dev/null -w &quot;%{http_code}\n&quot; http://$LBIP/
curl -so /dev/null -w &quot;%{http_code}\n&quot; http://$LBIP/details/1
curl -so /dev/null -w &quot;%{http_code}\n&quot; http://$LBIP/ratings

(⎈|HomeLab:N/A) root@k8s-ctr:~# curl -so /dev/null -w &quot;%{http_code}\n&quot; http://$LBIP/
200
(⎈|HomeLab:N/A) root@k8s-ctr:~# curl -so /dev/null -w &quot;%{http_code}\n&quot; http://$LBIP/details/1
200
(⎈|HomeLab:N/A) root@k8s-ctr:~# curl -so /dev/null -w &quot;%{http_code}\n&quot; http://$LBIP/ratings # this path is not configured
404

# monitoring
cilium hubble port-forward&amp;amp;
hubble observe -f -t l7
or 
hubble observe -f --identity ingress
...

(⎈|HomeLab:N/A) root@k8s-ctr:~# hubble observe -f -t l7
Aug 21 15:09:36.465: 192.168.10.200:50646 (ingress) -&amp;gt; default/productpage-v1-54bb874995-sxx28:9080 (ID:6397) http-request FORWARDED (HTTP/1.1 GET http://192.168.10.211/)
Aug 21 15:09:36.487: 192.168.10.200:50646 (ingress) &amp;lt;- default/productpage-v1-54bb874995-sxx28:9080 (ID:6397) http-response FORWARDED (HTTP/1.1 200 32ms (GET http://192.168.10.211/))
Aug 21 15:09:42.087: 192.168.10.200:47782 (ingress) -&amp;gt; default/details-v1-766844796b-fl9tx:9080 (ID:39514) http-request FORWARDED (HTTP/1.1 GET http://192.168.10.211/details/1)
Aug 21 15:09:42.102: 192.168.10.200:47782 (ingress) &amp;lt;- default/details-v1-766844796b-fl9tx:9080 (ID:39514) http-response FORWARDED (HTTP/1.1 200 17ms (GET http://192.168.10.211/details/1))
Aug 21 15:09:49.591: 192.168.10.200:56376 (ingress) -&amp;gt; default/productpage-v1-54bb874995-sxx28:9080 (ID:6397) http-request FORWARDED (HTTP/1.1 GET http://192.168.10.211/ratings)
Aug 21 15:09:49.602: 192.168.10.200:56376 (ingress) &amp;lt;- default/productpage-v1-54bb874995-sxx28:9080 (ID:6397) http-response FORWARDED (HTTP/1.1 404 18ms (GET http://192.168.10.211/ratings))


# call from the router
LBIP=192.168.10.211
curl -so /dev/null -w &quot;%{http_code}\n&quot; http://$LBIP/
curl -so /dev/null -w &quot;%{http_code}\n&quot; http://$LBIP/details/1
curl -so /dev/null -w &quot;%{http_code}\n&quot; http://$LBIP/ratings
curl -s http://$LBIP/details/1 -v

root@router:~# curl -s http://$LBIP/details/1 -v
*   Trying 192.168.10.211:80...
* Connected to 192.168.10.211 (192.168.10.211) port 80
&amp;gt; GET /details/1 HTTP/1.1
&amp;gt; Host: 192.168.10.211
&amp;gt; User-Agent: curl/8.5.0
&amp;gt; Accept: */*
&amp;gt;
&amp;lt; HTTP/1.1 200 OK
&amp;lt; content-type: application/json
&amp;lt; server: envoy  # the response appears to come from envoy
&amp;lt; date: Thu, 21 Aug 2025 15:12:30 GMT
&amp;lt; content-length: 178
&amp;lt; x-envoy-upstream-service-time: 14
&amp;lt;
* Connection #0 to host 192.168.10.211 left intact
{&quot;id&quot;:1,&quot;author&quot;:&quot;William Shakespeare&quot;,&quot;year&quot;:1595,&quot;type&quot;:&quot;paperback&quot;,&quot;pages&quot;:200,&quot;publisher&quot;:&quot;PublisherA&quot;,&quot;language&quot;:&quot;English&quot;,&quot;ISBN-10&quot;:&quot;1234567890&quot;,&quot;ISBN-13&quot;:&quot;123-1234567890&quot;}&lt;/code&gt;&lt;/pre&gt;
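The repeated curl checks above can be wrapped in a small helper that prints one status code per path; a sketch (the function name is ours, not part of any tool):

```shell
# Print the HTTP status code for each path under a base URL.
check_paths() {
  base="$1"; shift
  for p in "$@"; do
    code=$(curl -so /dev/null -w '%{http_code}' "$base$p")
    printf '%s %s\n' "$code" "$p"
  done
}

# e.g. check_paths "http://$LBIP" / /details/1 /ratings
```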
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Cilium Ingress offers two load balancer modes.&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;code&gt;dedicated&lt;/code&gt;: The Ingress controller will create a dedicated loadbalancer for the Ingress.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;shared&lt;/code&gt;: The Ingress controller will use a shared loadbalancer for all Ingress resources.&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This lab is configured in shared mode, so every Ingress created uses the ingress controller's shared service IP.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium config view |grep shared
ingress-default-lb-mode                           shared
ingress-hostnetwork-shared-listener-port          8080
ingress-shared-lb-service-name                    cilium-ingress

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc cilium-ingress -n kube-system
NAME             TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)                      AGE
cilium-ingress   LoadBalancer   10.96.71.89   192.168.10.211   80:31646/TCP,443:30837/TCP   47h

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ing
NAME            CLASS    HOSTS   ADDRESS          PORTS   AGE
basic-ingress   cilium   *       192.168.10.211   80      22h&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The default load balancer mode can be changed in Cilium's configuration, or specified per resource with an annotation.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's create an additional Ingress in shared mode, and then another one that explicitly requests dedicated mode.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# create an additional Ingress
cat &amp;lt;&amp;lt; EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: basic-ingress2
  namespace: default
spec:
  ingressClassName: cilium
  rules:
  - http:
      paths:
      - backend:
          service:
            name: webpod
            port:
              number: 80
        path: /
        pathType: Prefix
EOF

# create one in dedicated mode
cat &amp;lt;&amp;lt; EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webpod-ingress
  namespace: default
  annotations:
    ingress.cilium.io/loadbalancer-mode: dedicated
spec:
  ingressClassName: cilium
  rules:
  - http:
      paths:
      - backend:
          service:
            name: webpod
            port:
              number: 80
        path: /
        pathType: Prefix
EOF

# Verify
kubectl get svc -A |grep LoadBalancer
kubectl get ingress

# A dedicated loadbalancer service is created as the logical implementation
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc -A |grep LoadBalancer
default       cilium-ingress-webpod-ingress   LoadBalancer   10.96.188.161   192.168.10.212   80:32576/TCP,443:30413/TCP   42s
kube-system   cilium-ingress                  LoadBalancer   10.96.71.89     192.168.10.211   80:31646/TCP,443:30837/TCP   47h

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ing
NAME             CLASS    HOSTS   ADDRESS          PORTS   AGE
basic-ingress    cilium   *       192.168.10.211   80      22h # shared mode
basic-ingress2   cilium   *       192.168.10.211   80      9s # shared mode
webpod-ingress   cilium   *       192.168.10.212   80      4s # dedicated mode


kubectl get svc,ep cilium-ingress-webpod-ingress

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc,ep cilium-ingress-webpod-ingress
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME                                    TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                      AGE
service/cilium-ingress-webpod-ingress   LoadBalancer   10.96.188.161   192.168.10.212   80:32576/TCP,443:30413/TCP   117s

NAME                                      ENDPOINTS              AGE
endpoints/cilium-ingress-webpod-ingress   192.192.192.192:9999   116s


# Check the L2 announcement leader node for the LB external IP
kubectl get lease -n kube-system | grep ingress

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get lease -n kube-system | grep ingress
cilium-l2announce-default-cilium-ingress-webpod-ingress   k8s-w1                                                                      2m15s
cilium-l2announce-kube-system-cilium-ingress              k8s-ctr                                                                     22h&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The annotations supported by Cilium Ingress are listed at:&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://docs.cilium.io/en/stable/network/servicemesh/ingress/#supported-ingress-annotations&quot;&gt;https://docs.cilium.io/en/stable/network/servicemesh/ingress/#supported-ingress-annotations&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Note that the Cilium Ingress controller and the Cilium Gateway API controller cannot be used at the same time. Before moving on to the next exercise, let's clean up all the Ingress resources we created.&lt;/p&gt;
&lt;pre class=&quot;armasm&quot;&gt;&lt;code&gt;kubectl delete ingress basic-ingress basic-ingress2 webpod-ingress&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;4. Gateway API Support&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The Gateway API can be used to overcome the following limitations of Ingress.&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;b&gt;Limited protocol support&lt;/b&gt;: The Ingress resource is optimized for L7 traffic such as HTTP and HTTPS, and provides no routing for gRPC or for non-L7 protocols such as TCP and UDP.&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Advanced features that vary by implementation and lack standardization&lt;/b&gt;: Each Ingress implementation offers a different feature set, so advanced capabilities (authentication, rate-limiting policies, advanced traffic management, and so on) end up implemented through vendor-specific annotations. Over time these annotations grow complex, resist standardization, and limit portability from one implementation to another.&lt;/li&gt;
&lt;li&gt;&lt;b&gt;A structure that makes role separation difficult&lt;/b&gt;: With Ingress, developers must write even critical infrastructure settings such as networking, authentication, and certificates into the Ingress resource itself, and carry the burden of understanding what the underlying Ingress controller supports in order to configure it through annotations.&lt;/li&gt;
&lt;/ul&gt;
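&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As a sketch of the first point: the Gateway API models non-HTTP traffic with dedicated route types, for example TLSRoute for TLS passthrough (available from the Gateway API experimental channel). The manifest below is illustrative only and is not part of this lab; the gateway name, hostname, and backend service are hypothetical.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: tls-gateway            # hypothetical
spec:
  gatewayClassName: cilium
  listeners:
  - name: tls-passthrough
    protocol: TLS
    port: 443
    hostname: &quot;example.com&quot;
    tls:
      mode: Passthrough        # TLS is not terminated at the gateway
---
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TLSRoute
metadata:
  name: tls-app                # hypothetical
spec:
  parentRefs:
  - name: tls-gateway
  hostnames:
  - &quot;example.com&quot;
  rules:
  - backendRefs:
    - name: tls-backend        # hypothetical Service that terminates TLS itself
      port: 443&lt;/code&gt;&lt;/pre&gt;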
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Cilium also supports the Gateway API.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://docs.cilium.io/en/stable/network/servicemesh/gateway-api/gateway-api/&quot;&gt;https://docs.cilium.io/en/stable/network/servicemesh/gateway-api/gateway-api/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As with the Ingress support described earlier, Cilium requires either &lt;code&gt;nodePort.enabled=true&lt;/code&gt; or &lt;code&gt;kubeProxyReplacement=true&lt;/code&gt;, along with &lt;code&gt;l7Proxy=true&lt;/code&gt;.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;However, the CRDs used by the Gateway API are not installed by default, so they must be installed separately.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# Install the Gateway API CRDs
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v1.2.0/config/crd/standard/gateway.networking.k8s.io_gatewayclasses.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v1.2.0/config/crd/standard/gateway.networking.k8s.io_gateways.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v1.2.0/config/crd/standard/gateway.networking.k8s.io_httproutes.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v1.2.0/config/crd/standard/gateway.networking.k8s.io_referencegrants.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v1.2.0/config/crd/standard/gateway.networking.k8s.io_grpcroutes.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v1.2.0/config/crd/experimental/gateway.networking.k8s.io_tlsroutes.yaml

# Verify
kubectl get crd | grep gateway.networking.k8s.io

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get crd | grep gateway.networking.k8s.io
gatewayclasses.gateway.networking.k8s.io     2025-08-22T15:22:43Z
gateways.gateway.networking.k8s.io           2025-08-22T15:22:45Z
grpcroutes.gateway.networking.k8s.io         2025-08-22T15:22:50Z
httproutes.gateway.networking.k8s.io         2025-08-22T15:22:47Z
referencegrants.gateway.networking.k8s.io    2025-08-22T15:22:49Z
tlsroutes.gateway.networking.k8s.io          2025-08-22T15:22:52Z&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To continue the lab, let's disable the Ingress controller used earlier and enable the Gateway API instead.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# Enable the Gateway API
helm upgrade cilium cilium/cilium --version 1.18.1 --namespace kube-system --reuse-values \
--set ingressController.enabled=false --set gatewayAPI.enabled=true

# Restart cilium-operator and cilium-agent so the Gateway API change takes effect
kubectl -n kube-system rollout restart deployment/cilium-operator
kubectl -n kube-system rollout restart ds/cilium

# Check the configuration
cilium config view | grep gateway-api

(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium config view | grep gateway-api
enable-gateway-api                                true
enable-gateway-api-alpn                           false
enable-gateway-api-app-protocol                   false
enable-gateway-api-proxy-protocol                 false
enable-gateway-api-secrets-sync                   true
gateway-api-hostnetwork-enabled                   false
gateway-api-hostnetwork-nodelabelselector
gateway-api-secrets-namespace                     cilium-secrets
gateway-api-service-externaltrafficpolicy         Cluster
gateway-api-xff-num-trusted-hops                  0

# Confirm removal -&amp;gt; the cilium-ingress service is gone
kubectl get svc,pod -n kube-system

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc,pod -n kube-system
NAME                     TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE
service/cilium-envoy     ClusterIP   None           &amp;lt;none&amp;gt;        9964/TCP                 2d1h
service/hubble-metrics   ClusterIP   None           &amp;lt;none&amp;gt;        9965/TCP                 2d1h
service/hubble-peer      ClusterIP   10.96.5.178    &amp;lt;none&amp;gt;        443/TCP                  2d1h
service/hubble-relay     ClusterIP   10.96.162.54   &amp;lt;none&amp;gt;        80/TCP                   2d1h
service/hubble-ui        NodePort    10.96.248.26   &amp;lt;none&amp;gt;        80:30003/TCP             2d1h
service/kube-dns         ClusterIP   10.96.0.10     &amp;lt;none&amp;gt;        53/UDP,53/TCP,9153/TCP   2d1h
service/metrics-server   ClusterIP   10.96.148.51   &amp;lt;none&amp;gt;        443/TCP                  2d1h

# Verify
kubectl get GatewayClass
kubectl get gateway -A

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get GatewayClass
NAME     CONTROLLER                     ACCEPTED   AGE
cilium   io.cilium/gateway-controller   True       2m11s
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get gateway -A
No resources found
(⎈|HomeLab:N/A) root@k8s-ctr:~#&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In Cilium, the Cilium Agent and the Cilium Operator are responsible for processing the Gateway API.&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;The Cilium Operator watches Gateway API resources and validates them. If a resource is valid, the Operator marks it as Accepted and begins translating it into Cilium Envoy configuration.&lt;/li&gt;
&lt;li&gt;The Cilium Agent fetches the Cilium Envoy configuration and delivers it to Envoy (or the Envoy DaemonSet), and Envoy handles the traffic.&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's continue the Gateway API exercise as follows.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Deploy
cat &amp;lt;&amp;lt; EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: my-gateway
spec:
  gatewayClassName: cilium
  listeners:
  - protocol: HTTP
    port: 80
    name: web-gw
    allowedRoutes:
      namespaces:
        from: Same
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-app-1
spec:
  parentRefs:
  - name: my-gateway
    namespace: default
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /details
    backendRefs:
    - name: details
      port: 9080
  - matches:
    - headers: # routing based on HTTP headers
      - type: Exact
        name: magic
        value: foo
      queryParams:
      - type: Exact
        name: great
        value: example
      path:
        type: PathPrefix
        value: /
      method: GET
    backendRefs:
    - name: productpage
      port: 9080
EOF

# A LoadBalancer service named cilium-gateway-my-gateway is created
kubectl get svc,ep cilium-gateway-my-gateway

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc,ep cilium-gateway-my-gateway
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME                                TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)        AGE
service/cilium-gateway-my-gateway   LoadBalancer   10.96.89.140   192.168.10.211   80:31194/TCP   83s

NAME                                  ENDPOINTS              AGE
endpoints/cilium-gateway-my-gateway   192.192.192.192:9999   83s

# The Gateway's address matches the service's external IP
kubectl get gateway

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get gateway
NAME         CLASS    ADDRESS          PROGRAMMED   AGE
my-gateway   cilium   192.168.10.211   True         4s

## Accepted: the Gateway configuration was accepted.
## Programmed: the Gateway configuration was programmed into Envoy.
## ResolvedRefs: all referenced secrets were found and have permission for use.
kubectl describe gateway

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl describe gateway
Name:         my-gateway
Namespace:    default
Labels:       &amp;lt;none&amp;gt;
Annotations:  &amp;lt;none&amp;gt;
API Version:  gateway.networking.k8s.io/v1
Kind:         Gateway
Metadata:
  Creation Timestamp:  2025-08-22T15:44:08Z
  Generation:          1
  Resource Version:    83428
  UID:                 0bcfc472-621c-4daf-8656-945c42cdd7c4
Spec:
  Gateway Class Name:  cilium
  Listeners:
    Allowed Routes:
      Namespaces:
        From:  Same
    Name:      web-gw
    Port:      80
    Protocol:  HTTP
Status:
  Addresses:
    Type:   IPAddress
    Value:  192.168.10.211
  Conditions:
    Last Transition Time:  2025-08-22T15:43:59Z
    Message:               Gateway successfully scheduled
    Observed Generation:   1
    Reason:                Accepted
    Status:                True
    Type:                  Accepted
    Last Transition Time:  2025-08-22T15:43:59Z
    Message:               Gateway successfully reconciled
    Observed Generation:   1
    Reason:                Programmed
    Status:                True
    Type:                  Programmed
  Listeners:
    Attached Routes:  1
    Conditions:
      Last Transition Time:  2025-08-22T15:44:00Z
      Message:               Listener Programmed
      Observed Generation:   1
      Reason:                Programmed
      Status:                True
      Type:                  Programmed
      Last Transition Time:  2025-08-22T15:44:00Z
      Message:               Listener Accepted
      Observed Generation:   1
      Reason:                Accepted
      Status:                True
      Type:                  Accepted
      Last Transition Time:  2025-08-22T15:44:00Z
      Message:               Resolved Refs
      Reason:                ResolvedRefs
      Status:                True
      Type:                  ResolvedRefs
    Name:                    web-gw
    Supported Kinds:
      Group:  gateway.networking.k8s.io
      Kind:   HTTPRoute
Events:       &amp;lt;none&amp;gt;


kubectl get httproutes -A

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get httproutes -A
NAMESPACE   NAME         HOSTNAMES   AGE
default     http-app-1               2m2s

# Accepted: The HTTPRoute configuration was correct and accepted.
# ResolvedRefs: The referenced services were found and are valid references.
kubectl describe httproutes


(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl describe httproutes
Name:         http-app-1
Namespace:    default
Labels:       &amp;lt;none&amp;gt;
Annotations:  &amp;lt;none&amp;gt;
API Version:  gateway.networking.k8s.io/v1
Kind:         HTTPRoute
Metadata:
  Creation Timestamp:  2025-08-22T15:44:08Z
  Generation:          1
  Resource Version:    83414
  UID:                 36c5adde-26ed-4822-9208-05c3fcaf7e4a
Spec:
  Parent Refs:
    Group:      gateway.networking.k8s.io
    Kind:       Gateway
    Name:       my-gateway
    Namespace:  default
  Rules:
    Backend Refs:
      Group:
      Kind:    Service
      Name:    details
      Port:    9080
      Weight:  1
    Matches:
      Path:
        Type:   PathPrefix
        Value:  /details
    Backend Refs:
      Group:
      Kind:    Service
      Name:    productpage
      Port:    9080
      Weight:  1
    Matches:
      Headers:
        Name:   magic
        Type:   Exact
        Value:  foo
      Method:   GET
      Path:
        Type:   PathPrefix
        Value:  /
      Query Params:
        Name:   great
        Type:   Exact
        Value:  example
Status:
  Parents:
    Conditions:
      Last Transition Time:  2025-08-22T15:43:59Z
      Message:               Accepted HTTPRoute
      Observed Generation:   1
      Reason:                Accepted
      Status:                True
      Type:                  Accepted
      Last Transition Time:  2025-08-22T15:43:59Z
      Message:               Service reference is valid
      Observed Generation:   1
      Reason:                ResolvedRefs
      Status:                True
      Type:                  ResolvedRefs
    Controller Name:         io.cilium/gateway-controller
    Parent Ref:
      Group:      gateway.networking.k8s.io
      Kind:       Gateway
      Name:       my-gateway
      Namespace:  default
Events:           &amp;lt;none&amp;gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's make some test calls through the Gateway service we created.&lt;/p&gt;
&lt;pre class=&quot;django&quot;&gt;&lt;code&gt;# Call the Gateway
GATEWAY=$(kubectl get gateway my-gateway -o jsonpath='{.status.addresses[0].value}')
echo $GATEWAY

# Path-based HTTP routing
# Let's now check that traffic based on the URL path is proxied by the Gateway API.
# Check that you can make HTTP requests to that external address:
# Because the path starts with /details, this traffic will match the first rule and will be proxied to the details Service over port 9080.
curl --fail -s http://&quot;$GATEWAY&quot;/details/1 | jq
sshpass -p 'vagrant' ssh vagrant@router &quot;curl -s --fail -v http://&quot;$GATEWAY&quot;/details/1&quot;


(⎈|HomeLab:N/A) root@k8s-ctr:~# curl --fail -s http://&quot;$GATEWAY&quot;/details/1 | jq
{
  &quot;id&quot;: 1,
  &quot;author&quot;: &quot;William Shakespeare&quot;,
  &quot;year&quot;: 1595,
  &quot;type&quot;: &quot;paperback&quot;,
  &quot;pages&quot;: 200,
  &quot;publisher&quot;: &quot;PublisherA&quot;,
  &quot;language&quot;: &quot;English&quot;,
  &quot;ISBN-10&quot;: &quot;1234567890&quot;,
  &quot;ISBN-13&quot;: &quot;123-1234567890&quot;
}
(⎈|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@router &quot;curl -s --fail -v http://&quot;$GATEWAY&quot;/details/1&quot;
*   Trying 192.168.10.211:80...
* Connected to 192.168.10.211 (192.168.10.211) port 80
&amp;gt; GET /details/1 HTTP/1.1
&amp;gt; Host: 192.168.10.211
&amp;gt; User-Agent: curl/8.5.0
&amp;gt; Accept: */*
&amp;gt;
&amp;lt; {&quot;id&quot;:1,&quot;author&quot;:&quot;William Shakespeare&quot;,&quot;year&quot;:1595,&quot;type&quot;:&quot;paperback&quot;,&quot;pages&quot;:200,&quot;publisher&quot;:&quot;PublisherA&quot;,&quot;language&quot;:&quot;English&quot;,&quot;ISBN-10&quot;:&quot;1234567890&quot;,&quot;ISBN-13&quot;:&quot;123-1234567890&quot;}HTTP/1.1 200 OK
&amp;lt; content-type: application/json
&amp;lt; server: envoy
&amp;lt; date: Fri, 22 Aug 2025 15:49:20 GMT
&amp;lt; content-length: 178
&amp;lt; x-envoy-upstream-service-time: 14
&amp;lt;
{ [178 bytes data]
* Connection #0 to host 192.168.10.211 left intact


# Header-based HTTP routing
# This time, we will route traffic based on HTTP parameters like header values, method and query parameters. Run the following command:
curl -v -H 'magic: foo' http://&quot;$GATEWAY&quot;\?great\=example
sshpass -p 'vagrant' ssh vagrant@router &quot;curl -s -v -H 'magic: foo' http://&quot;$GATEWAY&quot;\?great\=example&quot;


(⎈|HomeLab:N/A) root@k8s-ctr:~# curl -v -H 'magic: foo' http://&quot;$GATEWAY&quot;\?great\=example
*   Trying 192.168.10.211:80...
* Connected to 192.168.10.211 (192.168.10.211) port 80
&amp;gt; GET /?great=example HTTP/1.1
&amp;gt; Host: 192.168.10.211
&amp;gt; User-Agent: curl/8.5.0
&amp;gt; Accept: */*
&amp;gt; magic: foo
&amp;gt;
&amp;lt; HTTP/1.1 200 OK
&amp;lt; server: envoy
&amp;lt; date: Fri, 22 Aug 2025 15:50:01 GMT
&amp;lt; content-type: text/html; charset=utf-8
&amp;lt; content-length: 2080
&amp;lt; x-envoy-upstream-service-time: 28
&amp;lt;

&amp;lt;meta charset=&quot;utf-8&quot;&amp;gt;
&amp;lt;meta http-equiv=&quot;X-UA-Compatible&quot; content=&quot;IE=edge&quot;&amp;gt;
&amp;lt;meta name=&quot;viewport&quot; content=&quot;width=device-width, initial-scale=1&quot;&amp;gt;
...
&amp;lt;script src=&quot;static/tailwind/tailwind.css&quot;&amp;gt;&amp;lt;/script&amp;gt;
...
&amp;lt;div class=&quot;mx-auto px-4 sm:px-6 lg:px-8&quot;&amp;gt;
    &amp;lt;div class=&quot;flex flex-col space-y-5 py-32 mx-auto max-w-7xl&quot;&amp;gt;
        &amp;lt;h3 class=&quot;text-2xl&quot;&amp;gt;Hello! This is a simple bookstore application consisting of three services as shown below
        &amp;lt;/h3&amp;gt;

        &amp;lt;table class=&quot;table table-condensed table-bordered table-hover&quot;&amp;gt;&amp;lt;tr&amp;gt;&amp;lt;th&amp;gt;name&amp;lt;/th&amp;gt;&amp;lt;td&amp;gt;http://details:9080&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&amp;lt;tr&amp;gt;&amp;lt;th&amp;gt;endpoint&amp;lt;/th&amp;gt;&amp;lt;td&amp;gt;details&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&amp;lt;tr&amp;gt;&amp;lt;th&amp;gt;children&amp;lt;/th&amp;gt;&amp;lt;td&amp;gt;&amp;lt;table class=&quot;table table-condensed table-bordered table-hover&quot;&amp;gt;&amp;lt;thead&amp;gt;&amp;lt;tr&amp;gt;&amp;lt;th&amp;gt;name&amp;lt;/th&amp;gt;&amp;lt;th&amp;gt;endpoint&amp;lt;/th&amp;gt;&amp;lt;th&amp;gt;children&amp;lt;/th&amp;gt;&amp;lt;/tr&amp;gt;&amp;lt;/thead&amp;gt;&amp;lt;tbody&amp;gt;&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;http://details:9080&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;details&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;http://reviews:9080&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;reviews&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;&amp;lt;table class=&quot;table table-condensed table-bordered table-hover&quot;&amp;gt;&amp;lt;thead&amp;gt;&amp;lt;tr&amp;gt;&amp;lt;th&amp;gt;name&amp;lt;/th&amp;gt;&amp;lt;th&amp;gt;endpoint&amp;lt;/th&amp;gt;&amp;lt;th&amp;gt;children&amp;lt;/th&amp;gt;&amp;lt;/tr&amp;gt;&amp;lt;/thead&amp;gt;&amp;lt;tbody&amp;gt;&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;http://ratings:9080&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;ratings&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&amp;lt;/tbody&amp;gt;&amp;lt;/table&amp;gt;&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&amp;lt;/tbody&amp;gt;&amp;lt;/table&amp;gt;&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&amp;lt;/table&amp;gt;

        &amp;lt;p&amp;gt;
            Click on one of the links below to auto generate a request to the backend as a real user or a tester
        &amp;lt;/p&amp;gt;
        &amp;lt;ul&amp;gt;
            &amp;lt;li&amp;gt;
                &amp;lt;a href=&quot;/productpage?u=normal&quot; class=&quot;text-blue-500 hover:text-blue-600&quot;&amp;gt;Normal user&amp;lt;/a&amp;gt;
            &amp;lt;/li&amp;gt;
            &amp;lt;li&amp;gt;
                &amp;lt;a href=&quot;/productpage?u=test&quot; class=&quot;text-blue-500 hover:text-blue-600&quot;&amp;gt;Test user&amp;lt;/a&amp;gt;
            &amp;lt;/li&amp;gt;
        &amp;lt;/ul&amp;gt;
    &amp;lt;/div&amp;gt;
&amp;lt;/div&amp;gt;
* Connection #0 to host 192.168.10.211 left intact
(⎈|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@router &quot;curl -s -v -H 'magic: foo' http://&quot;$GATEWAY&quot;\?great\=example&quot;
*   Trying 192.168.10.211:80...
* Connected to 192.168.10.211 (192.168.10.211) port 80
&amp;gt; GET /?great=example HTTP/1.1
&amp;gt; Host: 192.168.10.211
&amp;gt; User-Agent: curl/8.5.0
&amp;gt; Accept: */*
&amp;gt; magic: foo
&amp;gt;
&amp;lt;
&amp;lt;meta charset=&quot;utf-8&quot;&amp;gt;
&amp;lt;meta http-equiv=&quot;X-UA-Compatible&quot; content=&quot;IE=edge&quot;&amp;gt;
&amp;lt;meta name=&quot;viewport&quot; content=&quot;width=device-width, initial-scale=1&quot;&amp;gt;
...
&amp;lt;script src=&quot;static/tailwind/tailwind.css&quot;&amp;gt;&amp;lt;/script&amp;gt;
...
&amp;lt;div class=&quot;mx-auto px-4 sm:px-6 lg:px-8&quot;&amp;gt;
    &amp;lt;div class=&quot;flex flex-col space-y-5 py-32 mx-auto max-w-7xl&quot;&amp;gt;
        &amp;lt;h3 class=&quot;text-2xl&quot;&amp;gt;Hello! This is a simple bookstore application consisting of three services as shown below
        &amp;lt;/h3&amp;gt;

        &amp;lt;table class=&quot;table table-condensed table-bordered table-hover&quot;&amp;gt;&amp;lt;tr&amp;gt;&amp;lt;th&amp;gt;name&amp;lt;/th&amp;gt;&amp;lt;td&amp;gt;http://details:9080&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&amp;lt;tr&amp;gt;&amp;lt;th&amp;gt;endpoint&amp;lt;/th&amp;gt;&amp;lt;td&amp;gt;details&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&amp;lt;tr&amp;gt;&amp;lt;th&amp;gt;children&amp;lt;/th&amp;gt;&amp;lt;td&amp;gt;&amp;lt;table class=&quot;table table-condensed table-bordered table-hover&quot;&amp;gt;&amp;lt;thead&amp;gt;&amp;lt;tr&amp;gt;&amp;lt;th&amp;gt;name&amp;lt;/th&amp;gt;&amp;lt;th&amp;gt;endpoint&amp;lt;/th&amp;gt;&amp;lt;th&amp;gt;children&amp;lt;/th&amp;gt;&amp;lt;/tr&amp;gt;&amp;lt;/thead&amp;gt;&amp;lt;tbody&amp;gt;&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;http://details:9080&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;details&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;http://reviews:9080&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;reviews&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;&amp;lt;table class=&quot;table table-condensed table-bordered table-hover&quot;&amp;gt;&amp;lt;thead&amp;gt;&amp;lt;tr&amp;gt;&amp;lt;th&amp;gt;name&amp;lt;/th&amp;gt;&amp;lt;th&amp;gt;endpoint&amp;lt;/th&amp;gt;&amp;lt;th&amp;gt;children&amp;lt;/th&amp;gt;&amp;lt;/tr&amp;gt;&amp;lt;/thead&amp;gt;&amp;lt;tbody&amp;gt;&amp;lt;tr&amp;gt;&amp;lt;td&amp;gt;http://ratings:9080&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;ratings&amp;lt;/td&amp;gt;&amp;lt;td&amp;gt;&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&amp;lt;/tbody&amp;gt;&amp;lt;/table&amp;gt;&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&amp;lt;/tbody&amp;gt;&amp;lt;/table&amp;gt;&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&amp;lt;/table&amp;gt;

        &amp;lt;p&amp;gt;
            Click on one of the links below to auto generate a request to the backend as a real user or a tester
        &amp;lt;/p&amp;gt;
        &amp;lt;ul&amp;gt;
            &amp;lt;li&amp;gt;
                &amp;lt;a href=&quot;/productpage?u=normal&quot; class=&quot;text-blue-500 hover:text-blue-600&quot;&amp;gt;Normal user&amp;lt;/a&amp;gt;
            &amp;lt;/li&amp;gt;
            &amp;lt;li&amp;gt;
                &amp;lt;a href=&quot;/productpage?u=test&quot; class=&quot;text-blue-500 hover:text-blue-600&quot;&amp;gt;Test user&amp;lt;/a&amp;gt;
            &amp;lt;/li&amp;gt;
        &amp;lt;/ul&amp;gt;
    &amp;lt;/div&amp;gt;
&amp;lt;/div&amp;gt;
HTTP/1.1 200 OK
&amp;lt; server: envoy
&amp;lt; date: Fri, 22 Aug 2025 15:50:07 GMT
&amp;lt; content-type: text/html; charset=utf-8
&amp;lt; content-length: 2080
&amp;lt; x-envoy-upstream-service-time: 29
&amp;lt;
{ [2080 bytes data]
* Connection #0 to host 192.168.10.211 left intact&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;5. L7 Aware Traffic Management&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The Gateway API covers part of the traffic-management picture, but Cilium also provides some L7 traffic management of its own. Note, however, that Cilium's L7-aware traffic management relies on separate CRDs, so it is not a Kubernetes standard.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The documentation also notes that CiliumEnvoyConfig performs only minimal validation and does nothing to resolve conflicts with previously defined configuration, so it should be used with care.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://docs.cilium.io/en/stable/network/servicemesh/l7-traffic-management/&quot;&gt;https://docs.cilium.io/en/stable/network/servicemesh/l7-traffic-management/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Cilium provides L7 traffic control through &lt;code&gt;CiliumEnvoyConfig&lt;/code&gt; and &lt;code&gt;CiliumClusterwideEnvoyConfig&lt;/code&gt;. &lt;code&gt;CiliumEnvoyConfig&lt;/code&gt; is scoped to a namespace, while &lt;code&gt;CiliumClusterwideEnvoyConfig&lt;/code&gt; applies cluster-wide.&lt;/p&gt;
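&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To make the shape of a &lt;code&gt;CiliumEnvoyConfig&lt;/code&gt; concrete, here is a minimal, abridged skeleton (the metadata and resource names are illustrative): &lt;code&gt;spec.services&lt;/code&gt; lists the Kubernetes services whose traffic should be redirected to Envoy, and &lt;code&gt;spec.resources&lt;/code&gt; carries raw Envoy xDS resources such as listeners, route configurations, and clusters. A &lt;code&gt;CiliumClusterwideEnvoyConfig&lt;/code&gt; has the same spec but no namespace.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;apiVersion: cilium.io/v2
kind: CiliumEnvoyConfig
metadata:
  name: example-lb             # hypothetical
  namespace: default
spec:
  services:                    # K8s services whose traffic is intercepted by Envoy
  - name: helloworld
    namespace: default
  resources:                   # raw Envoy xDS resources
  - &quot;@type&quot;: type.googleapis.com/envoy.config.listener.v3.Listener
    name: example-lb-listener
    # filter_chains with an http_connection_manager go here
  - &quot;@type&quot;: type.googleapis.com/envoy.config.route.v3.RouteConfiguration
    name: example-lb-route
    # virtual_hosts with routes (e.g. weighted_clusters) go here&lt;/code&gt;&lt;/pre&gt;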
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Apply the following configuration.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Configure
helm upgrade cilium cilium/cilium --version 1.18.1 --namespace kube-system --reuse-values \
--set ingressController.enabled=true --set gatewayAPI.enabled=false \
--set envoyConfig.enabled=true  --set loadBalancer.l7.backend=envoy

kubectl -n kube-system rollout restart deployment/cilium-operator
kubectl -n kube-system rollout restart ds/cilium
kubectl -n kube-system rollout restart ds/cilium-envoy

# Verify
cilium config view |grep -i envoy
cilium status --wait

(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium config view |grep -i envoy
enable-envoy-config                               true
envoy-access-log-buffer-size                      4096
envoy-base-id                                     0
envoy-config-retry-interval                       15s
envoy-keep-cap-netbindservice                     false
envoy-secrets-namespace                           cilium-secrets
external-envoy-proxy                              true
loadbalancer-l7                                   envoy
(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium status --wait
    /&amp;macr;&amp;macr;\
 /&amp;macr;&amp;macr;\__/&amp;macr;&amp;macr;\    Cilium:             OK
 \__/&amp;macr;&amp;macr;\__/    Operator:           OK
 /&amp;macr;&amp;macr;\__/&amp;macr;&amp;macr;\    Envoy DaemonSet:    OK
 \__/&amp;macr;&amp;macr;\__/    Hubble Relay:       OK
    \__/       ClusterMesh:        disabled

DaemonSet              cilium                   Desired: 2, Ready: 2/2, Available: 2/2
DaemonSet              cilium-envoy             Desired: 2, Ready: 2/2, Available: 2/2
Deployment             cilium-operator          Desired: 1, Ready: 1/1, Available: 1/1
Deployment             hubble-relay             Desired: 1, Ready: 1/1, Available: 1/1
Deployment             hubble-ui                Desired: 1, Ready: 1/1, Available: 1/1
Containers:            cilium                   Running: 2
                       cilium-envoy             Running: 2
                       cilium-operator          Running: 1
                       clustermesh-apiserver
                       hubble-relay             Running: 1
                       hubble-ui                Running: 1
Cluster Pods:          12/12 managed by Cilium
Helm chart version:    1.18.1
Image versions         cilium             quay.io/cilium/cilium:v1.18.1@sha256:65ab17c052d8758b2ad157ce766285e04173722df59bdee1ea6d5fda7149f0e9: 2
                       cilium-envoy       quay.io/cilium/cilium-envoy:v1.34.4-1754895458-68cffdfa568b6b226d70a7ef81fc65dda3b890bf@sha256:247e908700012f7ef56f75908f8c965215c26a27762f296068645eb55450bda2: 2
                       cilium-operator    quay.io/cilium/operator-generic:v1.18.1@sha256:97f4553afa443465bdfbc1cc4927c93f16ac5d78e4dd2706736e7395382201bc: 1
                       hubble-relay       quay.io/cilium/hubble-relay:v1.18.1@sha256:7e2fd4877387c7e112689db7c2b153a4d5c77d125b8d50d472dbe81fc1b139b0: 1
                       hubble-ui          quay.io/cilium/hubble-ui-backend:v0.13.2@sha256:a034b7e98e6ea796ed26df8f4e71f83fc16465a19d166eff67a03b822c0bfa15: 1
                       hubble-ui          quay.io/cilium/hubble-ui:v0.13.2@sha256:9e37c1296b802830834cc87342a9182ccbb71ffebb711971e849221bd9d59392: 1&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The official Cilium documentation describes features such as L7 path translation, L7 load balancing and URL re-writing, L7 circuit breaking, and L7 traffic shifting.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://docs.cilium.io/en/stable/network/servicemesh/l7-traffic-management/#examples&quot;&gt;https://docs.cilium.io/en/stable/network/servicemesh/l7-traffic-management/#examples&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Of these, let's take a look at L7 traffic shifting and L7 circuit breaking.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;L7 Traffic Shifting&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;L7 traffic shifting uses &lt;code&gt;CiliumEnvoyConfig&lt;/code&gt; to distribute requests across backends.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://docs.cilium.io/en/stable/network/servicemesh/envoy-traffic-shifting/&quot;&gt;https://docs.cilium.io/en/stable/network/servicemesh/envoy-traffic-shifting/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In this exercise, requests to the &lt;code&gt;helloworld&lt;/code&gt; service are load-balanced so that 90% of the traffic goes to &lt;code&gt;helloworld-v1&lt;/code&gt; and 10% to &lt;code&gt;helloworld-v2&lt;/code&gt;.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Deploy the test application.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# Deploy
kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/1.18.1/examples/kubernetes/servicemesh/envoy/client-helloworld.yaml

# Verify
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get po,svc
NAME                                 READY   STATUS    RESTARTS   AGE
pod/client-85b7f79db-kqkh9           1/1     Running   0          28s
pod/helloworld-v1-9c5dfd585-wp269    1/1     Running   0          28s
pod/helloworld-v2-6f85d9d76f-fsbsn   1/1     Running   0          28s

NAME                                TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)        AGE
service/cilium-gateway-my-gateway   LoadBalancer   10.96.89.140   192.168.10.211   80:31194/TCP   15h
service/helloworld                  ClusterIP      10.96.82.105   &amp;lt;none&amp;gt;           5000/TCP       28s
service/kubernetes                  ClusterIP      10.96.0.1      &amp;lt;none&amp;gt;           443/TCP        2d16h

# Same app label, different version label
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get po --show-labels
NAME                             READY   STATUS    RESTARTS   AGE    LABELS
client-85b7f79db-kqkh9           1/1     Running   0          3m8s   kind=client,name=client,pod-template-hash=85b7f79db
helloworld-v1-9c5dfd585-wp269    1/1     Running   0          3m8s   app=helloworld,pod-template-hash=9c5dfd585,version=v1
helloworld-v2-6f85d9d76f-fsbsn   1/1     Running   0          3m8s   app=helloworld,pod-template-hash=6f85d9d76f,version=v2&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Verify basic load balancing as shown below. Requests appear to be distributed roughly evenly across the two pods.&lt;/p&gt;
&lt;pre class=&quot;smali&quot;&gt;&lt;code&gt;# Get the client pod name
export CLIENT=$(kubectl get pods -l name=client -o jsonpath='{.items[0].metadata.name}')

# Test
for i in {1..10}; do  kubectl exec -it $CLIENT -- curl  helloworld:5000/hello; done

(⎈|HomeLab:N/A) root@k8s-ctr:~# for i in {1..10}; do  kubectl exec -it $CLIENT -- curl  helloworld:5000/hello; done
Hello version: v1, instance: helloworld-v1-9c5dfd585-wp269
Hello version: v1, instance: helloworld-v1-9c5dfd585-wp269
Hello version: v1, instance: helloworld-v1-9c5dfd585-wp269
Hello version: v2, instance: helloworld-v2-6f85d9d76f-fsbsn
Hello version: v2, instance: helloworld-v2-6f85d9d76f-fsbsn
Hello version: v2, instance: helloworld-v2-6f85d9d76f-fsbsn
Hello version: v1, instance: helloworld-v1-9c5dfd585-wp269
Hello version: v1, instance: helloworld-v1-9c5dfd585-wp269
Hello version: v1, instance: helloworld-v1-9c5dfd585-wp269
Hello version: v2, instance: helloworld-v2-6f85d9d76f-fsbsn&lt;/code&gt;&lt;/pre&gt;
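As a side note, the split seen in a loop like the one above can be tallied with a short sketch. This is illustration only; the response lines below are hypothetical samples standing in for the captured curl output:

```python
from collections import Counter

# Hypothetical sample: response lines captured from the curl loop above.
lines = [
    "Hello version: v1, instance: helloworld-v1-9c5dfd585-wp269",
    "Hello version: v2, instance: helloworld-v2-6f85d9d76f-fsbsn",
    "Hello version: v1, instance: helloworld-v1-9c5dfd585-wp269",
]

# Pull the "v1"/"v2" token out of each line and count occurrences.
counts = Counter(line.split("version: ")[1].split(",")[0] for line in lines)
print(counts)  # Counter({'v1': 2, 'v2': 1})
```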
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;code&gt;CiliumEnvoyConfig&lt;/code&gt; load-balances traffic to a Service Group. Create a separate Service for each version's workload to be included in the Service Group.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/1.18.1/examples/kubernetes/servicemesh/envoy/helloworld-service-v1-v2.yaml

cat helloworld-service-v1-v2.yaml
apiVersion: v1
kind: Service
metadata:
  name: helloworld-v1
  labels:
    app: helloworld
    service: helloworld
    version: v1
spec:
  ports:
    - port: 5000
      name: http
  selector:
    app: helloworld
    version: v1
---
apiVersion: v1
kind: Service
metadata:
  name: helloworld-v2
  labels:
    app: helloworld
    service: helloworld
    version: v2
spec:
  ports:
    - port: 5000
      name: http
  selector:
    app: helloworld
    version: v2&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now create the &lt;code&gt;CiliumEnvoyConfig&lt;/code&gt; as shown below.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Looking at the CR, the newly created Services are specified as the &lt;code&gt;backendServices&lt;/code&gt; of the &lt;code&gt;Service Group&lt;/code&gt; &lt;code&gt;helloworld&lt;/code&gt;. Each backendService is also declared as a &lt;code&gt;Cluster&lt;/code&gt;, and the &lt;code&gt;virtual_host&lt;/code&gt; of the &lt;code&gt;Route Configuration&lt;/code&gt; then assigns a &lt;code&gt;weight&lt;/code&gt; to each via &lt;code&gt;weighted_clusters&lt;/code&gt;.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/1.18.1/examples/kubernetes/servicemesh/envoy/envoy-helloworld-v1-90-v2-10.yaml

cat envoy-helloworld-v1-90-v2-10.yaml
apiVersion: cilium.io/v2
kind: CiliumEnvoyConfig
metadata:
  name: envoy-lb-listener
spec:
  services:
    - name: helloworld
      namespace: default
  backendServices:
    - name: helloworld-v1
      namespace: default
    - name: helloworld-v2
      namespace: default
  resources:
    - &quot;@type&quot;: type.googleapis.com/envoy.config.listener.v3.Listener
      name: envoy-lb-listener
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                &quot;@type&quot;: type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                stat_prefix: envoy-lb-listener
                rds:
                  route_config_name: lb_route
                http_filters:
                  - name: envoy.filters.http.router
                    typed_config:
                      &quot;@type&quot;: type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
    - &quot;@type&quot;: type.googleapis.com/envoy.config.route.v3.RouteConfiguration
      name: lb_route
      virtual_hosts:
        - name: &quot;lb_route&quot;
          domains: [ &quot;*&quot; ]
          routes:
            - match:
                prefix: &quot;/&quot;
              route:
                weighted_clusters:
                  clusters:
                    - name: &quot;default/helloworld-v1&quot;
                      weight: 90
                    - name: &quot;default/helloworld-v2&quot;
                      weight: 10
                retry_policy:
                  retry_on: 5xx
                  num_retries: 3
                  per_try_timeout: 1s
    - &quot;@type&quot;: type.googleapis.com/envoy.config.cluster.v3.Cluster
      name: &quot;default/helloworld-v1&quot;
      connect_timeout: 5s
      lb_policy: ROUND_ROBIN
      type: EDS
      outlier_detection:
        split_external_local_origin_errors: true
        consecutive_local_origin_failure: 2
    - &quot;@type&quot;: type.googleapis.com/envoy.config.cluster.v3.Cluster
      name: &quot;default/helloworld-v2&quot;
      connect_timeout: 3s
      lb_policy: ROUND_ROBIN
      type: EDS
      outlier_detection:
        split_external_local_origin_errors: true
        consecutive_local_origin_failure: 2&lt;/code&gt;&lt;/pre&gt;
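The effect of the weighted_clusters above can be sketched as a proportional random pick. This is a minimal illustration of the 90/10 semantics, not Envoy's actual load-balancer implementation:

```python
import random

# Weights taken from the CiliumEnvoyConfig above (they sum to 100).
clusters = {"default/helloworld-v1": 90, "default/helloworld-v2": 10}

def pick_cluster(rng):
    # Draw one cluster with probability proportional to its weight.
    names = list(clusters)
    weights = [clusters[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(0)  # fixed seed for reproducibility
hits = {name: 0 for name in clusters}
for _ in range(10_000):
    hits[pick_cluster(rng)] += 1

print(hits)  # roughly 9,000 for v1 vs 1,000 for v2
```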
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The resources have been created as shown below.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pods --show-labels -o wide
NAME                             READY   STATUS    RESTARTS   AGE   IP             NODE     NOMINATED NODE   READINESS GATES   LABELS
client-85b7f79db-kqkh9           1/1     Running   0          16m   172.20.1.143   k8s-w1   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;            kind=client,name=client,pod-template-hash=85b7f79db
helloworld-v1-9c5dfd585-wp269    1/1     Running   0          16m   172.20.1.31    k8s-w1   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;            app=helloworld,pod-template-hash=9c5dfd585,version=v1
helloworld-v2-6f85d9d76f-fsbsn   1/1     Running   0          16m   172.20.1.154   k8s-w1   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;            app=helloworld,pod-template-hash=6f85d9d76f,version=v2
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc --show-labels
NAME                        TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)        AGE     LABELS
cilium-gateway-my-gateway   LoadBalancer   10.96.89.140    192.168.10.211   80:31194/TCP   15h     gateway.networking.k8s.io/gateway-name=my-gateway,io.cilium.gateway/owning-gateway=my-gateway
helloworld                  ClusterIP      10.96.82.105    &amp;lt;none&amp;gt;           5000/TCP       16m     app=helloworld,service=helloworld
helloworld-v1               ClusterIP      10.96.15.244    &amp;lt;none&amp;gt;           5000/TCP       19s     app=helloworld,service=helloworld,version=v1
helloworld-v2               ClusterIP      10.96.238.129   &amp;lt;none&amp;gt;           5000/TCP       19s     app=helloworld,service=helloworld,version=v2
kubernetes                  ClusterIP      10.96.0.1       &amp;lt;none&amp;gt;           443/TCP        2d17h   component=apiserver,provider=kubernetes&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's test again as shown below. The client still calls helloworld, which corresponds to the &lt;code&gt;Service Group&lt;/code&gt;.&lt;/p&gt;
&lt;pre class=&quot;smali&quot;&gt;&lt;code&gt;for i in {1..10}; do  kubectl exec -it $CLIENT -- curl  helloworld:5000/hello; done

(⎈|HomeLab:N/A) root@k8s-ctr:~# for i in {1..10}; do  kubectl exec -it $CLIENT -- curl  helloworld:5000/hello; done
Hello version: v1, instance: helloworld-v1-9c5dfd585-wp269
Hello version: v1, instance: helloworld-v1-9c5dfd585-wp269
Hello version: v1, instance: helloworld-v1-9c5dfd585-wp269
Hello version: v1, instance: helloworld-v1-9c5dfd585-wp269
Hello version: v1, instance: helloworld-v1-9c5dfd585-wp269
Hello version: v2, instance: helloworld-v2-6f85d9d76f-fsbsn
Hello version: v1, instance: helloworld-v1-9c5dfd585-wp269
Hello version: v1, instance: helloworld-v1-9c5dfd585-wp269
Hello version: v1, instance: helloworld-v1-9c5dfd585-wp269
Hello version: v1, instance: helloworld-v1-9c5dfd585-wp269&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The test confirms that only about 10% of the requests are forwarded to v2.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;L7 Circuit Breaking&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The second exercise is Circuit Breaking.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;L7 Circuit Breaking is configured by defining a &lt;code&gt;CiliumClusterwideEnvoyConfig&lt;/code&gt;. Circuit breaking is an important pattern for building resilient microservice applications: when an application shows anomalies such as errors or increased latency, the circuit opens to cut the traffic off, limiting the overall impact that would otherwise propagate through the system.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://docs.cilium.io/en/stable/network/servicemesh/envoy-circuit-breaker/&quot;&gt;https://docs.cilium.io/en/stable/network/servicemesh/envoy-circuit-breaker/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Deploy the test application.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/1.18.1/examples/kubernetes/servicemesh/envoy/test-application-proxy-circuit-breaker.yaml

# Verify
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pods --show-labels -o wide
NAME                             READY   STATUS    RESTARTS   AGE    IP             NODE     NOMINATED NODE   READINESS GATES   LABELS
echo-service-67788f4d97-bf7xn    2/2     Running   0          2m5s   172.20.1.157   k8s-w1   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;            kind=echo,name=echo-service,other=echo,pod-template-hash=67788f4d97
fortio-deploy-74ffb9b4d6-cr8bp   1/1     Running   0          2m5s   172.20.1.211   k8s-w1   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;            app=fortio,pod-template-hash=74ffb9b4d6&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Deploy the &lt;code&gt;CiliumClusterwideEnvoyConfig&lt;/code&gt; as shown below.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Looking at the CR, &lt;code&gt;max_pending_requests: 1&lt;/code&gt; and &lt;code&gt;max_requests: 2&lt;/code&gt; are specified. With this configuration, once two requests are in flight and the pending queue is full, Envoy opens the circuit and returns failures for subsequent requests.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/1.18.1/examples/kubernetes/servicemesh/envoy/envoy-circuit-breaker.yaml

cat envoy-circuit-breaker.yaml
apiVersion: cilium.io/v2
kind: CiliumClusterwideEnvoyConfig
metadata:
  name: envoy-circuit-breaker
spec:
  services:
    - name: echo-service
      namespace: default
  resources:
    - &quot;@type&quot;: type.googleapis.com/envoy.config.listener.v3.Listener
      name: envoy-lb-listener
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                &quot;@type&quot;: type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                stat_prefix: envoy-lb-listener
                rds:
                  route_config_name: lb_route
                use_remote_address: true
                skip_xff_append: true
                http_filters:
                  - name: envoy.filters.http.router
                    typed_config:
                      &quot;@type&quot;: type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
    - &quot;@type&quot;: type.googleapis.com/envoy.config.route.v3.RouteConfiguration
      name: lb_route
      virtual_hosts:
        - name: &quot;lb_route&quot;
          domains: [ &quot;*&quot; ]
          routes:
            - match:
                prefix: &quot;/&quot;
              route:
                weighted_clusters:
                  clusters:
                    - name: &quot;default/echo-service&quot;
                      weight: 100
    - &quot;@type&quot;: type.googleapis.com/envoy.config.cluster.v3.Cluster
      name: &quot;default/echo-service&quot;
      connect_timeout: 5s
      lb_policy: ROUND_ROBIN
      type: EDS
      edsClusterConfig:
        serviceName: default/echo-service
      circuit_breakers:
        thresholds:
        - priority: &quot;DEFAULT&quot;
          max_requests: 2
          max_pending_requests: 1
      outlier_detection:
        split_external_local_origin_errors: true
        consecutive_local_origin_failure: 2&lt;/code&gt;&lt;/pre&gt;
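The thresholds above boil down to a simple admission rule. The sketch below illustrates the semantics with hypothetical in-flight/pending counters (Envoy tracks equivalent counters per upstream cluster; this is not its actual code):

```python
# Thresholds from the circuit_breakers block above.
MAX_REQUESTS = 2          # max requests in flight to the cluster
MAX_PENDING_REQUESTS = 1  # max requests allowed to wait in the queue

def admit(in_flight, pending):
    """Decide what happens to a new request given the current counters."""
    if in_flight < MAX_REQUESTS:
        return "forward"  # capacity available: send it upstream
    if pending < MAX_PENDING_REQUESTS:
        return "queue"    # cluster is busy, but a pending slot is free
    return "503"          # circuit open: reject immediately

print(admit(1, 0))  # forward
print(admit(2, 0))  # queue
print(admit(2, 1))  # 503
```

This is why the fortio run with `-c 2` sees only the occasional 503, while higher concurrency pushes more requests past both thresholds.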
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's run the test. Fortio sends requests to echo-service.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# Export the client (Fortio) pod name as an environment variable
export FORTIO_POD=$(kubectl get pods -l app=fortio -o 'jsonpath={.items[0].metadata.name}')

# Test (-c 2: 2 concurrent connections, -n 20: 20 requests in total)
kubectl exec &quot;$FORTIO_POD&quot; -c fortio -- /usr/bin/fortio load -c 2 -qps 0 -n 20 http://echo-service:8080

# Result
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec &quot;$FORTIO_POD&quot; -c fortio -- /usr/bin/fortio load -c 2 -qps 0 -n 20 http://echo-service:8080
{&quot;ts&quot;:1755934919.902383,&quot;level&quot;:&quot;info&quot;,&quot;r&quot;:1,&quot;file&quot;:&quot;scli.go&quot;,&quot;line&quot;:122,&quot;msg&quot;:&quot;Starting&quot;,&quot;command&quot;:&quot;&amp;Phi;&amp;omicron;&amp;rho;&amp;tau;ί&amp;omicron;&quot;,&quot;version&quot;:&quot;1.69.5 h1:h+42fJ1HF61Jj+WgPmC+C2wPtM5Ct8JLHSLDyEgGID4= go1.23.9 amd64 linux&quot;,&quot;go-max-procs&quot;:4}
Fortio 1.69.5 running at 0 queries per second, 4-&amp;gt;4 procs, for 20 calls: http://echo-service:8080
{&quot;ts&quot;:1755934919.913098,&quot;level&quot;:&quot;info&quot;,&quot;r&quot;:1,&quot;file&quot;:&quot;httprunner.go&quot;,&quot;line&quot;:121,&quot;msg&quot;:&quot;Starting http test&quot;,&quot;run&quot;:0,&quot;url&quot;:&quot;http://echo-service:8080&quot;,&quot;threads&quot;:2,&quot;qps&quot;:&quot;-1.0&quot;,&quot;warmup&quot;:&quot;parallel&quot;,&quot;conn-reuse&quot;:&quot;&quot;}
Starting at max qps with 2 thread(s) [gomax 4] for exactly 20 calls (10 per thread + 0)
{&quot;ts&quot;:1755934920.020824,&quot;level&quot;:&quot;warn&quot;,&quot;r&quot;:21,&quot;file&quot;:&quot;http_client.go&quot;,&quot;line&quot;:1151,&quot;msg&quot;:&quot;Non ok http code&quot;,&quot;code&quot;:503,&quot;status&quot;:&quot;HTTP/1.1 503&quot;,&quot;thread&quot;:1,&quot;run&quot;:0}
{&quot;ts&quot;:1755934920.242941,&quot;level&quot;:&quot;info&quot;,&quot;r&quot;:20,&quot;file&quot;:&quot;periodic.go&quot;,&quot;line&quot;:851,&quot;msg&quot;:&quot;T000 ended after 274.554816ms : 10 calls. qps=36.422599121335395&quot;}
{&quot;ts&quot;:1755934920.266847,&quot;level&quot;:&quot;info&quot;,&quot;r&quot;:21,&quot;file&quot;:&quot;periodic.go&quot;,&quot;line&quot;:851,&quot;msg&quot;:&quot;T001 ended after 298.834839ms : 10 calls. qps=33.46330044202109&quot;}
Ended after 299.008252ms : 20 calls. qps=66.888
{&quot;ts&quot;:1755934920.267067,&quot;level&quot;:&quot;info&quot;,&quot;r&quot;:1,&quot;file&quot;:&quot;periodic.go&quot;,&quot;line&quot;:581,&quot;msg&quot;:&quot;Run ended&quot;,&quot;run&quot;:0,&quot;elapsed&quot;:299008252,&quot;calls&quot;:20,&quot;qps&quot;:66.88778609360922}
Aggregated Function Time : count 20 avg 0.028639969 +/- 0.02031 min 0.008476393 max 0.083982601 sum 0.572799376
# range, mid point, percentile, count
&amp;gt;= 0.00847639 &amp;lt;= 0.009 , 0.0087382 , 5.00, 1
&amp;gt; 0.011 &amp;lt;= 0.012 , 0.0115 , 10.00, 1
&amp;gt; 0.012 &amp;lt;= 0.014 , 0.013 , 15.00, 1
&amp;gt; 0.014 &amp;lt;= 0.016 , 0.015 , 25.00, 2
&amp;gt; 0.016 &amp;lt;= 0.018 , 0.017 , 35.00, 2
&amp;gt; 0.02 &amp;lt;= 0.025 , 0.0225 , 65.00, 6
&amp;gt; 0.025 &amp;lt;= 0.03 , 0.0275 , 75.00, 2
&amp;gt; 0.03 &amp;lt;= 0.035 , 0.0325 , 80.00, 1
&amp;gt; 0.035 &amp;lt;= 0.04 , 0.0375 , 85.00, 1
&amp;gt; 0.05 &amp;lt;= 0.06 , 0.055 , 90.00, 1
&amp;gt; 0.07 &amp;lt;= 0.08 , 0.075 , 95.00, 1
&amp;gt; 0.08 &amp;lt;= 0.0839826 , 0.0819913 , 100.00, 1
# target 50% 0.0225
# target 75% 0.03
# target 90% 0.06
# target 99% 0.0831861
# target 99.9% 0.0839029
Error cases : count 1 avg 0.05452972 +/- 0 min 0.05452972 max 0.05452972 sum 0.05452972
# range, mid point, percentile, count
&amp;gt;= 0.0545297 &amp;lt;= 0.0545297 , 0.0545297 , 100.00, 1
# target 50% 0.0545297
# target 75% 0.0545297
# target 90% 0.0545297
# target 99% 0.0545297
# target 99.9% 0.0545297
# Socket and IP used for each connection:
[0]   1 socket used, resolved to 10.96.170.88:8080, connection timing : count 1 avg 0.009122352 +/- 0 min 0.009122352 max 0.009122352 sum 0.009122352
[1]   2 socket used, resolved to 10.96.170.88:8080, connection timing : count 2 avg 0.018599093 +/- 0.006149 min 0.012450165 max 0.024748021 sum 0.037198186
Connection time histogram (s) : count 3 avg 0.015440179 +/- 0.00672 min 0.009122352 max 0.024748021 sum 0.046320538
# range, mid point, percentile, count
&amp;gt;= 0.00912235 &amp;lt;= 0.01 , 0.00956118 , 33.33, 1
&amp;gt; 0.012 &amp;lt;= 0.014 , 0.013 , 66.67, 1
&amp;gt; 0.02 &amp;lt;= 0.024748 , 0.022374 , 100.00, 1
# target 50% 0.013
# target 75% 0.021187
# target 90% 0.0233236
# target 99% 0.0246056
# target 99.9% 0.0247338
Sockets used: 3 (for perfect keepalive, would be 2)
Uniform: false, Jitter: false, Catchup allowed: true
IP addresses distribution:
10.96.170.88:8080: 3
Code 200 : 19 (95.0 %)
Code 503 : 1 (5.0 %)
Response Header Sizes : count 20 avg 371.3 +/- 85.18 min 0 max 391 sum 7426
Response Body/Total Sizes : count 20 avg 2337.5 +/- 481 min 241 max 2448 sum 46750
All done 20 calls (plus 0 warmup) 28.640 ms avg, 66.9 qps&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Looking at the result, some requests failed with a 503 (Code 503 : 1 (5.0 %)).&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's raise the number of concurrent connections to 4 and see whether the result changes.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec &quot;$FORTIO_POD&quot; -c fortio -- /usr/bin/fortio load -c 4 -qps 0 -n 20 http://echo-service:8080
{&quot;ts&quot;:1755935051.750785,&quot;level&quot;:&quot;info&quot;,&quot;r&quot;:1,&quot;file&quot;:&quot;scli.go&quot;,&quot;line&quot;:122,&quot;msg&quot;:&quot;Starting&quot;,&quot;command&quot;:&quot;&amp;Phi;&amp;omicron;&amp;rho;&amp;tau;ί&amp;omicron;&quot;,&quot;version&quot;:&quot;1.69.5 h1:h+42fJ1HF61Jj+WgPmC+C2wPtM5Ct8JLHSLDyEgGID4= go1.23.9 amd64 linux&quot;,&quot;go-max-procs&quot;:4}
Fortio 1.69.5 running at 0 queries per second, 4-&amp;gt;4 procs, for 20 calls: http://echo-service:8080
{&quot;ts&quot;:1755935051.760080,&quot;level&quot;:&quot;info&quot;,&quot;r&quot;:1,&quot;file&quot;:&quot;httprunner.go&quot;,&quot;line&quot;:121,&quot;msg&quot;:&quot;Starting http test&quot;,&quot;run&quot;:0,&quot;url&quot;:&quot;http://echo-service:8080&quot;,&quot;threads&quot;:4,&quot;qps&quot;:&quot;-1.0&quot;,&quot;warmup&quot;:&quot;parallel&quot;,&quot;conn-reuse&quot;:&quot;&quot;}
Starting at max qps with 4 thread(s) [gomax 4] for exactly 20 calls (5 per thread + 0)
{&quot;ts&quot;:1755935051.927076,&quot;level&quot;:&quot;warn&quot;,&quot;r&quot;:57,&quot;file&quot;:&quot;http_client.go&quot;,&quot;line&quot;:1151,&quot;msg&quot;:&quot;Non ok http code&quot;,&quot;code&quot;:503,&quot;status&quot;:&quot;HTTP/1.1 503&quot;,&quot;thread&quot;:1,&quot;run&quot;:0}
{&quot;ts&quot;:1755935051.944449,&quot;level&quot;:&quot;warn&quot;,&quot;r&quot;:56,&quot;file&quot;:&quot;http_client.go&quot;,&quot;line&quot;:1151,&quot;msg&quot;:&quot;Non ok http code&quot;,&quot;code&quot;:503,&quot;status&quot;:&quot;HTTP/1.1 503&quot;,&quot;thread&quot;:0,&quot;run&quot;:0}
{&quot;ts&quot;:1755935052.003096,&quot;level&quot;:&quot;warn&quot;,&quot;r&quot;:56,&quot;file&quot;:&quot;http_client.go&quot;,&quot;line&quot;:1151,&quot;msg&quot;:&quot;Non ok http code&quot;,&quot;code&quot;:503,&quot;status&quot;:&quot;HTTP/1.1 503&quot;,&quot;thread&quot;:0,&quot;run&quot;:0}
{&quot;ts&quot;:1755935052.070080,&quot;level&quot;:&quot;warn&quot;,&quot;r&quot;:56,&quot;file&quot;:&quot;http_client.go&quot;,&quot;line&quot;:1151,&quot;msg&quot;:&quot;Non ok http code&quot;,&quot;code&quot;:503,&quot;status&quot;:&quot;HTTP/1.1 503&quot;,&quot;thread&quot;:0,&quot;run&quot;:0}
{&quot;ts&quot;:1755935052.092354,&quot;level&quot;:&quot;warn&quot;,&quot;r&quot;:57,&quot;file&quot;:&quot;http_client.go&quot;,&quot;line&quot;:1151,&quot;msg&quot;:&quot;Non ok http code&quot;,&quot;code&quot;:503,&quot;status&quot;:&quot;HTTP/1.1 503&quot;,&quot;thread&quot;:1,&quot;run&quot;:0}
{&quot;ts&quot;:1755935052.176591,&quot;level&quot;:&quot;warn&quot;,&quot;r&quot;:56,&quot;file&quot;:&quot;http_client.go&quot;,&quot;line&quot;:1151,&quot;msg&quot;:&quot;Non ok http code&quot;,&quot;code&quot;:503,&quot;status&quot;:&quot;HTTP/1.1 503&quot;,&quot;thread&quot;:0,&quot;run&quot;:0}
{&quot;ts&quot;:1755935052.190861,&quot;level&quot;:&quot;warn&quot;,&quot;r&quot;:57,&quot;file&quot;:&quot;http_client.go&quot;,&quot;line&quot;:1151,&quot;msg&quot;:&quot;Non ok http code&quot;,&quot;code&quot;:503,&quot;status&quot;:&quot;HTTP/1.1 503&quot;,&quot;thread&quot;:1,&quot;run&quot;:0}
{&quot;ts&quot;:1755935052.242797,&quot;level&quot;:&quot;warn&quot;,&quot;r&quot;:56,&quot;file&quot;:&quot;http_client.go&quot;,&quot;line&quot;:1151,&quot;msg&quot;:&quot;Non ok http code&quot;,&quot;code&quot;:503,&quot;status&quot;:&quot;HTTP/1.1 503&quot;,&quot;thread&quot;:0,&quot;run&quot;:0}
{&quot;ts&quot;:1755935052.247198,&quot;level&quot;:&quot;info&quot;,&quot;r&quot;:56,&quot;file&quot;:&quot;periodic.go&quot;,&quot;line&quot;:851,&quot;msg&quot;:&quot;T000 ended after 363.953089ms : 5 calls. qps=13.738034244297896&quot;}
{&quot;ts&quot;:1755935052.261238,&quot;level&quot;:&quot;info&quot;,&quot;r&quot;:59,&quot;file&quot;:&quot;periodic.go&quot;,&quot;line&quot;:851,&quot;msg&quot;:&quot;T003 ended after 377.997668ms : 5 calls. qps=13.227594832674999&quot;}
{&quot;ts&quot;:1755935052.261716,&quot;level&quot;:&quot;info&quot;,&quot;r&quot;:58,&quot;file&quot;:&quot;periodic.go&quot;,&quot;line&quot;:851,&quot;msg&quot;:&quot;T002 ended after 378.478799ms : 5 calls. qps=13.210779608291878&quot;}
{&quot;ts&quot;:1755935052.262146,&quot;level&quot;:&quot;warn&quot;,&quot;r&quot;:57,&quot;file&quot;:&quot;http_client.go&quot;,&quot;line&quot;:1151,&quot;msg&quot;:&quot;Non ok http code&quot;,&quot;code&quot;:503,&quot;status&quot;:&quot;HTTP/1.1 503&quot;,&quot;thread&quot;:1,&quot;run&quot;:0}
{&quot;ts&quot;:1755935052.355042,&quot;level&quot;:&quot;info&quot;,&quot;r&quot;:57,&quot;file&quot;:&quot;periodic.go&quot;,&quot;line&quot;:851,&quot;msg&quot;:&quot;T001 ended after 471.785135ms : 5 calls. qps=10.59804480698613&quot;}
Ended after 472.656268ms : 20 calls. qps=42.314
{&quot;ts&quot;:1755935052.355966,&quot;level&quot;:&quot;info&quot;,&quot;r&quot;:1,&quot;file&quot;:&quot;periodic.go&quot;,&quot;line&quot;:581,&quot;msg&quot;:&quot;Run ended&quot;,&quot;run&quot;:0,&quot;elapsed&quot;:472656268,&quot;calls&quot;:20,&quot;qps&quot;:42.31404797534601}
Aggregated Function Time : count 20 avg 0.077426397 +/- 0.03038 min 0.027672363 max 0.164824901 sum 1.54852794
# range, mid point, percentile, count
&amp;gt;= 0.0276724 &amp;lt;= 0.03 , 0.0288362 , 5.00, 1
&amp;gt; 0.035 &amp;lt;= 0.04 , 0.0375 , 10.00, 1
&amp;gt; 0.05 &amp;lt;= 0.06 , 0.055 , 20.00, 2
&amp;gt; 0.06 &amp;lt;= 0.07 , 0.065 , 45.00, 5
&amp;gt; 0.07 &amp;lt;= 0.08 , 0.075 , 65.00, 4
&amp;gt; 0.08 &amp;lt;= 0.09 , 0.085 , 75.00, 2
&amp;gt; 0.09 &amp;lt;= 0.1 , 0.095 , 80.00, 1
&amp;gt; 0.1 &amp;lt;= 0.12 , 0.11 , 90.00, 2
&amp;gt; 0.12 &amp;lt;= 0.14 , 0.13 , 95.00, 1
&amp;gt; 0.16 &amp;lt;= 0.164825 , 0.162412 , 100.00, 1
# target 50% 0.0725
# target 75% 0.09
# target 90% 0.12
# target 99% 0.16386
# target 99.9% 0.164728
Error cases : count 9 avg 0.080604074 +/- 0.03648 min 0.039005792 max 0.164824901 sum 0.725436665
# range, mid point, percentile, count
&amp;gt;= 0.0390058 &amp;lt;= 0.04 , 0.0395029 , 11.11, 1
&amp;gt; 0.05 &amp;lt;= 0.06 , 0.055 , 22.22, 1
&amp;gt; 0.06 &amp;lt;= 0.07 , 0.065 , 55.56, 3
&amp;gt; 0.07 &amp;lt;= 0.08 , 0.075 , 66.67, 1
&amp;gt; 0.1 &amp;lt;= 0.12 , 0.11 , 88.89, 2
&amp;gt; 0.16 &amp;lt;= 0.164825 , 0.162412 , 100.00, 1
# target 50% 0.0683333
# target 75% 0.1075
# target 90% 0.160482
# target 99% 0.164391
# target 99.9% 0.164781
# Socket and IP used for each connection:
[0]   5 socket used, resolved to 10.96.170.88:8080, connection timing : count 5 avg 0.016470151 +/- 0.0118 min 0.006279733 max 0.039608839 sum 0.082350754
[1]   5 socket used, resolved to 10.96.170.88:8080, connection timing : count 5 avg 0.019381182 +/- 0.01619 min 0.004276236 max 0.050706462 sum 0.096905911
[2]   1 socket used, resolved to 10.96.170.88:8080, connection timing : count 1 avg 0.007946486 +/- 0 min 0.007946486 max 0.007946486 sum 0.007946486
[3]   1 socket used, resolved to 10.96.170.88:8080, connection timing : count 1 avg 0.014975546 +/- 0 min 0.014975546 max 0.014975546 sum 0.014975546
Connection time histogram (s) : count 12 avg 0.016848225 +/- 0.0133 min 0.004276236 max 0.050706462 sum 0.202178697
# range, mid point, percentile, count
&amp;gt;= 0.00427624 &amp;lt;= 0.005 , 0.00463812 , 8.33, 1
&amp;gt; 0.006 &amp;lt;= 0.007 , 0.0065 , 16.67, 1
&amp;gt; 0.007 &amp;lt;= 0.008 , 0.0075 , 25.00, 1
&amp;gt; 0.01 &amp;lt;= 0.011 , 0.0105 , 33.33, 1
&amp;gt; 0.011 &amp;lt;= 0.012 , 0.0115 , 50.00, 2
&amp;gt; 0.012 &amp;lt;= 0.014 , 0.013 , 58.33, 1
&amp;gt; 0.014 &amp;lt;= 0.016 , 0.015 , 83.33, 3
&amp;gt; 0.035 &amp;lt;= 0.04 , 0.0375 , 91.67, 1
&amp;gt; 0.05 &amp;lt;= 0.0507065 , 0.0503532 , 100.00, 1
# target 50% 0.012
# target 75% 0.0153333
# target 90% 0.039
# target 99% 0.0506217
# target 99.9% 0.050698
Sockets used: 12 (for perfect keepalive, would be 4)
Uniform: false, Jitter: false, Catchup allowed: true
IP addresses distribution:
10.96.170.88:8080: 12
Code 200 : 11 (55.0 %)
Code 503 : 9 (45.0 %)
Response Header Sizes : count 20 avg 215.05 +/- 194.5 min 0 max 391 sum 4301
Response Body/Total Sizes : count 20 avg 1454.85 +/- 1098 min 241 max 2448 sum 29097
All done 20 calls (plus 0 warmup) 77.426 ms avg, 42.3 qps&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;With more concurrent connections, the proportion of errors caused by circuit breaking is considerably higher (5% before, 45% now).&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;That concludes our look at Cilium's L7-aware traffic management. Personally, I found the CRD rather complex, since it exposes Envoy's terminology and configuration directly.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Closing Thoughts&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In this post we looked at Cilium's Service Mesh.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;If you have used Istio before, this may feel somewhat confusing. With Istio, you install Istio and achieve the various goals of a service mesh through the CRDs it provides. Cilium's Service Mesh appears to be at the stage where Cilium extends its CNI-plugin role into Ingress and the Gateway API, and offers limited traffic-management features through additional CRDs.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;From a service-mesh perspective Istio is more complete, but Istio tends to grow larger than the application itself, so it may not be a good fit for small workloads.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;On the other hand, Cilium's Service Mesh is implemented through Cilium, that is, through eBPF, and the &lt;b&gt;heavy abstraction&lt;/b&gt; of the intermediate steps makes the behavior hard to grasp intuitively. The assumption is that the eBPF implementation is efficient, but it also left me wondering, 'Where do I look when it does not work properly?'&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the next post we will look at Cilium's security features.&lt;/p&gt;</description>
      <category>Cilium</category>
      <category>cilium</category>
      <category>circuit break</category>
      <category>gateway api</category>
      <category>Ingress</category>
      <category>kubernetes</category>
      <category>servicemesh</category>
      <category>traffic shift</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/57</guid>
      <comments>https://a-person.tistory.com/57#entry57comment</comments>
      <pubDate>Sat, 23 Aug 2025 17:04:32 +0900</pubDate>
    </item>
    <item>
      <title>[8] Cilium - Cluster Mesh</title>
      <link>https://a-person.tistory.com/56</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;이번 게시물에서는 Cilium 환경의 Multi Cluster 기능을 제공하는 ClusterMesh에 대해서 알아보겠습니다. ClusterMesh는 Cilium으로 구성된 Multi-Cluster 환경에서 각 클러스터의 Service를 손쉽게 접근 가능하게 하며,서로 다른 클러스터의 워크로드를 하나의 Service로 노출하는 방식도 제공합니다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1438&quot; data-origin-height=&quot;774&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bIQlRP/btsPTbEGS4z/PTfc0rGlMAybY2WJgRdHS0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bIQlRP/btsPTbEGS4z/PTfc0rGlMAybY2WJgRdHS0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bIQlRP/btsPTbEGS4z/PTfc0rGlMAybY2WJgRdHS0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbIQlRP%2FbtsPTbEGS4z%2FPTfc0rGlMAybY2WJgRdHS0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1438&quot; height=&quot;774&quot; data-origin-width=&quot;1438&quot; data-origin-height=&quot;774&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://cilium.io/use-cases/cluster-mesh/&quot;&gt;https://cilium.io/use-cases/cluster-mesh/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Table of Contents&lt;/h2&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;Setting up the lab environment&lt;/li&gt;
&lt;li&gt;Configuring ClusterMesh&lt;/li&gt;
&lt;li&gt;Verifying ClusterMesh behavior&lt;/li&gt;
&lt;li&gt;Creating a Global Service&lt;/li&gt;
&lt;li&gt;Using Service Affinity&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;1. Setting Up the Lab Environment&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In this lab we will build the environment with kind. Create the east and west clusters as shown below.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# Create the west cluster
kind create cluster --name west --image kindest/node:v1.33.2 --config - &amp;lt;&amp;lt;EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30000 # sample apps
    hostPort: 30000
  - containerPort: 30001 # hubble ui
    hostPort: 30001
- role: worker
  extraPortMappings:
  - containerPort: 30002 # sample apps
    hostPort: 30002
networking:
  podSubnet: &quot;10.0.0.0/16&quot;
  serviceSubnet: &quot;10.2.0.0/16&quot;
  disableDefaultCNI: true
  kubeProxyMode: none
EOF

# Verify the installation
kubectl get node 
kubectl get pod -A

$ kubectl get node
NAME                 STATUS     ROLES           AGE   VERSION
west-control-plane   NotReady   control-plane   17s   v1.33.2
west-worker          NotReady   &amp;lt;none&amp;gt;          7s    v1.33.2
$ kubectl get pod -A
NAMESPACE            NAME                                         READY   STATUS    RESTARTS   AGE
kube-system          coredns-674b8bbfcf-b2hsv                     0/1     Pending   0          14s
kube-system          coredns-674b8bbfcf-p5fcx                     0/1     Pending   0          14s
kube-system          etcd-west-control-plane                      1/1     Running   0          21s
kube-system          kube-apiserver-west-control-plane            1/1     Running   0          19s
kube-system          kube-controller-manager-west-control-plane   1/1     Running   0          19s
kube-system          kube-scheduler-west-control-plane            1/1     Running   0          19s
local-path-storage   local-path-provisioner-7dc846544d-nhxdw      0/1     Pending   0          14s


# Install basic tools on the nodes
docker exec -it west-control-plane sh -c 'apt update &amp;amp;&amp;amp; apt install tree psmisc lsof wget net-tools dnsutils tcpdump ngrep iputils-ping git -y'
docker exec -it west-worker sh -c 'apt update &amp;amp;&amp;amp; apt install tree psmisc lsof wget net-tools dnsutils tcpdump ngrep iputils-ping git -y'

# Create the east cluster
kind create cluster --name east --image kindest/node:v1.33.2 --config - &amp;lt;&amp;lt;EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 31000 # sample apps
    hostPort: 31000
  - containerPort: 31001 # hubble ui
    hostPort: 31001
- role: worker
  extraPortMappings:
  - containerPort: 31002 # sample apps
    hostPort: 31002
networking:
  podSubnet: &quot;10.1.0.0/16&quot;
  serviceSubnet: &quot;10.3.0.0/16&quot;
  disableDefaultCNI: true
  kubeProxyMode: none
EOF

# Install basic tools on the nodes
docker exec -it east-control-plane sh -c 'apt update &amp;amp;&amp;amp; apt install tree psmisc lsof wget net-tools dnsutils tcpdump ngrep iputils-ping git -y'
docker exec -it east-worker sh -c 'apt update &amp;amp;&amp;amp; apt install tree psmisc lsof wget net-tools dnsutils tcpdump ngrep iputils-ping git -y'


# When kind creates a cluster, each cluster node runs as a single Docker container.
$ docker ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED             STATUS             PORTS                                                             NAMES
92f3d35a477b   kindest/node:v1.33.2   &quot;/usr/local/bin/entr&amp;hellip;&quot;   6 minutes ago       Up 6 minutes       0.0.0.0:31000-31001-&amp;gt;31000-31001/tcp, 127.0.0.1:45365-&amp;gt;6443/tcp   east-control-plane
5e1fbf3599d2   kindest/node:v1.33.2   &quot;/usr/local/bin/entr&amp;hellip;&quot;   6 minutes ago       Up 6 minutes       0.0.0.0:31002-&amp;gt;31002/tcp                                          east-worker
4c5e055b9538   kindest/node:v1.33.2   &quot;/usr/local/bin/entr&amp;hellip;&quot;   About an hour ago   Up About an hour   0.0.0.0:30000-30001-&amp;gt;30000-30001/tcp, 127.0.0.1:39275-&amp;gt;6443/tcp   west-control-plane
8dc2bd93443e   kindest/node:v1.33.2   &quot;/usr/local/bin/entr&amp;hellip;&quot;   About an hour ago   Up About an hour   0.0.0.0:30002-&amp;gt;30002/tcp                                          west-worker

# Check the kubectl contexts
kubectl config get-contexts 

CURRENT   NAME        CLUSTER     AUTHINFO    NAMESPACE
*         kind-east   kind-east   kind-east
          kind-west   kind-west   kind-west

kubectl config set-context kind-east
kubectl get node
kubectl get node --context kind-west
kubectl get pod -A
kubectl get pod -A --context kind-west

$ kubectl config set-context kind-east
Context &quot;kind-east&quot; modified.
$ kubectl get node
NAME                 STATUS     ROLES           AGE    VERSION
east-control-plane   NotReady   control-plane   118s   v1.33.2
east-worker          NotReady   &amp;lt;none&amp;gt;          109s   v1.33.2
$ kubectl get node --context kind-west
NAME                 STATUS     ROLES           AGE   VERSION
west-control-plane   NotReady   control-plane   76m   v1.33.2
west-worker          NotReady   &amp;lt;none&amp;gt;          76m   v1.33.2
$ kubectl get pod -A
NAMESPACE            NAME                                         READY   STATUS    RESTARTS   AGE
kube-system          coredns-674b8bbfcf-pwhf8                     0/1     Pending   0          119s
kube-system          coredns-674b8bbfcf-qr2dw                     0/1     Pending   0          119s
kube-system          etcd-east-control-plane                      1/1     Running   0          2m6s
kube-system          kube-apiserver-east-control-plane            1/1     Running   0          2m5s
kube-system          kube-controller-manager-east-control-plane   1/1     Running   0          2m5s
kube-system          kube-scheduler-east-control-plane            1/1     Running   0          2m6s
local-path-storage   local-path-provisioner-7dc846544d-mp7wb      0/1     Pending   0          119s
$ kubectl get pod -A --context kind-west
NAMESPACE            NAME                                         READY   STATUS    RESTARTS   AGE
kube-system          coredns-674b8bbfcf-b2hsv                     0/1     Pending   0          76m
kube-system          coredns-674b8bbfcf-p5fcx                     0/1     Pending   0          76m
kube-system          etcd-west-control-plane                      1/1     Running   0          76m
kube-system          kube-apiserver-west-control-plane            1/1     Running   0          76m
kube-system          kube-controller-manager-west-control-plane   1/1     Running   0          76m
kube-system          kube-scheduler-west-control-plane            1/1     Running   0          76m
local-path-storage   local-path-provisioner-7dc846544d-nhxdw      0/1     Pending   0          76m&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To install Cilium on the kind clusters, first install the cilium CLI and then install the CNI plugin with &lt;code&gt;cilium install&lt;/code&gt;.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Looking at the install options, note flags such as &lt;code&gt;--set cluster.name=west --set cluster.id=1&lt;/code&gt;. These values distinguish the individual clusters that make up the ClusterMesh.&lt;/p&gt;
&lt;pre class=&quot;bash&quot; data-ke-language=&quot;bash&quot;&gt;&lt;code&gt;# Install the cilium CLI
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
if [ &quot;$(uname -m)&quot; = &quot;aarch64&quot; ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}

# Install the Cilium CNI with the cilium CLI: dry-run first
cilium install --version 1.17.6 --set ipam.mode=kubernetes \
--set kubeProxyReplacement=true --set bpf.masquerade=true \
--set endpointHealthChecking.enabled=false --set healthChecking=false \
--set operator.replicas=1 --set debug.enabled=true \
--set routingMode=native --set autoDirectNodeRoutes=true --set ipv4NativeRoutingCIDR=10.0.0.0/16 \
--set ipMasqAgent.enabled=true --set ipMasqAgent.config.nonMasqueradeCIDRs='{10.1.0.0/16}' \
--set cluster.name=west --set cluster.id=1 \
--context kind-west --dry-run-helm-values

# Install the Cilium CNI with the cilium CLI
cilium install --version 1.17.6 --set ipam.mode=kubernetes \
--set kubeProxyReplacement=true --set bpf.masquerade=true \
--set endpointHealthChecking.enabled=false --set healthChecking=false \
--set operator.replicas=1 --set debug.enabled=true \
--set routingMode=native --set autoDirectNodeRoutes=true --set ipv4NativeRoutingCIDR=10.0.0.0/16 \
--set ipMasqAgent.enabled=true --set ipMasqAgent.config.nonMasqueradeCIDRs='{10.1.0.0/16}' \
--set cluster.name=west --set cluster.id=1 \
--context kind-west

# Verify the deployment
watch kubectl get pod -n kube-system --context kind-west

NAME                                         READY   STATUS    RESTARTS   AGE
cilium-2zw9h                                 1/1     Running   0          3m13s
cilium-envoy-72896                           1/1     Running   0          3m13s
cilium-envoy-dzgbf                           1/1     Running   0          3m13s
cilium-gtnns                                 1/1     Running   0          3m13s
cilium-operator-66dd84cf7c-tntlf             1/1     Running   0          3m13s
coredns-674b8bbfcf-b2hsv                     1/1     Running   0          87m
coredns-674b8bbfcf-p5fcx                     1/1     Running   0          87m
etcd-west-control-plane                      1/1     Running   0          87m
kube-apiserver-west-control-plane            1/1     Running   0          87m
kube-controller-manager-west-control-plane   1/1     Running   0          87m
kube-scheduler-west-control-plane            1/1     Running   0          87m

# dry-run for the east cluster (cluster.id=2 distinguishes it within the mesh)
cilium install --version 1.17.6 --set ipam.mode=kubernetes \
--set kubeProxyReplacement=true --set bpf.masquerade=true \
--set endpointHealthChecking.enabled=false --set healthChecking=false \
--set operator.replicas=1 --set debug.enabled=true \
--set routingMode=native --set autoDirectNodeRoutes=true --set ipv4NativeRoutingCIDR=10.1.0.0/16 \
--set ipMasqAgent.enabled=true --set ipMasqAgent.config.nonMasqueradeCIDRs='{10.0.0.0/16}' \
--set cluster.name=east --set cluster.id=2 \
--context kind-east --dry-run-helm-values

cilium install --version 1.17.6 --set ipam.mode=kubernetes \
--set kubeProxyReplacement=true --set bpf.masquerade=true \
--set endpointHealthChecking.enabled=false --set healthChecking=false \
--set operator.replicas=1 --set debug.enabled=true \
--set routingMode=native --set autoDirectNodeRoutes=true --set ipv4NativeRoutingCIDR=10.1.0.0/16 \
--set ipMasqAgent.enabled=true --set ipMasqAgent.config.nonMasqueradeCIDRs='{10.0.0.0/16}' \
--set cluster.name=east --set cluster.id=2 \
--context kind-east

# Verify the deployment
watch kubectl get pod -n kube-system --context kind-east

NAME                                         READY   STATUS    RESTARTS      AGE
cilium-5r7nk                                 1/1     Running   6 (62m ago)   65m
cilium-clfsn                                 1/1     Running   6 (62m ago)   65m
cilium-envoy-7tg4m                           1/1     Running   1 (61m ago)   65m
cilium-envoy-fs8w8                           1/1     Running   1 (61m ago)   65m
cilium-operator-7df4989d4b-7pb9d             1/1     Running   2 (61m ago)   65m
coredns-674b8bbfcf-pwhf8                     1/1     Running   0             100m
coredns-674b8bbfcf-qr2dw                     1/1     Running   0             100m
etcd-east-control-plane                      1/1     Running   2 (61m ago)   100m
kube-apiserver-east-control-plane            1/1     Running   2 (61m ago)   100m
kube-controller-manager-east-control-plane   1/1     Running   2 (61m ago)   100m
kube-scheduler-east-control-plane            1/1     Running   2 (61m ago)   100m

# Verify
cilium status --context kind-west
cilium status --context kind-east
cilium config view --context kind-west
cilium config view --context kind-east
kubectl --context kind-west exec -it -n kube-system ds/cilium -- cilium status --verbose
kubectl --context kind-east exec -it -n kube-system ds/cilium -- cilium status --verbose

kubectl --context kind-west -n kube-system exec ds/cilium -c cilium-agent -- cilium-dbg bpf ipmasq list
kubectl --context kind-east -n kube-system exec ds/cilium -c cilium-agent -- cilium-dbg bpf ipmasq list

# Check coredns: both clusters use the default cluster.local domain
kubectl describe cm -n kube-system coredns --context kind-west | grep kubernetes
    kubernetes cluster.local in-addr.arpa ip6.arpa {

kubectl describe cm -n kube-system coredns --context kind-east | grep kubernetes
    kubernetes cluster.local in-addr.arpa ip6.arpa {

# How to uninstall (not performed here)
# cilium uninstall --context kind-west
# cilium uninstall --context kind-east


# Delete the clusters (after finishing the lab)
# kind delete cluster --name west &amp;amp;&amp;amp; kind delete cluster --name east&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Note that on a kind cluster running under WSL, Cilium may fail with an error such as &lt;code&gt;failed to create fsnotify watcher: too many open files&lt;/code&gt;.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;$ kubectl get po -A
NAMESPACE            NAME                                         READY   STATUS             RESTARTS        AGE
kube-system          cilium-5r7nk                                 0/1     CrashLoopBackOff   5 (23s ago)     3m35s
kube-system          cilium-clfsn                                 0/1     CrashLoopBackOff   5 (20s ago)     3m35s

$ kubectl logs -f -n kube-system          cilium-5r7nk
Defaulted container &quot;cilium-agent&quot; out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
...
time=&quot;2025-08-14T06:20:54.404358872Z&quot; level=info msg=&quot;Stopped gops server&quot; address=&quot;127.0.0.1:9890&quot; subsys=gops
time=&quot;2025-08-14T06:20:54.40441131Z&quot; level=fatal msg=&quot;failed to start: unable to create config directory watcher: too many open files\nfailed to stop: unable to find controller ipcache-inject-labels&quot; subsys=daemon
failed to create fsnotify watcher: too many open files&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In that case, see the known issue in the document below and adjust the kernel parameters in WSL.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://kind.sigs.k8s.io/docs/user/known-issues#pod-errors-due-to-too-many-open-files&quot;&gt;https://kind.sigs.k8s.io/docs/user/known-issues#pod-errors-due-to-too-many-open-files&lt;/a&gt;&lt;/p&gt;
&lt;pre class=&quot;nix&quot;&gt;&lt;code&gt;$ sysctl -a |grep inotify
..
user.max_inotify_instances = 128
user.max_inotify_watches = 524288
...
$ sudo sysctl fs.inotify.max_user_instances=512
fs.inotify.max_user_instances = 512
$ sysctl -a |grep inotify
..
user.max_inotify_instances = 512
user.max_inotify_watches = 524288
$ sudo vi /etc/sysctl.conf # make the setting persistent
fs.inotify.max_user_instances = 512&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Afterwards, restart the container backing each node with &lt;code&gt;docker restart &amp;lt;container name&amp;gt;&lt;/code&gt; and the cilium pods will recover.&lt;/p&gt;
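&lt;p data-ke-size=&quot;size16&quot;&gt;As a minimal sketch, the restart can be looped over all nodes. The container names below are the ones created in this lab, and the actual &lt;code&gt;docker restart&lt;/code&gt; call is left commented out so the loop is safe to run anywhere:&lt;/p&gt;

```shell
# Restart every kind node container so cilium picks up the new inotify limits.
# Node container names assume the west/east clusters created earlier in this post.
for node in west-control-plane west-worker east-control-plane east-worker; do
  echo "restarting $node"
  # docker restart "$node"   # uncomment on the live lab host
done
```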
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;2. Configuring ClusterMesh&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now let's join the two kind clusters into a ClusterMesh.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The procedure follows the document below.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://docs.cilium.io/en/stable/network/clustermesh/clustermesh/&quot;&gt;https://docs.cilium.io/en/stable/network/clustermesh/clustermesh/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The clusters were installed with &lt;code&gt;--set routingMode=native --set autoDirectNodeRoutes=true&lt;/code&gt;, yet each node's routing table still has no route to the other cluster's PodCIDR.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# Check routing tables
docker exec -it west-control-plane ip -c route
docker exec -it west-worker ip -c route
docker exec -it east-control-plane ip -c route
docker exec -it east-worker ip -c route

$ docker exec -it west-control-plane ip -c route
default via 172.19.0.1 dev eth0
10.0.0.0/24 via 10.0.0.133 dev cilium_host proto kernel src 10.0.0.133
10.0.0.133 dev cilium_host proto kernel scope link
10.0.1.0/24 via 172.19.0.3 dev eth0 proto kernel
172.19.0.0/16 dev eth0 proto kernel scope link src 172.19.0.2

$ docker exec -it east-control-plane ip -c route
default via 172.19.0.1 dev eth0
10.1.0.0/24 via 10.1.0.224 dev cilium_host proto kernel src 10.1.0.224
10.1.0.224 dev cilium_host proto kernel scope link
10.1.1.0/24 via 172.19.0.4 dev eth0 proto kernel
172.19.0.0/16 dev eth0 proto kernel scope link src 172.19.0.5&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;So that both clusters share the CA certificate Cilium uses for mTLS, align the secrets as shown below.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# Shared Certificate Authority
kubectl get secret -n kube-system cilium-ca --context kind-east
kubectl delete secret -n kube-system cilium-ca --context kind-east

kubectl --context kind-west get secret -n kube-system cilium-ca -o yaml | \
kubectl --context kind-east create -f -&lt;/code&gt;&lt;/pre&gt;
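&lt;p data-ke-size=&quot;size16&quot;&gt;A quick way to confirm the copy worked is to compare the CA data from both clusters. The snippet below is a sketch: &lt;code&gt;compare_ca&lt;/code&gt; is a hypothetical helper, and the kubectl lines are left commented out so it runs without a cluster.&lt;/p&gt;

```shell
# compare_ca: report whether two base64-encoded CA certs are identical.
compare_ca() {
  if [ "$1" = "$2" ]; then echo "match"; else echo "mismatch"; fi
}

# Against the live clusters you would feed it the real secrets, e.g.:
#   west=$(kubectl --context kind-west get secret -n kube-system cilium-ca -o jsonpath='{.data.ca\.crt}')
#   east=$(kubectl --context kind-east get secret -n kube-system cilium-ca -o jsonpath='{.data.ca\.crt}')
#   compare_ca "$west" "$east"
compare_ca "Zm9vCg==" "Zm9vCg=="   # prints "match"
```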
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now enable Cluster Mesh on both clusters.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# Monitor: run these in two new terminals
cilium clustermesh status --context kind-west --wait  
cilium clustermesh status --context kind-east --wait

# Enable Cluster Mesh: in this lab the Cluster Mesh control plane is exposed via NodePort
cilium clustermesh enable --service-type NodePort --enable-kvstoremesh=false --context kind-west
cilium clustermesh enable --service-type NodePort --enable-kvstoremesh=false --context kind-east

# Monitoring output
$ cilium clustermesh status --context kind-west --wait
⌛ Waiting (0s) for access information: unable to get clustermesh service &quot;clustermesh-apiserver&quot;: services &quot;clustermesh-apiserver&quot; not found
⌛ Waiting (10s) for access information: unable to get clustermesh service &quot;clustermesh-apiserver&quot;: services &quot;clustermesh-apiserver&quot; not found
⌛ Waiting (20s) for access information: unable to get clustermesh service &quot;clustermesh-apiserver&quot;: services &quot;clustermesh-apiserver&quot; not found

⌛ Waiting (30s) for access information: unable to get clustermesh service &quot;clustermesh-apiserver&quot;: services &quot;clustermesh-apiserver&quot; not found
Trying to get secret clustermesh-apiserver-remote-cert by deprecated name clustermesh-apiserver-client-cert
Trying to get secret clustermesh-apiserver-client-cert by deprecated name clustermesh-apiserver-client-certs
⌛ Waiting (40s) for access information: unable to get client secret to access clustermesh service: unable to get secret &quot;clustermesh-apiserver-client-certs&quot; and no deprecated names to try
Trying to get secret clustermesh-apiserver-remote-cert by deprecated name clustermesh-apiserver-client-cert
Trying to get secret clustermesh-apiserver-client-cert by deprecated name clustermesh-apiserver-client-certs
⚠️  Service type NodePort detected! Service may fail when nodes are removed from the cluster!
✅ Service &quot;clustermesh-apiserver&quot; of type &quot;NodePort&quot; found
✅ Cluster access information is available:
  - 172.19.0.2:32379
⌛ Waiting (0s) for deployment clustermesh-apiserver to become ready: only 0 of 1 replicas are available
⌛ Waiting (10s) for deployment clustermesh-apiserver to become ready: only 0 of 1 replicas are available
⌛ Waiting (20s) for deployment clustermesh-apiserver to become ready: only 0 of 1 replicas are available
✅ Deployment clustermesh-apiserver is ready
ℹ️  KVStoreMesh is disabled


  No cluster connected

  Global services: [ min:0 / avg:0.0 / max:0 ]


# NodePort 32379: the clustermesh-apiserver service
kubectl get svc,ep -n kube-system clustermesh-apiserver --context kind-west
NAME                            TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
service/clustermesh-apiserver   NodePort   10.2.26.7    &amp;lt;none&amp;gt;        2379:32379/TCP   9m9s

NAME                              ENDPOINTS        AGE
endpoints/clustermesh-apiserver   10.0.1.19:2379   9m9s # the endpoint is the clustermesh-apiserver pod IP

kubectl get pod -n kube-system -owide --context kind-west | grep clustermesh

$ kubectl get pod -n kube-system -owide --context kind-west | grep clustermesh
clustermesh-apiserver-5cf45db9cc-76d2s       2/2     Running     0          9m57s   10.0.1.19    west-worker          &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
clustermesh-apiserver-generate-certs-t5s4k   0/1     Completed   0          9m56s   172.19.0.3   west-worker          &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;


kubectl get svc,ep -n kube-system clustermesh-apiserver --context kind-east
NAME                            TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
service/clustermesh-apiserver   NodePort   10.3.32.115   &amp;lt;none&amp;gt;        2379:32379/TCP   10m

NAME                              ENDPOINTS         AGE
endpoints/clustermesh-apiserver   10.1.1.148:2379   10m # the endpoint is the clustermesh-apiserver pod IP

kubectl get pod -n kube-system -owide --context kind-east | grep clustermesh

$ kubectl get pod -n kube-system -owide --context kind-east | grep clustermesh
clustermesh-apiserver-5cf45db9cc-zcddx       2/2     Running     0              10m    10.1.1.148   east-worker          &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
clustermesh-apiserver-generate-certs-sf7cv   0/1     Completed   0              10m    172.19.0.4   east-worker          &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now let's actually connect the two ClusterMesh-enabled clusters.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# Connect Clusters
cilium clustermesh connect --context kind-west --destination-context kind-east

$ cilium clustermesh connect --context kind-west --destination-context kind-east
✨ Extracting access information of cluster west...
  Extracting secrets from cluster west...
⚠️  Service type NodePort detected! Service may fail when nodes are removed from the cluster!
ℹ️  Found ClusterMesh service IPs: [172.19.0.2]
✨ Extracting access information of cluster east...
  Extracting secrets from cluster east...
⚠️  Service type NodePort detected! Service may fail when nodes are removed from the cluster!
ℹ️  Found ClusterMesh service IPs: [172.19.0.5]
ℹ️ Configuring Cilium in cluster kind-west to connect to cluster kind-east
ℹ️ Configuring Cilium in cluster kind-east to connect to cluster kind-west
✅ Connected cluster kind-west &amp;lt;=&amp;gt; kind-east!

# Verify
cilium clustermesh status --context kind-west --wait
cilium clustermesh status --context kind-east --wait

# Before connecting
cilium clustermesh status --context kind-west --wait
⚠️  Service type NodePort detected! Service may fail when nodes are removed from the cluster!
✅ Service &quot;clustermesh-apiserver&quot; of type &quot;NodePort&quot; found
✅ Cluster access information is available:
  - 172.19.0.2:32379
✅ Deployment clustermesh-apiserver is ready
ℹ️  KVStoreMesh is disabled


  No cluster connected

  Global services: [ min:0 / avg:0.0 / max:0 ]

# After connecting
cilium clustermesh status --context kind-west --wait
⚠️  Service type NodePort detected! Service may fail when nodes are removed from the cluster!
✅ Service &quot;clustermesh-apiserver&quot; of type &quot;NodePort&quot; found
✅ Cluster access information is available:
  - 172.19.0.2:32379
✅ Deployment clustermesh-apiserver is ready
ℹ️  KVStoreMesh is disabled

✅ All 2 nodes are connected to all clusters [min:1 / avg:1.0 / max:1]                   

  Cluster Connections:
  - east: 2/2 configured, 2/2 connected

  Global services: [ min:0 / avg:0.0 / max:0 ]          &lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The ClusterMesh connection is complete; now let's examine the resulting configuration.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Gather clustermesh troubleshooting info with cilium-dbg
kubectl exec -it -n kube-system ds/cilium -c cilium-agent --context kind-west -- cilium-dbg troubleshoot clustermesh
kubectl exec -it -n kube-system ds/cilium -c cilium-agent --context kind-east -- cilium-dbg troubleshoot clustermesh

# From the west cluster, the east cluster is reported as connected
$ kubectl exec -it -n kube-system ds/cilium -c cilium-agent --context kind-west -- cilium-dbg troubleshoot clustermesh
Found 1 cluster configurations

Cluster &quot;east&quot;:
  Configuration path: /var/lib/cilium/clustermesh/east

  Endpoints:
   - https://east.mesh.cilium.io:32379
     ✅ Hostname resolved to: 172.19.0.5
     ✅ TCP connection successfully established to 172.19.0.5:32379
     ✅ TLS connection successfully established to 172.19.0.5:32379
     ℹ️  Negotiated TLS version: TLS 1.3, ciphersuite TLS_AES_128_GCM_SHA256
     ℹ️  Etcd server version: 3.5.21

  Digital certificates:
   ✅ TLS Root CA certificates:
      - Serial number:       8c:d7:b9:68:5a:41:d6:15:7f:63:df:99:65:b8:30:ea
        Subject:             CN=Cilium CA
        Issuer:              CN=Cilium CA
        Validity:
          Not before:  2025-08-14 05:55:34 +0000 UTC
          Not after:   2028-08-13 05:55:34 +0000 UTC
   ✅ TLS client certificates:
      - Serial number:       19:08:b5:6b:2d:c9:59:0b:5f:ba:04:04:b1:cc:34:62:fb:66:ec:9f
        Subject:             CN=remote
        Issuer:              CN=Cilium CA
        Validity:
          Not before:  2025-08-14 08:04:00 +0000 UTC
          Not after:   2028-08-13 08:04:00 +0000 UTC
        ⚠️ Cannot verify certificate with the configured root CAs

⚙️ Etcd client:
   ✅ Etcd connection successfully established
   ℹ️  Etcd cluster ID: edd8e57410e76ac9


# Verify
cilium status --context kind-west
cilium status --context kind-east

$ cilium status --context kind-west
    /&amp;macr;&amp;macr;\
 /&amp;macr;&amp;macr;\__/&amp;macr;&amp;macr;\    Cilium:             OK
 \__/&amp;macr;&amp;macr;\__/    Operator:           OK
 /&amp;macr;&amp;macr;\__/&amp;macr;&amp;macr;\    Envoy DaemonSet:    OK
 \__/&amp;macr;&amp;macr;\__/    Hubble Relay:       disabled
    \__/       ClusterMesh:        OK

DaemonSet              cilium                   Desired: 2, Ready: 2/2, Available: 2/2
DaemonSet              cilium-envoy             Desired: 2, Ready: 2/2, Available: 2/2
Deployment             cilium-operator          Desired: 1, Ready: 1/1, Available: 1/1
Deployment             clustermesh-apiserver    Desired: 1, Ready: 1/1, Available: 1/1
Containers:            cilium                   Running: 2
                       cilium-envoy             Running: 2
                       cilium-operator          Running: 1
                       clustermesh-apiserver    Running: 1
                       hubble-relay
Cluster Pods:          4/4 managed by Cilium
Helm chart version:    1.17.6
Image versions         cilium                   quay.io/cilium/cilium:v1.17.6@sha256:544de3d4fed7acba72758413812780a4972d47c39035f2a06d6145d8644a3353: 2
                       cilium-envoy             quay.io/cilium/cilium-envoy:v1.33.4-1752151664-7c2edb0b44cf95f326d628b837fcdd845102ba68@sha256:318eff387835ca2717baab42a84f35a83a5f9e7d519253df87269f80b9ff0171: 2
                       cilium-operator          quay.io/cilium/operator-generic:v1.17.6@sha256:91ac3bf7be7bed30e90218f219d4f3062a63377689ee7246062fa0cc3839d096: 1
                       clustermesh-apiserver    quay.io/cilium/clustermesh-apiserver:v1.17.6@sha256:f619e97432db427e1511bf91af3be8ded418c53a353a09629e04c5880659d1df: 2

cilium clustermesh status --context kind-west
cilium clustermesh status --context kind-east

$ cilium clustermesh status --context kind-west
⚠️  Service type NodePort detected! Service may fail when nodes are removed from the cluster!
✅ Service &quot;clustermesh-apiserver&quot; of type &quot;NodePort&quot; found
✅ Cluster access information is available:
  - 172.19.0.2:32379
✅ Deployment clustermesh-apiserver is ready
ℹ️  KVStoreMesh is disabled

✅ All 2 nodes are connected to all clusters [min:1 / avg:1.0 / max:1]

  Cluster Connections:
  - east: 2/2 configured, 2/2 connected

  Global services: [ min:0 / avg:0.0 / max:0 ]

# The configuration can also be managed via helm, and the added values are visible here.
helm get values -n kube-system cilium --kube-context kind-west 
...
clustermesh:
  apiserver:
    kvstoremesh:
      enabled: false
    service:
      type: NodePort
    tls:
      auto:
        enabled: true
        method: cronJob
        schedule: 0 0 1 */4 *
  config:
    clusters:
    - ips:
      - 172.19.0.5
      name: east
      port: 32379
    enabled: true
  useAPIServer: true
...&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's recheck the node routing tables we inspected before the ClusterMesh connection.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# Check routing tables: inter-cluster PodCIDR routes have been injected
docker exec -it west-control-plane ip -c route
docker exec -it west-worker ip -c route
docker exec -it east-control-plane ip -c route
docker exec -it east-worker ip -c route

# Before
$ docker exec -it west-control-plane ip -c route
default via 172.19.0.1 dev eth0
10.0.0.0/24 via 10.0.0.133 dev cilium_host proto kernel src 10.0.0.133
10.0.0.133 dev cilium_host proto kernel scope link
10.0.1.0/24 via 172.19.0.3 dev eth0 proto kernel
172.19.0.0/16 dev eth0 proto kernel scope link src 172.19.0.2

$ docker exec -it east-control-plane ip -c route
default via 172.19.0.1 dev eth0
10.1.0.0/24 via 10.1.0.224 dev cilium_host proto kernel src 10.1.0.224
10.1.0.224 dev cilium_host proto kernel scope link
10.1.1.0/24 via 172.19.0.4 dev eth0 proto kernel
172.19.0.0/16 dev eth0 proto kernel scope link src 172.19.0.5

# After
$ docker exec -it west-control-plane ip -c route
default via 172.19.0.1 dev eth0
10.0.0.0/24 via 10.0.0.133 dev cilium_host proto kernel src 10.0.0.133
10.0.0.133 dev cilium_host proto kernel scope link
10.0.1.0/24 via 172.19.0.3 dev eth0 proto kernel
10.1.0.0/24 via 172.19.0.5 dev eth0 proto kernel # added
10.1.1.0/24 via 172.19.0.4 dev eth0 proto kernel # added
172.19.0.0/16 dev eth0 proto kernel scope link src 172.19.0.2

$ docker exec -it east-control-plane ip -c route
default via 172.19.0.1 dev eth0
10.0.0.0/24 via 172.19.0.2 dev eth0 proto kernel # added
10.0.1.0/24 via 172.19.0.3 dev eth0 proto kernel # added
10.1.0.0/24 via 10.1.0.224 dev cilium_host proto kernel src 10.1.0.224
10.1.0.224 dev cilium_host proto kernel scope link
10.1.1.0/24 via 172.19.0.4 dev eth0 proto kernel
172.19.0.0/16 dev eth0 proto kernel scope link src 172.19.0.5&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;3. ClusterMesh 동작 확인&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now let's check how communication actually works in this environment.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To observe the traffic visually, we'll use Hubble.&lt;/p&gt;
&lt;pre class=&quot;pgsql&quot;&gt;&lt;code&gt;# Add the Helm repo
helm repo add cilium https://helm.cilium.io/

# Enable Hubble on the west cluster
helm upgrade cilium cilium/cilium --version 1.17.6 --namespace kube-system --reuse-values \
--set hubble.enabled=true --set hubble.relay.enabled=true --set hubble.ui.enabled=true \
--set hubble.ui.service.type=NodePort --set hubble.ui.service.nodePort=30001 --kube-context kind-west
kubectl -n kube-system rollout restart ds/cilium --context kind-west

# Enable Hubble on the east cluster
helm upgrade cilium cilium/cilium --version 1.17.6 --namespace kube-system --reuse-values \
--set hubble.enabled=true --set hubble.relay.enabled=true --set hubble.ui.enabled=true \
--set hubble.ui.service.type=NodePort --set hubble.ui.service.nodePort=31001 --kube-context kind-east
kubectl -n kube-system rollout restart ds/cilium --context kind-east

# Verify
kubectl get svc,ep -n kube-system hubble-ui --context kind-west
kubectl get svc,ep -n kube-system hubble-ui --context kind-east

$ kubectl get svc,ep -n kube-system hubble-ui --context kind-west
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME                TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
service/hubble-ui   NodePort   10.2.67.87   &amp;lt;none&amp;gt;        80:30001/TCP   89s

NAME                  ENDPOINTS         AGE
endpoints/hubble-ui   10.0.1.107:8081   89s

$ kubectl get svc,ep -n kube-system hubble-ui --context kind-east
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME                TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/hubble-ui   NodePort   10.3.239.253   &amp;lt;none&amp;gt;        80:31001/TCP   63s

NAME                  ENDPOINTS         AGE
endpoints/hubble-ui   10.1.1.101:8081   63s

# Access hubble-ui
http://localhost:30001
http://localhost:31001&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Once the Hubble UI is reachable, let's deploy a sample application and verify cross-cluster communication directly.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# Deploy sample applications
cat &amp;lt;&amp;lt; EOF | kubectl apply --context kind-west -f -
apiVersion: v1
kind: Pod
metadata:
  name: curl-pod
  labels:
    app: curl
spec:
  containers:
  - name: curl
    image: nicolaka/netshoot
    command: [&quot;tail&quot;]
    args: [&quot;-f&quot;, &quot;/dev/null&quot;]
  terminationGracePeriodSeconds: 0
EOF

cat &amp;lt;&amp;lt; EOF | kubectl apply --context kind-east -f -
apiVersion: v1
kind: Pod
metadata:
  name: curl-pod
  labels:
    app: curl
spec:
  containers:
  - name: curl
    image: nicolaka/netshoot
    command: [&quot;tail&quot;]
    args: [&quot;-f&quot;, &quot;/dev/null&quot;]
  terminationGracePeriodSeconds: 0
EOF


# Verify
kubectl get pod -owide --context kind-west  &amp;amp;&amp;amp; kubectl get pod -owide  --context kind-east 

$ kubectl get pod -owide --context kind-west  &amp;amp;&amp;amp; kubectl get pod -owide  --context kind-east
NAME       READY   STATUS    RESTARTS   AGE    IP           NODE          NOMINATED NODE   READINESS GATES
curl-pod   1/1     Running   0          112s   10.0.1.180   west-worker   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
NAME       READY   STATUS    RESTARTS   AGE    IP           NODE          NOMINATED NODE   READINESS GATES
curl-pod   1/1     Running   0          106s   10.1.1.125   east-worker   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;

# Ping between the pods
kubectl exec -it curl-pod --context kind-west -- ping -c 1 10.1.1.125

$ kubectl exec -it curl-pod --context kind-west -- ping -c 1 10.1.1.125
PING 10.1.1.125 (10.1.1.125) 56(84) bytes of data.
64 bytes from 10.1.1.125: icmp_seq=1 ttl=62 time=0.545 ms

--- 10.1.1.125 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.545/0.545/0.545/0.000 ms


# tcpdump on the destination pod: routed directly, without NAT
kubectl exec -it curl-pod --context kind-east -- tcpdump -i eth0 -nn

$ kubectl exec -it curl-pod --context kind-east -- tcpdump -i eth0 -nn
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
12:48:50.769678 IP 10.0.1.180 &amp;gt; 10.1.1.125: ICMP echo request, id 2, seq 1, length 64
12:48:50.769738 IP 10.1.1.125 &amp;gt; 10.0.1.180: ICMP echo reply, id 2, seq 1, length 64&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the Hubble UI, the cluster names are displayed and you can watch the traffic flow.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1807&quot; data-origin-height=&quot;746&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/IArfF/btsPSrnLDGX/IMk1DSujFqA7aZId7FRJL0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/IArfF/btsPSrnLDGX/IMk1DSujFqA7aZId7FRJL0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/IArfF/btsPSrnLDGX/IMk1DSujFqA7aZId7FRJL0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FIArfF%2FbtsPSrnLDGX%2FIMk1DSujFqA7aZId7FRJL0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1807&quot; height=&quot;746&quot; data-origin-width=&quot;1807&quot; data-origin-height=&quot;746&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;We have seen how ClusterMesh connects the clusters and lets pods in each cluster communicate with each other directly.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;4. Global Service 생성&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now let's build a Global Service, which load-balances across the workloads of both clusters in the ClusterMesh. As in the example below, this is done by adding the &lt;code&gt;service.cilium.io/global: &quot;true&quot;&lt;/code&gt; annotation to a Kubernetes Service.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://docs.cilium.io/en/stable/network/clustermesh/services/&quot;&gt;https://docs.cilium.io/en/stable/network/clustermesh/services/&lt;/a&gt;&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Deploy sample application &amp;amp; service
cat &amp;lt;&amp;lt; EOF | kubectl apply --context kind-west -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webpod
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webpod
  template:
    metadata:
      labels:
        app: webpod
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - sample-app
            topologyKey: &quot;kubernetes.io/hostname&quot;
      containers:
      - name: webpod
        image: traefik/whoami
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: webpod
  labels:
    app: webpod
  annotations:
    service.cilium.io/global: &quot;true&quot;
spec:
  selector:
    app: webpod
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
EOF


# Deploy sample application &amp;amp; service
cat &amp;lt;&amp;lt; EOF | kubectl apply --context kind-east -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webpod
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webpod
  template:
    metadata:
      labels:
        app: webpod
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - sample-app
            topologyKey: &quot;kubernetes.io/hostname&quot;
      containers:
      - name: webpod
        image: traefik/whoami
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: webpod
  labels:
    app: webpod
  annotations:
    service.cilium.io/global: &quot;true&quot;
spec:
  selector:
    app: webpod
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
EOF&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;We deployed the webpod Deployment to both clusters and created a Service in each with the &lt;code&gt;service.cilium.io/global: &quot;true&quot;&lt;/code&gt; annotation.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now let's check the deployed workloads and call the service.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# Verify
kubectl get po -owide --context kind-west &amp;amp;&amp;amp; kubectl get po -owide --context kind-east
kubectl get svc,ep webpod --context kind-west &amp;amp;&amp;amp; kubectl get svc,ep webpod --context kind-east

$ kubectl get po -owide --context kind-west &amp;amp;&amp;amp; kubectl get po -owide --context kind-east
NAME                      READY   STATUS    RESTARTS   AGE     IP           NODE          NOMINATED NODE   READINESS GATES
curl-pod                  1/1     Running   0          36m     10.0.1.180   west-worker   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
webpod-697b545f57-gr2cn   1/1     Running   0          3m35s   10.0.1.86    west-worker   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
webpod-697b545f57-qdq9f   1/1     Running   0          3m35s   10.0.1.131   west-worker   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;

NAME                      READY   STATUS    RESTARTS   AGE     IP           NODE          NOMINATED NODE   READINESS GATES
curl-pod                  1/1     Running   0          36m     10.1.1.125   east-worker   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
webpod-697b545f57-746cp   1/1     Running   0          3m36s   10.1.1.195   east-worker   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
webpod-697b545f57-9mnbq   1/1     Running   0          3m36s   10.1.1.35    east-worker   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;

# Here, only the endpoints local to each cluster are shown.
$ kubectl get svc,ep webpod --context kind-west &amp;amp;&amp;amp; kubectl get svc,ep webpod --context kind-east
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/webpod   ClusterIP   10.2.53.27   &amp;lt;none&amp;gt;        80/TCP    98s

NAME               ENDPOINTS                    AGE
endpoints/webpod   10.0.1.131:80,10.0.1.86:80   98s

Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME             TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
service/webpod   ClusterIP   10.3.71.163   &amp;lt;none&amp;gt;        80/TCP    90s

NAME               ENDPOINTS                    AGE
endpoints/webpod   10.1.1.195:80,10.1.1.35:80   90s


# In cilium service list, however, endpoints from both clusters appear for the same service.
kubectl --context kind-east exec -it -n kube-system ds/cilium -c cilium-agent -- cilium service list --clustermesh-affinity

$ kubectl --context kind-east exec -it -n kube-system ds/cilium -c cilium-agent -- cilium service list --clustermesh-affinity
ID   Frontend               Service Type   Backend
...
13   10.3.71.163:80/TCP     ClusterIP      1 =&amp;gt; 10.0.1.86:80/TCP (active)
                                           2 =&amp;gt; 10.0.1.131:80/TCP (active)
                                           3 =&amp;gt; 10.1.1.195:80/TCP (active)
                                           4 =&amp;gt; 10.1.1.35:80/TCP (active)

kubectl --context kind-west  exec -it -n kube-system ds/cilium -c cilium-agent -- cilium service list --clustermesh-affinity
ID   Frontend               Service Type   Backend
...
13   10.2.53.27:80/TCP      ClusterIP      1 =&amp;gt; 10.0.1.86:80/TCP (active)
                                           2 =&amp;gt; 10.0.1.131:80/TCP (active)
                                           3 =&amp;gt; 10.1.1.195:80/TCP (active)
                                           4 =&amp;gt; 10.1.1.35:80/TCP (active)

# Service call test
kubectl exec -it curl-pod --context kind-west -- sh -c 'while true; do curl -s --connect-timeout 1 webpod ; sleep 1; echo &quot;---&quot;; done;'
kubectl exec -it curl-pod --context kind-east -- sh -c 'while true; do curl -s --connect-timeout 1 webpod ; sleep 1; echo &quot;---&quot;; done;'&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the Hubble UI, you can see the calls being routed to webpod instances in both clusters.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;2004&quot; data-origin-height=&quot;950&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/B0dS7/btsPUVt0O0O/3uIXFItXFXBzVLqo22LbBk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/B0dS7/btsPUVt0O0O/3uIXFItXFXBzVLqo22LbBk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/B0dS7/btsPUVt0O0O/3uIXFItXFXBzVLqo22LbBk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FB0dS7%2FbtsPUVt0O0O%2F3uIXFItXFXBzVLqo22LbBk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2004&quot; height=&quot;950&quot; data-origin-width=&quot;2004&quot; data-origin-height=&quot;950&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;5. Service Affinity 활용&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;When using ClusterMesh with Global Services, the &lt;code&gt;service.cilium.io/affinity: &quot;local|remote|none&quot;&lt;/code&gt; annotation lets you give the service affinity toward the local or remote cluster.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://docs.cilium.io/en/stable/network/clustermesh/affinity/&quot;&gt;https://docs.cilium.io/en/stable/network/clustermesh/affinity/&lt;/a&gt;&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Monitoring: keep calling the service in a loop
kubectl exec -it curl-pod --context kind-west -- sh -c 'while true; do curl -s --connect-timeout 1 webpod ; sleep 1; echo &quot;---&quot;; done;'

# Current Service annotations
kubectl --context kind-west describe svc webpod | grep Annotations -A1
kubectl --context kind-east describe svc webpod | grep Annotations -A1


$ kubectl --context kind-west describe svc webpod | grep Annotations -A1
Annotations:              service.cilium.io/global: true
Selector:                 app=webpod
$ kubectl --context kind-east describe svc webpod | grep Annotations -A1
Annotations:              service.cilium.io/global: true
Selector:                 app=webpod


# Set service affinity to local
kubectl --context kind-west annotate service webpod service.cilium.io/affinity=local --overwrite
kubectl --context kind-west describe svc webpod | grep Annotations -A3

kubectl --context kind-east annotate service webpod service.cilium.io/affinity=local --overwrite
kubectl --context kind-east describe svc webpod | grep Annotations -A3


$ kubectl --context kind-west describe svc webpod | grep Annotations -A3
Annotations:              service.cilium.io/affinity: local
                          service.cilium.io/global: true
Selector:                 app=webpod
Type:                     ClusterIP
$ kubectl --context kind-east describe svc webpod | grep Annotations -A3
Annotations:              service.cilium.io/affinity: local
                          service.cilium.io/global: true
Selector:                 app=webpod
Type:                     ClusterIP

# Verify (backends local to the cluster are marked preferred)
kubectl --context kind-west exec -it -n kube-system ds/cilium -c cilium-agent -- cilium service list --clustermesh-affinity
13   10.2.53.27:80/TCP      ClusterIP      1 =&amp;gt; 10.0.1.86:80/TCP (active) (preferred)
                                           2 =&amp;gt; 10.0.1.131:80/TCP (active) (preferred)
                                           3 =&amp;gt; 10.1.1.195:80/TCP (active)         
                                           4 =&amp;gt; 10.1.1.35:80/TCP (active)   

kubectl --context kind-east exec -it -n kube-system ds/cilium -c cilium-agent -- cilium service list --clustermesh-affinity
13   10.3.71.163:80/TCP     ClusterIP      1 =&amp;gt; 10.0.1.86:80/TCP (active)          
                                           2 =&amp;gt; 10.0.1.131:80/TCP (active)         
                                           3 =&amp;gt; 10.1.1.195:80/TCP (active) (preferred)
                                           4 =&amp;gt; 10.1.1.35:80/TCP (active) (preferred)

# Verify calls
kubectl exec -it curl-pod --context kind-west -- sh -c 'while true; do curl -s --connect-timeout 1 webpod ; sleep 1; echo &quot;---&quot;; done;'
kubectl exec -it curl-pod --context kind-east -- sh -c 'while true; do curl -s --connect-timeout 1 webpod ; sleep 1; echo &quot;---&quot;; done;'&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Running the call test, you can see that each cluster's traffic goes to the pods marked as preferred.&lt;/p&gt;
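&lt;p data-ke-size=&quot;size16&quot;&gt;The preferred-backend behavior can be modeled with a short sketch. This is a simplified model of the selection logic, not Cilium's actual datapath code; the backend addresses mirror the west cluster's cilium service list output above:&lt;/p&gt;
&lt;pre class=&quot;python&quot;&gt;&lt;code&gt;import random

# Backends as (address, active, preferred) tuples.
backends = [
    (&quot;10.0.1.86:80&quot;, True, True),
    (&quot;10.0.1.131:80&quot;, True, True),
    (&quot;10.1.1.195:80&quot;, True, False),
    (&quot;10.1.1.35:80&quot;, True, False),
]

def pick(backends):
    active = [b for b in backends if b[1]]
    preferred = [b for b in active if b[2]]
    # With affinity=local, preferred (local) backends are used while any
    # is active; remote backends only serve as a fallback.
    pool = preferred or active
    return random.choice(pool)[0]

print(pick(backends))  # one of the two local (preferred) backends
&lt;/code&gt;&lt;/pre&gt;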
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;2097&quot; data-origin-height=&quot;1234&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/2ourj/btsPUnxNVSs/pWOksKTXp0kC9kCTBiwxKk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/2ourj/btsPUnxNVSs/pWOksKTXp0kC9kCTBiwxKk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/2ourj/btsPUnxNVSs/pWOksKTXp0kC9kCTBiwxKk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2F2ourj%2FbtsPUnxNVSs%2FpWOksKTXp0kC9kCTBiwxKk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2097&quot; height=&quot;1234&quot; data-origin-width=&quot;2097&quot; data-origin-height=&quot;1234&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Wrapping Up&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In this post, we looked at how Cilium's ClusterMesh connects separate clusters and lets you build services across them.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the next post, we'll look at Cilium's Service Mesh.&lt;/p&gt;</description>
      <category>Cilium</category>
      <category>cilium</category>
      <category>cluster mesh</category>
      <category>kubernetes</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/56</guid>
      <comments>https://a-person.tistory.com/56#entry56comment</comments>
      <pubDate>Thu, 14 Aug 2025 23:02:55 +0900</pubDate>
    </item>
    <item>
      <title>[7] Cilium - BGP Control Plane</title>
      <link>https://a-person.tistory.com/55</link>
<description>&lt;p data-ke-size=&quot;size16&quot;&gt;In this post, we'll look at how to use Cilium's BGP Control Plane. Depending on your network environment, Cilium's PodCIDR or External IPs may need to be reachable from outside the cluster.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In such cases, Cilium's BGP Control Plane can peer with external routers over BGP.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Through this lab, we'll advertise Cilium's PodCIDR and an External IP.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Table of Contents&lt;/h2&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;Lab environment setup&lt;/li&gt;
&lt;li&gt;Applying BGP Control Plane&lt;/li&gt;
&lt;li&gt;BGP advertisement for External IPs&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;1. Lab Environment Setup&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;We'll build the lab environment with Vagrant.&lt;/p&gt;
&lt;pre class=&quot;awk&quot;&gt;&lt;code&gt;mkdir cilium-lab &amp;amp;&amp;amp; cd cilium-lab

curl -O https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/cilium-study/5w/Vagrantfile

vagrant up&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Once the command completes, you can check the created VMs with &lt;code&gt;vagrant status&lt;/code&gt;.&lt;/p&gt;
&lt;pre class=&quot;applescript&quot;&gt;&lt;code&gt;PS C:\cilium-lab\w5&amp;gt; vagrant status
Current machine states:

k8s-ctr                   running (virtualbox)
k8s-w1                    running (virtualbox)
router                    running (virtualbox)
k8s-w0                    running (virtualbox)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Besides the control plane, two worker nodes and an additional VM named router are created, connected to the networks as shown below.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1828&quot; data-origin-height=&quot;1022&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/myOpW/btsPUSD567G/Ae3Ol9P4CxLKJOvaA9frw1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/myOpW/btsPUSD567G/Ae3Ol9P4CxLKJOvaA9frw1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/myOpW/btsPUSD567G/Ae3Ol9P4CxLKJOvaA9frw1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FmyOpW%2FbtsPUSD567G%2FAe3Ol9P4CxLKJOvaA9frw1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1828&quot; height=&quot;1022&quot; data-origin-width=&quot;1828&quot; data-origin-height=&quot;1022&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In this lab, the worker nodes sit on different networks and are connected through the router VM. Each VM has a static route that sends traffic destined for the other network through the router.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The router has two network interfaces, one on 192.168.10.0/24 and one on 192.168.20.0/24, and routes traffic between the two networks. For this lab, the router VM additionally runs the &lt;a href=&quot;https://docs.frrouting.org/en/stable-10.4/about.html&quot;&gt;FRR&lt;/a&gt; routing service.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's connect to the control plane node with &lt;code&gt;vagrant ssh k8s-ctr&lt;/code&gt; and check the basic information.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get no
NAME      STATUS   ROLES           AGE     VERSION
k8s-ctr   Ready    control-plane   21m     v1.33.2
k8s-w0    Ready    &amp;lt;none&amp;gt;          7m35s   v1.33.2
k8s-w1    Ready    &amp;lt;none&amp;gt;          15m     v1.33.2
(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium config view | grep ^ipam
ipam                                              cluster-pool
ipam-cilium-node-update-rate                      15s
(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium config view | grep ^routing
routing-mode                                      native&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For this lab, Cilium was installed with &lt;code&gt;--set bgpControlPlane.enabled=true&lt;/code&gt; so that it can peer over BGP. You can verify this as follows.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium config view | grep -i bgp
bgp-router-id-allocation-ip-pool
bgp-router-id-allocation-mode                     default
bgp-secrets-namespace                             kube-system
enable-bgp-control-plane                          true
enable-bgp-control-plane-status-report            true&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Also, this environment is configured with &lt;code&gt;autoDirectNodeRoutes=false&lt;/code&gt;, so no routes to PodCIDRs are installed by default.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;When &lt;code&gt;autoDirectNodeRoutes=true&lt;/code&gt; is set, static routes to the PodCIDRs of nodes on the same network are added automatically.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;(⎈|HomeLab:N/A) root@k8s-ctr:~# ip route
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
172.20.0.0/24 via 172.20.0.66 dev cilium_host proto kernel src 172.20.0.66
172.20.0.66 dev cilium_host proto kernel scope link
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.100
192.168.20.0/24 via 192.168.10.200 dev eth1 proto static
(⎈|HomeLab:N/A) root@k8s-ctr:~# ip route |grep static
192.168.20.0/24 via 192.168.10.200 dev eth1 proto static
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumnode
NAME      CILIUMINTERNALIP   INTERNALIP       AGE
k8s-ctr   172.20.0.66        192.168.10.100   25m
k8s-w0    172.20.2.75        192.168.20.100   13m
k8s-w1    172.20.1.25        192.168.10.101   21m&lt;/code&gt;&lt;/pre&gt;
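&lt;p data-ke-size=&quot;size16&quot;&gt;The same-subnet rule behind autoDirectNodeRoutes can be sketched as follows. This is an illustrative model only, assuming the /24 node networks of this lab; the per-node PodCIDR values are assumed /24 slices of the 172.20.0.0/16 cluster pool, inferred from the CiliumInternalIP output above:&lt;/p&gt;
&lt;pre class=&quot;python&quot;&gt;&lt;code&gt;import ipaddress

# (name, node IP, PodCIDR) per node, from `kubectl get ciliumnode` above.
nodes = [
    (&quot;k8s-ctr&quot;, &quot;192.168.10.100&quot;, &quot;172.20.0.0/24&quot;),
    (&quot;k8s-w1&quot;, &quot;192.168.10.101&quot;, &quot;172.20.1.0/24&quot;),
    (&quot;k8s-w0&quot;, &quot;192.168.20.100&quot;, &quot;172.20.2.0/24&quot;),
]

def direct_routes(self_ip, nodes, prefixlen=24):
    # autoDirectNodeRoutes only installs a route to a peer's PodCIDR
    # when the peer's node IP is on the same subnet as this node.
    subnet = ipaddress.ip_network(f&quot;{self_ip}/{prefixlen}&quot;, strict=False)
    return [(cidr, ip) for _, ip, cidr in nodes
            if ip != self_ip and ipaddress.ip_address(ip) in subnet]

# From k8s-ctr's point of view, only k8s-w1 gets a direct route;
# k8s-w0 on 192.168.20.0/24 stays unreachable without BGP or tunneling.
print(direct_routes(&quot;192.168.10.100&quot;, nodes))
&lt;/code&gt;&lt;/pre&gt;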
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's deploy a sample application and see what the problem is.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Deploy sample application
cat &amp;lt;&amp;lt; EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webpod
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webpod
  template:
    metadata:
      labels:
        app: webpod
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - sample-app
            topologyKey: &quot;kubernetes.io/hostname&quot;
      containers:
      - name: webpod
        image: traefik/whoami
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: webpod
  labels:
    app: webpod
spec:
  selector:
    app: webpod
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
EOF


# Deploy curl-pod on the k8s-ctr node
cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: curl-pod
  labels:
    app: curl
spec:
  nodeName: k8s-ctr
  containers:
  - name: curl
    image: nicolaka/netshoot
    command: [&quot;tail&quot;]
    args: [&quot;-f&quot;, &quot;/dev/null&quot;]
  terminationGracePeriodSeconds: 0
EOF&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The deployment is complete; now let's call webpod from curl-pod.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get po -owide
NAME                      READY   STATUS    RESTARTS   AGE   IP             NODE      NOMINATED NODE   READINESS GATES
curl-pod                  1/1     Running   0          11m   172.20.0.218   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
webpod-697b545f57-fgxbm   1/1     Running   0          11m   172.20.1.196   k8s-w1    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
webpod-697b545f57-rtmv9   1/1     Running   0          11m   172.20.0.145   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
webpod-697b545f57-wr4tf   1/1     Running   0          11m   172.20.2.246   k8s-w0    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- sh -c 'while true; do curl -s --connect-timeout 1 webpod | grep Hostname; echo &quot;---&quot; ; sleep 1; done'
---
Hostname: webpod-697b545f57-rtmv9
---
---
---
---
Hostname: webpod-697b545f57-rtmv9
---
---&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Running the loop, you can see that only webpod-697b545f57-rtmv9, the pod on the same node, responds; calls to the other pods fail.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;2. Applying BGP Control Plane&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To solve this problem, let's look at Cilium's BGP Control Plane.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In this lab, the router runs the frr routing service. We'll configure the BGP Control Plane's custom resources so that each Cilium node peers with frr over BGP.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;First, connect to the router, check the existing frr configuration, and add the Cilium nodes as neighbors.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# Check the existing frr configuration
vtysh -c 'show running'
root@router:~# vtysh -c 'show running'
Building configuration...

Current configuration:
!
frr version 8.4.4
frr defaults traditional
hostname router
log syslog informational
no ipv6 forwarding
service integrated-vtysh-config
!
router bgp 65000
 bgp router-id 192.168.10.200
 no bgp ebgp-requires-policy
 bgp graceful-restart
 bgp bestpath as-path multipath-relax
 !
 address-family ipv4 unicast
  network 10.10.1.0/24
  maximum-paths 4
 exit-address-family
exit
!
end

# frr configuration file
cat /etc/frr/frr.conf 

root@router:~# cat /etc/frr/frr.conf
# default to using syslog. /etc/rsyslog.d/45-frr.conf places the log in
# /var/log/frr/frr.log
#
# Note:
# FRR's configuration shell, vtysh, dynamically edits the live, in-memory
# configuration while FRR is running. When instructed, vtysh will persist the
# live configuration to this file, overwriting its contents. If you want to
# avoid this, you can edit this file manually before starting FRR, or instruct
# vtysh to write configuration to a different file.
log syslog informational
!
router bgp 65000
  bgp router-id 192.168.10.200
  bgp graceful-restart
  no bgp ebgp-requires-policy
  bgp bestpath as-path multipath-relax
  maximum-paths 4
  network 10.10.1.0/24

# Check BGP peering status
vtysh -c 'show ip bgp summary'

root@router:~# vtysh -c 'show ip bgp summary'
% No BGP neighbors found in VRF default

# Check BGP advertised routes
vtysh -c 'show ip bgp'

root@router:~# vtysh -c 'show ip bgp'
BGP table version is 1, local router ID is 192.168.10.200, vrf id 0
Default local pref 100, local AS 65000
Status codes:  s suppressed, d damped, h history, * valid, &amp;gt; best, = multipath,
               i internal, r RIB-failure, S Stale, R Removed
Nexthop codes: @NNN nexthop's vrf id, &amp;lt; announce-nh-self
Origin codes:  i - IGP, e - EGP, ? - incomplete
RPKI validation codes: V valid, I invalid, N Not found

   Network          Next Hop            Metric LocPrf Weight Path
*&amp;gt; 10.10.1.0/24     0.0.0.0                  0         32768 i

Displayed  1 routes and 1 total paths&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Add the Cilium nodes as neighbors in the frr configuration file, as shown below.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# Peer with the Cilium nodes (add each node as a neighbor)
cat &amp;lt;&amp;lt; EOF &amp;gt;&amp;gt; /etc/frr/frr.conf
  neighbor CILIUM peer-group
  neighbor CILIUM remote-as external
  neighbor 192.168.10.100 peer-group CILIUM
  neighbor 192.168.10.101 peer-group CILIUM
  neighbor 192.168.20.100 peer-group CILIUM 
EOF

cat /etc/frr/frr.conf

root@router:~# cat /etc/frr/frr.conf
# default to using syslog. /etc/rsyslog.d/45-frr.conf places the log in
# /var/log/frr/frr.log
#
# Note:
# FRR's configuration shell, vtysh, dynamically edits the live, in-memory
# configuration while FRR is running. When instructed, vtysh will persist the
# live configuration to this file, overwriting its contents. If you want to
# avoid this, you can edit this file manually before starting FRR, or instruct
# vtysh to write configuration to a different file.
log syslog informational
!
router bgp 65000
  bgp router-id 192.168.10.200
  bgp graceful-restart
  no bgp ebgp-requires-policy
  bgp bestpath as-path multipath-relax
  maximum-paths 4
  network 10.10.1.0/24
  neighbor CILIUM peer-group
  neighbor CILIUM remote-as external
  neighbor 192.168.10.100 peer-group CILIUM
  neighbor 192.168.10.101 peer-group CILIUM
  neighbor 192.168.20.100 peer-group CILIUM


systemctl daemon-reexec &amp;amp;&amp;amp; systemctl restart frr
systemctl status frr --no-pager --full

# Keep this monitoring running!
journalctl -u frr -f

root@router:~# journalctl -u frr -f
Aug 13 20:42:39 router watchfrr[6427]: [YFT0P-5Q5YX] Forked background command [pid 6428]: /usr/lib/frr/watchfrr.sh restart all
Aug 13 20:42:39 router zebra[6440]: [VTVCM-Y2NW3] Configuration Read in Took: 00:00:00
Aug 13 20:42:39 router staticd[6452]: [VTVCM-Y2NW3] Configuration Read in Took: 00:00:00
Aug 13 20:42:39 router bgpd[6445]: [VTVCM-Y2NW3] Configuration Read in Took: 00:00:00
Aug 13 20:42:39 router watchfrr[6427]: [QDG3Y-BY5TN] zebra state -&amp;gt; up : connect succeeded
Aug 13 20:42:39 router systemd[1]: Started frr.service - FRRouting.
Aug 13 20:42:39 router frrinit.sh[6417]:  * Started watchfrr
Aug 13 20:42:39 router watchfrr[6427]: [QDG3Y-BY5TN] bgpd state -&amp;gt; up : connect succeeded
Aug 13 20:42:39 router watchfrr[6427]: [QDG3Y-BY5TN] staticd state -&amp;gt; up : connect succeeded
Aug 13 20:42:39 router watchfrr[6427]: [KWE5Q-QNGFC] all daemons up, doing startup-complete notify
&amp;lt;waiting&amp;gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now let's continue with the BGP configuration on the Cilium side.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;First, to observe the state change, keep monitoring the frr logs and the repeated calls from the curl pod to webpod.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# New terminal 1 (router): keep monitoring!
journalctl -u frr -f

# New terminal 2 (k8s-ctr): repeated calls
kubectl exec -it curl-pod -- sh -c 'while true; do curl -s --connect-timeout 1 webpod | grep Hostname; echo &quot;---&quot; ; sleep 1; done'

# Label the nodes that will run BGP (referenced later by a nodeSelector)
kubectl label nodes k8s-ctr k8s-w0 k8s-w1 enable-bgp=true
kubectl get node -l enable-bgp=true

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get node -l enable-bgp=true
NAME      STATUS   ROLES           AGE   VERSION
k8s-ctr   Ready    control-plane   22h   v1.33.2
k8s-w0    Ready    &amp;lt;none&amp;gt;          22h   v1.33.2
k8s-w1    Ready    &amp;lt;none&amp;gt;          22h   v1.33.2&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Next, create the following Custom Resources on the Cilium side.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Config Cilium BGP
cat &amp;lt;&amp;lt; EOF | kubectl apply -f -
apiVersion: cilium.io/v2
kind: CiliumBGPAdvertisement
metadata:
  name: bgp-advertisements
  labels:
    advertise: bgp
spec:
  advertisements:
    - advertisementType: &quot;PodCIDR&quot; # advertise the PodCIDR via BGP
---
apiVersion: cilium.io/v2
kind: CiliumBGPPeerConfig
metadata:
  name: cilium-peer
spec:
  timers:
    holdTimeSeconds: 9
    keepAliveTimeSeconds: 3
  ebgpMultihop: 2
  gracefulRestart:
    enabled: true
    restartTimeSeconds: 15
  families:
    - afi: ipv4
      safi: unicast
      advertisements:
        matchLabels:
          advertise: &quot;bgp&quot; # must match the CiliumBGPAdvertisement label
---
apiVersion: cilium.io/v2
kind: CiliumBGPClusterConfig
metadata:
  name: cilium-bgp
spec:
  nodeSelector: # select nodes by the label applied earlier
    matchLabels:
      &quot;enable-bgp&quot;: &quot;true&quot;
  bgpInstances:
  - name: &quot;instance-65001&quot;
    localASN: 65001
    peers: # use the AS information defined on the router in this lab
    - name: &quot;tor-switch&quot;
      peerASN: 65000
      peerAddress: 192.168.10.200  # router ip address
      peerConfigRef:
        name: &quot;cilium-peer&quot; # reference the CiliumBGPPeerConfig object created above
EOF

# Verify
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumbgpadvertisements,ciliumbgppeerconfigs,ciliumbgpclusterconfigs
NAME                                                  AGE
ciliumbgpadvertisement.cilium.io/bgp-advertisements   19m

NAME                                        AGE
ciliumbgppeerconfig.cilium.io/cilium-peer   19m

NAME                                          AGE
ciliumbgpclusterconfig.cilium.io/cilium-bgp   19m&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Once Cilium's BGP Control Plane is enabled, BGP can be managed through the Custom Resources below.&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;code&gt;CiliumBGPClusterConfig&lt;/code&gt;: Defines BGP instances and peer configurations that are applied to multiple nodes.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;CiliumBGPPeerConfig&lt;/code&gt;: A common set of BGP peering settings that can be used across multiple peers.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;CiliumBGPAdvertisement&lt;/code&gt;: Defines prefixes that are injected into the BGP routing table (i.e. what gets advertised).&lt;/li&gt;
&lt;li&gt;&lt;code&gt;CiliumBGPNodeConfigOverride&lt;/code&gt;: Defines node-specific BGP configuration to provide finer control.&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
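&lt;p data-ke-size=&quot;size16&quot;&gt;Of these, only &lt;code&gt;CiliumBGPNodeConfigOverride&lt;/code&gt; is not exercised in this walkthrough. As a rough sketch (the values below are illustrative, reusing this lab's node and instance names, and were not applied in the lab), it could pin a single node's router ID or session source address:&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Sketch only (not applied in this lab): a per-node override.
# metadata.name must match the node name, and the instance/peer names
# must match those defined in CiliumBGPClusterConfig.
apiVersion: cilium.io/v2
kind: CiliumBGPNodeConfigOverride
metadata:
  name: k8s-ctr
spec:
  bgpInstances:
    - name: &quot;instance-65001&quot;
      routerID: &quot;192.168.10.100&quot; # pin this node's BGP router ID
      peers:
        - name: &quot;tor-switch&quot;
          localAddress: 192.168.10.100 # source address for the BGP session&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;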
&lt;p data-ke-size=&quot;size16&quot;&gt;The reference relationships between these resources are shown in the figure below: the BGP instance in CiliumBGPClusterConfig references a CiliumBGPPeerConfig by name (peerConfigRef), and the peer config in turn references CiliumBGPAdvertisement resources by label.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1715&quot; data-origin-height=&quot;1011&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/mh0QN/btsPVGwmhzr/owqykvO7QM6iMEdUGwt8lk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/mh0QN/btsPVGwmhzr/owqykvO7QM6iMEdUGwt8lk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/mh0QN/btsPVGwmhzr/owqykvO7QM6iMEdUGwt8lk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fmh0QN%2FbtsPVGwmhzr%2FowqykvO7QM6iMEdUGwt8lk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1715&quot; height=&quot;1011&quot; data-origin-width=&quot;1715&quot; data-origin-height=&quot;1011&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://docs.cilium.io/en/stable/network/bgp-control-plane/bgp-control-plane-v2/&quot;&gt;https://docs.cilium.io/en/stable/network/bgp-control-plane/bgp-control-plane-v2/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Once these resources are created, frr receives BGP updates, as shown below.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;root@router:~# journalctl -u frr -f
...
Aug 13 21:30:02 router bgpd[6445]: [M59KS-A3ZXZ] bgp_update_receive: rcvd End-of-RIB for IPv4 Unicast from 192.168.20.100 in vrf default
Aug 13 21:30:02 router bgpd[6445]: [M59KS-A3ZXZ] bgp_update_receive: rcvd End-of-RIB for IPv4 Unicast from 192.168.10.101 in vrf default
Aug 13 21:30:03 router bgpd[6445]: [M59KS-A3ZXZ] bgp_update_receive: rcvd End-of-RIB for IPv4 Unicast from 192.168.10.100 in vrf default&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Looking at a node, nothing is listening on port 179; instead, the cilium-agent initiates the connection out to port 179 on the router.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# Check the BGP connection
ss -tnlp | grep 179
ss -tnp | grep 179

(⎈|HomeLab:N/A) root@k8s-ctr:~# ss -tnlp | grep 179
(⎈|HomeLab:N/A) root@k8s-ctr:~# ss -tnp | grep 179
ESTAB 0      0               192.168.10.100:49941          192.168.10.200:179   users:((&quot;cilium-agent&quot;,pid=5626,fd=57))

# Check Cilium BGP state
cilium bgp peers
cilium bgp routes available ipv4 unicast

# each node has an established session with the router
(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium bgp peers
Node      Local AS   Peer AS   Peer Address     Session State   Uptime   Family         Received   Advertised
k8s-ctr   65001      65000     192.168.10.200   established     17m41s   ipv4/unicast   4          2
k8s-w0    65001      65000     192.168.10.200   established     17m41s   ipv4/unicast   4          2
k8s-w1    65001      65000     192.168.10.200   established     17m40s   ipv4/unicast   4          2

# each node is advertising its PodCIDR
(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium bgp routes available ipv4 unicast
Node      VRouter   Prefix          NextHop   Age      Attrs
k8s-ctr   65001     172.20.0.0/24   0.0.0.0   17m48s   [{Origin: i} {Nexthop: 0.0.0.0}]
k8s-w0    65001     172.20.2.0/24   0.0.0.0   17m48s   [{Origin: i} {Nexthop: 0.0.0.0}]
k8s-w1    65001     172.20.1.0/24   0.0.0.0   17m48s   [{Origin: i} {Nexthop: 0.0.0.0}]


# BGP status information is also available here
kubectl get ciliumbgpnodeconfigs -o yaml | yq

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumbgpnodeconfigs -o yaml | yq
{
  &quot;apiVersion&quot;: &quot;v1&quot;,
  &quot;items&quot;: [
    {
      &quot;apiVersion&quot;: &quot;cilium.io/v2&quot;,
      &quot;kind&quot;: &quot;CiliumBGPNodeConfig&quot;,
      &quot;metadata&quot;: {
        &quot;creationTimestamp&quot;: &quot;2025-08-13T12:30:13Z&quot;,
        &quot;generation&quot;: 1,
        &quot;name&quot;: &quot;k8s-ctr&quot;,
        &quot;ownerReferences&quot;: [
          {
            &quot;apiVersion&quot;: &quot;cilium.io/v2&quot;,
            &quot;controller&quot;: true,
            &quot;kind&quot;: &quot;CiliumBGPClusterConfig&quot;,
            &quot;name&quot;: &quot;cilium-bgp&quot;,
            &quot;uid&quot;: &quot;6ab5f53f-b4fd-428e-adfa-1e7fc1b3648d&quot;
          }
        ],
        &quot;resourceVersion&quot;: &quot;37233&quot;,
        &quot;uid&quot;: &quot;f43979f2-7d3c-4d10-8a89-d9fc0cf409e6&quot;
      },
      &quot;spec&quot;: {
        &quot;bgpInstances&quot;: [
          {
            &quot;localASN&quot;: 65001,
            &quot;name&quot;: &quot;instance-65001&quot;,
            &quot;peers&quot;: [
              {
                &quot;name&quot;: &quot;tor-switch&quot;,
                &quot;peerASN&quot;: 65000,
                &quot;peerAddress&quot;: &quot;192.168.10.200&quot;,
                &quot;peerConfigRef&quot;: {
                  &quot;name&quot;: &quot;cilium-peer&quot;
                }
              }
            ]
          }
        ]
      },
      &quot;status&quot;: {
        &quot;bgpInstances&quot;: [
          {
            &quot;localASN&quot;: 65001,
            &quot;name&quot;: &quot;instance-65001&quot;,
            &quot;peers&quot;: [
              {
                &quot;establishedTime&quot;: &quot;2025-08-13T12:30:16Z&quot;,
                &quot;name&quot;: &quot;tor-switch&quot;,
                &quot;peerASN&quot;: 65000,
                &quot;peerAddress&quot;: &quot;192.168.10.200&quot;,
                &quot;peeringState&quot;: &quot;established&quot;,
                &quot;routeCount&quot;: [
                  {
                    &quot;advertised&quot;: 2,
                    &quot;afi&quot;: &quot;ipv4&quot;,
                    &quot;received&quot;: 3,
                    &quot;safi&quot;: &quot;unicast&quot;
                  }
                ],
                &quot;timers&quot;: {
                  &quot;appliedHoldTimeSeconds&quot;: 9,
                  &quot;appliedKeepaliveSeconds&quot;: 3
                }
              }
            ]
          }
        ]
      }
    },
    {
      &quot;apiVersion&quot;: &quot;cilium.io/v2&quot;,
      &quot;kind&quot;: &quot;CiliumBGPNodeConfig&quot;,
      &quot;metadata&quot;: {
        &quot;creationTimestamp&quot;: &quot;2025-08-13T12:30:13Z&quot;,
        &quot;generation&quot;: 1,
        &quot;name&quot;: &quot;k8s-w0&quot;,
        &quot;ownerReferences&quot;: [
          {
            &quot;apiVersion&quot;: &quot;cilium.io/v2&quot;,
            &quot;controller&quot;: true,
            &quot;kind&quot;: &quot;CiliumBGPClusterConfig&quot;,
            &quot;name&quot;: &quot;cilium-bgp&quot;,
            &quot;uid&quot;: &quot;6ab5f53f-b4fd-428e-adfa-1e7fc1b3648d&quot;
          }
        ],
        &quot;resourceVersion&quot;: &quot;37226&quot;,
        &quot;uid&quot;: &quot;b570c84e-0520-43b7-85d0-2466df0c9e04&quot;
      },
      &quot;spec&quot;: {
        &quot;bgpInstances&quot;: [
          {
            &quot;localASN&quot;: 65001,
            &quot;name&quot;: &quot;instance-65001&quot;,
            &quot;peers&quot;: [
              {
                &quot;name&quot;: &quot;tor-switch&quot;,
                &quot;peerASN&quot;: 65000,
                &quot;peerAddress&quot;: &quot;192.168.10.200&quot;,
                &quot;peerConfigRef&quot;: {
                  &quot;name&quot;: &quot;cilium-peer&quot;
                }
              }
            ]
          }
        ]
      },
      &quot;status&quot;: {
        &quot;bgpInstances&quot;: [
          {
            &quot;localASN&quot;: 65001,
            &quot;name&quot;: &quot;instance-65001&quot;,
            &quot;peers&quot;: [
              {
                &quot;establishedTime&quot;: &quot;2025-08-13T12:30:27Z&quot;,
                &quot;name&quot;: &quot;tor-switch&quot;,
                &quot;peerASN&quot;: 65000,
                &quot;peerAddress&quot;: &quot;192.168.10.200&quot;,
                &quot;peeringState&quot;: &quot;established&quot;,
                &quot;routeCount&quot;: [
                  {
                    &quot;advertised&quot;: 2,
                    &quot;afi&quot;: &quot;ipv4&quot;,
                    &quot;received&quot;: 1,
                    &quot;safi&quot;: &quot;unicast&quot;
                  }
                ],
                &quot;timers&quot;: {
                  &quot;appliedHoldTimeSeconds&quot;: 9,
                  &quot;appliedKeepaliveSeconds&quot;: 3
                }
              }
            ]
          }
        ]
      }
    },
    {
      &quot;apiVersion&quot;: &quot;cilium.io/v2&quot;,
      &quot;kind&quot;: &quot;CiliumBGPNodeConfig&quot;,
      &quot;metadata&quot;: {
        &quot;creationTimestamp&quot;: &quot;2025-08-13T12:30:13Z&quot;,
        &quot;generation&quot;: 1,
        &quot;name&quot;: &quot;k8s-w1&quot;,
        &quot;ownerReferences&quot;: [
          {
            &quot;apiVersion&quot;: &quot;cilium.io/v2&quot;,
            &quot;controller&quot;: true,
            &quot;kind&quot;: &quot;CiliumBGPClusterConfig&quot;,
            &quot;name&quot;: &quot;cilium-bgp&quot;,
            &quot;uid&quot;: &quot;6ab5f53f-b4fd-428e-adfa-1e7fc1b3648d&quot;
          }
        ],
        &quot;resourceVersion&quot;: &quot;37239&quot;,
        &quot;uid&quot;: &quot;2c2497ce-e5de-40d2-aa80-c6b24d0d9dcc&quot;
      },
      &quot;spec&quot;: {
        &quot;bgpInstances&quot;: [
          {
            &quot;localASN&quot;: 65001,
            &quot;name&quot;: &quot;instance-65001&quot;,
            &quot;peers&quot;: [
              {
                &quot;name&quot;: &quot;tor-switch&quot;,
                &quot;peerASN&quot;: 65000,
                &quot;peerAddress&quot;: &quot;192.168.10.200&quot;,
                &quot;peerConfigRef&quot;: {
                  &quot;name&quot;: &quot;cilium-peer&quot;
                }
              }
            ]
          }
        ]
      },
      &quot;status&quot;: {
        &quot;bgpInstances&quot;: [
          {
            &quot;localASN&quot;: 65001,
            &quot;name&quot;: &quot;instance-65001&quot;,
            &quot;peers&quot;: [
              {
                &quot;establishedTime&quot;: &quot;2025-08-13T12:27:51Z&quot;,
                &quot;name&quot;: &quot;tor-switch&quot;,
                &quot;peerASN&quot;: 65000,
                &quot;peerAddress&quot;: &quot;192.168.10.200&quot;,
                &quot;peeringState&quot;: &quot;established&quot;,
                &quot;routeCount&quot;: [
                  {
                    &quot;advertised&quot;: 2,
                    &quot;afi&quot;: &quot;ipv4&quot;,
                    &quot;received&quot;: 1,
                    &quot;safi&quot;: &quot;unicast&quot;
                  }
                ],
                &quot;timers&quot;: {
                  &quot;appliedHoldTimeSeconds&quot;: 9,
                  &quot;appliedKeepaliveSeconds&quot;: 3
                }
              }
            ]
          }
        ]
      }
    }
  ],
  &quot;kind&quot;: &quot;List&quot;,
  &quot;metadata&quot;: {
    &quot;resourceVersion&quot;: &quot;&quot;
  }
}

# the router has installed routes for the prefixes it received
ip -c route | grep bgp

root@router:~# ip -c route | grep bgp
172.20.0.0/24 nhid 32 via 192.168.10.100 dev eth1 proto bgp metric 20
172.20.1.0/24 nhid 30 via 192.168.10.101 dev eth1 proto bgp metric 20
172.20.2.0/24 nhid 29 via 192.168.20.100 dev eth2 proto bgp metric 20


# Check neighbor information on the router
vtysh -c 'show ip bgp summary'

root@router:~# vtysh -c 'show ip bgp summary'

IPv4 Unicast Summary (VRF default):
BGP router identifier 192.168.10.200, local AS number 65000 vrf-id 0
BGP table version 4
RIB entries 7, using 1344 bytes of memory
Peers 3, using 2172 KiB of memory
Peer groups 1, using 64 bytes of memory

Neighbor        V         AS   MsgRcvd   MsgSent   TblVer  InQ OutQ  Up/Down State/PfxRcd   PfxSnt Desc
192.168.10.100  4      65001       422       426        0    0    0 00:21:00            1        4 N/A
192.168.10.101  4      65001       424       426        0    0    0 00:21:01            1        4 N/A
192.168.20.100  4      65001       424       426        0    0    0 00:21:01            1        4 N/A

Total number of neighbors 3

vtysh -c 'show ip bgp'

root@router:~# vtysh -c 'show ip bgp'
BGP table version is 4, local router ID is 192.168.10.200, vrf id 0
Default local pref 100, local AS 65000
Status codes:  s suppressed, d damped, h history, * valid, &amp;gt; best, = multipath,
               i internal, r RIB-failure, S Stale, R Removed
Nexthop codes: @NNN nexthop's vrf id, &amp;lt; announce-nh-self
Origin codes:  i - IGP, e - EGP, ? - incomplete
RPKI validation codes: V valid, I invalid, N Not found

   Network          Next Hop            Metric LocPrf Weight Path
*&amp;gt; 10.10.1.0/24     0.0.0.0                  0         32768 i
*&amp;gt; 172.20.0.0/24    192.168.10.100                         0 65001 i
*&amp;gt; 172.20.1.0/24    192.168.10.101                         0 65001 i
*&amp;gt; 172.20.2.0/24    192.168.20.100                         0 65001 i

Displayed  4 routes and 4 total paths&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;At this point the router has learned the Cilium nodes' PodCIDRs and can route to them. Even so, curl-pod -&amp;gt; webpod traffic still only partially succeeds: the nodes themselves have not learned each other's PodCIDRs, so only the replica on the local node responds.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# New terminal 2 (k8s-ctr)
kubectl exec -it curl-pod -- sh -c 'while true; do curl -s --connect-timeout 1 webpod | grep Hostname; echo &quot;---&quot; ; sleep 1; done'
---
---
Hostname: webpod-697b545f57-rtmv9
---
Hostname: webpod-697b545f57-rtmv9
---
---
---
Hostname: webpod-697b545f57-rtmv9
---
---
---&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;I also tried registering each node as a peer of the others through CiliumBGPClusterConfig, but as seen above, Cilium itself does not appear to act as a route receiver (it only advertises), and testing suggests this setup is not possible.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Looking at a few related Cilium issues, Cilium seems to be implemented to only advertise routes to its upstream peers.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://github.com/cilium/cilium/issues/34296&quot;&gt;https://github.com/cilium/cilium/issues/34296&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1203&quot; data-origin-height=&quot;111&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/kWEb6/btsPUOVXbT2/ysDIoVLrdMrNZ3KVtCbvc0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/kWEb6/btsPUOVXbT2/ysDIoVLrdMrNZ3KVtCbvc0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/kWEb6/btsPUOVXbT2/ysDIoVLrdMrNZ3KVtCbvc0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FkWEb6%2FbtsPUOVXbT2%2FysDIoVLrdMrNZ3KVtCbvc0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1203&quot; height=&quot;111&quot; data-origin-width=&quot;1203&quot; data-origin-height=&quot;111&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Also, as covered earlier, auto-direct-node-routes can connect nodes over L2, but not across L3 boundaries. Full-mesh BGP between nodes is currently not planned.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://github.com/cilium/cilium/issues/31124&quot;&gt;https://github.com/cilium/cilium/issues/31124&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1114&quot; data-origin-height=&quot;208&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bhJyS1/btsPUbDZSaC/MyF14HFbG5A2AxxeEXXKBK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bhJyS1/btsPUbDZSaC/MyF14HFbG5A2AxxeEXXKBK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bhJyS1/btsPUbDZSaC/MyF14HFbG5A2AxxeEXXKBK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbhJyS1%2FbtsPUbDZSaC%2FMyF14HFbG5A2AxxeEXXKBK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1114&quot; height=&quot;208&quot; data-origin-width=&quot;1114&quot; data-origin-height=&quot;208&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
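&lt;p data-ke-size=&quot;size16&quot;&gt;For reference, the auto-direct-node-routes option mentioned above corresponds to the Helm values sketched below (values assumed for this lab; it installs per-node routes only when the nodes share an L2 segment, and requires native routing):&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Sketch: Helm values enabling direct node routes in native-routing mode
routingMode: native
autoDirectNodeRoutes: true
ipv4NativeRoutingCIDR: 172.20.0.0/16 # this lab's Pod CIDR range&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;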
&lt;p data-ke-size=&quot;size16&quot;&gt;A comment on the issue below mentions that this is possible with &lt;a href=&quot;https://docs.cilium.io/en/latest/network/kube-router/#kube-router&quot;&gt;KubeRouter&lt;/a&gt; in Kubernetes IPAM mode, but I did not have time to test it further.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://github.com/cilium/cilium/issues/31091#issuecomment-1976188804&quot;&gt;https://github.com/cilium/cilium/issues/31091#issuecomment-1976188804&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1259&quot; data-origin-height=&quot;393&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bt5fEx/btsPUhRIzCp/fHpdLWNGTqWFYYJ9BV9p7K/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bt5fEx/btsPUhRIzCp/fHpdLWNGTqWFYYJ9BV9p7K/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bt5fEx/btsPUhRIzCp/fHpdLWNGTqWFYYJ9BV9p7K/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fbt5fEx%2FbtsPUhRIzCp%2FfHpdLWNGTqWFYYJ9BV9p7K%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1259&quot; height=&quot;393&quot; data-origin-width=&quot;1259&quot; data-origin-height=&quot;393&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;On the other hand, if every node uses the ToR router as its default gateway, and the ToR router knows the routes to the PodCIDRs, traffic can flow even without registering the other nodes' PodCIDRs on each node.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;However, in this lab environment the default route points to eth0, which performs NAT, so separate routing is required.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;(⎈|HomeLab:N/A) root@k8s-ctr:~# ip route
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100 # default
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
172.20.0.0/24 via 172.20.0.66 dev cilium_host proto kernel src 172.20.0.66
172.20.0.66 dev cilium_host proto kernel scope link
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.100
192.168.20.0/24 via 192.168.10.200 dev eth1 proto static&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To fix this, add static routes as shown below so that the router (192.168.10.200) handles traffic destined for the PodCIDRs.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Route the entire Pod CIDR range through eth1 (via the router)
ip route add 172.20.0.0/16 via 192.168.10.200
for i in k8s-w0 k8s-w1 router ; do echo &quot;&amp;gt;&amp;gt; node : $i &amp;lt;&amp;lt;&quot;; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@$i hostname; echo; done
sshpass -p 'vagrant' ssh vagrant@k8s-w1 sudo ip route add 172.20.0.0/16 via 192.168.10.200
sshpass -p 'vagrant' ssh vagrant@k8s-w0 sudo ip route add 172.20.0.0/16 via 192.168.20.200

# recheck the routes the router learned via BGP
sshpass -p 'vagrant' ssh vagrant@router ip -c route | grep bgp
172.20.0.0/24 nhid 64 via 192.168.10.100 dev eth1 proto bgp metric 20 
172.20.1.0/24 nhid 60 via 192.168.10.101 dev eth1 proto bgp metric 20 
172.20.2.0/24 nhid 62 via 192.168.20.100 dev eth2 proto bgp metric 20 

# communication now works!
kubectl exec -it curl-pod -- sh -c 'while true; do curl -s --connect-timeout 1 webpod | grep Hostname; echo &quot;---&quot; ; sleep 1; done'
---
---
Hostname: webpod-697b545f57-rtmv9
---
---
---
Hostname: webpod-697b545f57-wr4tf # all replicas reachable after adding routes on each node
---
Hostname: webpod-697b545f57-wr4tf
---
Hostname: webpod-697b545f57-rtmv9
---
Hostname: webpod-697b545f57-wr4tf
---
Hostname: webpod-697b545f57-rtmv9
---
Hostname: webpod-697b545f57-bvm82
---&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;3. BGP Advertisement of External IPs&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In a previous post, we looked at how to make the External IP of a LoadBalancer-type Service reachable from outside the cluster using Cilium LoadBalancer IPAM and L2 Announcements.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://a-person.tistory.com/54&quot;&gt;https://a-person.tistory.com/54&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Cilium's BGP Control Plane can also advertise these External IPs via BGP.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;2560&quot; data-origin-height=&quot;1222&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/buuNqW/btsPVcI3hr5/NcAbFtveZTXBnbtcWg1O3K/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/buuNqW/btsPVcI3hr5/NcAbFtveZTXBnbtcWg1O3K/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/buuNqW/btsPVcI3hr5/NcAbFtveZTXBnbtcWg1O3K/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbuuNqW%2FbtsPVcI3hr5%2FNcAbFtveZTXBnbtcWg1O3K%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2560&quot; height=&quot;1222&quot; data-origin-width=&quot;2560&quot; data-origin-height=&quot;1222&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://isovalent.com/blog/post/migrating-from-metallb-to-cilium/#l3-announcement-over-bgp&quot;&gt;https://isovalent.com/blog/post/migrating-from-metallb-to-cilium/#l3-announcement-over-bgp&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For this exercise, set up the LoadBalancer IP pool and the Service as follows.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# The pool will be announced over BGP (LB IPAM Announcement over BGP), so it does not need to be in the node network range!
cat &amp;lt;&amp;lt; EOF | kubectl apply -f -
apiVersion: &quot;cilium.io/v2&quot;
kind: CiliumLoadBalancerIPPool
metadata:
  name: &quot;cilium-pool&quot;
spec:
  allowFirstLastIPs: &quot;No&quot;
  blocks:
  - cidr: &quot;172.16.1.0/24&quot;
EOF

kubectl get ippool

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ippool
NAME          DISABLED   CONFLICTING   IPS AVAILABLE   AGE
cilium-pool   false      False         254             9s

# Change the existing Service type to LoadBalancer
kubectl patch svc webpod -p '{&quot;spec&quot;: {&quot;type&quot;: &quot;LoadBalancer&quot;}}'
kubectl get svc webpod 


(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl patch svc webpod -p '{&quot;spec&quot;: {&quot;type&quot;: &quot;LoadBalancer&quot;}}'
service/webpod patched
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc webpod
NAME     TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
webpod   LoadBalancer   10.96.120.235   172.16.1.1    80:31039/TCP   24h

# Verify
kubectl get ippool

# IPs Available: 254 -&amp;gt; 253
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ippool
NAME          DISABLED   CONFLICTING   IPS AVAILABLE   AGE
cilium-pool   false      False         253             59s

# Send curl requests to the LB IP
kubectl get svc webpod -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
LBIP=$(kubectl get svc webpod -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -s $LBIP
curl -s $LBIP | grep Hostname
curl -s $LBIP | grep RemoteAddr

(⎈|HomeLab:N/A) root@k8s-ctr:~# curl -s $LBIP | grep RemoteAddr
RemoteAddr: 172.20.0.66:53430

# Check the routing table on the router
root@router:~# ip route
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.10.1.0/24 dev loop1 proto kernel scope link src 10.10.1.200
10.10.2.0/24 dev loop2 proto kernel scope link src 10.10.2.200
172.20.0.0/24 nhid 108 via 192.168.10.100 dev eth1 proto bgp metric 20
172.20.1.0/24 nhid 106 via 192.168.10.101 dev eth1 proto bgp metric 20
172.20.2.0/24 nhid 107 via 192.168.20.100 dev eth2 proto bgp metric 20
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.200
192.168.20.0/24 dev eth2 proto kernel scope link src 192.168.20.200
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now let's configure BGP advertisement for the LoadBalancer External IP.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Advertise the LoadBalancer External IP of the selected service over BGP
cat &amp;lt;&amp;lt; EOF | kubectl apply -f -
apiVersion: cilium.io/v2
kind: CiliumBGPAdvertisement
metadata:
  name: bgp-advertisements-lb-exip-webpod
  labels:
    advertise: bgp
spec:
  advertisements:
    - advertisementType: &quot;Service&quot;
      service:
        addresses:
          - LoadBalancerIP
      selector:             
        matchExpressions:
          - { key: app, operator: In, values: [ webpod ] }
EOF

kubectl get CiliumBGPAdvertisement

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get CiliumBGPAdvertisement
NAME                                AGE
bgp-advertisements                  135m
bgp-advertisements-lb-exip-webpod   13s


# Verify
kubectl exec -it -n kube-system ds/cilium -- cilium-dbg bgp route-policies

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -- cilium-dbg bgp route-policies
VRouter   Policy Name                                             Type     Match Peers         Match Families   Match Prefixes (Min..Max Len)   RIB Action   Path Actions
65001     allow-local                                             import                                                                        accept
65001     tor-switch-ipv4-PodCIDR                                 export   192.168.10.200/32                    172.20.0.0/24 (24..24)          accept
65001     tor-switch-ipv4-Service-webpod-default-LoadBalancerIP   export   192.168.10.200/32                    172.16.1.1/32 (32..32)          accept


cilium bgp routes available ipv4 unicast

# The route is advertised from every node.
(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium bgp routes available ipv4 unicast
Node      VRouter   Prefix          NextHop   Age      Attrs
k8s-ctr   65001     172.16.1.1/32   0.0.0.0   41s      [{Origin: i} {Nexthop: 0.0.0.0}]
          65001     172.20.0.0/24   0.0.0.0   44m56s   [{Origin: i} {Nexthop: 0.0.0.0}]
k8s-w0    65001     172.16.1.1/32   0.0.0.0   41s      [{Origin: i} {Nexthop: 0.0.0.0}]
          65001     172.20.2.0/24   0.0.0.0   44m56s   [{Origin: i} {Nexthop: 0.0.0.0}]
k8s-w1    65001     172.16.1.1/32   0.0.0.0   41s      [{Origin: i} {Nexthop: 0.0.0.0}]
          65001     172.20.1.0/24   0.0.0.0   44m56s   [{Origin: i} {Nexthop: 0.0.0.0}]

# With externalTrafficPolicy: Cluster, every node accepts connections.
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl describe svc webpod | grep 'Traffic Policy'
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster


# Traffic can be forwarded to any node currently running BGP
root@router:~# ip route
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.10.1.0/24 dev loop1 proto kernel scope link src 10.10.1.200
10.10.2.0/24 dev loop2 proto kernel scope link src 10.10.2.200
172.16.1.1 nhid 113 proto bgp metric 20
        nexthop via 192.168.10.101 dev eth1 weight 1
        nexthop via 192.168.20.100 dev eth2 weight 1
        nexthop via 192.168.10.100 dev eth1 weight 1
172.20.0.0/24 nhid 108 via 192.168.10.100 dev eth1 proto bgp metric 20
172.20.1.0/24 nhid 106 via 192.168.10.101 dev eth1 proto bgp metric 20
172.20.2.0/24 nhid 107 via 192.168.20.100 dev eth2 proto bgp metric 20
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.200
192.168.20.0/24 dev eth2 proto kernel scope link src 192.168.20.200


sudo vtysh -c 'show ip route bgp'
sudo vtysh -c 'show ip bgp summary'
sudo vtysh -c 'show ip bgp'

root@router:~# sudo vtysh -c 'show ip route bgp'
Codes: K - kernel route, C - connected, S - static, R - RIP,
       O - OSPF, I - IS-IS, B - BGP, E - EIGRP, N - NHRP,
       T - Table, v - VNC, V - VNC-Direct, A - Babel, F - PBR,
       f - OpenFabric,
       &amp;gt; - selected route, * - FIB route, q - queued, r - rejected, b - backup
       t - trapped, o - offload failure

B&amp;gt;* 172.16.1.1/32 [20/0] via 192.168.10.100, eth1, weight 1, 00:01:12
  *                      via 192.168.10.101, eth1, weight 1, 00:01:12
  *                      via 192.168.20.100, eth2, weight 1, 00:01:12
B&amp;gt;* 172.20.0.0/24 [20/0] via 192.168.10.100, eth1, weight 1, 00:45:24
B&amp;gt;* 172.20.1.0/24 [20/0] via 192.168.10.101, eth1, weight 1, 00:45:24
B&amp;gt;* 172.20.2.0/24 [20/0] via 192.168.20.100, eth2, weight 1, 00:45:24
root@router:~# sudo vtysh -c 'show ip bgp summary'

IPv4 Unicast Summary (VRF default):
BGP router identifier 192.168.10.200, local AS number 65000 vrf-id 0
BGP table version 17
RIB entries 9, using 1728 bytes of memory
Peers 3, using 2172 KiB of memory
Peer groups 1, using 64 bytes of memory

Neighbor        V         AS   MsgRcvd   MsgSent   TblVer  InQ OutQ  Up/Down State/PfxRcd   PfxSnt Desc
192.168.10.100  4      65001      1447      1450        0    0    0 00:46:35            2        5 N/A
192.168.10.101  4      65001      1450      1454        0    0    0 00:46:36            2        5 N/A
192.168.20.100  4      65001      1450      1452        0    0    0 00:46:36            2        5 N/A

Total number of neighbors 3
root@router:~# sudo vtysh -c 'show ip bgp'
BGP table version is 17, local router ID is 192.168.10.200, vrf id 0
Default local pref 100, local AS 65000
Status codes:  s suppressed, d damped, h history, * valid, &amp;gt; best, = multipath,
               i internal, r RIB-failure, S Stale, R Removed
Nexthop codes: @NNN nexthop's vrf id, &amp;lt; announce-nh-self
Origin codes:  i - IGP, e - EGP, ? - incomplete
RPKI validation codes: V valid, I invalid, N Not found

   Network          Next Hop            Metric LocPrf Weight Path
*&amp;gt; 10.10.1.0/24     0.0.0.0                  0         32768 i
*&amp;gt; 172.16.1.1/32    192.168.10.100                         0 65001 i
*=                  192.168.20.100                         0 65001 i
*=                  192.168.10.101                         0 65001 i
*&amp;gt; 172.20.0.0/24    192.168.10.100                         0 65001 i
*&amp;gt; 172.20.1.0/24    192.168.10.101                         0 65001 i
*&amp;gt; 172.20.2.0/24    192.168.20.100                         0 65001 i

Displayed  5 routes and 7 total paths


sudo vtysh -c 'show ip bgp 172.16.1.1/32'

root@router:~# sudo vtysh -c 'show ip bgp 172.16.1.1/32'
BGP routing table entry for 172.16.1.1/32, version 17
Paths: (3 available, best #1, table default)
  Advertised to non peer-group peers:
  192.168.10.100 192.168.10.101 192.168.20.100
  65001
    192.168.10.100 from 192.168.10.100 (192.168.10.100)
      Origin IGP, valid, external, multipath, best (Router ID)
      Last update: Wed Aug 13 23:44:49 2025
  65001
    192.168.20.100 from 192.168.20.100 (192.168.20.100)
      Origin IGP, valid, external, multipath
      Last update: Wed Aug 13 23:44:49 2025
  65001
    192.168.10.101 from 192.168.10.101 (192.168.10.101)
      Origin IGP, valid, external, multipath
      Last update: Wed Aug 13 23:44:49 2025
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's test connectivity.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# Connectivity test
LBIP=172.16.1.1
curl -s $LBIP
curl -s $LBIP | grep Hostname
curl -s $LBIP | grep RemoteAddr

# Repeated requests
for i in {1..100};  do curl -s $LBIP | grep Hostname; done | sort | uniq -c | sort -nr
while true; do curl -s $LBIP | egrep 'Hostname|RemoteAddr' ; sleep 0.1; done

# Requests succeed.
(⎈|HomeLab:N/A) root@k8s-ctr:~# for i in {1..100};  do curl -s $LBIP | grep Hostname; done | sort | uniq -c | sort -nr
     41 Hostname: webpod-697b545f57-wr4tf
     30 Hostname: webpod-697b545f57-bvm82
     29 Hostname: webpod-697b545f57-rtmv9
(⎈|HomeLab:N/A) root@k8s-ctr:~# while true; do curl -s $LBIP | egrep 'Hostname|RemoteAddr' ; sleep 0.1; done
Hostname: webpod-697b545f57-rtmv9
RemoteAddr: 172.20.0.66:57324
Hostname: webpod-697b545f57-bvm82
RemoteAddr: 192.168.10.100:43910
Hostname: webpod-697b545f57-wr4tf
RemoteAddr: 192.168.10.100:45136
Hostname: webpod-697b545f57-wr4tf
RemoteAddr: 192.168.10.100:45144
Hostname: webpod-697b545f57-rtmv9
RemoteAddr: 172.20.0.66:57330
Hostname: webpod-697b545f57-rtmv9
RemoteAddr: 172.20.0.66:57332
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's dig further into how an environment with External Traffic Policy set to Cluster, as confirmed above, actually behaves.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# On k8s-ctr, scale down to replicas=2
kubectl scale deployment webpod --replicas 2
kubectl get pod -owide

# The pods are now running only on k8s-ctr and k8s-w0.
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl scale deployment webpod --replicas 2
deployment.apps/webpod scaled
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -owide
NAME                      READY   STATUS    RESTARTS   AGE   IP             NODE      NOMINATED NODE   READINESS GATES
curl-pod                  1/1     Running   0          25h   172.20.0.218   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
webpod-697b545f57-bvm82   1/1     Running   0          24h   172.20.2.129   k8s-w0    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
webpod-697b545f57-rtmv9   1/1     Running   0          25h   172.20.0.145   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;

cilium bgp routes

# Only two pods remain, but every node still advertises the route over BGP.
(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium bgp routes
(Defaulting to `available ipv4 unicast` routes, please see help for more options)

Node      VRouter   Prefix          NextHop   Age      Attrs
k8s-ctr   65001     172.16.1.1/32   0.0.0.0   10m40s   [{Origin: i} {Nexthop: 0.0.0.0}]
          65001     172.20.0.0/24   0.0.0.0   54m55s   [{Origin: i} {Nexthop: 0.0.0.0}]
k8s-w0    65001     172.16.1.1/32   0.0.0.0   10m40s   [{Origin: i} {Nexthop: 0.0.0.0}]
          65001     172.20.2.0/24   0.0.0.0   54m55s   [{Origin: i} {Nexthop: 0.0.0.0}]
k8s-w1    65001     172.16.1.1/32   0.0.0.0   10m40s   [{Origin: i} {Nexthop: 0.0.0.0}]
          65001     172.20.1.0/24   0.0.0.0   54m55s   [{Origin: i} {Nexthop: 0.0.0.0}]

# Check on the router: k8s-w1 hosts no target pod, yet a route via it is still installed.
ip -c route
vtysh -c 'show ip bgp 172.16.1.1/32'

root@router:~# ip -c route
172.16.1.1 nhid 113 proto bgp metric 20
        nexthop via 192.168.10.101 dev eth1 weight 1
        nexthop via 192.168.20.100 dev eth2 weight 1
        nexthop via 192.168.10.100 dev eth1 weight 1

root@router:~# vtysh -c 'show ip bgp 172.16.1.1/32'
BGP routing table entry for 172.16.1.1/32, version 17
Paths: (3 available, best #1, table default)
  Advertised to non peer-group peers:
  192.168.10.100 192.168.10.101 192.168.20.100
  65001
    192.168.10.100 from 192.168.10.100 (192.168.10.100)
      Origin IGP, valid, external, multipath, best (Router ID)
      Last update: Wed Aug 13 23:44:50 2025
  65001
    192.168.20.100 from 192.168.20.100 (192.168.20.100)
      Origin IGP, valid, external, multipath
      Last update: Wed Aug 13 23:44:50 2025
  65001
    192.168.10.101 from 192.168.10.101 (192.168.10.101)
      Origin IGP, valid, external, multipath
      Last update: Wed Aug 13 23:44:50 2025


# Repeated requests
for i in {1..100};  do curl -s $LBIP | grep Hostname; done | sort | uniq -c | sort -nr
while true; do curl -s $LBIP | egrep 'Hostname|RemoteAddr' ; sleep 0.1; done

# Requests are still distributed across the pods.
root@router:~# for i in {1..100};  do curl -s $LBIP | grep Hostname; done | sort | uniq -c | sort -nr
     52 Hostname: webpod-697b545f57-rtmv9
     48 Hostname: webpod-697b545f57-bvm82

# Repeated requests also show RemoteAddr: 192.168.10.101, a node hosting no pod; the true client address is not preserved in every case
root@router:~# while true; do curl -s $LBIP | egrep 'Hostname|RemoteAddr' ; sleep 0.1; done
Hostname: webpod-697b545f57-rtmv9
RemoteAddr: 192.168.20.100:48894
Hostname: webpod-697b545f57-bvm82
RemoteAddr: 192.168.10.100:48902
Hostname: webpod-697b545f57-bvm82
RemoteAddr: 192.168.10.100:48908
Hostname: webpod-697b545f57-bvm82
RemoteAddr: 192.168.10.101:48922
Hostname: webpod-697b545f57-bvm82
RemoteAddr: 192.168.10.101:48936
Hostname: webpod-697b545f57-bvm82
RemoteAddr: 192.168.10.101:48940
Hostname: webpod-697b545f57-bvm82
RemoteAddr: 192.168.10.100:48952
Hostname: webpod-697b545f57-bvm82
RemoteAddr: 192.168.10.101:48968
Hostname: webpod-697b545f57-rtmv9
RemoteAddr: 192.168.20.100:58500
Hostname: webpod-697b545f57-rtmv9
RemoteAddr: 192.168.10.101:58510
Hostname: webpod-697b545f57-rtmv9
RemoteAddr: 192.168.20.200:58520
Hostname: webpod-697b545f57-bvm82
RemoteAddr: 192.168.10.101:58536
Hostname: webpod-697b545f57-bvm82
RemoteAddr: 192.168.10.100:58550
...&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;When External Traffic Policy is set to Cluster, every node advertises the route over BGP, so requests are spread across all nodes and then forwarded again to a node where a pod actually runs. SNAT occurs on that extra hop, and RemoteAddr records the IP of the intermediate node rather than the client.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;If you instead set External Traffic Policy to Local, only nodes that actually run a backing pod advertise the route over BGP, and traffic entering a node is delivered only to the pods on that node.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Traffic no longer transits unnecessary nodes, and as a result the client IP is preserved.&lt;/p&gt;
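&lt;p data-ke-size=&quot;size16&quot;&gt;The externalTrafficPolicy switch applied below with &lt;code&gt;kubectl patch&lt;/code&gt; can also be declared directly in a Service manifest via &lt;code&gt;spec.externalTrafficPolicy&lt;/code&gt;. A minimal sketch, assuming this lab's webpod service, selector, and port:&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: webpod
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local  # only nodes running a backing pod advertise and accept external traffic
  selector:
    app: webpod
  ports:
  - port: 80
    targetPort: 80
&lt;/code&gt;&lt;/pre&gt;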
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# k8s-ctr
kubectl patch service webpod -p '{&quot;spec&quot;:{&quot;externalTrafficPolicy&quot;:&quot;Local&quot;}}'

# router (frr): only nodes hosting a target pod for the service appear in the BGP route.
ip -c route
vtysh -c 'show ip bgp 172.16.1.1/32'

root@router:~# ip -c route
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.10.1.0/24 dev loop1 proto kernel scope link src 10.10.1.200
10.10.2.0/24 dev loop2 proto kernel scope link src 10.10.2.200
172.16.1.1 nhid 125 proto bgp metric 20
        nexthop via 192.168.20.100 dev eth2 weight 1
        nexthop via 192.168.10.100 dev eth1 weight 1
172.20.0.0/24 nhid 108 via 192.168.10.100 dev eth1 proto bgp metric 20
172.20.1.0/24 nhid 106 via 192.168.10.101 dev eth1 proto bgp metric 20
172.20.2.0/24 nhid 107 via 192.168.20.100 dev eth2 proto bgp metric 20
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.200
192.168.20.0/24 dev eth2 proto kernel scope link src 192.168.20.200

root@router:~# vtysh -c 'show ip bgp 172.16.1.1/32'
BGP routing table entry for 172.16.1.1/32, version 21
Paths: (2 available, best #1, table default)
  Advertised to non peer-group peers:
  192.168.10.100 192.168.10.101 192.168.20.100
  65001
    192.168.10.100 from 192.168.10.100 (192.168.10.100)
      Origin IGP, valid, external, multipath, best (Router ID)
      Last update: Thu Aug 14 00:03:42 2025
  65001
    192.168.20.100 from 192.168.20.100 (192.168.20.100)
      Origin IGP, valid, external, multipath
      Last update: Wed Aug 13 23:44:50 2025


# In this lab environment, repeated requests are routed to one node, and the source IP is preserved.
LBIP=172.16.1.1
curl -s $LBIP
for i in {1..100};  do curl -s $LBIP | grep Hostname; done | sort | uniq -c | sort -nr
while true; do curl -s $LBIP | egrep 'Hostname|RemoteAddr' ; sleep 0.1; done

# Requests pile onto a single pod.
root@router:~# for i in {1..100};  do curl -s $LBIP | grep Hostname; done | sort | uniq -c | sort -nr
    100 Hostname: webpod-697b545f57-bvm82

# By default, the Linux kernel hashes multipath routes on L3 (destination IP) only. For finer-grained load balancing, switch to an L4 hash (IP + port)
# 1 : hash on source IP, dest IP, source port, dest port (more granular)
sudo sysctl -w net.ipv4.fib_multipath_hash_policy=1
echo &quot;net.ipv4.fib_multipath_hash_policy=1&quot; &amp;gt;&amp;gt; /etc/sysctl.conf

# Re-check: load is now spread evenly.
root@router:~# for i in {1..100};  do curl -s $LBIP | grep Hostname; done | sort | uniq -c | sort -nr
     54 Hostname: webpod-697b545f57-bvm82
     46 Hostname: webpod-697b545f57-rtmv9

# Since traffic no longer transits extra nodes, no SNAT occurs, and RemoteAddr is preserved as the router's IP.
root@router:~# while true; do curl -s $LBIP | egrep 'Hostname|RemoteAddr' ; sleep 0.1; done
Hostname: webpod-697b545f57-rtmv9
RemoteAddr: 192.168.20.200:35666
Hostname: webpod-697b545f57-rtmv9
RemoteAddr: 192.168.20.200:35678
Hostname: webpod-697b545f57-rtmv9
RemoteAddr: 192.168.20.200:35692
Hostname: webpod-697b545f57-bvm82
RemoteAddr: 192.168.20.200:35708
Hostname: webpod-697b545f57-rtmv9
RemoteAddr: 192.168.20.200:35720
Hostname: webpod-697b545f57-rtmv9
RemoteAddr: 192.168.20.200:35732
Hostname: webpod-697b545f57-rtmv9
RemoteAddr: 192.168.20.200:35744
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;That covers advertising a LoadBalancer's External IP over BGP.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Closing&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In this post we used Cilium's BGP Control Plane to advertise PodCIDRs and External IPs to an external router.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the next post we will look at ClusterMesh, which connects services across clusters running Cilium.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;</description>
      <category>Cilium</category>
      <category>bgp control plane</category>
      <category>cilium</category>
      <category>kubernetes</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/55</guid>
      <comments>https://a-person.tistory.com/55#entry55comment</comments>
      <pubDate>Thu, 14 Aug 2025 22:54:59 +0900</pubDate>
    </item>
    <item>
      <title>[6] Cilium - LoadBalancer IPAM, L2 Announcement</title>
      <link>https://a-person.tistory.com/54</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;In the previous post we covered Cilium's encapsulation modes; this time we look at Cilium LoadBalancer IPAM and Cilium L2 Announcement.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The lab environment was set up in the post below, so please refer to it.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://a-person.tistory.com/53&quot; target=&quot;_blank&quot; rel=&quot;noopener&amp;nbsp;noreferrer&quot;&gt;https://a-person.tistory.com/53&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Table of Contents&lt;/h2&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;Lab environment setup&lt;/li&gt;
&lt;li&gt;Encapsulation mode&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Cilium LoadBalancer IPAM&lt;/b&gt;&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Cilium L2 Announcement&lt;/b&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;3. Cilium LoadBalancer IPAM&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This section covers how to use Cilium's LoadBalancer IPAM when creating a Service of type LoadBalancer in Kubernetes.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;When you create a LoadBalancer Service, the load-balancer implementation assigns it an External IP. LoadBalancer IPAM is Cilium's mechanism for defining an IP pool and allocating those External IPs itself.&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://docs.cilium.io/en/stable/network/lb-ipam/&quot;&gt;https://docs.cilium.io/en/stable/network/lb-ipam/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The example below illustrates the concept.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;2560&quot; data-origin-height=&quot;1485&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/VYcee/btsPLMx9RhW/cteASQTnOkkLrGFeOt4DFK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/VYcee/btsPLMx9RhW/cteASQTnOkkLrGFeOt4DFK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/VYcee/btsPLMx9RhW/cteASQTnOkkLrGFeOt4DFK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FVYcee%2FbtsPLMx9RhW%2FcteASQTnOkkLrGFeOt4DFK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2560&quot; height=&quot;1485&quot; data-origin-width=&quot;2560&quot; data-origin-height=&quot;1485&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;출처: &lt;a href=&quot;https://isovalent.com/blog/post/migrating-from-metallb-to-cilium/#load-balancer-ipam&quot;&gt;https://isovalent.com/blog/post/migrating-from-metallb-to-cilium/#load-balancer-ipam&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
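&lt;p data-ke-size=&quot;size16&quot;&gt;Besides carving out a plain address block, a pool can be restricted to specific services with a &lt;code&gt;serviceSelector&lt;/code&gt;. A minimal sketch, with the pool name and label chosen purely for illustration:&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;apiVersion: &quot;cilium.io/v2&quot;
kind: CiliumLoadBalancerIPPool
metadata:
  name: &quot;blue-pool&quot;
spec:
  blocks:
  - cidr: &quot;10.0.10.0/24&quot;
  serviceSelector:          # only services matching this label draw IPs from this pool
    matchLabels:
      color: blue
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;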
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's explore LoadBalancer IPAM hands-on.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# Create a Cilium IP pool
kubectl get CiliumLoadBalancerIPPool -A

# Create an LB IP pool from a non-conflicting range
cat &amp;lt;&amp;lt; EOF | kubectl apply -f -
apiVersion: &quot;cilium.io/v2&quot;  # v1.17 : cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: &quot;cilium-lb-ippool&quot;
spec:
  blocks:
  - start: &quot;192.168.10.211&quot;
    stop:  &quot;192.168.10.215&quot;
EOF


# Short names for CiliumLoadBalancerIPPool : ippools,ippool,lbippool,lbippools
kubectl api-resources | grep -i CiliumLoadBalancerIPPool
ciliumloadbalancerippools           ippools,ippool,lbippool,lbippools   cilium.io/v2               false        CiliumLoadBalancerIPPool


(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get CiliumLoadBalancerIPPool -A
NAME               DISABLED   CONFLICTING   IPS AVAILABLE   AGE
cilium-lb-ippool   false      False         5               109s


# Change the webpod service to type LoadBalancer
kubectl patch svc webpod -p '{&quot;spec&quot;:{&quot;type&quot;:&quot;LoadBalancer&quot;}}'

# Verify
kubectl get svc webpod

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl patch svc webpod -p '{&quot;spec&quot;:{&quot;type&quot;:&quot;LoadBalancer&quot;}}'
service/webpod patched
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc 
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1       &amp;lt;none&amp;gt;           443/TCP        25h
webpod       LoadBalancer   10.96.234.235   192.168.10.211   80:30776/TCP   22h


# Confirm curl to the LB IP: the k8s nodes can reach the LB External IP!
kubectl get svc webpod -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
LBIP=$(kubectl get svc webpod -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

curl -s $LBIP
kubectl exec -it curl-pod -- curl -s $LBIP
kubectl exec -it curl-pod -- curl -s $LBIP | grep Hostname   # prints the target pod name
kubectl exec -it curl-pod -- curl -s $LBIP | grep RemoteAddr # prints the source IP as seen by the pod (Layer 3)

# Inside the cluster, the LB IP is reachable right away.
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl -s $LBIP | grep Hostname   # prints the target pod name
Hostname: webpod-697b545f57-rdh2t


# Check IP allocation
kubectl get ippools
kubectl get ippools -o jsonpath='{.items[*].status.conditions[?(@.type!=&quot;cilium.io/PoolConflict&quot;)]}' | jq

# the pool went from 5 -&amp;gt; 4 available IPs
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ippools
NAME               DISABLED   CONFLICTING   IPS AVAILABLE   AGE
cilium-lb-ippool   false      False         4               4m53s

kubectl get svc webpod -o json | jq
kubectl get svc webpod -o jsonpath='{.status}' | jq

# the status shows the Cilium IPAM request was satisfied
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc webpod -o jsonpath='{.status}' | jq
{
  &quot;conditions&quot;: [
    {
      &quot;lastTransitionTime&quot;: &quot;2025-08-08T13:44:01Z&quot;,
      &quot;message&quot;: &quot;&quot;,
      &quot;reason&quot;: &quot;satisfied&quot;,
      &quot;status&quot;: &quot;True&quot;,
      &quot;type&quot;: &quot;cilium.io/IPAMRequestSatisfied&quot;
    }
  ],
  &quot;loadBalancer&quot;: {
    &quot;ingress&quot;: [
      {
        &quot;ip&quot;: &quot;192.168.10.211&quot;,
        &quot;ipMode&quot;: &quot;VIP&quot;
      }
    ]
  }
}&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;From inside the cluster, the External IP allocated by Cilium's LoadBalancer IPAM is immediately reachable.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;However, having an External IP assigned to a LoadBalancer Service does not by itself make it reachable from outside the cluster.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# router: unreachable from outside the K8S cluster!
LBIP=192.168.10.211
curl --connect-timeout 1 $LBIP
arping -i eth1 $LBIP -c 1
...

# The router gets no ARP reply for this IP. (On L2, communication requires an ARP reply resolving the target IP to a MAC.)
root@router:~# LBIP=192.168.10.211
root@router:~# curl --connect-timeout 1 $LBIP
curl: (28) Failed to connect to 192.168.10.211 port 80 after 1001 ms: Timeout was reached
root@router:~# arping -i eth1 $LBIP -c 1
ARPING 192.168.10.211
Timeout

--- 192.168.10.211 statistics ---
1 packets transmitted, 0 packets received, 100% unanswered (0 extra)

root@router:~# arp -a
? (192.168.10.101) at 08:00:27:b6:fa:a5 [ether] on eth1
_gateway (10.0.2.2) at 52:55:0a:00:02:02 [ether] on eth0
? (10.0.2.3) at 52:55:0a:00:02:03 [ether] on eth0
? (192.168.20.100) at 08:00:27:72:bc:89 [ether] on eth2
? (192.168.10.100) at 08:00:27:53:64:57 [ether] on eth1
? (192.168.10.211) at &amp;lt;incomplete&amp;gt; on eth1 # incomplete&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The L2 Announcement feature, covered next, fills this gap; BGP is another option, which we will explore in the next post.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;4. L2 Announcement&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Cilium's LoadBalancer IPAM assigns External IPs to LoadBalancer Services, but those IPs are not reachable from outside the cluster on their own. This is where L2 Announcement comes in.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://docs.cilium.io/en/stable/network/l2-announcements/&quot;&gt;https://docs.cilium.io/en/stable/network/l2-announcements/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The diagram below shows normal ARP behavior: a local client that wants to reach the LoadBalancer IP must learn its MAC address, and ARP resolves the IP to a MAC so communication can proceed.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;2560&quot; data-origin-height=&quot;1059&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bvx05E/btsPL0XlgGP/DbzQserZBL9G61Ib365KA0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bvx05E/btsPL0XlgGP/DbzQserZBL9G61Ib365KA0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bvx05E/btsPL0XlgGP/DbzQserZBL9G61Ib365KA0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fbvx05E%2FbtsPL0XlgGP%2FDbzQserZBL9G61Ib365KA0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2560&quot; height=&quot;1059&quot; data-origin-width=&quot;2560&quot; data-origin-height=&quot;1059&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;출처: &lt;a href=&quot;https://isovalent.com/blog/post/migrating-from-metallb-to-cilium/#l2-announcement-with-arp&quot;&gt;https://isovalent.com/blog/post/migrating-from-metallb-to-cilium/#l2-announcement-with-arp&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For an external client, as shown below, L2 Announcement is what makes an ARP reply possible.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;2560&quot; data-origin-height=&quot;1200&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bwr4Kv/btsPLGreeb5/nMEW8Yr8klkLnka5Epvi7K/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bwr4Kv/btsPLGreeb5/nMEW8Yr8klkLnka5Epvi7K/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bwr4Kv/btsPLGreeb5/nMEW8Yr8klkLnka5Epvi7K/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fbwr4Kv%2FbtsPLGreeb5%2FnMEW8Yr8klkLnka5Epvi7K%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2560&quot; height=&quot;1200&quot; data-origin-width=&quot;2560&quot; data-origin-height=&quot;1200&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;출처: &lt;a href=&quot;https://isovalent.com/blog/post/migrating-from-metallb-to-cilium/#l2-announcement-with-arp&quot;&gt;https://isovalent.com/blog/post/migrating-from-metallb-to-cilium/#l2-announcement-with-arp&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Before configuring L2 Announcement, let's start a continuous ARP probe on the router, then enable L2 Announcement.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# Monitoring on the router VM
arping -i eth1 $LBIP -c 100000

root@router:~# arping -i eth1 $LBIP -c 100000
ARPING 192.168.10.211
Timeout
Timeout
...
# after the cilium pods roll out
Timeout
Timeout
...
# after the CiliumL2AnnouncementPolicy is created
Timeout
Timeout
60 bytes from 08:00:27:b6:fa:a5 (192.168.10.211): index=0 time=12.951 usec
60 bytes from 08:00:27:b6:fa:a5 (192.168.10.211): index=1 time=16.095 usec
60 bytes from 08:00:27:b6:fa:a5 (192.168.10.211): index=2 time=16.434 usec
...

# Upgrade the Cilium configuration
helm upgrade cilium cilium/cilium --namespace kube-system --version 1.18.0 --reuse-values \
   --set l2announcements.enabled=true &amp;amp;&amp;amp; watch -d kubectl get pod -A

kubectl rollout restart -n kube-system ds/cilium

# Verify
kubectl -n kube-system exec ds/cilium -c cilium-agent -- cilium-dbg config --all | grep EnableL2Announcements
EnableL2Announcements             : true

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system exec ds/cilium -c cilium-agent -- cilium-dbg config --all | grep EnableL2Announcements
EnableL2Announcements             : true

# Policy: select the services and nodes (excluding the control plane) that will answer ARP
## Constraint: in L2 ARP mode the LB IP pool is only valid within the same network segment -&amp;gt; hence k8s-w0 is excluded. Including it can break leader election!
cat &amp;lt;&amp;lt; EOF | kubectl apply -f -
apiVersion: &quot;cilium.io/v2alpha1&quot;  # not v2
kind: CiliumL2AnnouncementPolicy
metadata:
  name: policy1
spec:
  serviceSelector:
    matchLabels:
      app: webpod # announce L2 for the webpod service
  nodeSelector:
    matchExpressions:
      - key: kubernetes.io/hostname
        operator: NotIn
        values:
          - k8s-w0 # k8s-w0 must not take the leader role
  interfaces:
  - ^eth[1-9]+
  externalIPs: true
  loadBalancerIPs: true
EOF


# verify
kubectl -n kube-system get lease
kubectl -n kube-system get lease | grep &quot;cilium-l2announce&quot;


(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system get lease
NAME                                   HOLDER                                                                      AGE
apiserver-k3qt3hgfvd4qocxh5wccoxpss4   apiserver-k3qt3hgfvd4qocxh5wccoxpss4_bc4199c0-4f4a-4936-90dd-88aa3187f070   174m
cilium-l2announce-default-webpod       k8s-w1                                                                      46s
cilium-operator-resource-lock          k8s-ctr-pxcdg8nzz5                                                          25h
kube-controller-manager                k8s-ctr_4e6651a7-ae1b-4e6c-bf84-75dee9e10375                                25h
kube-scheduler                         k8s-ctr_eeea8f29-a2f2-45d3-ac5d-4733c0049cb5                                25h
# leader role
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system get lease | grep &quot;cilium-l2announce&quot;
cilium-l2announce-default-webpod       k8s-w1                                                                      53s

# check which node currently holds the leader role
kubectl -n kube-system get lease/cilium-l2announce-default-webpod -o yaml | yq

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system get lease/cilium-l2announce-default-webpod -o yaml | yq
{
  &quot;apiVersion&quot;: &quot;coordination.k8s.io/v1&quot;,
  &quot;kind&quot;: &quot;Lease&quot;,
  &quot;metadata&quot;: {
    &quot;creationTimestamp&quot;: &quot;2025-08-08T14:09:08Z&quot;,
    &quot;name&quot;: &quot;cilium-l2announce-default-webpod&quot;,
    &quot;namespace&quot;: &quot;kube-system&quot;,
    &quot;resourceVersion&quot;: &quot;54676&quot;,
    &quot;uid&quot;: &quot;27ca48e0-f61a-4e3b-83a9-0e126af11ac9&quot;
  },
  &quot;spec&quot;: {
    &quot;acquireTime&quot;: &quot;2025-08-08T14:09:02.689094Z&quot;,
    &quot;holderIdentity&quot;: &quot;k8s-w1&quot;, # leader
    &quot;leaseDurationSeconds&quot;: 15,
    &quot;leaseTransitions&quot;: 0,
    &quot;renewTime&quot;: &quot;2025-08-08T14:10:15.396057Z&quot;
  }
}


# capture the cilium pod names
export CILIUMPOD0=$(kubectl get -l k8s-app=cilium pods -n kube-system --field-selector spec.nodeName=k8s-ctr -o jsonpath='{.items[0].metadata.name}')
export CILIUMPOD1=$(kubectl get -l k8s-app=cilium pods -n kube-system --field-selector spec.nodeName=k8s-w1  -o jsonpath='{.items[0].metadata.name}')
export CILIUMPOD2=$(kubectl get -l k8s-app=cilium pods -n kube-system --field-selector spec.nodeName=k8s-w0  -o jsonpath='{.items[0].metadata.name}')
echo $CILIUMPOD0 $CILIUMPOD1 $CILIUMPOD2

# inspect the l2-announce state inside the cilium-agent pod on each node
kubectl exec -n kube-system $CILIUMPOD0 -- cilium-dbg shell -- db/show l2-announce
kubectl exec -n kube-system $CILIUMPOD1 -- cilium-dbg shell -- db/show l2-announce

# the leader node gets an entry binding the External IP to an interface
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -n kube-system $CILIUMPOD1 -- cilium-dbg shell -- db/show l2-announce
IP               NetworkInterface
192.168.10.211   eth1

kubectl exec -n kube-system $CILIUMPOD2 -- cilium-dbg shell -- db/show l2-announce

# check the logs
kubectl -n kube-system logs ds/cilium | grep &quot;l2&quot;

# the other cilium agents have no such entry
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -n kube-system $CILIUMPOD2 -- cilium-dbg shell -- db/show l2-announce
IP   NetworkInterface&lt;/code&gt;&lt;/pre&gt;
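&lt;p data-ke-size=&quot;size16&quot;&gt;The lease behavior above can be sketched in a few lines. This is a conceptual toy model of per-service leader election, not Cilium's actual implementation: whichever agent holds the Lease answers ARP for the VIP, the holder keeps renewing, and a peer can only take over once the lease has expired.&lt;/p&gt;

```python
class Lease:
    # Toy stand-in for a coordination.k8s.io/v1 Lease object
    # (fields mirror leaseDurationSeconds / renewTime / holderIdentity).
    def __init__(self, duration_s=15):
        self.holder = None
        self.renew_time = 0.0
        self.duration_s = duration_s

    def try_acquire(self, node, now):
        # A node takes over only if the lease is free, expired,
        # or already held by itself (renewal).
        expired = now - self.renew_time > self.duration_s
        if self.holder is None or expired or self.holder == node:
            self.holder = node
            self.renew_time = now
            return True
        return False

lease = Lease(duration_s=15)
assert lease.try_acquire('k8s-w1', now=0)        # k8s-w1 wins the race
assert not lease.try_acquire('k8s-ctr', now=5)   # lease still held
assert lease.try_acquire('k8s-w1', now=10)       # holder renews
assert lease.try_acquire('k8s-ctr', now=26)      # holder silent for 16s: takeover
print(lease.holder)  # k8s-ctr
```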
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now let's run a connectivity test in this state.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# router VM: send curl requests to the LB IP
arping -i eth1 $LBIP -c 1000
curl --connect-timeout 1 $LBIP

root@router:~# arping -i eth1 $LBIP
ARPING 192.168.10.211
60 bytes from 08:00:27:b6:fa:a5 (192.168.10.211): index=0 time=16.697 usec
60 bytes from 08:00:27:b6:fa:a5 (192.168.10.211): index=1 time=9.531 usec
^C
--- 192.168.10.211 statistics ---
2 packets transmitted, 2 packets received,   0% unanswered (0 extra)
rtt min/avg/max/std-dev = 0.010/0.013/0.017/0.004 ms
root@router:~# curl --connect-timeout 1 $LBIP
Hostname: webpod-697b545f57-5dd9p
IP: 127.0.0.1
IP: ::1
IP: 172.20.0.74
IP: fe80::e0b3:40ff:fe9b:2c68
RemoteAddr: 172.20.2.247:49718
GET / HTTP/1.1
Host: 192.168.10.211
User-Agent: curl/8.5.0
Accept: */*


# confirm the VIP's MAC address matches the leader node's MAC address
arp -a

root@router:~# arp -a
? (192.168.10.101) at 08:00:27:b6:fa:a5 [ether] on eth1 # same MAC (k8s-w1)
_gateway (10.0.2.2) at 52:55:0a:00:02:02 [ether] on eth0
? (10.0.2.3) at 52:55:0a:00:02:03 [ether] on eth0
? (192.168.20.100) at 08:00:27:72:bc:89 [ether] on eth2
? (192.168.10.100) at 08:00:27:53:64:57 [ether] on eth1
? (192.168.10.211) at 08:00:27:b6:fa:a5 [ether] on eth1 # same MAC (k8s-w1)


# traffic to webpod pods on non-leader nodes is SNATed: a limitation of the ARP (leader node) design
while true; do curl -s --connect-timeout 1 $LBIP | grep Hostname; sleep 0.1; done
while true; do curl -s --connect-timeout 1 $LBIP | grep RemoteAddr; sleep 0.1; done

# requests are spread across the pods, but each one first hits the leader node and is then forwarded to the pod
(⎈|HomeLab:N/A) root@k8s-ctr:~# while true; do curl -s --connect-timeout 1 $LBIP | grep Hostname; sleep 0.1; done
Hostname: webpod-697b545f57-5dd9p
Hostname: webpod-697b545f57-5dd9p
Hostname: webpod-697b545f57-vhn6k
^C

# every RemoteAddr shows up as 172.20.0.172
(⎈|HomeLab:N/A) root@k8s-ctr:~# while true; do curl -s --connect-timeout 1 $LBIP | grep RemoteAddr; sleep 0.1; done
RemoteAddr: 172.20.0.172:35668
RemoteAddr: 172.20.0.172:48130
RemoteAddr: 172.20.0.172:35684
RemoteAddr: 172.20.0.172:35700
RemoteAddr: 172.20.0.172:35714
RemoteAddr: 172.20.0.172:43742
^C

# this IP is the node's router IP we looked at earlier
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it $CILIUMPOD0 -n kube-system -c cilium-agent -- cilium status --all-addresses | grep router
  172.20.0.172 (router)&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
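&lt;p data-ke-size=&quot;size16&quot;&gt;The SNAT behavior observed above can be summarized as a small sketch. The node IPs are copied from this lab, but the function is purely illustrative, not Cilium code: when the backend pod runs on a node other than the one that accepted the traffic, the packet is SNATed to that node's cilium_host (router) IP, which is why the pod always reports the same RemoteAddr.&lt;/p&gt;

```python
# cilium_host ('router') IPs per node, copied from the kubectl get ciliumnode
# output later in this post; the forwarding rule itself is a simplification.
NODE_ROUTER_IP = {'k8s-ctr': '172.20.0.172', 'k8s-w1': '172.20.2.247'}

def observed_source(client_ip, entry_node, backend_node):
    # Source IP the backend pod sees for one request.
    if backend_node == entry_node:
        return client_ip                   # local backend: source preserved
    return NODE_ROUTER_IP[entry_node]      # remote backend: SNAT to router IP

print(observed_source('192.168.10.200', 'k8s-ctr', 'k8s-ctr'))  # 192.168.10.200
print(observed_source('192.168.10.200', 'k8s-ctr', 'k8s-w0'))   # 172.20.0.172
```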
&lt;p data-ke-size=&quot;size16&quot;&gt;Next, let's test what happens when the leader node fails.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# new terminal (router): call repeatedly
while true; do curl -s --connect-timeout 1 $LBIP | grep Hostname; sleep 0.1; done
# -&amp;gt; a brief delay is noticeable

# check the current leader node
kubectl -n kube-system get lease | grep &quot;cilium-l2announce&quot;

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system get lease | grep &quot;cilium-l2announce&quot;
cilium-l2announce-default-webpod       k8s-w1  

# force-reboot the leader node
sshpass -p 'vagrant' ssh vagrant@k8s-w1  sudo reboot

# new terminal (router): confirm the ARP entry is updated
arp -a

# the leader node has changed
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system get lease | grep &quot;cilium-l2announce&quot;
cilium-l2announce-default-webpod       k8s-w1                                                                      13m
(⎈|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@k8s-w1  sudo reboot
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system get lease | grep &quot;cilium-l2announce&quot;
cilium-l2announce-default-webpod       k8s-ctr                                                                     13m

# the router now sees a different MAC
root@router:~# arp -a
? (192.168.10.101) at 08:00:27:b6:fa:a5 [ether] on eth1
_gateway (10.0.2.2) at 52:55:0a:00:02:02 [ether] on eth0
? (10.0.2.3) at 52:55:0a:00:02:03 [ether] on eth0
? (192.168.20.100) at 08:00:27:72:bc:89 [ether] on eth2
? (192.168.10.100) at 08:00:27:53:64:57 [ether] on eth1 # same MAC (k8s-ctr)
? (192.168.10.211) at 08:00:27:53:64:57 [ether] on eth1 # same MAC (k8s-ctr)&lt;/code&gt;&lt;/pre&gt;
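&lt;p data-ke-size=&quot;size16&quot;&gt;The brief outage during failover has a rough upper bound that follows from the Lease parameters shown earlier (leaseDurationSeconds: 15). The sketch below is a back-of-the-envelope estimate; the retry period and GARP propagation delay are assumed values, not measured ones.&lt;/p&gt;

```python
def failover_upper_bound_s(lease_duration_s, retry_period_s, garp_delay_s):
    # Time from the holder dying until a new leader answers ARP for the VIP:
    # the stale lease must expire, a peer must retry acquisition, and a
    # gratuitous ARP must refresh the upstream router's cache.
    return lease_duration_s + retry_period_s + garp_delay_s

# leaseDurationSeconds: 15 (from the Lease above), with an assumed 2s
# retry period and roughly 1s for GARP to refresh the ARP cache:
print(failover_upper_bound_s(15, 2, 1))  # 18
```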
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This concludes the hands-on exercise.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;마무리&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;We looked at LoadBalancer IPAM, which assigns External IPs to Services of type LoadBalancer in Cilium, and at the L2 Announcement feature that makes those External IPs reachable from outside the cluster.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;However, because L2 Announcement relies on Gratuitous ARP, traffic is briefly interrupted whenever a failover occurs, for example when a node fails. Using BGP instead mitigates this drawback.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the next post, we will look at Cilium's advanced networking features built on BGP.&lt;/p&gt;</description>
      <category>Cilium</category>
      <category>cilium</category>
      <category>ipam</category>
      <category>kubernetes</category>
      <category>l2announcement</category>
      <category>loadbalancer</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/54</guid>
      <comments>https://a-person.tistory.com/54#entry54comment</comments>
      <pubDate>Fri, 8 Aug 2025 23:34:49 +0900</pubDate>
    </item>
    <item>
      <title>[5] Cilium - Encapsulation 모드</title>
      <link>https://a-person.tistory.com/53</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;Continuing from last week, this post examines how pods communicate with the Cilium CNI plugin. In particular, we look at the role of encapsulation mode when worker nodes sit on different IP ranges, and then at Cilium LoadBalancer IPAM and Cilium L2 Announcement.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;목차&lt;/h2&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;&lt;b&gt;실습 환경 구성&lt;/b&gt;&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Encapsulation 모드&lt;/b&gt;&lt;/li&gt;
&lt;li&gt;Cilium LoadBalancer IPAM&lt;/li&gt;
&lt;li&gt;Cilium L2 Announcement&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;1. 실습 환경 구성&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Bring up the lab environment with Vagrant.&lt;/p&gt;
&lt;pre class=&quot;awk&quot;&gt;&lt;code&gt;mkdir cilium-lab &amp;amp;&amp;amp; cd cilium-lab

curl -O https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/cilium-study/4w/Vagrantfile

vagrant up&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Once &lt;code&gt;vagrant up&lt;/code&gt; completes, four VMs are running:&lt;/p&gt;
&lt;pre class=&quot;lua&quot;&gt;&lt;code&gt;PS C:\cilium-lab\w4&amp;gt; vagrant status
Current machine states:

k8s-ctr                   running (virtualbox)
k8s-w1                    running (virtualbox)
router                    running (virtualbox)
k8s-w0                    running (virtualbox)&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Besides the control plane and two worker nodes, an additional VM named router has been created.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In this lab, the worker nodes are placed on different networks and connected through the router VM. The router has two network interfaces, one on 192.168.10.0/24 and one on 192.168.20.0/24, and handles the routing between the two networks.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1961&quot; data-origin-height=&quot;1102&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bviuSs/btsPL3fs2jS/ygeWmI4q7z7OGY73ej5dfK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bviuSs/btsPL3fs2jS/ygeWmI4q7z7OGY73ej5dfK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bviuSs/btsPL3fs2jS/ygeWmI4q7z7OGY73ej5dfK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbviuSs%2FbtsPL3fs2jS%2FygeWmI4q7z7OGY73ej5dfK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1961&quot; height=&quot;1102&quot; data-origin-width=&quot;1961&quot; data-origin-height=&quot;1102&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The rest of the exercise is performed after entering a node, e.g. with &lt;code&gt;vagrant ssh k8s-ctr&lt;/code&gt;.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get no -owide
NAME      STATUS   ROLES           AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
k8s-ctr   Ready    control-plane   59m   v1.33.2   192.168.10.100   &amp;lt;none&amp;gt;        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27
k8s-w0    Ready    &amp;lt;none&amp;gt;          41m   v1.33.2   192.168.20.100   &amp;lt;none&amp;gt;        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27
k8s-w1    Ready    &amp;lt;none&amp;gt;          97s   v1.33.2   192.168.10.101   &amp;lt;none&amp;gt;        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Two worker nodes are joined: k8s-ctr and k8s-w1 use the 192.168.10.0/24 range, while k8s-w0 uses 192.168.20.0/24.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# run the commands below to reach each VM easily
sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-w0 hostname
sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-w1 hostname
sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@router hostname

# node info: check status and INTERNAL-IP
ifconfig | grep -iEA1 'eth[0-9]:'
ip route | grep static

(⎈|HomeLab:N/A) root@k8s-ctr:~# ifconfig | grep -iEA1 'eth[0-9]:'
eth0: flags=4163&amp;lt;UP,BROADCAST,RUNNING,MULTICAST&amp;gt;  mtu 1500
        inet 10.0.2.15  netmask 255.255.255.0  broadcast 10.0.2.255
--
eth1: flags=4163&amp;lt;UP,BROADCAST,RUNNING,MULTICAST&amp;gt;  mtu 1500
        inet 192.168.10.100  netmask 255.255.255.0  broadcast 192.168.10.255
(⎈|HomeLab:N/A) root@k8s-ctr:~# ip route |grep static
10.10.0.0/16 via 192.168.10.200 dev eth1 proto static
172.20.0.0/16 via 192.168.10.200 dev eth1 proto static
192.168.20.0/24 via 192.168.10.200 dev eth1 proto static # points at the router

# the router's routing table covers both 192.168.10.0/24 and 192.168.20.0/24
# it also routes the 10.10.1.0/24 and 10.10.2.0/24 networks via dummy interfaces
(⎈|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@router ip -c route
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.10.1.0/24 dev loop1 proto kernel scope link src 10.10.1.200
10.10.2.0/24 dev loop2 proto kernel scope link src 10.10.2.200
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.200
192.168.20.0/24 dev eth2 proto kernel scope link src 192.168.20.200

# check the ipam mode
cilium config view | grep ^ipam
cilium config view | grep ^routing

(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium config view | grep ^ipam
ipam                                              cluster-pool
ipam-cilium-node-update-rate                      15s
(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium config view | grep ^routing
routing-mode                                      native&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Using each node's routing table, let's understand Cilium's &lt;code&gt;autoDirectNodeRoutes=true&lt;/code&gt; setting.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;## --set routingMode=native --set autoDirectNodeRoutes=true are in effect
(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium config view | grep ^auto
auto-direct-node-routes                           true

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumnode
NAME      CILIUMINTERNALIP   INTERNALIP       AGE
k8s-ctr   172.20.0.172       192.168.10.100   121m
k8s-w0    172.20.1.16        192.168.20.100   104m
k8s-w1    172.20.2.247       192.168.10.101   64m

ip -c route
sshpass -p 'vagrant' ssh vagrant@k8s-w1 ip -c route
sshpass -p 'vagrant' ssh vagrant@k8s-w0 ip -c route

(⎈|HomeLab:N/A) root@k8s-ctr:~#  ip -c route
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.10.0.0/16 via 192.168.10.200 dev eth1 proto static
172.20.0.0/24 via 172.20.0.172 dev cilium_host proto kernel src 172.20.0.172
172.20.0.0/16 via 192.168.10.200 dev eth1 proto static
172.20.0.172 dev cilium_host proto kernel scope link
172.20.2.0/24 via 192.168.10.101 dev eth1 proto kernel # route to k8s-w1 added automatically
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.100
192.168.20.0/24 via 192.168.10.200 dev eth1 proto static
(⎈|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@k8s-w1 ip -c route
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.10.0.0/16 via 192.168.10.200 dev eth1 proto static
172.20.0.0/24 via 192.168.10.100 dev eth1 proto kernel # route to k8s-ctr added automatically
172.20.0.0/16 via 192.168.10.200 dev eth1 proto static
172.20.2.0/24 via 172.20.2.247 dev cilium_host proto kernel src 172.20.2.247
172.20.2.247 dev cilium_host proto kernel scope link
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.101
192.168.20.0/24 via 192.168.10.200 dev eth1 proto static
(⎈|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@k8s-w0 ip -c route
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.10.0.0/16 via 192.168.20.200 dev eth1 proto static
172.20.0.0/16 via 192.168.20.200 dev eth1 proto static
172.20.1.0/24 via 172.20.1.16 dev cilium_host proto kernel src 172.20.1.16
172.20.1.16 dev cilium_host proto kernel scope link
192.168.10.0/24 via 192.168.20.200 dev eth1 proto static
192.168.20.0/24 dev eth1 proto kernel scope link src 192.168.20.100

# connectivity check
ping -c 1 10.10.1.200     # router loop1 
ping -c 1 192.168.20.100  # k8s-w0 eth1

# communication with k8s-w0 works via the router
(⎈|HomeLab:N/A) root@k8s-ctr:~# ping -c 1 10.10.1.200
PING 10.10.1.200 (10.10.1.200) 56(84) bytes of data.
64 bytes from 10.10.1.200: icmp_seq=1 ttl=64 time=3.36 ms

--- 10.10.1.200 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 3.358/3.358/3.358/0.000 ms
(⎈|HomeLab:N/A) root@k8s-ctr:~# ping -c 1 192.168.20.100
PING 192.168.20.100 (192.168.20.100) 56(84) bytes of data.
64 bytes from 192.168.20.100: icmp_seq=1 ttl=63 time=3.96 ms

--- 192.168.20.100 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 3.963/3.963/3.963/0.000 ms&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;With &lt;code&gt;autoDirectNodeRoutes=true&lt;/code&gt;, a static route to a peer node's PodCIDR is added only for nodes on the same network.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In this lab, k8s-ctr and k8s-w1 sit on the same network, while reaching k8s-w0 requires crossing the router. Direct node routing therefore does nothing for k8s-w0, and unless the router itself routes the PodCIDRs, the traffic simply does not get through.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As a result, pods on nodes in different network ranges cannot communicate with each other.&lt;/p&gt;
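&lt;p data-ke-size=&quot;size16&quot;&gt;The decision can be reduced to a subnet check, sketched below with the node addresses from this lab. This is a simplification of what autoDirectNodeRoutes actually evaluates: a direct PodCIDR route only makes sense when the peer node's InternalIP is L2-adjacent, i.e. inside the local node's subnet.&lt;/p&gt;

```python
import ipaddress

def installs_direct_route(local_ip, peer_ip, prefix_len=24):
    # Would a direct route to this peer's PodCIDR be useful?
    # Simplified: only if the peer's InternalIP is in the local subnet.
    local_net = ipaddress.ip_network(f'{local_ip}/{prefix_len}', strict=False)
    return ipaddress.ip_address(peer_ip) in local_net

# k8s-ctr (192.168.10.100) and k8s-w1 (192.168.10.101) share a /24:
print(installs_direct_route('192.168.10.100', '192.168.10.101'))  # True
# k8s-w0 (192.168.20.100) sits behind the router, so no direct route:
print(installs_direct_route('192.168.10.100', '192.168.20.100'))  # False
```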
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;2. Encapsulation 모드&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As we saw above, the cluster currently runs in native routing mode. Let's see exactly what goes wrong with this lab topology in that mode.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;First, deploy a sample application.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# deploy the sample application
cat &amp;lt;&amp;lt; EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webpod
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webpod
  template:
    metadata:
      labels:
        app: webpod
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - webpod
            topologyKey: &quot;kubernetes.io/hostname&quot;
      containers:
      - name: webpod
        image: traefik/whoami
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: webpod
  labels:
    app: webpod
spec:
  selector:
    app: webpod
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
EOF


# deploy curl-pod on the k8s-ctr node
cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: curl-pod
  labels:
    app: curl
spec:
  nodeName: k8s-ctr
  containers:
  - name: curl
    image: nicolaka/netshoot
    command: [&quot;tail&quot;]
    args: [&quot;-f&quot;, &quot;/dev/null&quot;]
  terminationGracePeriodSeconds: 0
EOF&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's check the deployed application and verify whether traffic actually flows.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# check the deployment
kubectl get deploy,svc,ep webpod -owide
kubectl get endpointslices -l app=webpod
kubectl get ciliumendpoints # check the IPs


(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get deploy,svc,ep webpod -owide
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES           SELECTOR
deployment.apps/webpod   3/3     3            3           20h   webpod       traefik/whoami   app=webpod

NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service/webpod   ClusterIP   10.96.234.235   &amp;lt;none&amp;gt;        80/TCP    20h   app=webpod

NAME               ENDPOINTS                                       AGE
endpoints/webpod   172.20.0.74:80,172.20.1.224:80,172.20.2.63:80   20h
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get endpointslices -l app=webpod
NAME           ADDRESSTYPE   PORTS   ENDPOINTS                              AGE
webpod-9v987   IPv4          80      172.20.0.74,172.20.1.224,172.20.2.63   20h
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumendpoints
NAME                      SECURITY IDENTITY   ENDPOINT STATE   IPV4           IPV6
curl-pod                  58696               ready            172.20.0.217
webpod-697b545f57-5dd9p   12564               ready            172.20.0.74
webpod-697b545f57-rdh2t   12564               ready            172.20.1.224
webpod-697b545f57-vhn6k   12564               ready            172.20.2.63


# connectivity check: observe the problem
kubectl exec -it curl-pod -- curl webpod | grep Hostname
kubectl exec -it curl-pod -- sh -c 'while true; do curl -s --connect-timeout 1 webpod | grep Hostname; echo &quot;---&quot; ; sleep 1; done'

# one of the backends keeps dropping out
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl webpod | grep Hostname
Hostname: webpod-697b545f57-5dd9p
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl webpod | grep Hostname
Hostname: webpod-697b545f57-vhn6k
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl webpod | grep Hostname
command terminated with exit code 130
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- sh -c 'while true; do curl -s --connect-timeout 1 webpod | grep Hostname; echo &quot;---&quot; ; sleep 1; done'
---
Hostname: webpod-697b545f57-5dd9p
---
---
Hostname: webpod-697b545f57-vhn6k
---
Hostname: webpod-697b545f57-5dd9p
---
Hostname: webpod-697b545f57-vhn6k
---
Hostname: webpod-697b545f57-vhn6k
---
Hostname: webpod-697b545f57-5dd9p

# capture the IP of the webpod pod on k8s-w0 (k8s-w0 sits on the other network)
export WEBPOD=$(kubectl get pod -l app=webpod --field-selector spec.nodeName=k8s-w0 -o jsonpath='{.items[0].status.podIP}')
echo $WEBPOD

(⎈|HomeLab:N/A) root@k8s-ctr:~# export WEBPOD=$(kubectl get pod -l app=webpod --field-selector spec.nodeName=k8s-w0 -o jsonpath='{.items[0].status.podIP}')
echo $WEBPOD
172.20.1.224

# new terminal [router]
tcpdump -i any icmp -nn

# pinging the WEBPOD on k8s-w0 (192.168.20.100) fails
# as we saw earlier, there is no route for the PodCIDR hosted on k8s-w0
kubectl exec -it curl-pod -- ping -c 2 -w 1 -W 1 $WEBPOD


(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- ping -c 2 -w 1 -W 1 $WEBPOD
PING 172.20.1.224 (172.20.1.224) 56(84) bytes of data.

--- 172.20.1.224 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

command terminated with exit code 1


# new terminal [router]: no reply
root@router:~# tcpdump -i any icmp -nn
tcpdump: data link type LINUX_SLL2
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
20:37:18.313178 eth1  In  IP 172.20.0.217 &amp;gt; 172.20.1.224: ICMP echo request, id 99, seq 1, length 64 # arrives on eth1
20:37:18.313209 eth0  Out IP 172.20.0.217 &amp;gt; 172.20.1.224: ICMP echo request, id 99, seq 1, length 64 # sent out eth0, the NAT uplink NIC -&amp;gt; the connection never completes


# new terminal [router]
ip -c route
ip route get 172.20.2.36

# no route for 172.20.1.0/24 (the router knows nothing about the PodCIDRs; it only routes the node networks)
# -&amp;gt; the packet falls through to the default route
root@router:~# ip -c route
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.10.1.0/24 dev loop1 proto kernel scope link src 10.10.1.200
10.10.2.0/24 dev loop2 proto kernel scope link src 10.10.2.200
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.200
192.168.20.0/24 dev eth2 proto kernel scope link src 192.168.20.200

# check the route chosen for that IP
root@router:~# ip route get 172.20.2.36
172.20.2.36 via 10.0.2.2 dev eth0 src 10.0.2.15 uid 0
    cache&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;With nodes on different networks, as in this lab, Cilium's native routing mode cannot deliver traffic to pods located on the other network.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;There are two ways to solve this problem.&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Routing: add static routes for the PodCIDRs on the router. However, this gets harder to manage as the node count grows, and it breaks when a node's PodCIDR changes. Alternatively, BGP can register the routes automatically; we will cover that approach in the next post.&lt;/li&gt;
&lt;li&gt;Overlay networking: use Cilium's encapsulation mode. Since the nodes can already reach each other, each node encapsulates the pod's original packet and forwards it to the peer node.&lt;/li&gt;
&lt;/ul&gt;
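&lt;p data-ke-size=&quot;size16&quot;&gt;The router's behavior in the failed ping above is plain longest-prefix-match routing, sketched below with the routing table from the &lt;code&gt;ip -c route&lt;/code&gt; output. This is a conceptual model, not the kernel's FIB: node networks resolve to the right interface, while PodCIDR destinations match nothing more specific than the default route.&lt;/p&gt;

```python
import ipaddress

# Routing table copied from the router VM's ip -c route output above.
ROUTES = {
    '0.0.0.0/0': 'eth0',          # default via 10.0.2.2 (NAT uplink)
    '192.168.10.0/24': 'eth1',
    '192.168.20.0/24': 'eth2',
    '10.10.1.0/24': 'loop1',
    '10.10.2.0/24': 'loop2',
}

def lookup(dst):
    # Longest-prefix match: pick the most specific network containing dst.
    addr = ipaddress.ip_address(dst)
    best = max((ipaddress.ip_network(net) for net in ROUTES
                if addr in ipaddress.ip_network(net)),
               key=lambda net: net.prefixlen)
    return ROUTES[str(best)]

print(lookup('192.168.20.100'))  # eth2 - node IPs route correctly
print(lookup('172.20.1.224'))    # eth0 - PodCIDR falls to the default route
```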
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In this post, we resolve the problem with Cilium's encapsulation mode.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# [kernel config options] Requirements for Tunneling and Routing
grep -E 'CONFIG_VXLAN=y|CONFIG_VXLAN=m|CONFIG_GENEVE=y|CONFIG_GENEVE=m|CONFIG_FIB_RULES=y' /boot/config-$(uname -r)
CONFIG_FIB_RULES=y # built into the kernel
CONFIG_VXLAN=m # compiled as a module &amp;rarr; load it into the kernel before use
CONFIG_GENEVE=m # compiled as a module &amp;rarr; load it into the kernel before use

# load the kernel module
lsmod | grep -E 'vxlan|geneve'
modprobe vxlan # modprobe geneve
lsmod | grep -E 'vxlan|geneve'

(⎈|HomeLab:N/A) root@k8s-ctr:~# lsmod | grep -E 'vxlan|geneve'
(⎈|HomeLab:N/A) root@k8s-ctr:~# modprobe vxlan # modprobe geneve
(⎈|HomeLab:N/A) root@k8s-ctr:~# lsmod | grep -E 'vxlan|geneve'
vxlan                 155648  0
ip6_udp_tunnel         16384  1 vxlan
udp_tunnel             32768  1 vxlan

for i in w1 w0 ; do echo &quot;&amp;gt;&amp;gt; node : k8s-$i &amp;lt;&amp;lt;&quot;; sshpass -p 'vagrant' ssh vagrant@k8s-$i sudo modprobe vxlan ; echo; done
for i in w1 w0 ; do echo &quot;&amp;gt;&amp;gt; node : k8s-$i &amp;lt;&amp;lt;&quot;; sshpass -p 'vagrant' ssh vagrant@k8s-$i sudo lsmod | grep -E 'vxlan|geneve' ; echo; done


# capture the IP of the webpod pod on k8s-w1
export WEBPOD1=$(kubectl get pod -l app=webpod --field-selector spec.nodeName=k8s-w1 -o jsonpath='{.items[0].status.podIP}')
echo $WEBPOD1

# leave a continuous ping running
kubectl exec -it curl-pod -- ping $WEBPOD1


# upgrade
helm upgrade cilium cilium/cilium --namespace kube-system --version 1.18.0 --reuse-values \
  --set routingMode=tunnel --set tunnelProtocol=vxlan \
  --set autoDirectNodeRoutes=false --set installNoConntrackIptablesRules=false

kubectl rollout restart -n kube-system ds/cilium

# result of the continuous ping
kubectl exec -it curl-pod -- ping $WEBPOD1
# traffic drops briefly while the cilium pods roll out
64 bytes from 172.20.2.63: icmp_seq=107 ttl=62 time=3.71 ms
64 bytes from 172.20.2.63: icmp_seq=108 ttl=62 time=2.75 ms
From 172.20.0.172 icmp_seq=143 Time to live exceeded
From 172.20.0.172 icmp_seq=144 Time to live exceeded
From 172.20.0.172 icmp_seq=145 Time to live exceeded
From 172.20.0.172 icmp_seq=146 Time to live exceeded
From 172.20.0.172 icmp_seq=147 Time to live exceeded
From 172.20.0.172 icmp_seq=148 Time to live exceeded
From 172.20.0.172 icmp_seq=149 Time to live exceeded
From 172.20.0.172 icmp_seq=150 Time to live exceeded
From 172.20.0.172 icmp_seq=151 Time to live exceeded
From 172.20.0.172 icmp_seq=152 Time to live exceeded
64 bytes from 172.20.2.63: icmp_seq=153 ttl=63 time=9.84 ms
64 bytes from 172.20.2.63: icmp_seq=154 ttl=63 time=5.50 ms
64 bytes from 172.20.2.63: icmp_seq=155 ttl=63 time=2.37 ms&lt;/code&gt;&lt;/pre&gt;
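&lt;p data-ke-size=&quot;size16&quot;&gt;A small detail in the ping output hints at the datapath change: replies arrive with ttl=62 before the switch and ttl=63 afterwards. Assuming the pod emits replies with an initial TTL of 64 (a common Linux default, not verified here), each routed hop decrements it by one, so the VXLAN path crosses one fewer routed hop.&lt;/p&gt;

```python
def routed_hops(observed_ttl, initial_ttl=64):
    # Each router on the path decrements the TTL by one.
    return initial_ttl - observed_ttl

print(routed_hops(62))  # 2 routed hops on the native-routing path
print(routed_hops(63))  # 1 routed hop once traffic rides the VXLAN tunnel
```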
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;After updating Cilium to encapsulation mode, let's see how the configuration and the node state have changed.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# check the configuration
cilium features status
cilium features status | grep datapath_network

kubectl exec -it -n kube-system ds/cilium -- cilium status | grep ^Routing
cilium config view | grep tunnel


(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium features status | grep datapath_network
Yes      cilium_feature_datapath_network                                         mode=overlay-vxlan                                1        1       1
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -- cilium status | grep ^Routing
Routing:                 Network: Tunnel [vxlan]   Host: BPF
(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium config view | grep tunnel
routing-mode                                      tunnel
tunnel-protocol                                   vxlan
tunnel-source-port-range                          0-0

# Inspect the cilium_vxlan interface
ip -c addr show dev cilium_vxlan
for i in w1 w0 ; do echo &quot;&amp;gt;&amp;gt; node : k8s-$i &amp;lt;&amp;lt;&quot;; sshpass -p 'vagrant' ssh vagrant@k8s-$i ip -c addr show dev cilium_vxlan ; echo; done

(⎈|HomeLab:N/A) root@k8s-ctr:~# ip -c addr show dev cilium_vxlan
26: cilium_vxlan: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UNKNOWN group default
    link/ether 16:5d:7a:ab:58:3f brd ff:ff:ff:ff:ff:ff
    inet6 fe80::145d:7aff:feab:583f/64 scope link
       valid_lft forever preferred_lft forever

# Check the routing table
ip -c route | grep cilium_host
ip route get 172.20.1.10
ip route get 172.20.2.10

# Even though the k8s nodes sit in different network segments, routes for the pod CIDRs appear in the table.
(⎈|HomeLab:N/A) root@k8s-ctr:~# ip -c route | grep cilium_host
172.20.0.0/24 via 172.20.0.172 dev cilium_host proto kernel src 172.20.0.172
172.20.0.172 dev cilium_host proto kernel scope link
172.20.1.0/24 via 172.20.0.172 dev cilium_host proto kernel src 172.20.0.172 mtu 1450
172.20.2.0/24 via 172.20.0.172 dev cilium_host proto kernel src 172.20.0.172 mtu 1450
(⎈|HomeLab:N/A) root@k8s-ctr:~# ip route get 172.20.1.10
172.20.1.10 dev cilium_host src 172.20.0.172 uid 0
    cache mtu 1450
(⎈|HomeLab:N/A) root@k8s-ctr:~# ip route get 172.20.2.10
172.20.2.10 dev cilium_host src 172.20.0.172 uid 0
    cache mtu 1450&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The node's routing table now contains routes for the other nodes' PodCIDRs. These routes actually point at cilium_host, where encapsulation is performed. Since the nodes themselves can reach each other, the encapsulated packets are routed to the peer node and communication succeeds.&lt;/p&gt;
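The route selection described above can be mimicked with a small longest-prefix-match sketch. This is only an illustration built from the routes shown in this node's table, not Cilium's actual datapath code:

```python
import ipaddress

# Simplified view of the `ip route` output above: the other nodes'
# PodCIDRs all point at the cilium_host router IP (172.20.0.172).
routes = {
    ipaddress.ip_network("172.20.0.0/24"): "local (this node's PodCIDR)",
    ipaddress.ip_network("172.20.1.0/24"): "via 172.20.0.172 (cilium_host, encapsulated)",
    ipaddress.ip_network("172.20.2.0/24"): "via 172.20.0.172 (cilium_host, encapsulated)",
}

def lookup(dst: str) -> str:
    """Longest-prefix match, as the kernel does for these routes."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routes if addr in net]
    return routes[max(matches, key=lambda n: n.prefixlen)]

print(lookup("172.20.1.10"))  # via 172.20.0.172 (cilium_host, encapsulated)
```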
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's take a closer look at the 172.20.0.172 address from the routing table and the cilium_host interface.&lt;/p&gt;
&lt;pre class=&quot;reasonml&quot;&gt;&lt;code&gt;# Capture the cilium pod names
export CILIUMPOD0=$(kubectl get -l k8s-app=cilium pods -n kube-system --field-selector spec.nodeName=k8s-ctr -o jsonpath='{.items[0].metadata.name}')
export CILIUMPOD1=$(kubectl get -l k8s-app=cilium pods -n kube-system --field-selector spec.nodeName=k8s-w1  -o jsonpath='{.items[0].metadata.name}')
export CILIUMPOD2=$(kubectl get -l k8s-app=cilium pods -n kube-system --field-selector spec.nodeName=k8s-w0  -o jsonpath='{.items[0].metadata.name}')
echo $CILIUMPOD0 $CILIUMPOD1 $CILIUMPOD2

# Check the router IP
kubectl exec -it $CILIUMPOD0 -n kube-system -c cilium-agent -- cilium status --all-addresses | grep router
kubectl exec -it $CILIUMPOD1 -n kube-system -c cilium-agent -- cilium status --all-addresses | grep router
kubectl exec -it $CILIUMPOD2 -n kube-system -c cilium-agent -- cilium status --all-addresses | grep router

(⎈|HomeLab:N/A) root@k8s-ctr:~# export CILIUMPOD0=$(kubectl get -l k8s-app=cilium pods -n kube-system --field-selector spec.nodeName=k8s-ctr -o jsonpath='{.items[0].metadata.name}')
export CILIUMPOD1=$(kubectl get -l k8s-app=cilium pods -n kube-system --field-selector spec.nodeName=k8s-w1  -o jsonpath='{.items[0].metadata.name}')
export CILIUMPOD2=$(kubectl get -l k8s-app=cilium pods -n kube-system --field-selector spec.nodeName=k8s-w0  -o jsonpath='{.items[0].metadata.name}')
echo $CILIUMPOD0 $CILIUMPOD1 $CILIUMPOD2
cilium-4ljkb cilium-2lgrw cilium-5s92k

# The IP acting as the router turns out to be 172.20.0.172.
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it $CILIUMPOD0 -n kube-system -c cilium-agent -- cilium status --all-addresses | grep router
  172.20.0.172 (router)&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now let's retry the pod-to-pod communication that previously failed between nodes on different networks.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# Verify connectivity
kubectl exec -it curl-pod -- curl webpod | grep Hostname
kubectl exec -it curl-pod -- sh -c 'while true; do curl -s --connect-timeout 1 webpod | grep Hostname; echo &quot;---&quot; ; sleep 1; done'

# All three pod hostnames appear.
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- sh -c 'while true; do curl -s --connect-timeout 1 webpod | grep Hostname; echo &quot;---&quot; ; sleep 1; done'
Hostname: webpod-697b545f57-5dd9p
---
Hostname: webpod-697b545f57-vhn6k
---
Hostname: webpod-697b545f57-5dd9p
---
Hostname: webpod-697b545f57-vhn6k
---
Hostname: webpod-697b545f57-rdh2t

# Capture the IP of the webpod pod running on node k8s-w0
export WEBPOD=$(kubectl get pod -l app=webpod --field-selector spec.nodeName=k8s-w0 -o jsonpath='{.items[0].status.podIP}')
echo $WEBPOD

# New terminal [router]
tcpdump -i any udp port 8472 -nn

# Ping test
kubectl exec -it curl-pod -- ping -c 2 -w 1 -W 1 $WEBPOD

# Success
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- ping -c 2 -w 1 -W 1 $WEBPOD
PING 172.20.1.224 (172.20.1.224) 56(84) bytes of data.
64 bytes from 172.20.1.224: icmp_seq=1 ttl=63 time=7.61 ms

--- 172.20.1.224 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 7.606/7.606/7.606/0.000 ms
command terminated with exit code 1

# New terminal [router]: how is the traffic routed?
tcpdump -i any icmp -nn
tcpdump -i any udp port 8472 -nn # port used by VXLAN

# Capture result on the router
root@router:~# tcpdump -i any udp port 8472 -nn
tcpdump: data link type LINUX_SLL2
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
21:52:10.609698 eth1  In  IP 192.168.10.100.41429 &amp;gt; 192.168.20.100.8472: OTV, flags [I] (0x08), overlay 0, instance 58696 # k8s-ctr -&amp;gt; k8s-w0
IP 172.20.0.217 &amp;gt; 172.20.1.224: ICMP echo request, id 194, seq 1, length 64 # inner header (pod-to-pod traffic)
21:52:10.609730 eth2  Out IP 192.168.10.100.41429 &amp;gt; 192.168.20.100.8472: OTV, flags [I] (0x08), overlay 0, instance 58696
IP 172.20.0.217 &amp;gt; 172.20.1.224: ICMP echo request, id 194, seq 1, length 64
21:52:10.612113 eth2  In  IP 192.168.20.100.49596 &amp;gt; 192.168.10.100.8472: OTV, flags [I] (0x08), overlay 0, instance 12564
IP 172.20.1.224 &amp;gt; 172.20.0.217: ICMP echo reply, id 194, seq 1, length 64
21:52:10.612135 eth1  Out IP 192.168.20.100.49596 &amp;gt; 192.168.10.100.8472: OTV, flags [I] (0x08), overlay 0, instance 12564
IP 172.20.1.224 &amp;gt; 172.20.0.217: ICMP echo reply, id 194, seq 1, length 64&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Earlier, while covering Cilium's routing modes, we mentioned the drawbacks of encapsulation mode.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In encapsulation mode, extra headers are added to each packet, so the effective MTU available to the payload is lower than with native routing (by 50 bytes per network packet in the case of VXLAN), which in turn lowers the maximum throughput of a network connection.&lt;/p&gt;
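The 50-byte figure can be broken down with a quick sketch, assuming the standard VXLAN-over-IPv4 framing:

```python
# Per-packet overhead of VXLAN-over-IPv4 encapsulation.
OUTER_IP = 20    # outer IPv4 header
OUTER_UDP = 8    # outer UDP header (port 8472 in the captures above)
VXLAN_HDR = 8    # VXLAN header (flags + VNI)
INNER_ETH = 14   # inner Ethernet header of the encapsulated frame

overhead = OUTER_IP + OUTER_UDP + VXLAN_HDR + INNER_ETH
print(overhead)         # 50
print(1500 - overhead)  # 1450 -> effective MTU left for the inner IP packet
```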
&lt;p data-ke-size=&quot;size16&quot;&gt;On the node below, the eth1 interface reports an MTU of 1500, but the routes to the other PodCIDRs show an MTU of 1450.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;(⎈|HomeLab:N/A) root@k8s-ctr:~# ip a
1: lo: &amp;lt;LOOPBACK,UP,LOWER_UP&amp;gt; mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: eth0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
    altname enp0s3
    inet 10.0.2.15/24 metric 100 brd 10.0.2.255 scope global dynamic eth0
       valid_lft 79973sec preferred_lft 79973sec
    inet6 fd17:625c:f037:2:a00:27ff:fe6b:69c9/64 scope global dynamic mngtmpaddr noprefixroute
       valid_lft 86322sec preferred_lft 14322sec
    inet6 fe80::a00:27ff:fe6b:69c9/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:53:64:57 brd ff:ff:ff:ff:ff:ff
    altname enp0s8
    inet 192.168.10.100/24 brd 192.168.10.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe53:6457/64 scope link
       valid_lft forever preferred_lft forever
...

(⎈|HomeLab:N/A) root@k8s-ctr:~# ip route | grep 172.20
172.20.0.0/24 via 172.20.0.172 dev cilium_host proto kernel src 172.20.0.172
172.20.0.0/16 via 192.168.10.200 dev eth1 proto static
172.20.0.172 dev cilium_host proto kernel scope link
172.20.1.0/24 via 172.20.0.172 dev cilium_host proto kernel src 172.20.0.172 mtu 1450
172.20.2.0/24 via 172.20.0.172 dev cilium_host proto kernel src 172.20.0.172 mtu 1450&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To test the MTU on pod-to-pod traffic, let's run the checks below.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;When curl-pod sends a 1500-byte ping with the DF (Don't Fragment) flag set (which prevents fragmentation even when a network segment has a smaller MTU), it fails with &lt;code&gt;ping: sendmsg: Message too large&lt;/code&gt;.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# -M do : set the Don't Fragment (DF) flag so packets are not fragmented
# -s : ICMP payload size in bytes
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- ping -M do -s 1500 $WEBPOD
PING 172.20.1.224 (172.20.1.224) 1500(1528) bytes of data.
ping: sendmsg: Message too large
ping: sendmsg: Message too large
^C
--- 172.20.1.224 ping statistics ---
2 packets transmitted, 0 received, +2 errors, 100% packet loss, time 1019ms

command terminated with exit code 1&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As shown below, communication succeeds at 1422. This is because the 1422-byte payload gains a 20-byte IP header and an 8-byte ICMP header, and the VXLAN outer headers then add another 50 bytes, reaching the 1500-byte link MTU exactly.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- ping -M do -s 1423 $WEBPOD
PING 172.20.1.224 (172.20.1.224) 1423(1451) bytes of data.
ping: sendmsg: Message too large
ping: sendmsg: Message too large
ping: sendmsg: Message too large
ping: sendmsg: Message too large
^C
--- 172.20.1.224 ping statistics ---
4 packets transmitted, 0 received, +4 errors, 100% packet loss, time 3097ms

command terminated with exit code 1
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- ping -M do -s 1422 $WEBPOD
PING 172.20.1.224 (172.20.1.224) 1422(1450) bytes of data.
1430 bytes from 172.20.1.224: icmp_seq=1 ttl=63 time=7.45 ms
1430 bytes from 172.20.1.224: icmp_seq=2 ttl=63 time=4.37 ms
^C
--- 172.20.1.224 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 4.374/5.912/7.450/1.538 ms&lt;/code&gt;&lt;/pre&gt;
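The arithmetic behind the 1422-byte boundary can be checked with a short sketch:

```python
# Why 1422 is the largest ICMP payload that crosses the VXLAN path unfragmented.
ROUTE_MTU = 1450  # inner-path MTU from `ip route get` (1500 - 50 bytes VXLAN overhead)
IP_HDR = 20       # inner IPv4 header
ICMP_HDR = 8      # ICMP echo header

max_payload = ROUTE_MTU - IP_HDR - ICMP_HDR
print(max_payload)  # 1422 -> `ping -M do -s 1422` succeeds, -s 1423 does not
```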
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Wrapping Up&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Using the Cilium lab environment, we examined encapsulation mode, which enables pod-to-pod communication between nodes on different networks. Since they are somewhat separate topics, Cilium LoadBalancer IPAM and Cilium L2 Announcement will be covered in the next post.&lt;/p&gt;</description>
      <category>Cilium</category>
      <category>cilium</category>
      <category>encapsulation</category>
      <category>kubernetes</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/53</guid>
      <comments>https://a-person.tistory.com/53#entry53comment</comments>
      <pubDate>Fri, 8 Aug 2025 23:31:46 +0900</pubDate>
    </item>
    <item>
      <title>[4] Cilium에서 NodeLocalDNS 사용</title>
      <link>https://a-person.tistory.com/52</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;In this post, we will look at how pod DNS queries work by default and how to use NodeLocalDNS.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Table of Contents&lt;/h2&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;Pod DNS queries and coredns&lt;/li&gt;
&lt;li&gt;Using NodeLocalDNS&lt;/li&gt;
&lt;li&gt;Cilium's Local Redirect Policy&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;1. Pod DNS Queries and coredns&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's look at how DNS is handled in a Kubernetes environment, alongside the CoreDNS configuration.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# Check the pod's DNS settings (search domains, nameserver, and ndots)
kubectl exec -it curl-pod -- cat /etc/resolv.conf

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.96.0.10
options ndots:5

# These values come from the kubelet configuration.
cat /var/lib/kubelet/config.yaml | grep cluster -A1

(⎈|HomeLab:N/A) root@k8s-ctr:~# cat /var/lib/kubelet/config.yaml | grep cluster -A1
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerRuntimeEndpoint: &quot;&quot;

# 10.96.0.10 is the ClusterIP of the kube-dns Service, which routes to the coredns pods.
kubectl get svc,ep -n kube-system kube-dns
kubectl get pod -n kube-system -l k8s-app=kube-dns

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc,ep -n kube-system kube-dns
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME               TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
service/kube-dns   ClusterIP   10.96.0.10   &amp;lt;none&amp;gt;        53/UDP,53/TCP,9153/TCP   4h40m

NAME                 ENDPOINTS                                                     AGE
endpoints/kube-dns   172.20.0.224:53,172.20.1.107:53,172.20.0.224:53 + 3 more...   4h40m
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -n kube-system -l k8s-app=kube-dns -owide
NAME                       READY   STATUS    RESTARTS   AGE    IP             NODE      NOMINATED NODE   READINESS GATES
coredns-674b8bbfcf-j48sp   1/1     Running   0          3h8m   172.20.1.107   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
coredns-674b8bbfcf-zfdq8   1/1     Running   0          3h8m   172.20.0.224   k8s-w1    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;

# The coredns pods reference a ConfigMap named coredns for their configuration.
kubectl describe pod -n kube-system -l k8s-app=kube-dns
...
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
...

# Inspect the configuration defined in the coredns ConfigMap.
kubectl describe cm -n kube-system coredns
...
Corefile:
----
.:53 {              # listen on port 53 for all domains
    errors          # log DNS responses that result in errors
    health {        # expose a health endpoint for status checks
       lameduck 5s  # on shutdown, drain traffic for 5s in lameduck mode (graceful shutdown)
    }
    ready           # readiness endpoint: the HTTP endpoint on port 8181 returns 200 OK once all plugins signal they are ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {    # Kubernetes DNS plugin (handles in-cluster domains); cluster.local is the cluster domain
       pods insecure                         # allow DNS lookups by pod IP (no verification)
       fallthrough in-addr.arpa ip6.arpa     # if there is no result in these zones, fall through to the next plugin
       ttl 30                                # record TTL (30 seconds)
    }
    prometheus :9153 # expose Prometheus metrics on :9153
    forward . /etc/resolv.conf {             # queries CoreDNS cannot answer (non-cluster domains) are forwarded to the upstream in /etc/resolv.conf; "." matches all queries
       max_concurrent 1000                   # at most 1000 concurrent forwarded queries
    }
    cache 30 {                        # DNS response caching, default TTL 30 seconds
       disable success cluster.local  # do not cache successful responses for cluster.local
       disable denial cluster.local   # do not cache NXDOMAIN responses for cluster.local either
    } 
    loop         # detect simple forwarding loops and halt the CoreDNS process if one is found
    reload       # automatically re-apply the Corefile when it changes; takes about 2 minutes after editing the ConfigMap
    loadbalance  # round-robin DNS load balancing: randomizes the order of A, AAAA, and MX records in responses
}
}

# Check the host's /etc/resolv.conf
(⎈|HomeLab:N/A) root@k8s-ctr:~# cat /etc/resolv.conf |grep -v &quot;#&quot;

nameserver 127.0.0.53
options edns0 trust-ad
search .

# 127.0.0.53 is the systemd-resolved stub, which in turn uses the servers below.
resolvectl 

(⎈|HomeLab:N/A) root@k8s-ctr:~# resolvectl
Global
         Protocols: -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
  resolv.conf mode: stub

Link 2 (eth0)
    Current Scopes: DNS
         Protocols: +DefaultRoute -LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
Current DNS Server: 10.0.2.3
       DNS Servers: 10.0.2.3
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Putting this together: pods send DNS queries to coredns, coredns answers for the cluster domain, and any other domain is resolved according to the node's /etc/resolv.conf.&lt;/p&gt;
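The search-domain behavior driven by the pod's resolv.conf (search list plus ndots:5) can be sketched as below. This is a glibc-style simplification, not the exact resolver code; musl and other resolvers differ in details:

```python
# Candidate names tried, in order, for a query from the pod above
# (search default.svc.cluster.local svc.cluster.local cluster.local, ndots:5).
SEARCH = ["default.svc.cluster.local", "svc.cluster.local", "cluster.local"]
NDOTS = 5

def candidates(name: str) -> list[str]:
    if name.endswith("."):           # absolute (rooted) name: queried as-is only
        return [name]
    if name.count(".") >= NDOTS:     # "enough" dots: try the bare name first
        return [name] + [f"{name}.{s}" for s in SEARCH]
    # fewer dots than ndots: search domains first, bare name last
    return [f"{name}.{s}" for s in SEARCH] + [name]

print(candidates("google.com"))
# ['google.com.default.svc.cluster.local', 'google.com.svc.cluster.local',
#  'google.com.cluster.local', 'google.com']
```

This matches the nslookup -debug output shown below, where three NXDOMAIN answers precede the final successful query for the bare name.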
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's test DNS queries in the lab environment.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Monitoring terminal 1
cilium hubble port-forward&amp;amp;
hubble observe -f --port 53
hubble observe -f --port 53 --protocol UDP
hubble observe -f --pod curl-pod --port 53

# Monitoring terminal 2
tcpdump -i any udp port 53 -nn

# Scale coredns down to one pod for easier testing
kubectl scale deployment -n kube-system coredns --replicas 1
kubectl get pod -n kube-system -l k8s-app=kube-dns -owide

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl scale deployment -n kube-system coredns --replicas 1
deployment.apps/coredns scaled
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -n kube-system -l k8s-app=kube-dns -owide
NAME                       READY   STATUS        RESTARTS   AGE     IP             NODE      NOMINATED NODE   READINESS GATES
coredns-674b8bbfcf-j48sp   1/1     Running       0          3h22m   172.20.1.107   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;

# Check coredns cache-hit metrics
kubectl exec -it curl-pod -- curl kube-dns.kube-system.svc:9153/metrics | grep coredns_cache_ | grep -v ^#

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl kube-dns.kube-system.svc:9153/metrics | grep coredns_cache_ | grep -v ^#
coredns_cache_entries{server=&quot;dns://:53&quot;,type=&quot;denial&quot;,view=&quot;&quot;,zones=&quot;.&quot;} 1
coredns_cache_entries{server=&quot;dns://:53&quot;,type=&quot;success&quot;,view=&quot;&quot;,zones=&quot;.&quot;} 0
coredns_cache_misses_total{server=&quot;dns://:53&quot;,view=&quot;&quot;,zones=&quot;.&quot;} 31
coredns_cache_requests_total{server=&quot;dns://:53&quot;,view=&quot;&quot;,zones=&quot;.&quot;} 31


# Query domains
kubectl exec -it curl-pod -- nslookup -debug webpod
kubectl exec -it curl-pod -- nslookup -debug google.com

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- nslookup -debug webpod
;; Got recursion not available from 10.96.0.10
Server:         10.96.0.10
Address:        10.96.0.10#53

------------
    QUESTIONS:
        webpod.default.svc.cluster.local, type = A, class = IN
    ANSWERS:
    -&amp;gt;  webpod.default.svc.cluster.local
        internet address = 10.96.195.112
        ttl = 30
    AUTHORITY RECORDS:
    ADDITIONAL RECORDS:
------------
Name:   webpod.default.svc.cluster.local
Address: 10.96.195.112
;; Got recursion not available from 10.96.0.10
------------
    QUESTIONS:
        webpod.default.svc.cluster.local, type = AAAA, class = IN
    ANSWERS:
    AUTHORITY RECORDS:
    -&amp;gt;  cluster.local
        origin = ns.dns.cluster.local
        mail addr = hostmaster.cluster.local
        serial = 1754132506
        refresh = 7200
        retry = 1800
        expire = 86400
        minimum = 30
        ttl = 30
    ADDITIONAL RECORDS:
------------

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- nslookup -debug google.com
;; Got recursion not available from 10.96.0.10
Server:         10.96.0.10
Address:        10.96.0.10#53

------------
    QUESTIONS:
        google.com.default.svc.cluster.local, type = A, class = IN # queried with the search domain appended
    ANSWERS:
    AUTHORITY RECORDS:
    -&amp;gt;  cluster.local
        origin = ns.dns.cluster.local
        mail addr = hostmaster.cluster.local
        serial = 1754132506
        refresh = 7200
        retry = 1800
        expire = 86400
        minimum = 30
        ttl = 30
    ADDITIONAL RECORDS:
------------
** server can't find google.com.default.svc.cluster.local: NXDOMAIN
;; Got recursion not available from 10.96.0.10
Server:         10.96.0.10
Address:        10.96.0.10#53

------------
    QUESTIONS:
        google.com.svc.cluster.local, type = A, class = IN # queried with the search domain appended
    ANSWERS:
    AUTHORITY RECORDS:
    -&amp;gt;  cluster.local
        origin = ns.dns.cluster.local
        mail addr = hostmaster.cluster.local
        serial = 1754132506
        refresh = 7200
        retry = 1800
        expire = 86400
        minimum = 30
        ttl = 30
    ADDITIONAL RECORDS:
------------
** server can't find google.com.svc.cluster.local: NXDOMAIN
;; Got recursion not available from 10.96.0.10
Server:         10.96.0.10
Address:        10.96.0.10#53

------------
    QUESTIONS:
        google.com.cluster.local, type = A, class = IN # queried with the search domain appended
    ANSWERS:
    AUTHORITY RECORDS:
    -&amp;gt;  cluster.local
        origin = ns.dns.cluster.local
        mail addr = hostmaster.cluster.local
        serial = 1754132506
        refresh = 7200
        retry = 1800
        expire = 86400
        minimum = 30
        ttl = 30
    ADDITIONAL RECORDS:
------------
** server can't find google.com.cluster.local: NXDOMAIN
Server:         10.96.0.10
Address:        10.96.0.10#53

------------
    QUESTIONS:
        google.com, type = A, class = IN # final query (bare name)
    ANSWERS:
    -&amp;gt;  google.com
        internet address = 172.217.175.14
        ttl = 30
    AUTHORITY RECORDS:
    ADDITIONAL RECORDS:
------------
Non-authoritative answer:
Name:   google.com
Address: 172.217.175.14
------------
    QUESTIONS:
        google.com, type = AAAA, class = IN
    ANSWERS:
    -&amp;gt;  google.com
        has AAAA address 2404:6800:4004:823::200e
        ttl = 30
    AUTHORITY RECORDS:
    ADDITIONAL RECORDS:
------------
Name:   google.com
Address: 2404:6800:4004:823::200e

command terminated with exit code 1

# Enable coredns logging and debugging
k9s &amp;rarr; configmap &amp;rarr; select coredns &amp;rarr; E(edit) &amp;rarr; add log and debug as below, then exit
    .:53 {
        log
        debug
        errors

# Log monitoring terminal 3
kubectl -n kube-system logs -l k8s-app=kube-dns -f


# Query domains
kubectl exec -it curl-pod -- nslookup webpod
kubectl exec -it curl-pod -- nslookup google.com


# coredns logs
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system logs -l k8s-app=kube-dns -f
linux/amd64, go1.23.3, 51e11f1
...
# webpod
[INFO] 172.20.1.218:38704 - 5586 &quot;A IN webpod.default.svc.cluster.local. udp 50 false 512&quot; NOERROR qr,aa,rd 98 0.000898836s
[INFO] 172.20.1.218:38693 - 50432 &quot;AAAA IN webpod.default.svc.cluster.local. udp 50 false 512&quot; NOERROR qr,aa,rd 143 0.003870525s
# google.com
[INFO] 172.20.1.218:52681 - 2613 &quot;A IN google.com.default.svc.cluster.local. udp 54 false 512&quot; NXDOMAIN qr,aa,rd 147 0.001374837s
[INFO] 172.20.1.218:38214 - 31588 &quot;A IN google.com.svc.cluster.local. udp 46 false 512&quot; NXDOMAIN qr,aa,rd 139 0.006958878s
[INFO] 172.20.1.218:48197 - 4129 &quot;A IN google.com.cluster.local. udp 42 false 512&quot; NXDOMAIN qr,aa,rd 135 0.00506481s
[INFO] 172.20.1.218:56039 - 7882 &quot;A IN google.com. udp 28 false 512&quot; NOERROR qr,rd,ra 54 0.02799622s
[INFO] 172.20.1.218:36634 - 19787 &quot;AAAA IN google.com. udp 28 false 512&quot; NOERROR qr,rd,ra 66 0.021259972s

# hubble observe logs
(⎈|HomeLab:N/A) root@k8s-ctr:~# hubble observe -f --port 53 --protocol UDP
Aug  2 11:15:28.720: default/curl-pod (ID:13646) &amp;lt;&amp;gt; 10.96.0.10:53 (world) pre-xlate-fwd TRACED (UDP)
Aug  2 11:15:28.721: default/curl-pod (ID:13646) &amp;lt;&amp;gt; kube-system/coredns-674b8bbfcf-j48sp:53 (ID:8066) post-xlate-fwd TRANSLATED (UDP)
Aug  2 11:15:28.722: default/curl-pod:38704 (ID:13646) -&amp;gt; kube-system/coredns-674b8bbfcf-j48sp:53 (ID:8066) to-endpoint FORWARDED (UDP)
Aug  2 11:15:28.746: default/curl-pod:38704 (ID:13646) &amp;lt;- kube-system/coredns-674b8bbfcf-j48sp:53 (ID:8066) to-endpoint FORWARDED (UDP)
Aug  2 11:15:28.746: kube-system/coredns-674b8bbfcf-j48sp:53 (ID:8066) &amp;lt;&amp;gt; default/curl-pod (ID:13646) pre-xlate-rev TRACED (UDP)
Aug  2 11:15:28.746: 10.96.0.10:53 (world) &amp;lt;&amp;gt; default/curl-pod (ID:13646) post-xlate-rev TRANSLATED (UDP)
Aug  2 11:15:28.795: default/curl-pod (ID:13646) &amp;lt;&amp;gt; 10.96.0.10:53 (world) pre-xlate-fwd TRACED (UDP)
Aug  2 11:15:28.795: default/curl-pod (ID:13646) &amp;lt;&amp;gt; kube-system/coredns-674b8bbfcf-j48sp:53 (ID:8066) post-xlate-fwd TRANSLATED (UDP)
Aug  2 11:15:28.799: default/curl-pod:38693 (ID:13646) -&amp;gt; kube-system/coredns-674b8bbfcf-j48sp:53 (ID:8066) to-endpoint FORWARDED (UDP)
Aug  2 11:15:28.800: default/curl-pod:38693 (ID:13646) &amp;lt;- kube-system/coredns-674b8bbfcf-j48sp:53 (ID:8066) to-endpoint FORWARDED (UDP)
Aug  2 11:15:28.805: kube-system/coredns-674b8bbfcf-j48sp:53 (ID:8066) &amp;lt;&amp;gt; default/curl-pod (ID:13646) pre-xlate-rev TRACED (UDP)
Aug  2 11:15:28.805: 10.96.0.10:53 (world) &amp;lt;&amp;gt; default/curl-pod (ID:13646) post-xlate-rev TRANSLATED (UDP)

Aug  2 11:15:51.609: default/curl-pod (ID:13646) &amp;lt;&amp;gt; 10.96.0.10:53 (world) pre-xlate-fwd TRACED (UDP)
Aug  2 11:15:51.609: default/curl-pod (ID:13646) &amp;lt;&amp;gt; kube-system/coredns-674b8bbfcf-j48sp:53 (ID:8066) post-xlate-fwd TRANSLATED (UDP)
Aug  2 11:15:51.610: default/curl-pod:52681 (ID:13646) -&amp;gt; kube-system/coredns-674b8bbfcf-j48sp:53 (ID:8066) to-endpoint FORWARDED (UDP)
Aug  2 11:15:51.625: default/curl-pod:52681 (ID:13646) &amp;lt;- kube-system/coredns-674b8bbfcf-j48sp:53 (ID:8066) to-endpoint FORWARDED (UDP)
Aug  2 11:15:51.625: kube-system/coredns-674b8bbfcf-j48sp:53 (ID:8066) &amp;lt;&amp;gt; default/curl-pod (ID:13646) pre-xlate-rev TRACED (UDP)
Aug  2 11:15:51.625: 10.96.0.10:53 (world) &amp;lt;&amp;gt; default/curl-pod (ID:13646) post-xlate-rev TRANSLATED (UDP)
Aug  2 11:15:51.639: default/curl-pod (ID:13646) &amp;lt;&amp;gt; 10.96.0.10:53 (world) pre-xlate-fwd TRACED (UDP)
Aug  2 11:15:51.640: default/curl-pod (ID:13646) &amp;lt;&amp;gt; kube-system/coredns-674b8bbfcf-j48sp:53 (ID:8066) post-xlate-fwd TRANSLATED (UDP)
Aug  2 11:15:51.645: default/curl-pod:38214 (ID:13646) -&amp;gt; kube-system/coredns-674b8bbfcf-j48sp:53 (ID:8066) to-endpoint FORWARDED (UDP)
Aug  2 11:15:51.646: default/curl-pod:38214 (ID:13646) &amp;lt;- kube-system/coredns-674b8bbfcf-j48sp:53 (ID:8066) to-endpoint FORWARDED (UDP)
Aug  2 11:15:51.650: kube-system/coredns-674b8bbfcf-j48sp:53 (ID:8066) &amp;lt;&amp;gt; default/curl-pod (ID:13646) pre-xlate-rev TRACED (UDP)
Aug  2 11:15:51.650: 10.96.0.10:53 (world) &amp;lt;&amp;gt; default/curl-pod (ID:13646) post-xlate-rev TRANSLATED (UDP)
Aug  2 11:15:51.683: default/curl-pod (ID:13646) &amp;lt;&amp;gt; 10.96.0.10:53 (world) pre-xlate-fwd TRACED (UDP)
Aug  2 11:15:51.683: default/curl-pod (ID:13646) &amp;lt;&amp;gt; kube-system/coredns-674b8bbfcf-j48sp:53 (ID:8066) post-xlate-fwd TRANSLATED (UDP)
Aug  2 11:15:51.688: default/curl-pod:48197 (ID:13646) -&amp;gt; kube-system/coredns-674b8bbfcf-j48sp:53 (ID:8066) to-endpoint FORWARDED (UDP)
Aug  2 11:15:51.699: default/curl-pod:48197 (ID:13646) &amp;lt;- kube-system/coredns-674b8bbfcf-j48sp:53 (ID:8066) to-endpoint FORWARDED (UDP)
Aug  2 11:15:51.706: kube-system/coredns-674b8bbfcf-j48sp:53 (ID:8066) &amp;lt;&amp;gt; default/curl-pod (ID:13646) pre-xlate-rev TRACED (UDP)
Aug  2 11:15:51.710: 10.96.0.10:53 (world) &amp;lt;&amp;gt; default/curl-pod (ID:13646) post-xlate-rev TRANSLATED (UDP)
Aug  2 11:15:51.745: default/curl-pod (ID:13646) &amp;lt;&amp;gt; 10.96.0.10:53 (world) pre-xlate-fwd TRACED (UDP)
Aug  2 11:15:51.745: default/curl-pod (ID:13646) &amp;lt;&amp;gt; kube-system/coredns-674b8bbfcf-j48sp:53 (ID:8066) post-xlate-fwd TRANSLATED (UDP)
Aug  2 11:15:51.747: default/curl-pod:56039 (ID:13646) -&amp;gt; kube-system/coredns-674b8bbfcf-j48sp:53 (ID:8066) to-endpoint FORWARDED (UDP)
Aug  2 11:15:51.751: 10.0.2.3:53 (world) &amp;lt;&amp;gt; kube-system/coredns-674b8bbfcf-j48sp (ID:8066) pre-xlate-rev TRACED (UDP)
Aug  2 11:15:51.751: 10.0.2.3:53 (world) &amp;lt;&amp;gt; kube-system/coredns-674b8bbfcf-j48sp (ID:8066) pre-xlate-rev TRACED (UDP)
Aug  2 11:15:51.754: 10.0.2.3:53 (world) &amp;lt;&amp;gt; kube-system/coredns-674b8bbfcf-j48sp (ID:8066) pre-xlate-rev TRACED (UDP)
Aug  2 11:15:51.754: kube-system/coredns-674b8bbfcf-j48sp:55313 (ID:8066) -&amp;gt; 10.0.2.3:53 (world) to-network FORWARDED (UDP)
Aug  2 11:15:51.757: 10.0.2.3:53 (world) &amp;lt;&amp;gt; kube-system/coredns-674b8bbfcf-j48sp (ID:8066) pre-xlate-rev TRACED (UDP)
Aug  2 11:15:51.757: 10.0.2.3:53 (world) &amp;lt;&amp;gt; kube-system/coredns-674b8bbfcf-j48sp (ID:8066) pre-xlate-rev TRACED (UDP)
Aug  2 11:15:51.777: default/curl-pod:56039 (ID:13646) &amp;lt;- kube-system/coredns-674b8bbfcf-j48sp:53 (ID:8066) to-endpoint FORWARDED (UDP)
Aug  2 11:15:51.777: kube-system/coredns-674b8bbfcf-j48sp:55313 (ID:8066) &amp;lt;- 10.0.2.3:53 (world) to-endpoint FORWARDED (UDP)
Aug  2 11:15:51.803: kube-system/coredns-674b8bbfcf-j48sp:53 (ID:8066) &amp;lt;&amp;gt; default/curl-pod (ID:13646) pre-xlate-rev TRACED (UDP)
Aug  2 11:15:51.803: 10.96.0.10:53 (world) &amp;lt;&amp;gt; default/curl-pod (ID:13646) post-xlate-rev TRANSLATED (UDP)
Aug  2 11:15:51.824: default/curl-pod (ID:13646) &amp;lt;&amp;gt; 10.96.0.10:53 (world) pre-xlate-fwd TRACED (UDP)
Aug  2 11:15:51.824: default/curl-pod (ID:13646) &amp;lt;&amp;gt; kube-system/coredns-674b8bbfcf-j48sp:53 (ID:8066) post-xlate-fwd TRANSLATED (UDP)
Aug  2 11:15:51.824: default/curl-pod:36634 (ID:13646) -&amp;gt; kube-system/coredns-674b8bbfcf-j48sp:53 (ID:8066) to-endpoint FORWARDED (UDP)
Aug  2 11:15:51.837: default/curl-pod:36634 (ID:13646) &amp;lt;- kube-system/coredns-674b8bbfcf-j48sp:53 (ID:8066) to-endpoint FORWARDED (UDP)
Aug  2 11:15:51.837: kube-system/coredns-674b8bbfcf-j48sp:53 (ID:8066) &amp;lt;&amp;gt; default/curl-pod (ID:13646) pre-xlate-rev TRACED (UDP)
Aug  2 11:15:51.837: 10.96.0.10:53 (world) &amp;lt;&amp;gt; default/curl-pod (ID:13646) post-xlate-rev TRANSLATED (UDP)

# tcpdump logs
(⎈|HomeLab:N/A) root@k8s-ctr:~# tcpdump -i any udp port 53 -nn
tcpdump: data link type LINUX_SLL2
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
# webpod
20:15:28.721334 lxcba4acff7647e In  IP 172.20.1.218.38704 &amp;gt; 172.20.1.107.53: 5586+ A? webpod.default.svc.cluster.local. (50)
20:15:28.736149 lxc624d897a501a In  IP 172.20.1.107.53 &amp;gt; 172.20.1.218.38704: 5586*- 1/0/0 A 10.96.195.112 (98)
20:15:28.796141 lxcba4acff7647e In  IP 172.20.1.218.38693 &amp;gt; 172.20.1.107.53: 50432+ AAAA? webpod.default.svc.cluster.local. (50)
20:15:28.799300 lxc624d897a501a In  IP 172.20.1.107.53 &amp;gt; 172.20.1.218.38693: 50432*- 0/1/0 (143)
# google.com
20:15:51.610506 lxcba4acff7647e In  IP 172.20.1.218.52681 &amp;gt; 172.20.1.107.53: 2613+ A? google.com.default.svc.cluster.local. (54)
20:15:51.612065 lxc624d897a501a In  IP 172.20.1.107.53 &amp;gt; 172.20.1.218.52681: 2613 NXDomain*- 0/1/0 (147)
20:15:51.640965 lxcba4acff7647e In  IP 172.20.1.218.38214 &amp;gt; 172.20.1.107.53: 31588+ A? google.com.svc.cluster.local. (46)
20:15:51.645524 lxc624d897a501a In  IP 172.20.1.107.53 &amp;gt; 172.20.1.218.38214: 31588 NXDomain*- 0/1/0 (139)
20:15:51.687149 lxcba4acff7647e In  IP 172.20.1.218.48197 &amp;gt; 172.20.1.107.53: 4129+ A? google.com.cluster.local. (42)
20:15:51.692518 lxc624d897a501a In  IP 172.20.1.107.53 &amp;gt; 172.20.1.218.48197: 4129 NXDomain*- 0/1/0 (135)
20:15:51.745120 lxcba4acff7647e In  IP 172.20.1.218.56039 &amp;gt; 172.20.1.107.53: 7882+ A? google.com. (28)
20:15:51.753722 lxc624d897a501a In  IP 172.20.1.107.55313 &amp;gt; 10.0.2.3.53: 13520+ A? google.com. (28)
20:15:51.754903 eth0  Out IP 10.0.2.15.55313 &amp;gt; 10.0.2.3.53: 13520+ A? google.com. (28)
20:15:51.774891 eth0  In  IP 10.0.2.3.53 &amp;gt; 10.0.2.15.55313: 13520 1/0/0 A 142.250.196.142 (44)
20:15:51.775751 lxc624d897a501a In  IP 172.20.1.107.53 &amp;gt; 172.20.1.218.56039: 7882 1/0/0 A 142.250.196.142 (54)
20:15:51.824473 lxcba4acff7647e In  IP 172.20.1.218.36634 &amp;gt; 172.20.1.107.53: 19787+ AAAA? google.com. (28)
20:15:51.825237 lxc624d897a501a In  IP 172.20.1.107.55313 &amp;gt; 10.0.2.3.53: 10205+ AAAA? google.com. (28)
20:15:51.825271 eth0  Out IP 10.0.2.15.55313 &amp;gt; 10.0.2.3.53: 10205+ AAAA? google.com. (28)
20:15:51.834639 eth0  In  IP 10.0.2.3.53 &amp;gt; 10.0.2.15.55313: 10205 1/0/0 AAAA 2404:6800:4004:818::200e (56)
20:15:51.835809 lxc624d897a501a In  IP 172.20.1.107.53 &amp;gt; 172.20.1.218.36634: 19787 1/0/0 AAAA 2404:6800:4004:818::200e (66)


# If CoreDNS uses the prometheus plugin, cache-related metrics can be collected from the metrics port (:9153).
## coredns_cache_entries          number of entries currently in the cache; type is success or denial (normal answer vs. NXDOMAIN, etc.)
## coredns_cache_hits_total       number of cache hits
## coredns_cache_misses_total     number of cache misses
## coredns_cache_requests_total   total number of cache lookups

kubectl exec -it curl-pod -- curl kube-dns.kube-system.svc:9153/metrics | grep coredns_cache_ | grep -v ^#

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl kube-dns.kube-system.svc:9153/metrics | grep coredns_cache_ | grep -v ^#
coredns_cache_entries{server=&quot;dns://:53&quot;,type=&quot;denial&quot;,view=&quot;&quot;,zones=&quot;.&quot;} 1
coredns_cache_entries{server=&quot;dns://:53&quot;,type=&quot;success&quot;,view=&quot;&quot;,zones=&quot;.&quot;} 2
coredns_cache_misses_total{server=&quot;dns://:53&quot;,view=&quot;&quot;,zones=&quot;.&quot;} 44
coredns_cache_requests_total{server=&quot;dns://:53&quot;,view=&quot;&quot;,zones=&quot;.&quot;} 44&lt;/code&gt;&lt;/pre&gt;
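&lt;p data-ke-size=&quot;size16&quot;&gt;The cache counters above can be reduced to a single hit ratio. A minimal sketch, assuming Prometheus-format metrics text as input (the metric names are CoreDNS's own; the sample values are illustrative, not taken from the output above):&lt;/p&gt;

```shell
# compute a CoreDNS cache hit ratio from Prometheus-format metrics text
# (sample input; on a real cluster, pipe the :9153/metrics output in instead)
metrics='coredns_cache_hits_total{server="dns://:53"} 30
coredns_cache_misses_total{server="dns://:53"} 10'
ratio=$(printf '%s\n' "$metrics" | awk '
  /^coredns_cache_hits_total/   {h += $2}
  /^coredns_cache_misses_total/ {m += $2}
  END { printf "%.0f", (h + m) ? 100 * h / (h + m) : 0 }')
echo "cache hit ratio: ${ratio}%"
```

&lt;p data-ke-size=&quot;size16&quot;&gt;A low ratio means most queries fall through the cache to the upstream resolver, which is exactly the load NodeLocalDNS targets.&lt;/p&gt;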
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Through this hands-on exercise we looked at how pod DNS queries work and how to monitor them.&lt;/p&gt;
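&lt;p data-ke-size=&quot;size16&quot;&gt;The string of NXDOMAIN answers in the tcpdump output above comes from the resolver's search-list expansion. A sketch that reproduces the query order, assuming the kubelet-default search domains and ndots:5 threshold:&lt;/p&gt;

```shell
# a name with fewer than ndots dots is tried against each search domain
# first; the absolute name is queried last (matching the tcpdump order above)
name="google.com"
search="default.svc.cluster.local svc.cluster.local cluster.local"
ndots=5
dots=$(printf '%s' "$name" | tr -cd '.' | wc -c)
if [ "$dots" -lt "$ndots" ]; then
  for d in $search; do
    echo "$name.$d"
  done
fi
echo "$name."
```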
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;2. Using NodeLocalDNS&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;NodeLocalDNS runs a DNS caching agent on every node as a DaemonSet, providing a local DNS caching service; it is used when excessive DNS query volume degrades performance.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the pod DNS query flow shown below, the client pod queries the local DNS cache first, and only on a cache miss does the query go on to coredns. This reduces the load on coredns and also cuts latency, since queries no longer have to reach a coredns pod on another node.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;601&quot; data-origin-height=&quot;501&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bMBWul/dJMb89dBfzf/RF7DPWomXAG594ZqZm3PaK/tfile.svg&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bMBWul/dJMb89dBfzf/RF7DPWomXAG594ZqZm3PaK/tfile.svg&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bMBWul/dJMb89dBfzf/RF7DPWomXAG594ZqZm3PaK/tfile.svg&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbMBWul%2FdJMb89dBfzf%2FRF7DPWomXAG594ZqZm3PaK%2Ftfile.svg&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;601&quot; height=&quot;501&quot; data-origin-width=&quot;601&quot; data-origin-height=&quot;501&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://kubernetes.io/docs/tasks/administer-cluster/nodelocaldns/&quot;&gt;https://kubernetes.io/docs/tasks/administer-cluster/nodelocaldns/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's deploy NodeLocalDNS below and experiment with it.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# check current iptables rules
iptables-save | tee before.txt

# download nodelocaldns.yaml
wget https://github.com/kubernetes/kubernetes/raw/master/cluster/addons/dns/nodelocaldns/nodelocaldns.yaml

# kubedns: assign the ClusterIP of the coredns service to a variable
kubedns=`kubectl get svc kube-dns -n kube-system -o jsonpath={.spec.clusterIP}`
domain='cluster.local'    ## default value
localdns='169.254.20.10'  ## default value
echo $kubedns $domain $localdns

# kube-proxy runs in iptables mode, so substitute the variables as below
sed -i &quot;s/__PILLAR__LOCAL__DNS__/$localdns/g; s/__PILLAR__DNS__DOMAIN__/$domain/g; s/__PILLAR__DNS__SERVER__/$kubedns/g&quot; nodelocaldns.yaml

# install nodelocaldns
kubectl apply -f nodelocaldns.yaml

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl apply -f nodelocaldns.yaml
serviceaccount/node-local-dns created
service/kube-dns-upstream created
configmap/node-local-dns created
daemonset.apps/node-local-dns created
service/node-local-dns created

# verify the installation
kubectl get pod -n kube-system -l k8s-app=node-local-dns -owide

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -n kube-system -l k8s-app=node-local-dns -owide
NAME                   READY   STATUS    RESTARTS   AGE   IP               NODE      NOMINATED NODE   READINESS GATES
node-local-dns-56cxv   1/1     Running   0          29s   192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
node-local-dns-bgqpq   1/1     Running   0          29s   192.168.10.101   k8s-w1    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;

# enable logging on node-local-dns and verify the configuration
kubectl edit cm -n kube-system node-local-dns # add log and debug to the 'cluster.local' and '.:53' blocks
kubectl -n kube-system rollout restart ds node-local-dns
kubectl describe cm -n kube-system node-local-dns

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl edit cm -n kube-system node-local-dns
configmap/node-local-dns edited
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system rollout restart ds node-local-dns
daemonset.apps/node-local-dns restarted
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl describe cm -n kube-system node-local-dns
Name:         node-local-dns
Namespace:    kube-system
Labels:       addonmanager.kubernetes.io/mode=Reconcile
Annotations:  &amp;lt;none&amp;gt;

Data
====
Corefile:
----
cluster.local:53 {
    log
    debug
    errors
    cache {
            success 9984 30
            denial 9984 5
    }
    reload
    loop
    bind 169.254.20.10 10.96.0.10 # queries hit nodelocaldns first, then go on to coredns
    forward . __PILLAR__CLUSTER__DNS__ {
            force_tcp
    }
    prometheus :9253
    health 169.254.20.10:8080
    }
in-addr.arpa:53 {
    errors
    cache 30
    reload
    loop
    bind 169.254.20.10 10.96.0.10
    forward . __PILLAR__CLUSTER__DNS__ {
            force_tcp
    }
    prometheus :9253
    }
ip6.arpa:53 {
    errors
    cache 30
    reload
    loop
    bind 169.254.20.10 10.96.0.10
    forward . __PILLAR__CLUSTER__DNS__ {
            force_tcp
    }
    prometheus :9253
    }
.:53 {
    errors
    cache 30
    reload
    loop
    bind 169.254.20.10 10.96.0.10
    forward . __PILLAR__UPSTREAM__SERVERS__
    prometheus :9253
    }



BinaryData
====

Events:  &amp;lt;none&amp;gt;


# check iptables again: it can take a while for the rules to be updated
iptables-save | tee after.txt
diff before.txt after.txt

## filter table rules
iptables -t filter -S | grep -i dns
-A INPUT -d 10.96.0.10/32 -p udp -m udp --dport 53 -m comment --comment &quot;NodeLocal DNS Cache: allow DNS traffic&quot; -j ACCEPT
-A INPUT -d 10.96.0.10/32 -p tcp -m tcp --dport 53 -m comment --comment &quot;NodeLocal DNS Cache: allow DNS traffic&quot; -j ACCEPT
-A INPUT -d 169.254.20.10/32 -p udp -m udp --dport 53 -m comment --comment &quot;NodeLocal DNS Cache: allow DNS traffic&quot; -j ACCEPT
-A INPUT -d 169.254.20.10/32 -p tcp -m tcp --dport 53 -m comment --comment &quot;NodeLocal DNS Cache: allow DNS traffic&quot; -j ACCEPT
-A OUTPUT -s 10.96.0.10/32 -p udp -m udp --sport 53 -m comment --comment &quot;NodeLocal DNS Cache: allow DNS traffic&quot; -j ACCEPT
-A OUTPUT -s 10.96.0.10/32 -p tcp -m tcp --sport 53 -m comment --comment &quot;NodeLocal DNS Cache: allow DNS traffic&quot; -j ACCEPT
-A OUTPUT -s 169.254.20.10/32 -p udp -m udp --sport 53 -m comment --comment &quot;NodeLocal DNS Cache: allow DNS traffic&quot; -j ACCEPT
-A OUTPUT -s 169.254.20.10/32 -p tcp -m tcp --sport 53 -m comment --comment &quot;NodeLocal DNS Cache: allow DNS traffic&quot; -j ACCEPT

## raw table rules
iptables -t raw -S | grep -i dns
-A PREROUTING -d 10.96.0.10/32 -p udp -m udp --dport 53 -m comment --comment &quot;NodeLocal DNS Cache: skip conntrack&quot; -j NOTRACK
-A PREROUTING -d 10.96.0.10/32 -p tcp -m tcp --dport 53 -m comment --comment &quot;NodeLocal DNS Cache: skip conntrack&quot; -j NOTRACK
-A PREROUTING -d 169.254.20.10/32 -p udp -m udp --dport 53 -m comment --comment &quot;NodeLocal DNS Cache: skip conntrack&quot; -j NOTRACK
-A PREROUTING -d 169.254.20.10/32 -p tcp -m tcp --dport 53 -m comment --comment &quot;NodeLocal DNS Cache: skip conntrack&quot; -j NOTRACK
-A OUTPUT -s 10.96.0.10/32 -p tcp -m tcp --sport 8080 -m comment --comment &quot;NodeLocal DNS Cache: skip conntrack&quot; -j NOTRACK
-A OUTPUT -d 10.96.0.10/32 -p tcp -m tcp --dport 8080 -m comment --comment &quot;NodeLocal DNS Cache: skip conntrack&quot; -j NOTRACK
...
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;However, even after deploying nodelocaldns, DNS queries still do not go through it.&lt;/p&gt;
&lt;pre class=&quot;livecodeserver&quot;&gt;&lt;code&gt;# check whether the DNS query path changed
kubectl exec -it curl-pod -- nslookup webpod
kubectl exec -it curl-pod -- nslookup google.com

# logs: entries still appear only on the coredns side (meaning nodelocaldns is not actually in the path)
kubectl -n kube-system logs -l k8s-app=kube-dns -f
kubectl -n kube-system logs -l k8s-app=node-local-dns -f

# even after restarting the pod, /etc/resolv.conf does not actually change
kubectl exec -it curl-pod -- cat /etc/resolv.conf&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The reason relates to the iptables rules we examined above: when nodelocaldns is deployed, it rewrites the iptables rules so that calls destined for coredns are intercepted and served by nodelocaldns instead.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In a Cilium environment, however, traffic does not behave as those iptables rules define.&lt;/p&gt;
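&lt;p data-ke-size=&quot;size16&quot;&gt;One way to see why the redirection must happen in the datapath: the pod's resolv.conf still points at the kube-dns ClusterIP. A sketch over sample file content (the values are assumed; on a real cluster read /etc/resolv.conf from the pod):&lt;/p&gt;

```shell
# inspect which nameserver a pod is configured with (sample resolv.conf content)
resolv='nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5'
ns=$(printf '%s\n' "$resolv" | awk '/^nameserver/ {print $2; exit}')
if [ "$ns" = "169.254.20.10" ]; then
  echo "pod queries node-local-dns directly"
else
  echo "pod still targets $ns; redirection must happen below the socket layer"
fi
```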
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;3. Cilium Local Redirect Policy&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The option available in this situation is Cilium's Local Redirect Policy.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Enable it with &lt;code&gt;--set localRedirectPolicy=true&lt;/code&gt; on Cilium and define a &lt;code&gt;CiliumLocalRedirectPolicy&lt;/code&gt; CRD; pod traffic destined for an IP address and port/protocol tuple, or for a Kubernetes Service, is then redirected via eBPF to a backend pod on the same node.&lt;/p&gt;
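&lt;p data-ke-size=&quot;size16&quot;&gt;Besides the serviceMatcher form used in this post, the CRD also accepts an addressMatcher frontend for a fixed IP and port/protocol tuple. A hedged sketch of the shape per the Cilium LRP documentation (the IP, labels, and ports here are placeholders, not values from this lab):&lt;/p&gt;

```yaml
apiVersion: cilium.io/v2
kind: CiliumLocalRedirectPolicy
metadata:
  name: redirect-by-address
  namespace: kube-system
spec:
  redirectFrontend:
    addressMatcher:          # match a fixed IP:port/protocol tuple instead of a Service
      ip: "169.254.169.254"
      toPorts:
        - port: "8080"
          protocol: TCP
  redirectBackend:
    localEndpointSelector:   # node-local backend pods that receive the traffic
      matchLabels:
        app: node-proxy
    toPorts:
      - port: "8080"
        protocol: TCP
```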
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's continue the hands-on exercise as follows.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# apply the option
helm upgrade cilium cilium/cilium --namespace kube-system --version 1.17.6 --reuse-values \
  --set localRedirectPolicy=true
kubectl rollout restart deploy cilium-operator -n kube-system
kubectl rollout restart ds cilium -n kube-system

(⎈|HomeLab:N/A) root@k8s-ctr:~# helm upgrade cilium cilium/cilium --namespace kube-system --version 1.17.6 --reuse-values \
  --set localRedirectPolicy=true
Release &quot;cilium&quot; has been upgraded. Happy Helming!
NAME: cilium
LAST DEPLOYED: Sat Aug  2 20:48:04 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 4
TEST SUITE: None
NOTES:
You have successfully installed Cilium with Hubble Relay and Hubble UI.

Your release version is 1.17.6.

For any further help, visit https://docs.cilium.io/en/v1.17/gettinghelp
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl rollout restart deploy cilium-operator -n kube-system
deployment.apps/cilium-operator restarted
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl rollout restart ds cilium -n kube-system
daemonset.apps/cilium restarted


# download the node-local-dns manifest for local redirect
wget https://raw.githubusercontent.com/cilium/cilium/1.17.6/examples/kubernetes-local-redirect/node-local-dns.yaml

# substitute the settings
kubedns=$(kubectl get svc kube-dns -n kube-system -o jsonpath={.spec.clusterIP})
sed -i &quot;s/__PILLAR__DNS__SERVER__/$kubedns/g;&quot; node-local-dns.yaml

# you can see that some settings differ
vi -d nodelocaldns.yaml node-local-dns.yaml

## before
args: [ &quot;-localip&quot;, &quot;169.254.20.10,10.96.0.10&quot;, &quot;-conf&quot;, &quot;/etc/Corefile&quot;, &quot;-upstreamsvc&quot;, &quot;kube-dns-upstream&quot; ]

## after
args: [ &quot;-localip&quot;, &quot;169.254.20.10,10.96.0.10&quot;, &quot;-conf&quot;, &quot;/etc/Corefile&quot;, &quot;-upstreamsvc&quot;, &quot;kube-dns-upstream&quot;, &quot;-skipteardown=true&quot;, &quot;-setupinterface=false&quot;, &quot;-setupiptables=false&quot; ]

# deploy (note the extra settings required for local redirect)
## -skipteardown=true, -setupinterface=false, and -setupiptables=false.
# Modify Node-local DNS cache&amp;rsquo;s deployment yaml to put it in non-host namespace by setting hostNetwork: false for the daemonset.
# In the Corefile, bind to 0.0.0.0 instead of the static IP.
kubectl apply -f node-local-dns.yaml

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl apply -f node-local-dns.yaml
serviceaccount/node-local-dns configured
service/kube-dns-upstream configured
configmap/node-local-dns configured
daemonset.apps/node-local-dns configured

# add logging settings
kubectl edit cm -n kube-system node-local-dns # add log, debug
kubectl -n kube-system rollout restart ds node-local-dns

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl edit cm -n kube-system node-local-dns
configmap/node-local-dns edited
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system rollout restart ds node-local-dns
daemonset.apps/node-local-dns restarted

kubectl describe cm -n kube-system node-local-dns

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl describe cm -n kube-system node-local-dns
Name:         node-local-dns
Namespace:    kube-system
Labels:       &amp;lt;none&amp;gt;
Annotations:  &amp;lt;none&amp;gt;

Data
====
Corefile:
----
cluster.local:53 {
    log
    debug
    errors
    cache {
            success 9984 30
            denial 9984 5
    }
    reload
    loop
    bind 0.0.0.0
    forward . __PILLAR__CLUSTER__DNS__ {
            force_tcp
    }
    prometheus :9253
    health
    }
in-addr.arpa:53 {
    errors
    cache 30
    reload
    loop
    bind 0.0.0.0
    forward . __PILLAR__CLUSTER__DNS__ {
            force_tcp
    }
    prometheus :9253
    }
ip6.arpa:53 {
    errors
    cache 30
    reload
    loop
    bind 0.0.0.0
    forward . __PILLAR__CLUSTER__DNS__ {
            force_tcp
    }
    prometheus :9253
    }
.:53 {
    errors
    cache 30
    reload
    loop
    bind 0.0.0.0
    forward . __PILLAR__UPSTREAM__SERVERS__
    prometheus :9253
    }



BinaryData
====

Events:  &amp;lt;none&amp;gt;


# inspect the CiliumLocalRedirectPolicy manifest
wget https://raw.githubusercontent.com/cilium/cilium/1.17.6/examples/kubernetes-local-redirect/node-local-dns-lrp.yaml
cat node-local-dns-lrp.yaml | yq
apiVersion: &quot;cilium.io/v2&quot;
kind: CiliumLocalRedirectPolicy
metadata:
  name: &quot;nodelocaldns&quot;
  namespace: kube-system
spec:
  redirectFrontend:
    serviceMatcher:
      serviceName: kube-dns
      namespace: kube-system
redirectBackend: # the redirect backend is set to node-local-dns!
    localEndpointSelector:
      matchLabels:
        k8s-app: node-local-dns
    toPorts:
      - port: &quot;53&quot;
        name: dns
        protocol: UDP
      - port: &quot;53&quot;
        name: dns-tcp
        protocol: TCP

kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/1.17.6/examples/kubernetes-local-redirect/node-local-dns-lrp.yaml

# verify creation
kubectl get CiliumLocalRedirectPolicy -A

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get CiliumLocalRedirectPolicy -A
NAMESPACE     NAME           AGE
kube-system   nodelocaldns   5s

# verify the local redirect configuration in cilium
kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium-dbg lrp list

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium-dbg lrp list
LRP namespace   LRP name       FrontendType                Matching Service
kube-system     nodelocaldns   clusterIP + all svc ports   kube-system/kube-dns
                |              10.96.0.10:9153/TCP -&amp;gt;
                |              10.96.0.10:53/UDP -&amp;gt; 172.20.0.188:53(kube-system/node-local-dns-zcg6v),
                |              10.96.0.10:53/TCP -&amp;gt; 172.20.0.188:53(kube-system/node-local-dns-zcg6v),

kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium-dbg service list | grep LocalRedirect

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium-dbg service list | grep LocalRedirect
16   10.96.0.10:53/TCP          LocalRedirect   1 =&amp;gt; 172.20.0.188:53/TCP (active)
18   10.96.0.10:53/UDP          LocalRedirect   1 =&amp;gt; 172.20.0.188:53/UDP (active)

# calls to coredns are now delivered to the node-local-dns pods
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get po -n kube-system -owide -l k8s-app=node-local-dns
NAME                   READY   STATUS    RESTARTS   AGE   IP             NODE      NOMINATED NODE   READINESS GATES
node-local-dns-chsl8   1/1     Running   0          20m   172.20.1.246   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
node-local-dns-zcg6v   1/1     Running   0          20m   172.20.0.188   k8s-w1    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;

# recheck the logs
kubectl -n kube-system logs -l k8s-app=kube-dns -f
kubectl -n kube-system logs -l k8s-app=node-local-dns -f

# test a lookup
kubectl exec -it curl-pod -- nslookup www.google.com

# coredns logs
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system logs -l k8s-app=kube-dns -f
[INFO] 172.20.1.246:36924 - 64493 &quot;A IN www.google.com.default.svc.cluster.local. tcp 58 false 65535&quot; NXDOMAIN qr,aa,rd 151 0.011807205s
[INFO] 172.20.1.246:36924 - 34068 &quot;A IN www.google.com.svc.cluster.local. tcp 50 false 65535&quot; NXDOMAIN qr,aa,rd 143 0.003313177s
[INFO] 172.20.1.246:36924 - 6838 &quot;A IN www.google.com.cluster.local. tcp 46 false 65535&quot; NXDOMAIN qr,aa,rd 139 0.000671685s
# nothing further

# nodelocaldns logs
kubectl -n kube-system logs -l k8s-app=node-local-dns -f
# first batch of logs
[INFO] 172.20.1.218:56602 - 64493 &quot;A IN www.google.com.default.svc.cluster.local. udp 58 false 512&quot; NXDOMAIN qr,aa,rd 151 0.055108912s
[INFO] 172.20.1.218:44023 - 34068 &quot;A IN www.google.com.svc.cluster.local. udp 50 false 512&quot; NXDOMAIN qr,aa,rd 143 0.014067275s
[INFO] 172.20.1.218:33608 - 6838 &quot;A IN www.google.com.cluster.local. udp 46 false 512&quot; NXDOMAIN qr,aa,rd 139 0.022431702s
# answered by nodelocaldns again
[INFO] 172.20.1.218:40290 - 20639 &quot;A IN www.google.com.default.svc.cluster.local. udp 58 false 512&quot; NXDOMAIN qr,aa,rd 151 0.001162012s
[INFO] 172.20.1.218:57376 - 55001 &quot;A IN www.google.com.svc.cluster.local. udp 50 false 512&quot; NXDOMAIN qr,aa,rd 143 0.000293324s
[INFO] 172.20.1.218:60696 - 6755 &quot;A IN www.google.com.cluster.local. udp 46 false 512&quot; NXDOMAIN qr,aa,rd 139 0.000283324s
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Hubble UI also shows that requests now go to node-local-dns instead of coredns.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;670&quot; data-origin-height=&quot;640&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bYbEDG/btsPGEMmDYI/Rr8MNBV67j3qKq0NRHL9OK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bYbEDG/btsPGEMmDYI/Rr8MNBV67j3qKq0NRHL9OK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bYbEDG/btsPGEMmDYI/Rr8MNBV67j3qKq0NRHL9OK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbYbEDG%2FbtsPGEMmDYI%2FRr8MNBV67j3qKq0NRHL9OK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;670&quot; height=&quot;640&quot; data-origin-width=&quot;670&quot; data-origin-height=&quot;640&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This example is one of several in the document below; more generally, the policy can be used to swap out, via configuration, the backend pods serving a given service.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://docs.cilium.io/en/stable/network/kubernetes/local-redirect-policy/&quot;&gt;https://docs.cilium.io/en/stable/network/kubernetes/local-redirect-policy/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Closing Thoughts&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In this post we looked at how coredns handles pod DNS queries, along with the background behind NodeLocalDNS.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In a default Cilium environment NodeLocalDNS may not work correctly; in that case, you can run NodeLocalDNS on top of the Local Redirect Policy instead.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the next post we will continue with another topic around pod networking in Cilium.&lt;/p&gt;</description>
      <category>Cilium</category>
      <category>cilium</category>
      <category>kubernetes</category>
      <category>localredirect</category>
      <category>nodelocaldns</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/52</guid>
      <comments>https://a-person.tistory.com/52#entry52comment</comments>
      <pubDate>Sat, 2 Aug 2025 21:43:21 +0900</pubDate>
    </item>
    <item>
      <title>[3] Cilium Networking - IPAM, Routing, Masquerading</title>
      <link>https://a-person.tistory.com/51</link>
<description>&lt;p data-ke-size=&quot;size16&quot;&gt;In this post, to explore pod networking in Cilium, we will look at IPAM, routing, and masquerading.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;These topics fall under Networking Concepts in the official Cilium documentation, linked below.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://docs.cilium.io/en/stable/network/concepts/&quot;&gt;https://docs.cilium.io/en/stable/network/concepts/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;404&quot; data-origin-height=&quot;258&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/RpxOF/btsPF7H2JJv/UXkGwMgSOTCesnaYo9kuek/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/RpxOF/btsPF7H2JJv/UXkGwMgSOTCesnaYo9kuek/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/RpxOF/btsPF7H2JJv/UXkGwMgSOTCesnaYo9kuek/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FRpxOF%2FbtsPF7H2JJv%2FUXkGwMgSOTCesnaYo9kuek%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;404&quot; height=&quot;258&quot; data-origin-width=&quot;404&quot; data-origin-height=&quot;258&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Table of Contents&lt;/h2&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;Lab environment setup&lt;/li&gt;
&lt;li&gt;IPAM&lt;/li&gt;
&lt;li&gt;Routing&lt;/li&gt;
&lt;li&gt;Masquerading&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;1. Lab Environment Setup&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Set up the lab environment based on the Vagrantfile below.&lt;/p&gt;
&lt;pre class=&quot;awk&quot;&gt;&lt;code&gt;mkdir cilium-lab &amp;amp;&amp;amp; cd cilium-lab

curl -O https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/cilium-study/3w/Vagrantfile

vagrant up&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Running &lt;code&gt;vagrant status&lt;/code&gt; afterwards shows that VMs named k8s-ctr, k8s-w1, and router have been created.&lt;/p&gt;
&lt;pre class=&quot;applescript&quot;&gt;&lt;code&gt;PS C:\projects\cilium-lab\w3&amp;gt; vagrant status
Current machine states:

k8s-ctr                   running (virtualbox)
k8s-w1                    running (virtualbox)
router                    running (virtualbox)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;From here on, enter the control plane with &lt;code&gt;vagrant ssh k8s-ctr&lt;/code&gt; and run the commands there.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Checking the nodes in the lab environment, there is one worker node alongside the control plane, Cilium is already installed, and it is configured with Kubernetes host-scope IPAM and native routing mode.&lt;/p&gt;
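&lt;p data-ke-size=&quot;size16&quot;&gt;These settings can be confirmed from the agent's effective configuration. A sketch that parses sample key/value output (the sample text and the exact command flags are assumptions; check your Cilium version):&lt;/p&gt;

```shell
# on a real cluster, something like (flag names may vary by version):
#   kubectl exec -n kube-system ds/cilium -c cilium-agent -- cilium-dbg config --all | grep -iE 'ipam|routing|masquerade'
# sample key/value output to parse:
config='IPAM                    kubernetes
RoutingMode             native
EnableIPv4Masquerade    true'
ipam=$(printf '%s\n' "$config" | awk '$1 == "IPAM" {print $2}')
mode=$(printf '%s\n' "$config" | awk '$1 == "RoutingMode" {print $2}')
echo "ipam=$ipam routing=$mode"
```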
&lt;pre class=&quot;sql&quot;&gt;&lt;code&gt;# check cluster information
kubectl get no
kubectl cluster-info
kubectl cluster-info dump | grep -m 2 -E &quot;cluster-cidr|service-cluster-ip-range&quot;
                            &quot;--service-cluster-ip-range=10.96.0.0/16&quot;,
                            &quot;--cluster-cidr=10.244.0.0/16&quot;,

kubectl describe cm -n kube-system kubeadm-config
kubectl describe cm -n kube-system kubelet-config

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get no
NAME      STATUS   ROLES           AGE   VERSION
k8s-ctr   Ready    control-plane   19m   v1.33.2
k8s-w1    Ready    &amp;lt;none&amp;gt;          13m   v1.33.2
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl cluster-info
Kubernetes control plane is running at https://192.168.10.100:6443
CoreDNS is running at https://192.168.10.100:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl cluster-info dump | grep -m 2 -E &quot;cluster-cidr|service-cluster-ip-range&quot;
                            &quot;--service-cluster-ip-range=10.96.0.0/16&quot;,
                            &quot;--cluster-cidr=10.244.0.0/16&quot;,
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl describe cm -n kube-system kubeadm-config
Name:         kubeadm-config
Namespace:    kube-system
Labels:       &amp;lt;none&amp;gt;
Annotations:  &amp;lt;none&amp;gt;

Data
====
ClusterConfiguration:
----
apiServer: {}
apiVersion: kubeadm.k8s.io/v1beta4
caCertificateValidityPeriod: 87600h0m0s
certificateValidityPeriod: 8760h0m0s
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
encryptionAlgorithm: RSA-2048
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.k8s.io
kind: ClusterConfiguration
kubernetesVersion: v1.33.2
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/16
proxy: {}
scheduler: {}



BinaryData
====

Events:  &amp;lt;none&amp;gt;
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl describe cm -n kube-system kubelet-config
Name:         kubelet-config
Namespace:    kube-system
Labels:       &amp;lt;none&amp;gt;
Annotations:  kubeadm.kubernetes.io/component-config.hash: sha256:0ff07274ab31cc8c0f9d989e90179a90b6e9b633c8f3671993f44185a0791127

Data
====
kubelet:
----
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerRuntimeEndpoint: &quot;&quot;
cpuManagerReconcilePeriod: 0s
crashLoopBackOff: {}
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMaximumGCAge: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging:
  flushFrequency: 0
  options:
    json:
      infoBufferSize: &quot;0&quot;
    text:
      infoBufferSize: &quot;0&quot;
  verbosity: 0
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
resolvConf: /run/systemd/resolve/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s



BinaryData
====

Events:  &amp;lt;none&amp;gt;
(⎈|HomeLab:N/A) root@k8s-ctr:~#


# node info: check status and INTERNAL-IP
kubectl get node -owide

# pod info: check status and pod IPs
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{&quot;\t&quot;}{.spec.podCIDR}{&quot;\n&quot;}{end}'
kubectl get ciliumnode -o json | grep podCIDRs -A2
kubectl get pod -A -owide


(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get node -owide
NAME      STATUS   ROLES           AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
k8s-ctr   Ready    control-plane   21m   v1.33.2   192.168.10.100   &amp;lt;none&amp;gt;        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27
k8s-w1    Ready    &amp;lt;none&amp;gt;          16m   v1.33.2   192.168.10.101   &amp;lt;none&amp;gt;        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{&quot;\t&quot;}{.spec.podCIDR}{&quot;\n&quot;}{end}'
k8s-ctr 10.244.0.0/24
k8s-w1  10.244.1.0/24
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumnode -o json | grep podCIDRs -A2
                    &quot;podCIDRs&quot;: [
                        &quot;10.244.0.0/24&quot;
                    ],
--
                    &quot;podCIDRs&quot;: [
                        &quot;10.244.1.0/24&quot;
                    ],
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -A -owide
NAMESPACE            NAME                                      READY   STATUS    RESTARTS       AGE   IP               NODE      NOMINATED NODE   READINESS GATES
cilium-monitoring    grafana-5c69859d9-fphx8                   1/1     Running   0              21m   10.244.0.75      k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
cilium-monitoring    prometheus-6fc896bc5d-b45nk               1/1     Running   0              21m   10.244.0.122     k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system          cilium-envoy-rt7pm                        1/1     Running   0              21m   192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system          cilium-envoy-z64wz                        1/1     Running   1 (11m ago)    16m   192.168.10.101   k8s-w1    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system          cilium-frrtz                              1/1     Running   1 (11m ago)    16m   192.168.10.101   k8s-w1    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system          cilium-operator-5bc66f5b9b-hmgfx          1/1     Running   0              21m   192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system          cilium-tj7tl                              1/1     Running   0              21m   192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system          coredns-674b8bbfcf-5lc2v                  1/1     Running   0              21m   10.244.0.45      k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system          coredns-674b8bbfcf-f9qln                  1/1     Running   0              21m   10.244.0.181     k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system          etcd-k8s-ctr                              1/1     Running   0              22m   192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system          hubble-relay-5dcd46f5c-vlwg4              1/1     Running   0              21m   10.244.0.146     k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system          hubble-ui-76d4965bb6-5qtp7                2/2     Running   0              21m   10.244.0.85      k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system          kube-apiserver-k8s-ctr                    1/1     Running   0              21m   192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system          kube-controller-manager-k8s-ctr           1/1     Running   1 (14m ago)    21m   192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system          kube-proxy-gdbb4                          1/1     Running   1 (11m ago)    16m   192.168.10.101   k8s-w1    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system          kube-proxy-vshp2                          1/1     Running   0              21m   192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system          kube-scheduler-k8s-ctr                    1/1     Running   1 (3m6s ago)   21m   192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
local-path-storage   local-path-provisioner-74f9666bc9-hmkjc   1/1     Running   0              21m   10.244.0.66      k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;


# Check the IPAM and routing modes
cilium config view | grep ^ipam
cilium config view | grep ^routing-mode

(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium config view | grep ^ipam
ipam                                              kubernetes
ipam-cilium-node-update-rate                      15s

(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium config view | grep ^routing-mode
routing-mode                                      native

# Check Cilium status and configuration
kubectl get cm -n kube-system cilium-config -o json | jq
cilium status
cilium config view

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get cm -n kube-system cilium-config -o json | jq
{
  &quot;apiVersion&quot;: &quot;v1&quot;,
  &quot;data&quot;: {
    &quot;agent-not-ready-taint-key&quot;: &quot;node.cilium.io/agent-not-ready&quot;,
    &quot;arping-refresh-period&quot;: &quot;30s&quot;,
    &quot;auto-direct-node-routes&quot;: &quot;true&quot;,
    &quot;bpf-distributed-lru&quot;: &quot;false&quot;,
    &quot;bpf-events-drop-enabled&quot;: &quot;true&quot;,
    &quot;bpf-events-policy-verdict-enabled&quot;: &quot;true&quot;,
    &quot;bpf-events-trace-enabled&quot;: &quot;true&quot;,
    &quot;bpf-lb-acceleration&quot;: &quot;disabled&quot;,
    &quot;bpf-lb-algorithm-annotation&quot;: &quot;false&quot;,
    &quot;bpf-lb-external-clusterip&quot;: &quot;false&quot;,
    &quot;bpf-lb-map-max&quot;: &quot;65536&quot;,
    &quot;bpf-lb-mode-annotation&quot;: &quot;false&quot;,
    &quot;bpf-lb-sock&quot;: &quot;false&quot;,
    &quot;bpf-lb-source-range-all-types&quot;: &quot;false&quot;,
    &quot;bpf-map-dynamic-size-ratio&quot;: &quot;0.0025&quot;,
    &quot;bpf-policy-map-max&quot;: &quot;16384&quot;,
    &quot;bpf-root&quot;: &quot;/sys/fs/bpf&quot;,
    &quot;cgroup-root&quot;: &quot;/run/cilium/cgroupv2&quot;,
    &quot;cilium-endpoint-gc-interval&quot;: &quot;5m0s&quot;,
    &quot;cluster-id&quot;: &quot;0&quot;,
    &quot;cluster-name&quot;: &quot;default&quot;,
    &quot;clustermesh-enable-endpoint-sync&quot;: &quot;false&quot;,
    &quot;clustermesh-enable-mcs-api&quot;: &quot;false&quot;,
    &quot;cni-exclusive&quot;: &quot;true&quot;,
    &quot;cni-log-file&quot;: &quot;/var/run/cilium/cilium-cni.log&quot;,
    &quot;controller-group-metrics&quot;: &quot;write-cni-file sync-host-ips sync-lb-maps-with-k8s-services&quot;,
    &quot;custom-cni-conf&quot;: &quot;false&quot;,
    &quot;datapath-mode&quot;: &quot;veth&quot;,
    &quot;debug&quot;: &quot;true&quot;,
    &quot;debug-verbose&quot;: &quot;&quot;,
    &quot;default-lb-service-ipam&quot;: &quot;lbipam&quot;,
    &quot;direct-routing-skip-unreachable&quot;: &quot;false&quot;,
    &quot;dnsproxy-enable-transparent-mode&quot;: &quot;true&quot;,
    &quot;dnsproxy-socket-linger-timeout&quot;: &quot;10&quot;,
    &quot;egress-gateway-reconciliation-trigger-interval&quot;: &quot;1s&quot;,
    &quot;enable-auto-protect-node-port-range&quot;: &quot;true&quot;,
    &quot;enable-bpf-clock-probe&quot;: &quot;false&quot;,
    &quot;enable-bpf-masquerade&quot;: &quot;true&quot;,
    &quot;enable-endpoint-health-checking&quot;: &quot;false&quot;,
    &quot;enable-endpoint-lockdown-on-policy-overflow&quot;: &quot;false&quot;,
    &quot;enable-endpoint-routes&quot;: &quot;true&quot;,
    &quot;enable-experimental-lb&quot;: &quot;false&quot;,
    &quot;enable-health-check-loadbalancer-ip&quot;: &quot;false&quot;,
    &quot;enable-health-check-nodeport&quot;: &quot;true&quot;,
    &quot;enable-health-checking&quot;: &quot;false&quot;,
    &quot;enable-hubble&quot;: &quot;true&quot;,
    &quot;enable-hubble-open-metrics&quot;: &quot;true&quot;,
    &quot;enable-internal-traffic-policy&quot;: &quot;true&quot;,
    &quot;enable-ipv4&quot;: &quot;true&quot;,
    &quot;enable-ipv4-big-tcp&quot;: &quot;false&quot;,
    &quot;enable-ipv4-masquerade&quot;: &quot;true&quot;,
    &quot;enable-ipv6&quot;: &quot;false&quot;,
    &quot;enable-ipv6-big-tcp&quot;: &quot;false&quot;,
    &quot;enable-ipv6-masquerade&quot;: &quot;true&quot;,
    &quot;enable-k8s-networkpolicy&quot;: &quot;true&quot;,
    &quot;enable-k8s-terminating-endpoint&quot;: &quot;true&quot;,
    &quot;enable-l2-neigh-discovery&quot;: &quot;true&quot;,
    &quot;enable-l7-proxy&quot;: &quot;true&quot;,
    &quot;enable-lb-ipam&quot;: &quot;true&quot;,
    &quot;enable-local-redirect-policy&quot;: &quot;false&quot;,
    &quot;enable-masquerade-to-route-source&quot;: &quot;false&quot;,
    &quot;enable-metrics&quot;: &quot;true&quot;,
    &quot;enable-node-selector-labels&quot;: &quot;false&quot;,
    &quot;enable-non-default-deny-policies&quot;: &quot;true&quot;,
    &quot;enable-policy&quot;: &quot;default&quot;,
    &quot;enable-policy-secrets-sync&quot;: &quot;true&quot;,
    &quot;enable-runtime-device-detection&quot;: &quot;true&quot;,
    &quot;enable-sctp&quot;: &quot;false&quot;,
    &quot;enable-source-ip-verification&quot;: &quot;true&quot;,
    &quot;enable-svc-source-range-check&quot;: &quot;true&quot;,
    &quot;enable-tcx&quot;: &quot;true&quot;,
    &quot;enable-vtep&quot;: &quot;false&quot;,
    &quot;enable-well-known-identities&quot;: &quot;false&quot;,
    &quot;enable-xt-socket-fallback&quot;: &quot;true&quot;,
    &quot;envoy-access-log-buffer-size&quot;: &quot;4096&quot;,
    &quot;envoy-base-id&quot;: &quot;0&quot;,
    &quot;envoy-keep-cap-netbindservice&quot;: &quot;false&quot;,
    &quot;external-envoy-proxy&quot;: &quot;true&quot;,
    &quot;health-check-icmp-failure-threshold&quot;: &quot;3&quot;,
    &quot;http-retry-count&quot;: &quot;3&quot;,
    &quot;hubble-disable-tls&quot;: &quot;false&quot;,
    &quot;hubble-export-file-max-backups&quot;: &quot;5&quot;,
    &quot;hubble-export-file-max-size-mb&quot;: &quot;10&quot;,
    &quot;hubble-listen-address&quot;: &quot;:4244&quot;,
    &quot;hubble-metrics&quot;: &quot;dns drop tcp flow port-distribution icmp httpV2:exemplars=true;labelsContext=source_ip,source_namespace,source_workload,destination_ip,destination_namespace,destination_workload,traffic_direction&quot;,
    &quot;hubble-metrics-server&quot;: &quot;:9965&quot;,
    &quot;hubble-metrics-server-enable-tls&quot;: &quot;false&quot;,
    &quot;hubble-socket-path&quot;: &quot;/var/run/cilium/hubble.sock&quot;,
    &quot;hubble-tls-cert-file&quot;: &quot;/var/lib/cilium/tls/hubble/server.crt&quot;,
    &quot;hubble-tls-client-ca-files&quot;: &quot;/var/lib/cilium/tls/hubble/client-ca.crt&quot;,
    &quot;hubble-tls-key-file&quot;: &quot;/var/lib/cilium/tls/hubble/server.key&quot;,
    &quot;identity-allocation-mode&quot;: &quot;crd&quot;,
    &quot;identity-gc-interval&quot;: &quot;15m0s&quot;,
    &quot;identity-heartbeat-timeout&quot;: &quot;30m0s&quot;,
    &quot;install-no-conntrack-iptables-rules&quot;: &quot;true&quot;,
    &quot;ipam&quot;: &quot;kubernetes&quot;,
    &quot;ipam-cilium-node-update-rate&quot;: &quot;15s&quot;,
    &quot;iptables-random-fully&quot;: &quot;false&quot;,
    &quot;ipv4-native-routing-cidr&quot;: &quot;10.244.0.0/16&quot;,
    &quot;k8s-require-ipv4-pod-cidr&quot;: &quot;true&quot;,
    &quot;k8s-require-ipv6-pod-cidr&quot;: &quot;false&quot;,
    &quot;kube-proxy-replacement&quot;: &quot;true&quot;,
    &quot;kube-proxy-replacement-healthz-bind-address&quot;: &quot;&quot;,
    &quot;max-connected-clusters&quot;: &quot;255&quot;,
    &quot;mesh-auth-enabled&quot;: &quot;true&quot;,
    &quot;mesh-auth-gc-interval&quot;: &quot;5m0s&quot;,
    &quot;mesh-auth-queue-size&quot;: &quot;1024&quot;,
    &quot;mesh-auth-rotated-identities-queue-size&quot;: &quot;1024&quot;,
    &quot;monitor-aggregation&quot;: &quot;medium&quot;,
    &quot;monitor-aggregation-flags&quot;: &quot;all&quot;,
    &quot;monitor-aggregation-interval&quot;: &quot;5s&quot;,
    &quot;nat-map-stats-entries&quot;: &quot;32&quot;,
    &quot;nat-map-stats-interval&quot;: &quot;30s&quot;,
    &quot;node-port-bind-protection&quot;: &quot;true&quot;,
    &quot;nodeport-addresses&quot;: &quot;&quot;,
    &quot;nodes-gc-interval&quot;: &quot;5m0s&quot;,
    &quot;operator-api-serve-addr&quot;: &quot;127.0.0.1:9234&quot;,
    &quot;operator-prometheus-serve-addr&quot;: &quot;:9963&quot;,
    &quot;policy-cidr-match-mode&quot;: &quot;&quot;,
    &quot;policy-secrets-namespace&quot;: &quot;cilium-secrets&quot;,
    &quot;policy-secrets-only-from-secrets-namespace&quot;: &quot;true&quot;,
    &quot;preallocate-bpf-maps&quot;: &quot;false&quot;,
    &quot;procfs&quot;: &quot;/host/proc&quot;,
    &quot;prometheus-serve-addr&quot;: &quot;:9962&quot;,
    &quot;proxy-connect-timeout&quot;: &quot;2&quot;,
    &quot;proxy-idle-timeout-seconds&quot;: &quot;60&quot;,
    &quot;proxy-initial-fetch-timeout&quot;: &quot;30&quot;,
    &quot;proxy-max-concurrent-retries&quot;: &quot;128&quot;,
    &quot;proxy-max-connection-duration-seconds&quot;: &quot;0&quot;,
    &quot;proxy-max-requests-per-connection&quot;: &quot;0&quot;,
    &quot;proxy-xff-num-trusted-hops-egress&quot;: &quot;0&quot;,
    &quot;proxy-xff-num-trusted-hops-ingress&quot;: &quot;0&quot;,
    &quot;remove-cilium-node-taints&quot;: &quot;true&quot;,
    &quot;routing-mode&quot;: &quot;native&quot;,
    &quot;service-no-backend-response&quot;: &quot;reject&quot;,
    &quot;set-cilium-is-up-condition&quot;: &quot;true&quot;,
    &quot;set-cilium-node-taints&quot;: &quot;true&quot;,
    &quot;synchronize-k8s-nodes&quot;: &quot;true&quot;,
    &quot;tofqdns-dns-reject-response-code&quot;: &quot;refused&quot;,
    &quot;tofqdns-enable-dns-compression&quot;: &quot;true&quot;,
    &quot;tofqdns-endpoint-max-ip-per-hostname&quot;: &quot;1000&quot;,
    &quot;tofqdns-idle-connection-grace-period&quot;: &quot;0s&quot;,
    &quot;tofqdns-max-deferred-connection-deletes&quot;: &quot;10000&quot;,
    &quot;tofqdns-proxy-response-max-delay&quot;: &quot;100ms&quot;,
    &quot;tunnel-protocol&quot;: &quot;vxlan&quot;,
    &quot;tunnel-source-port-range&quot;: &quot;0-0&quot;,
    &quot;unmanaged-pod-watcher-interval&quot;: &quot;15&quot;,
    &quot;vtep-cidr&quot;: &quot;&quot;,
    &quot;vtep-endpoint&quot;: &quot;&quot;,
    &quot;vtep-mac&quot;: &quot;&quot;,
    &quot;vtep-mask&quot;: &quot;&quot;,
    &quot;write-cni-conf-when-ready&quot;: &quot;/host/etc/cni/net.d/05-cilium.conflist&quot;
  },
  &quot;kind&quot;: &quot;ConfigMap&quot;,
  &quot;metadata&quot;: {
    &quot;annotations&quot;: {
      &quot;meta.helm.sh/release-name&quot;: &quot;cilium&quot;,
      &quot;meta.helm.sh/release-namespace&quot;: &quot;kube-system&quot;
    },
    &quot;creationTimestamp&quot;: &quot;2025-08-02T06:06:32Z&quot;,
    &quot;labels&quot;: {
      &quot;app.kubernetes.io/managed-by&quot;: &quot;Helm&quot;
    },
    &quot;name&quot;: &quot;cilium-config&quot;,
    &quot;namespace&quot;: &quot;kube-system&quot;,
    &quot;resourceVersion&quot;: &quot;447&quot;,
    &quot;uid&quot;: &quot;fe79f2c8-35ed-4a1c-955a-7c50243ffae7&quot;
  }
}
(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium status
    /&amp;macr;&amp;macr;\
 /&amp;macr;&amp;macr;\__/&amp;macr;&amp;macr;\    Cilium:             OK
 \__/&amp;macr;&amp;macr;\__/    Operator:           OK
 /&amp;macr;&amp;macr;\__/&amp;macr;&amp;macr;\    Envoy DaemonSet:    OK
 \__/&amp;macr;&amp;macr;\__/    Hubble Relay:       OK
    \__/       ClusterMesh:        disabled

DaemonSet              cilium                   Desired: 2, Ready: 2/2, Available: 2/2
DaemonSet              cilium-envoy             Desired: 2, Ready: 2/2, Available: 2/2
Deployment             cilium-operator          Desired: 1, Ready: 1/1, Available: 1/1
Deployment             hubble-relay             Desired: 1, Ready: 1/1, Available: 1/1
Deployment             hubble-ui                Desired: 1, Ready: 1/1, Available: 1/1
Containers:            cilium                   Running: 2
                       cilium-envoy             Running: 2
                       cilium-operator          Running: 1
                       clustermesh-apiserver
                       hubble-relay             Running: 1
                       hubble-ui                Running: 1
Cluster Pods:          7/7 managed by Cilium
Helm chart version:    1.17.6
Image versions         cilium             quay.io/cilium/cilium:v1.17.6@sha256:544de3d4fed7acba72758413812780a4972d47c39035f2a06d6145d8644a3353: 2
                       cilium-envoy       quay.io/cilium/cilium-envoy:v1.33.4-1752151664-7c2edb0b44cf95f326d628b837fcdd845102ba68@sha256:318eff387835ca2717baab42a84f35a83a5f9e7d519253df87269f80b9ff0171: 2
                       cilium-operator    quay.io/cilium/operator-generic:v1.17.6@sha256:91ac3bf7be7bed30e90218f219d4f3062a63377689ee7246062fa0cc3839d096: 1
                       hubble-relay       quay.io/cilium/hubble-relay:v1.17.6@sha256:7d17ec10b3d37341c18ca56165b2f29a715cb8ee81311fd07088d8bf68c01e60: 1
                       hubble-ui          quay.io/cilium/hubble-ui-backend:v0.13.2@sha256:a034b7e98e6ea796ed26df8f4e71f83fc16465a19d166eff67a03b822c0bfa15: 1
                       hubble-ui          quay.io/cilium/hubble-ui:v0.13.2@sha256:9e37c1296b802830834cc87342a9182ccbb71ffebb711971e849221bd9d59392: 1
(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium config view
agent-not-ready-taint-key                         node.cilium.io/agent-not-ready
arping-refresh-period                             30s
auto-direct-node-routes                           true
bpf-distributed-lru                               false
bpf-events-drop-enabled                           true
bpf-events-policy-verdict-enabled                 true
bpf-events-trace-enabled                          true
bpf-lb-acceleration                               disabled
bpf-lb-algorithm-annotation                       false
bpf-lb-external-clusterip                         false
bpf-lb-map-max                                    65536
bpf-lb-mode-annotation                            false
bpf-lb-sock                                       false
bpf-lb-source-range-all-types                     false
bpf-map-dynamic-size-ratio                        0.0025
bpf-policy-map-max                                16384
bpf-root                                          /sys/fs/bpf
cgroup-root                                       /run/cilium/cgroupv2
cilium-endpoint-gc-interval                       5m0s
cluster-id                                        0
cluster-name                                      default
clustermesh-enable-endpoint-sync                  false
clustermesh-enable-mcs-api                        false
cni-exclusive                                     true
cni-log-file                                      /var/run/cilium/cilium-cni.log
controller-group-metrics                          write-cni-file sync-host-ips sync-lb-maps-with-k8s-services
custom-cni-conf                                   false
datapath-mode                                     veth
debug                                             true
debug-verbose
default-lb-service-ipam                           lbipam
direct-routing-skip-unreachable                   false
dnsproxy-enable-transparent-mode                  true
dnsproxy-socket-linger-timeout                    10
egress-gateway-reconciliation-trigger-interval    1s
enable-auto-protect-node-port-range               true
enable-bpf-clock-probe                            false
enable-bpf-masquerade                             true
enable-endpoint-health-checking                   false
enable-endpoint-lockdown-on-policy-overflow       false
enable-endpoint-routes                            true
enable-experimental-lb                            false
enable-health-check-loadbalancer-ip               false
enable-health-check-nodeport                      true
enable-health-checking                            false
enable-hubble                                     true
enable-hubble-open-metrics                        true
enable-internal-traffic-policy                    true
enable-ipv4                                       true
enable-ipv4-big-tcp                               false
enable-ipv4-masquerade                            true
enable-ipv6                                       false
enable-ipv6-big-tcp                               false
enable-ipv6-masquerade                            true
enable-k8s-networkpolicy                          true
enable-k8s-terminating-endpoint                   true
enable-l2-neigh-discovery                         true
enable-l7-proxy                                   true
enable-lb-ipam                                    true
enable-local-redirect-policy                      false
enable-masquerade-to-route-source                 false
enable-metrics                                    true
enable-node-selector-labels                       false
enable-non-default-deny-policies                  true
enable-policy                                     default
enable-policy-secrets-sync                        true
enable-runtime-device-detection                   true
enable-sctp                                       false
enable-source-ip-verification                     true
enable-svc-source-range-check                     true
enable-tcx                                        true
enable-vtep                                       false
enable-well-known-identities                      false
enable-xt-socket-fallback                         true
envoy-access-log-buffer-size                      4096
envoy-base-id                                     0
envoy-keep-cap-netbindservice                     false
external-envoy-proxy                              true
health-check-icmp-failure-threshold               3
http-retry-count                                  3
hubble-disable-tls                                false
hubble-export-file-max-backups                    5
hubble-export-file-max-size-mb                    10
hubble-listen-address                             :4244
hubble-metrics                                    dns drop tcp flow port-distribution icmp httpV2:exemplars=true;labelsContext=source_ip,source_namespace,source_workload,destination_ip,destination_namespace,destination_workload,traffic_direction
hubble-metrics-server                             :9965
hubble-metrics-server-enable-tls                  false
hubble-socket-path                                /var/run/cilium/hubble.sock
hubble-tls-cert-file                              /var/lib/cilium/tls/hubble/server.crt
hubble-tls-client-ca-files                        /var/lib/cilium/tls/hubble/client-ca.crt
hubble-tls-key-file                               /var/lib/cilium/tls/hubble/server.key
identity-allocation-mode                          crd
identity-gc-interval                              15m0s
identity-heartbeat-timeout                        30m0s
install-no-conntrack-iptables-rules               true
ipam                                              kubernetes
ipam-cilium-node-update-rate                      15s
iptables-random-fully                             false
ipv4-native-routing-cidr                          10.244.0.0/16
k8s-require-ipv4-pod-cidr                         true
k8s-require-ipv6-pod-cidr                         false
kube-proxy-replacement                            true
kube-proxy-replacement-healthz-bind-address
max-connected-clusters                            255
mesh-auth-enabled                                 true
mesh-auth-gc-interval                             5m0s
mesh-auth-queue-size                              1024
mesh-auth-rotated-identities-queue-size           1024
monitor-aggregation                               medium
monitor-aggregation-flags                         all
monitor-aggregation-interval                      5s
nat-map-stats-entries                             32
nat-map-stats-interval                            30s
node-port-bind-protection                         true
nodeport-addresses
nodes-gc-interval                                 5m0s
operator-api-serve-addr                           127.0.0.1:9234
operator-prometheus-serve-addr                    :9963
policy-cidr-match-mode
policy-secrets-namespace                          cilium-secrets
policy-secrets-only-from-secrets-namespace        true
preallocate-bpf-maps                              false
procfs                                            /host/proc
prometheus-serve-addr                             :9962
proxy-connect-timeout                             2
proxy-idle-timeout-seconds                        60
proxy-initial-fetch-timeout                       30
proxy-max-concurrent-retries                      128
proxy-max-connection-duration-seconds             0
proxy-max-requests-per-connection                 0
proxy-xff-num-trusted-hops-egress                 0
proxy-xff-num-trusted-hops-ingress                0
remove-cilium-node-taints                         true
routing-mode                                      native
service-no-backend-response                       reject
set-cilium-is-up-condition                        true
set-cilium-node-taints                            true
synchronize-k8s-nodes                             true
tofqdns-dns-reject-response-code                  refused
tofqdns-enable-dns-compression                    true
tofqdns-endpoint-max-ip-per-hostname              1000
tofqdns-idle-connection-grace-period              0s
tofqdns-max-deferred-connection-deletes           10000
tofqdns-proxy-response-max-delay                  100ms
tunnel-protocol                                   vxlan
tunnel-source-port-range                          0-0
unmanaged-pod-watcher-interval                    15
vtep-cidr
vtep-endpoint
vtep-mac
vtep-mask
write-cni-conf-when-ready                         /host/etc/cni/net.d/05-cilium.conflist&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In addition, this lab environment includes a VM named router, which serves as the hop that traffic passes through when it needs to reach the internal network. No routing software was installed on it; instead, dummy interfaces were created on the router to provide reachable interfaces in the 10.10.0.0/16 range.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;IP forwarding was also enabled so that the VM can actually behave like a router.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;root@router:~# ip -br -c a
lo               UNKNOWN        127.0.0.1/8 ::1/128
eth0             UP             10.0.2.15/24 metric 100 fd17:625c:f037:2:a00:27ff:fe6b:69c9/64 fe80::a00:27ff:fe6b:69c9/64
eth1             UP             192.168.10.200/24 fe80::a00:27ff:fed1:1a0d/64
loop1            UNKNOWN        10.10.1.200/24 fe80::4c64:a5ff:fe5d:2386/64
loop2            UNKNOWN        10.10.2.200/24 fe80::7cb5:ff:fefc:b543/64
root@router:~# ip route
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.10.1.0/24 dev loop1 proto kernel scope link src 10.10.1.200
10.10.2.0/24 dev loop2 proto kernel scope link src 10.10.2.200
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.200
root@router:~# sysctl -a |grep ip_forward
net.ipv4.ip_forward = 1
net.ipv4.ip_forward_update_priority = 1
net.ipv4.ip_forward_use_pmtu = 0&lt;/code&gt;&lt;/pre&gt;
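&lt;p data-ke-size=&quot;size16&quot;&gt;The dummy interfaces and forwarding state shown above can be reproduced with commands along these lines (a sketch run as root on the router VM; the interface names and addresses come from the output above, and a real setup would persist them via the Vagrant provisioning script):&lt;/p&gt;

```shell
# Sketch: create the loop1/loop2 dummy interfaces and enable IP forwarding
# (requires root; not persisted across reboots).
ip link add loop1 type dummy
ip addr add 10.10.1.200/24 dev loop1
ip link set loop1 up

ip link add loop2 type dummy
ip addr add 10.10.2.200/24 dev loop2
ip link set loop2 up

# Let the VM forward packets between eth1 and the dummy networks.
sysctl -w net.ipv4.ip_forward=1
```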
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;With this lab environment in place, let's continue, starting with IPAM.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;2. IPAM&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A CNI plugin has two main responsibilities: IPAM (IP Address Management), which assigns pod IP addresses and configures the pod network, and connectivity (routing), which lets pods communicate with each other.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Cilium offers three IPAM modes: Kubernetes Host Scope, Cluster Scope (the default), and Multi-Pool (beta).&lt;/p&gt;
&lt;table data-ke-align=&quot;alignLeft&quot;&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Kubernetes Host Scope&lt;/th&gt;
&lt;th&gt;Cluster Scope (default)&lt;/th&gt;
&lt;th&gt;Multi-Pool&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Tunnel routing&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Direct routing&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CIDR Configuration&lt;/td&gt;
&lt;td&gt;Kubernetes&lt;/td&gt;
&lt;td&gt;Cilium&lt;/td&gt;
&lt;td&gt;Cilium&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Multiple CIDRs per cluster&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Multiple CIDRs per node&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Dynamic CIDR/IP allocation&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;❌&lt;/td&gt;
&lt;td&gt;✅&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For a description of each IPAM mode, see the following article.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://isovalent.com/blog/post/overcoming-kubernetes-ip-address-exhaustion-with-cilium/&quot;&gt;https://isovalent.com/blog/post/overcoming-kubernetes-ip-address-exhaustion-with-cilium/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;First, with &lt;b&gt;Kubernetes Host Scope&lt;/b&gt;, Kubernetes assigns a PodCIDR to each node. The Cilium agent holds off on starting up until a PodCIDR has been assigned to the &lt;code&gt;v1.Node&lt;/code&gt; object; after that, the host-scope allocator hands out pod IPs from that PodCIDR.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In this mode the per-node PodCIDRs are allocated by the kube-controller-manager, and Cilium simply consumes that information. This is why the table above lists Kubernetes as the owner of CIDR configuration.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1488&quot; data-origin-height=&quot;454&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/ckwJcp/btsPETqg1w5/7E77HWTGHkixOD3UFw6Elk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/ckwJcp/btsPETqg1w5/7E77HWTGHkixOD3UFw6Elk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/ckwJcp/btsPETqg1w5/7E77HWTGHkixOD3UFw6Elk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FckwJcp%2FbtsPETqg1w5%2F7E77HWTGHkixOD3UFw6Elk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1488&quot; height=&quot;454&quot; data-origin-width=&quot;1488&quot; data-origin-height=&quot;454&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://docs.cilium.io/en/stable/network/concepts/ipam/kubernetes/&quot;&gt;https://docs.cilium.io/en/stable/network/concepts/ipam/kubernetes/&lt;/a&gt;&lt;/p&gt;
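&lt;p data-ke-size=&quot;size16&quot;&gt;In a kubeadm-based cluster like this one, host-scope allocation is driven by kube-controller-manager flags such as the following (a sketch; the flag names are standard, the values are illustrative ones matching the 10.244.0.0/16 cluster CIDR seen above):&lt;/p&gt;

```shell
# kube-controller-manager flags behind host-scope PodCIDR allocation
# (illustrative values; check your kube-controller-manager manifest)
--allocate-node-cidrs=true
--cluster-cidr=10.244.0.0/16
--node-cidr-mask-size=24
```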
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In &lt;b&gt;Cluster Scope&lt;/b&gt; IPAM mode, each node is likewise given a per-node PodCIDR, and the host-scope allocator in each node's Cilium agent assigns pod IPs from it. The difference is who assigns the CIDRs: in Kubernetes Host Scope the kube-controller-manager writes per-node PodCIDRs to &lt;code&gt;v1.Node&lt;/code&gt; resources, whereas in Cluster Scope the Cilium Operator writes them to &lt;code&gt;v2.CiliumNode&lt;/code&gt; resources. Cluster Scope also supports multiple pod CIDRs per cluster.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;714&quot; data-origin-height=&quot;175&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/lfmrU/btsPGtjPHxu/hn4zmoLIosxwPCOvCsVLH0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/lfmrU/btsPGtjPHxu/hn4zmoLIosxwPCOvCsVLH0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/lfmrU/btsPGtjPHxu/hn4zmoLIosxwPCOvCsVLH0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FlfmrU%2FbtsPGtjPHxu%2Fhn4zmoLIosxwPCOvCsVLH0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;714&quot; height=&quot;175&quot; data-origin-width=&quot;714&quot; data-origin-height=&quot;175&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://docs.cilium.io/en/stable/network/concepts/ipam/cluster-pool/&quot;&gt;https://docs.cilium.io/en/stable/network/concepts/ipam/cluster-pool/&lt;/a&gt;&lt;/p&gt;
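&lt;p data-ke-size=&quot;size16&quot;&gt;Switching to Cluster Scope is typically done through Helm values along these lines (a sketch; the key names follow the Cilium Helm chart but should be verified against your chart version, and the CIDR here is illustrative):&lt;/p&gt;

```yaml
# Helm values sketch for cluster-pool IPAM
ipam:
  mode: cluster-pool
  operator:
    clusterPoolIPv4PodCIDRList:
      - "10.42.0.0/16"
    clusterPoolIPv4MaskSize: 24
```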
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;Multi-Pool&lt;/b&gt; is a beta feature that supports allocating PodCIDRs from several different IPAM pools. It makes it possible to assign different IP ranges on the same node and to add PodCIDRs dynamically. According to the documentation, Cluster Scope can also use multiple IP ranges but cannot add them dynamically; with Multi-Pool, you can pin a pod to a specific pool, and therefore to the IP range you want, by putting an annotation such as &lt;code&gt;ipam.cilium.io/ip-pool: mars&lt;/code&gt; in the pod spec.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1672&quot; data-origin-height=&quot;680&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cb7WeM/btsPDNEhpLe/uMRx1FOZHq9RIH30iCdTJ0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cb7WeM/btsPDNEhpLe/uMRx1FOZHq9RIH30iCdTJ0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cb7WeM/btsPDNEhpLe/uMRx1FOZHq9RIH30iCdTJ0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fcb7WeM%2FbtsPDNEhpLe%2FuMRx1FOZHq9RIH30iCdTJ0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1672&quot; height=&quot;680&quot; data-origin-width=&quot;1672&quot; data-origin-height=&quot;680&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://docs.cilium.io/en/stable/network/concepts/ipam/multi-pool/&quot;&gt;https://docs.cilium.io/en/stable/network/concepts/ipam/multi-pool/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
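&lt;p data-ke-size=&quot;size16&quot;&gt;Based on the multi-pool documentation, a hedged sketch: a pool named &lt;code&gt;mars&lt;/code&gt; defined as a CiliumPodIPPool, and a pod that requests it through the annotation mentioned above. The pool CIDR and the pod name are arbitrary examples.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# A named pool; per-node allocation is carved out in maskSize-sized blocks
apiVersion: cilium.io/v2alpha1
kind: CiliumPodIPPool
metadata:
  name: mars
spec:
  ipv4:
    cidrs:
    - 10.20.0.0/16
    maskSize: 27
---
# A pod pinned to the mars pool via the ip-pool annotation
apiVersion: v1
kind: Pod
metadata:
  name: pool-demo
  annotations:
    ipam.cilium.io/ip-pool: mars
spec:
  containers:
  - name: app
    image: nginx
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;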
&lt;p data-ke-size=&quot;size16&quot;&gt;Incidentally, the docs show that Cilium can also be used on the public clouds.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This mode is called &lt;b&gt;Cilium CNI Chaining&lt;/b&gt;, and it can be thought of as a kind of hybrid mode in which Cilium shares its role with another CNI.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Managed Kubernetes services typically ship an IPAM optimized for the cloud provider's network: the provider's CNI handles pod IP allocation and pod-to-pod connectivity, while Cilium takes over the datapath roles such as load balancing and network policy.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For example, the CNI in Azure CNI powered by Cilium works as follows.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;576&quot; data-origin-height=&quot;345&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/xDTlB/btsPDE8ytaD/afgn6Ca5IjRw4GTKOVdgN1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/xDTlB/btsPDE8ytaD/afgn6Ca5IjRw4GTKOVdgN1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/xDTlB/btsPDE8ytaD/afgn6Ca5IjRw4GTKOVdgN1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FxDTlB%2FbtsPDE8ytaD%2Fafgn6Ca5IjRw4GTKOVdgN1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;576&quot; height=&quot;345&quot; data-origin-width=&quot;576&quot; data-origin-height=&quot;345&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://isovalent.com/blog/post/tutorial-azure-cni-powered-by-cilium/&quot;&gt;https://isovalent.com/blog/post/tutorial-azure-cni-powered-by-cilium/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
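&lt;p data-ke-size=&quot;size16&quot;&gt;As a rough illustration (Helm value names follow the Cilium chart; the exact chaining mode and any extra settings depend on the cloud and are assumptions here), chaining is enabled with values along these lines:&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# values.yaml sketch: the cloud CNI keeps IPAM and interface plumbing,
# while Cilium chains in for the datapath (policy, load balancing, observability)
cni:
  chainingMode: generic-veth   # cloud-specific modes also exist, e.g. aws-cni
  customConf: true             # do not overwrite the primary CNI's config
routingMode: native            # the cloud network already routes pod IPs
enableIPv4Masquerade: false    # masquerading is left to the primary CNI
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;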
&lt;p data-ke-size=&quot;size16&quot;&gt;We have looked at the characteristics of Cilium's IPAM modes; next, let's examine Kubernetes Host Scope and Cluster Scope in detail through hands-on practice.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Kubernetes host scope&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The lab cluster is configured with Kubernetes Host Scope; let's check its settings as follows.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# Check cluster info
kubectl cluster-info dump | grep -m 2 -E &quot;cluster-cidr|service-cluster-ip-range&quot;
                            &quot;--service-cluster-ip-range=10.96.0.0/16&quot;,
                            &quot;--cluster-cidr=10.244.0.0/16&quot;,

# Check the IPAM mode
cilium config view | grep ^ipam
ipam                                              kubernetes

# Check the PodCIDR assigned to each node for pod IPAM
# kube-controller-manager, started with --allocate-node-cidrs=true, assigns the CIDRs automatically
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{&quot;\t&quot;}{.spec.podCIDR}{&quot;\n&quot;}{end}'


# Results
(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium config view | grep ^routing-mode
routing-mode                                      native
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl cluster-info dump | grep -m 2 -E &quot;cluster-cidr|service-cluster-ip-range&quot;
                            &quot;--service-cluster-ip-range=10.96.0.0/16&quot;,
                            &quot;--cluster-cidr=10.244.0.0/16&quot;,
(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium config view | grep ^ipam
ipam                                              kubernetes
ipam-cilium-node-update-rate                      15s
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{&quot;\t&quot;}{.spec.podCIDR}{&quot;\n&quot;}{end}'
k8s-ctr 10.244.0.0/24
k8s-w1  10.244.1.0/24


# kube-controller-manager runs with the following flags
kubectl describe pod -n kube-system kube-controller-manager-k8s-ctr
...
    Command:
      kube-controller-manager
      --allocate-node-cidrs=true # assigns a CIDR to each node
      --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
      --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
      --bind-address=127.0.0.1
      --client-ca-file=/etc/kubernetes/pki/ca.crt
      --cluster-cidr=10.244.0.0/16 # pod CIDR; each node gets a /24 slice of it
      --cluster-name=kubernetes
      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
      --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
      --controllers=*,bootstrapsigner,tokencleaner
      --kubeconfig=/etc/kubernetes/controller-manager.conf
      --leader-elect=true
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
      --root-ca-file=/etc/kubernetes/pki/ca.crt
      --service-account-private-key-file=/etc/kubernetes/pki/sa.key
      --service-cluster-ip-range=10.96.0.0/16 # Service CIDR
      --use-service-account-credentials=true
...

kubectl get ciliumnode -o json | grep podCIDRs -A2

# Pod info: state and pod IP
kubectl get ciliumendpoints.cilium.io -A

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumnode -o json | grep podCIDRs -A2
                    &quot;podCIDRs&quot;: [
                        &quot;10.244.0.0/24&quot;
                    ],
--
                    &quot;podCIDRs&quot;: [
                        &quot;10.244.1.0/24&quot;
                    ],

# The ciliumNode's ownerReferences point to v1.Node.
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumnode -o json
{
    &quot;apiVersion&quot;: &quot;v1&quot;,
...
        {
            &quot;apiVersion&quot;: &quot;cilium.io/v2&quot;,
            &quot;kind&quot;: &quot;CiliumNode&quot;,
            &quot;metadata&quot;: {
                &quot;creationTimestamp&quot;: &quot;2025-08-02T06:12:57Z&quot;,
                &quot;generation&quot;: 2,
                &quot;labels&quot;: {
                    &quot;beta.kubernetes.io/arch&quot;: &quot;amd64&quot;,
                    &quot;beta.kubernetes.io/os&quot;: &quot;linux&quot;,
                    &quot;kubernetes.io/arch&quot;: &quot;amd64&quot;,
                    &quot;kubernetes.io/hostname&quot;: &quot;k8s-w1&quot;,
                    &quot;kubernetes.io/os&quot;: &quot;linux&quot;
                },
                &quot;name&quot;: &quot;k8s-w1&quot;,
                &quot;ownerReferences&quot;: [
                    {
                        &quot;apiVersion&quot;: &quot;v1&quot;,
                        &quot;kind&quot;: &quot;Node&quot;,
                        &quot;name&quot;: &quot;k8s-w1&quot;,
                        &quot;uid&quot;: &quot;02854e0c-b8fa-4891-868b-3a42c93d8656&quot;
                    }
                ],
                &quot;resourceVersion&quot;: &quot;1654&quot;,
                &quot;uid&quot;: &quot;5f1ab6b3-89c7-4249-a7a4-efcd8c5e7f7c&quot;
            },
...

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumendpoints.cilium.io -A
NAMESPACE            NAME                                      SECURITY IDENTITY   ENDPOINT STATE   IPV4           IPV6
cilium-monitoring    grafana-5c69859d9-fphx8                   4388                ready            10.244.0.75
cilium-monitoring    prometheus-6fc896bc5d-b45nk               25471               ready            10.244.0.122
kube-system          coredns-674b8bbfcf-5lc2v                  8066                ready            10.244.0.45
kube-system          coredns-674b8bbfcf-f9qln                  8066                ready            10.244.0.181
kube-system          hubble-relay-5dcd46f5c-vlwg4              35931               ready            10.244.0.146
kube-system          hubble-ui-76d4965bb6-5qtp7                38335               ready            10.244.0.85
local-path-storage   local-path-provisioner-74f9666bc9-hmkjc   35629               ready            10.244.0.66
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
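&lt;p data-ke-size=&quot;size16&quot;&gt;A quick sanity check on the numbers above (a hypothetical shell snippet, not part of the cluster setup): with &lt;code&gt;--cluster-cidr=10.244.0.0/16&lt;/code&gt; and a /24 per node, the cluster CIDR covers 256 node slices of 254 host addresses each.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;#!/bin/sh
# How many per-node /24 PodCIDRs fit in the /16 cluster CIDR,
# and how many host addresses each /24 provides.
CLUSTER_PREFIX=16   # from --cluster-cidr=10.244.0.0/16
NODE_PREFIX=24      # default node CIDR mask size
echo &quot;node CIDRs: $(( 1 &lt;&lt; (NODE_PREFIX - CLUSTER_PREFIX) ))&quot;        # 256
echo &quot;hosts per node CIDR: $(( (1 &lt;&lt; (32 - NODE_PREFIX)) - 2 ))&quot;    # 254
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;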
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's deploy a sample application and dig into the details.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Deploy a sample application
cat &amp;lt;&amp;lt; EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webpod
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webpod
  template:
    metadata:
      labels:
        app: webpod
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - webpod
            topologyKey: &quot;kubernetes.io/hostname&quot;
      containers:
      - name: webpod
        image: traefik/whoami
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: webpod
  labels:
    app: webpod
spec:
  selector:
    app: webpod
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
EOF


# Deploy a curl-pod on the k8s-ctr node
cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: curl-pod
  labels:
    app: curl
spec:
  nodeName: k8s-ctr
  containers:
  - name: curl
    image: nicolaka/netshoot
    command: [&quot;tail&quot;]
    args: [&quot;-f&quot;, &quot;/dev/null&quot;]
  terminationGracePeriodSeconds: 0
EOF&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's take a look at the deployed lab application.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# Verify the deployment
kubectl get deploy,svc,ep webpod -owide
kubectl get endpointslices -l app=webpod
kubectl get ciliumendpoints # check pod IPs
kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium-dbg endpoint list


(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get deploy,svc,ep webpod -owide
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES           SELECTOR
deployment.apps/webpod   2/2     2            2           46s   webpod       traefik/whoami   app=webpod

NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service/webpod   ClusterIP   10.96.195.112   &amp;lt;none&amp;gt;        80/TCP    46s   app=webpod

NAME               ENDPOINTS                        AGE
endpoints/webpod   10.244.0.164:80,10.244.1.77:80   44s
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get endpointslices -l app=webpod
NAME           ADDRESSTYPE   PORTS   ENDPOINTS                  AGE
webpod-ntxzf   IPv4          80      10.244.1.77,10.244.0.164   57s
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumendpoints
NAME                      SECURITY IDENTITY   ENDPOINT STATE   IPV4           IPV6
curl-pod                  13646               ready            10.244.0.188
webpod-697b545f57-cxr5t   54559               ready            10.244.0.164
webpod-697b545f57-mhtm7   54559               ready            10.244.1.77
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium-dbg endpoint list
ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                         IPv6   IPv4           STATUS
           ENFORCEMENT        ENFORCEMENT
225        Disabled           Disabled          4388       k8s:app=grafana                                                                            10.244.0.75    ready
                                                           k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                            
                                                           k8s:io.cilium.k8s.policy.cluster=default                                                                    
                                                           k8s:io.cilium.k8s.policy.serviceaccount=default                                                             
                                                           k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                           
237        Disabled           Disabled          35629      k8s:app=local-path-provisioner                                                             10.244.0.66    ready
                                                           k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=local-path-storage                           
                                                           k8s:io.cilium.k8s.policy.cluster=default                                                                    
                                                           k8s:io.cilium.k8s.policy.serviceaccount=local-path-provisioner-service-account                              
                                                           k8s:io.kubernetes.pod.namespace=local-path-storage                                                          
485        Disabled           Disabled          8066       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                 10.244.0.181   ready
                                                           k8s:io.cilium.k8s.policy.cluster=default                                                                    
                                                           k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                             
                                                           k8s:io.kubernetes.pod.namespace=kube-system                                                                 
                                                           k8s:k8s-app=kube-dns                                                                                        
679        Disabled           Disabled          1          k8s:node-role.kubernetes.io/control-plane                                                                 ready
                                                           k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                 
                                                           reserved:host                                                                                               
779        Disabled           Disabled          8066       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                 10.244.0.45    ready
                                                           k8s:io.cilium.k8s.policy.cluster=default                                                                    
                                                           k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                             
                                                           k8s:io.kubernetes.pod.namespace=kube-system                                                                 
                                                           k8s:k8s-app=kube-dns                                                                                        
1465       Disabled           Disabled          25471      k8s:app=prometheus                                                                         10.244.0.122   ready
                                                           k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                            
                                                           k8s:io.cilium.k8s.policy.cluster=default                                                                    
                                                           k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                      
                                                           k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                           
1649       Disabled           Disabled          35931      k8s:app.kubernetes.io/name=hubble-relay                                                    10.244.0.146   ready
                                                           k8s:app.kubernetes.io/part-of=cilium                                                                        
                                                           k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                                  
                                                           k8s:io.cilium.k8s.policy.cluster=default                                                                    
                                                           k8s:io.cilium.k8s.policy.serviceaccount=hubble-relay                                                        
                                                           k8s:io.kubernetes.pod.namespace=kube-system                                                                 
                                                           k8s:k8s-app=hubble-relay                                                                                    
2054       Disabled           Disabled          54559      k8s:app=webpod                                                                             10.244.0.164   ready
                                                           k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default                                      
                                                           k8s:io.cilium.k8s.policy.cluster=default                                                                    
                                                           k8s:io.cilium.k8s.policy.serviceaccount=default                                                             
                                                           k8s:io.kubernetes.pod.namespace=default                                                                     
2082       Disabled           Disabled          13646      k8s:app=curl                                                                               10.244.0.188   ready
                                                           k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default                                      
                                                           k8s:io.cilium.k8s.policy.cluster=default                                                                    
                                                           k8s:io.cilium.k8s.policy.serviceaccount=default                                                             
                                                           k8s:io.kubernetes.pod.namespace=default                                                                     
2713       Disabled           Disabled          38335      k8s:app.kubernetes.io/name=hubble-ui                                                       10.244.0.85    ready
                                                           k8s:app.kubernetes.io/part-of=cilium                                                                        
                                                           k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system                                  
                                                           k8s:io.cilium.k8s.policy.cluster=default                                                                    
                                                           k8s:io.cilium.k8s.policy.serviceaccount=hubble-ui                                                           
                                                           k8s:io.kubernetes.pod.namespace=kube-system                                                                 
                                                           k8s:k8s-app=hubble-ui     

# Verify connectivity
kubectl exec -it curl-pod -- curl webpod | grep Hostname
kubectl exec -it curl-pod -- sh -c 'while true; do curl -s webpod | grep Hostname; sleep 1; done'

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl webpod | grep Hostname
Hostname: webpod-697b545f57-mhtm7
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- sh -c 'while true; do curl -s webpod | grep Hostname; sleep 1; done'
Hostname: webpod-697b545f57-cxr5t
Hostname: webpod-697b545f57-cxr5t
Hostname: webpod-697b545f57-cxr5t
Hostname: webpod-697b545f57-cxr5t
Hostname: webpod-697b545f57-mhtm7
Hostname: webpod-697b545f57-cxr5t&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's dig further with Hubble.&lt;/p&gt;
&lt;pre class=&quot;reasonml&quot;&gt;&lt;code&gt;# Get the Hubble UI URL; check the default namespace in the UI
NODEIP=$(ip -4 addr show eth1 | grep -oP '(?&amp;lt;=inet\s)\d+(\.\d+){3}')
echo -e &quot;http://$NODEIP:30003&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The flows are visible in the Hubble UI.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;2209&quot; data-origin-height=&quot;1289&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/Hq99y/btsPDtTAKyP/EiJd3JmYClyDwNZtGMQBR1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/Hq99y/btsPDtTAKyP/EiJd3JmYClyDwNZtGMQBR1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/Hq99y/btsPDtTAKyP/EiJd3JmYClyDwNZtGMQBR1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FHq99y%2FbtsPDtTAKyP%2FEiJd3JmYClyDwNZtGMQBR1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2209&quot; height=&quot;1289&quot; data-origin-width=&quot;2209&quot; data-origin-height=&quot;1289&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;They can also be checked with the Hubble CLI.&lt;/p&gt;
&lt;pre class=&quot;haskell&quot;&gt;&lt;code&gt;# Run the hubble relay port-forward
cilium hubble port-forward&amp;amp;
hubble status


(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium hubble port-forward&amp;amp;
[1] 11741
ℹ️  Hubble Relay is available at 127.0.0.1:4245
(⎈|HomeLab:N/A) root@k8s-ctr:~# hubble status
Healthcheck (via localhost:4245): Ok
Current/Max Flows: 5,565/8,190 (67.95%)
Flows/s: 46.12
Connected Nodes: 2/2

# Monitor flow logs
hubble observe -f --protocol tcp --to-pod curl-pod
hubble observe -f --protocol tcp --from-pod curl-pod
hubble observe -f --protocol tcp --pod curl-pod

(⎈|HomeLab:N/A) root@k8s-ctr:~# hubble observe -f --protocol tcp --pod curl-pod
# pre-xlate-fwd, TRACED: flow traced before NAT (IP translation)
Aug  2 07:11:40.494: default/curl-pod (ID:13646) &amp;lt;&amp;gt; 10.96.195.112:80 (world) pre-xlate-fwd TRACED (TCP) 
# post-xlate-fwd, TRANSLATED: flow after NAT; the translation has taken place
Aug  2 07:11:40.495: default/curl-pod (ID:13646) &amp;lt;&amp;gt; default/webpod-697b545f57-cxr5t:80 (ID:54559) post-xlate-fwd TRANSLATED (TCP)
# From here on, traffic flows directly to the pod.
Aug  2 07:11:40.495: default/curl-pod:41394 (ID:13646) -&amp;gt; default/webpod-697b545f57-cxr5t:80 (ID:54559) to-endpoint FORWARDED (TCP Flags: SYN)
Aug  2 07:11:40.495: default/curl-pod:41394 (ID:13646) &amp;lt;- default/webpod-697b545f57-cxr5t:80 (ID:54559) to-endpoint FORWARDED (TCP Flags: SYN, ACK)
Aug  2 07:11:40.495: default/curl-pod:41394 (ID:13646) -&amp;gt; default/webpod-697b545f57-cxr5t:80 (ID:54559) to-endpoint FORWARDED (TCP Flags: ACK)
Aug  2 07:11:40.496: default/curl-pod:41394 (ID:13646) -&amp;gt; default/webpod-697b545f57-cxr5t:80 (ID:54559) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Aug  2 07:11:40.496: default/curl-pod:41394 (ID:13646) &amp;lt;&amp;gt; default/webpod-697b545f57-cxr5t (ID:54559) pre-xlate-rev TRACED (TCP)
Aug  2 07:11:40.500: default/curl-pod:41394 (ID:13646) &amp;lt;&amp;gt; default/webpod-697b545f57-cxr5t (ID:54559) pre-xlate-rev TRACED (TCP)
Aug  2 07:11:40.500: default/curl-pod:41394 (ID:13646) &amp;lt;&amp;gt; default/webpod-697b545f57-cxr5t (ID:54559) pre-xlate-rev TRACED (TCP)
Aug  2 07:11:40.500: default/curl-pod:41394 (ID:13646) &amp;lt;&amp;gt; default/webpod-697b545f57-cxr5t (ID:54559) pre-xlate-rev TRACED (TCP)
Aug  2 07:11:40.500: default/curl-pod:41394 (ID:13646) &amp;lt;&amp;gt; default/webpod-697b545f57-cxr5t (ID:54559) pre-xlate-rev TRACED (TCP)
Aug  2 07:11:40.500: default/curl-pod:41394 (ID:13646) &amp;lt;- default/webpod-697b545f57-cxr5t:80 (ID:54559) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Aug  2 07:11:40.505: default/curl-pod:41394 (ID:13646) -&amp;gt; default/webpod-697b545f57-cxr5t:80 (ID:54559) to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Aug  2 07:11:40.505: default/curl-pod:41394 (ID:13646) &amp;lt;- default/webpod-697b545f57-cxr5t:80 (ID:54559) to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Aug  2 07:11:40.511: default/curl-pod:41394 (ID:13646) -&amp;gt; default/webpod-697b545f57-cxr5t:80 (ID:54559) to-endpoint FORWARDED (TCP Flags: ACK)

# Make requests
kubectl exec -it curl-pod -- curl webpod | grep Hostname
kubectl exec -it curl-pod -- curl webpod | grep Hostname
# or
kubectl exec -it curl-pod -- sh -c 'while true; do curl -s webpod | grep Hostname; sleep 1; done'&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Cluster Scope migration lab&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now let's switch the cluster's IPAM mode to Cluster Scope. This is purely for testing; the official docs do not recommend changing the IPAM mode.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://docs.cilium.io/en/stable/network/concepts/ipam/&quot;&gt;https://docs.cilium.io/en/stable/network/concepts/ipam/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1103&quot; data-origin-height=&quot;191&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/rTPFq/btsPFCaqi0h/YlCKbKTZz1VWycuCWF7dgK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/rTPFq/btsPFCaqi0h/YlCKbKTZz1VWycuCWF7dgK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/rTPFq/btsPFCaqi0h/YlCKbKTZz1VWycuCWF7dgK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FrTPFq%2FbtsPFCaqi0h%2FYlCKbKTZz1VWycuCWF7dgK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1103&quot; height=&quot;191&quot; data-origin-width=&quot;1103&quot; data-origin-height=&quot;191&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Proceed as follows.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Keep a request loop running
kubectl exec -it curl-pod -- sh -c 'while true; do curl -s webpod | grep Hostname; sleep 1; done'

# Switch to Cluster Scope (cluster-pool)
# Pin the chart version on helm upgrade; otherwise --reuse-values against a different chart version can produce mismatched settings
helm upgrade cilium cilium/cilium --namespace kube-system --version 1.17.6 --reuse-values \
--set ipam.mode=&quot;cluster-pool&quot; --set ipam.operator.clusterPoolIPv4PodCIDRList={&quot;172.20.0.0/16&quot;} --set ipv4NativeRoutingCIDR=172.20.0.0/16


kubectl -n kube-system rollout restart deploy/cilium-operator # operator restart is required
kubectl -n kube-system rollout restart ds/cilium


(⎈|HomeLab:N/A) root@k8s-ctr:~# helm upgrade cilium cilium/cilium --namespace kube-system --version 1.17.6 --reuse-values \
--set ipam.mode=&quot;cluster-pool&quot; --set ipam.operator.clusterPoolIPv4PodCIDRList={&quot;172.20.0.0/16&quot;} --set ipv4NativeRoutingCIDR=172.20.0.0/16
Release &quot;cilium&quot; has been upgraded. Happy Helming!
NAME: cilium
LAST DEPLOYED: Sat Aug  2 16:26:56 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
You have successfully installed Cilium with Hubble Relay and Hubble UI.

Your release version is 1.17.6.

For any further help, visit https://docs.cilium.io/en/v1.17/gettinghelp
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system rollout restart deploy/cilium-operator
deployment.apps/cilium-operator restarted
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system rollout restart ds/cilium
daemonset.apps/cilium restarted


# Verify the change
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{&quot;\t&quot;}{.spec.podCIDR}{&quot;\n&quot;}{end}'
cilium config view | grep ^ipam
ipam                                              cluster-pool

kubectl get ciliumnode -o json | grep podCIDRs -A2
kubectl get ciliumendpoints.cilium.io -A

# The PodCIDRs have not changed!
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{&quot;\t&quot;}{.spec.podCIDR}{&quot;\n&quot;}{end}'
k8s-ctr 10.244.0.0/24
k8s-w1  10.244.1.0/24
(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium config view | grep ^ipam
ipam                                              cluster-pool
ipam-cilium-node-update-rate                      15s

# cilium node
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumnode -o json | grep podCIDRs -A2
                    &quot;podCIDRs&quot;: [
                        &quot;10.244.0.0/24&quot;
                    ],
--
                    &quot;podCIDRs&quot;: [
                        &quot;10.244.1.0/24&quot;
                    ],
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumendpoints.cilium.io -A
NAMESPACE            NAME                                      SECURITY IDENTITY   ENDPOINT STATE   IPV4           IPV6
cilium-monitoring    grafana-5c69859d9-fphx8                   4388                ready            10.244.0.75
cilium-monitoring    prometheus-6fc896bc5d-b45nk               25471               ready            10.244.0.122
default              curl-pod                                  13646               ready            10.244.0.188
default              webpod-697b545f57-cxr5t                   54559               ready            10.244.0.164
default              webpod-697b545f57-mhtm7                   54559               ready            10.244.1.77
kube-system          coredns-674b8bbfcf-5lc2v                  8066                ready            10.244.0.45
kube-system          coredns-674b8bbfcf-f9qln                  8066                ready            10.244.0.181
kube-system          hubble-relay-5dcd46f5c-vlwg4              35931               ready            10.244.0.146
kube-system          hubble-ui-76d4965bb6-5qtp7                38335               ready            10.244.0.85
local-path-storage   local-path-provisioner-74f9666bc9-hmkjc   35629               ready            10.244.0.66


# And up to this point, test connectivity still works fine.

# The CiliumNode object itself stores the values
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumnode
NAME      CILIUMINTERNALIP   INTERNALIP       AGE
k8s-ctr   10.244.0.93        192.168.10.100   80m
k8s-w1    10.244.1.42        192.168.10.101   79m

# Delete the ciliumnode and check whether the values change.
kubectl delete ciliumnode k8s-w1
kubectl -n kube-system rollout restart ds/cilium
kubectl get ciliumnode -o json | grep podCIDRs -A2
kubectl get ciliumendpoints.cilium.io -A

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl delete ciliumnode k8s-w1
ciliumnode.cilium.io &quot;k8s-w1&quot; deleted
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system rollout restart ds/cilium
daemonset.apps/cilium restarted
# These values come back only once the cilium pods are healthy again (takes a while)
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumnode -o json | grep podCIDRs -A2
                    &quot;podCIDRs&quot;: [
                        &quot;10.244.0.0/24&quot;
                    ],
--
                    &quot;podCIDRs&quot;: [
                        &quot;172.20.0.0/24&quot;
                    ],

# However, the pod IPs themselves are not changed
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumendpoints.cilium.io -A
NAMESPACE            NAME                                      SECURITY IDENTITY   ENDPOINT STATE   IPV4           IPV6
cilium-monitoring    grafana-5c69859d9-fphx8                   4388                ready            10.244.0.75
cilium-monitoring    prometheus-6fc896bc5d-b45nk               25471               ready            10.244.0.122
default              curl-pod                                  13646               ready            10.244.0.188
default              webpod-697b545f57-cxr5t                   54559               ready            10.244.0.164
kube-system          coredns-674b8bbfcf-5lc2v                  8066                ready            10.244.0.45
kube-system          coredns-674b8bbfcf-f9qln                  8066                ready            10.244.0.181
kube-system          hubble-relay-5dcd46f5c-vlwg4              35931               ready            10.244.0.146
kube-system          hubble-ui-76d4965bb6-5qtp7                38335               ready            10.244.0.85
local-path-storage   local-path-provisioner-74f9666bc9-hmkjc   35629               ready            10.244.0.66

# Do the same for the control plane
kubectl delete ciliumnode k8s-ctr
kubectl -n kube-system rollout restart ds/cilium
kubectl get ciliumnode -o json | grep podCIDRs -A2
kubectl get ciliumendpoints.cilium.io -A 

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl delete ciliumnode k8s-ctr
ciliumnode.cilium.io &quot;k8s-ctr&quot; deleted
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system rollout restart ds/cilium
daemonset.apps/cilium restarted
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumnode -o json | grep podCIDRs -A2
                    &quot;podCIDRs&quot;: [
                        &quot;172.20.1.0/24&quot;
                    ],
--
                    &quot;podCIDRs&quot;: [
                        &quot;172.20.0.0/24&quot;
                    ],
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumendpoints.cilium.io -A
NAMESPACE     NAME                       SECURITY IDENTITY   ENDPOINT STATE   IPV4           IPV6
kube-system   coredns-674b8bbfcf-zfdq8   8066                ready            172.20.0.224


# Verify that the static routes for the node podCIDRs were updated automatically
ip -c route

(⎈|HomeLab:N/A) root@k8s-ctr:~# ip -c route
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.10.0.0/16 via 192.168.10.200 dev eth1 proto static
172.20.0.0/24 via 192.168.10.101 dev eth1 proto kernel # added
172.20.1.107 dev lxc624d897a501a proto kernel scope link # added
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.100

# Restart the remaining workloads directly!
kubectl get pod -A -owide | grep 10.244.

kubectl -n kube-system rollout restart deploy/hubble-relay deploy/hubble-ui
kubectl -n cilium-monitoring rollout restart deploy/prometheus deploy/grafana
kubectl rollout restart deploy/webpod
kubectl delete pod curl-pod


(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -A -owide | grep 10.244.
cilium-monitoring    grafana-5c69859d9-fphx8                   0/1     Running   0               92m   10.244.0.75      k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
cilium-monitoring    prometheus-6fc896bc5d-b45nk               1/1     Running   0               92m   10.244.0.122     k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
default              curl-pod                                  1/1     Running   0               37m   10.244.0.188     k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
default              webpod-697b545f57-cxr5t                   1/1     Running   0               37m   10.244.0.164     k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
default              webpod-697b545f57-mhtm7                   1/1     Running   0               37m   10.244.1.77      k8s-w1    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system          hubble-relay-5dcd46f5c-vlwg4              0/1     Running   0               93m   10.244.0.146     k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system          hubble-ui-76d4965bb6-5qtp7                1/2     Running   3 (5s ago)      93m   10.244.0.85      k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
local-path-storage   local-path-provisioner-74f9666bc9-hmkjc   1/1     Running   0               92m   10.244.0.66      k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -A -owide
NAMESPACE            NAME                                      READY   STATUS    RESTARTS        AGE     IP               NODE      NOMINATED NODE   READINESS GATES
cilium-monitoring    grafana-6fc685b7f-mmvjc                   1/1     Running   0               6m23s   172.20.0.129     k8s-w1    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
cilium-monitoring    prometheus-7f7454f75b-9m5xg               1/1     Running   0               6m23s   172.20.0.228     k8s-w1    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
default              webpod-5cd486cdc5-cz54t                   1/1     Running   0               6m20s   172.20.0.60      k8s-w1    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
default              webpod-5cd486cdc5-x25s9                   1/1     Running   0               5m13s   172.20.1.185     k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system          cilium-envoy-rt7pm                        1/1     Running   0               99m     192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system          cilium-envoy-z64wz                        1/1     Running   1 (90m ago)     94m     192.168.10.101   k8s-w1    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system          cilium-hhdxb                              1/1     Running   0               8m20s   192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system          cilium-hlm97                              1/1     Running   0               8m21s   192.168.10.101   k8s-w1    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system          cilium-operator-74fdbd546b-zv4fb          1/1     Running   1 (9m48s ago)   18m     192.168.10.101   k8s-w1    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system          coredns-674b8bbfcf-j48sp                  1/1     Running   0               7m23s   172.20.1.107     k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system          coredns-674b8bbfcf-zfdq8                  1/1     Running   0               7m38s   172.20.0.224     k8s-w1    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system          etcd-k8s-ctr                              1/1     Running   0               100m    192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system          hubble-relay-7c9f877b66-lnplv             1/1     Running   0               6m28s   172.20.0.244     k8s-w1    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system          hubble-ui-6f57b45c65-sx4t9                2/2     Running   0               6m28s   172.20.0.196     k8s-w1    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system          kube-apiserver-k8s-ctr                    1/1     Running   0               100m    192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system          kube-controller-manager-k8s-ctr           1/1     Running   8 (9m31s ago)   100m    192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system          kube-proxy-gdbb4                          1/1     Running   1 (90m ago)     94m     192.168.10.101   k8s-w1    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system          kube-proxy-vshp2                          1/1     Running   0               100m    192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system          kube-scheduler-k8s-ctr                    1/1     Running   6 (46m ago)     100m    192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
local-path-storage   local-path-provisioner-74f9666bc9-hmkjc   1/1     Running   0               99m     10.244.0.66      k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;


# Confirm the pod IPs changed!
kubectl get ciliumendpoints.cilium.io -A

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumendpoints.cilium.io -A
NAMESPACE           NAME                            SECURITY IDENTITY   ENDPOINT STATE   IPV4           IPV6
cilium-monitoring   grafana-6fc685b7f-mmvjc         4388                ready            172.20.0.129
cilium-monitoring   prometheus-7f7454f75b-9m5xg     25471               ready            172.20.0.228
default             webpod-5cd486cdc5-cz54t         54559               ready            172.20.0.60
default             webpod-5cd486cdc5-x25s9         54559               ready            172.20.1.185
kube-system         coredns-674b8bbfcf-j48sp        8066                ready            172.20.1.107
kube-system         coredns-674b8bbfcf-zfdq8        8066                ready            172.20.0.224
kube-system         hubble-relay-7c9f877b66-lnplv   35931               ready            172.20.0.244
kube-system         hubble-ui-6f57b45c65-sx4t9      38335               ready            172.20.0.196

# Deploy curl-pod on the k8s-ctr node
cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: curl-pod
  labels:
    app: curl
spec:
  nodeName: k8s-ctr
  containers:
  - name: curl
    image: nicolaka/netshoot
    command: [&quot;tail&quot;]
    args: [&quot;-f&quot;, &quot;/dev/null&quot;]
  terminationGracePeriodSeconds: 0
EOF

# Repeated requests
kubectl exec -it curl-pod -- sh -c 'while true; do curl -s webpod | grep Hostname; sleep 1; done'

# Works normally
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- sh -c 'while true; do curl -s webpod | grep Hostname; sleep 1; done'
Hostname: webpod-5cd486cdc5-cz54t
Hostname: webpod-5cd486cdc5-x25s9&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;That wraps up IPAM. Next, we will look at Routing.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;3. Routing&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Cilium offers two main routing modes: Encapsulation mode and Native Routing mode.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;You can refer to the Cilium documentation below.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://docs.cilium.io/en/stable/network/concepts/routing/&quot;&gt;https://docs.cilium.io/en/stable/network/concepts/routing/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1416&quot; data-origin-height=&quot;717&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bK8atK/btsPDluCYyt/FkNXZyI6rYomeQw9KRpf1k/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bK8atK/btsPDluCYyt/FkNXZyI6rYomeQw9KRpf1k/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bK8atK/btsPDluCYyt/FkNXZyI6rYomeQw9KRpf1k/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbK8atK%2FbtsPDluCYyt%2FFkNXZyI6rYomeQw9KRpf1k%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1416&quot; height=&quot;717&quot; data-origin-width=&quot;1416&quot; data-origin-height=&quot;717&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: Cilium (this image is no longer used in the current docs)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Encapsulation mode&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Encapsulation mode builds tunnels between nodes using a UDP-based encapsulation protocol, either VXLAN or Geneve.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Because it places the fewest requirements on the existing network infrastructure, Cilium automatically runs in Encapsulation mode when no routing mode is configured.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In this mode, all cluster nodes form a mesh of tunnels using the &lt;b&gt;UDP&lt;/b&gt;-based encapsulation protocol &lt;b&gt;VXLAN&lt;/b&gt; or &lt;b&gt;Geneve&lt;/b&gt;.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This means that if the Cilium nodes can already reach each other, all routing requirements are already met, and all traffic between Cilium nodes is encapsulated.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The underlying network must support IPv4, and the network and firewalls must allow the encapsulated packets:&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;VXLAN (default): UDP port 8472&lt;/li&gt;
&lt;li&gt;Geneve: UDP port 6081&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;However, Encapsulation mode adds a header for encapsulation, so the effective MTU available for payload is lower than with native routing (50 bytes per network packet for VXLAN), which in turn lowers the maximum throughput of a network connection.&lt;/p&gt;
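&lt;p data-ke-size=&quot;size16&quot;&gt;The overhead arithmetic can be sketched as follows (an illustrative Python sketch, not Cilium code; the 1500-byte link MTU is an assumption):&lt;/p&gt;

```python
# Illustrative sketch (not Cilium code): effective pod MTU under VXLAN.
# VXLAN encapsulation adds an outer Ethernet (14 B), IPv4 (20 B),
# UDP (8 B) and VXLAN (8 B) header, roughly 50 bytes per packet.
VXLAN_OVERHEAD = 14 + 20 + 8 + 8  # 50 bytes

def effective_mtu(link_mtu):
    # Payload room left for the inner packet on this link.
    return link_mtu - VXLAN_OVERHEAD

print(effective_mtu(1500))  # 1450
```

&lt;p data-ke-size=&quot;size16&quot;&gt;On a typical 1500-byte link, pods are effectively left with about a 1450-byte MTU under VXLAN.&lt;/p&gt;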
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Native Routing mode&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The figure below illustrates Native Routing mode: there is no encapsulation, and the pod CIDR of each node is handled by direct routing.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;882&quot; data-origin-height=&quot;375&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/mLzqG/btsPE5KS47Y/kKaKgPh5h8k5ZDjRGSh01k/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/mLzqG/btsPE5KS47Y/kKaKgPh5h8k5ZDjRGSh01k/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/mLzqG/btsPE5KS47Y/kKaKgPh5h8k5ZDjRGSh01k/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FmLzqG%2FbtsPE5KS47Y%2FkKaKgPh5h8k5ZDjRGSh01k%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;882&quot; height=&quot;375&quot; data-origin-width=&quot;882&quot; data-origin-height=&quot;375&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://docs.cilium.io/en/stable/network/concepts/routing/&quot;&gt;https://docs.cilium.io/en/stable/network/concepts/routing/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In native routing mode, Cilium delegates all packets that are not addressed to another local endpoint to the routing subsystem of the Linux kernel. The network connecting the cluster nodes must therefore be able to route the PodCIDRs.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As shown in the figure above, each node must know the pod IPs of all other nodes, and routes representing them are inserted into the Linux kernel routing table. When all nodes share a single L2 network, setting &lt;code&gt;auto-direct-node-routes: true&lt;/code&gt; takes care of this automatically.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;If the nodes are not on a single L2 network, you must run an additional system component such as a &lt;b&gt;BGP&lt;/b&gt; daemon, or maintain static routes yourself.&lt;/p&gt;
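&lt;p data-ke-size=&quot;size16&quot;&gt;The per-node routing described above can be sketched as a tiny lookup (illustrative Python only, using the PodCIDRs and node IPs of this lab; not how Cilium is implemented):&lt;/p&gt;

```python
import ipaddress

# Illustrative sketch of the routes that auto-direct-node-routes installs
# (lab values from this post, not Cilium code): peer PodCIDR mapped to the
# next-hop node IP, mirroring the kernel routing table.
ROUTES = {
    ipaddress.ip_network('172.20.0.0/24'): '192.168.10.101',  # k8s-w1
    ipaddress.ip_network('172.20.1.0/24'): '192.168.10.100',  # k8s-ctr
}

def next_hop(dst_ip):
    dst = ipaddress.ip_address(dst_ip)
    for cidr, node_ip in ROUTES.items():
        if dst in cidr:
            return node_ip
    return None  # not a pod address; other routes apply

print(next_hop('172.20.0.60'))  # 192.168.10.101 (pod on k8s-w1)
```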
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The key options provided in this lab environment are:&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;code&gt;routing-mode: native&lt;/code&gt;: enables native routing mode&lt;/li&gt;
&lt;li&gt;&lt;code&gt;ipv4-native-routing-cidr: x.x.x.x/y&lt;/code&gt;: the CIDR used for native routing (the range that communicates without masquerading)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;auto-direct-node-routes: true&lt;/code&gt;: when nodes share the same L2 network, inserts the PodCIDR of each node into the Linux kernel routing table&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Encapsulation will be covered in the next post; this lab continues in direct (native) routing mode.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# Check pod IPs
kubectl get pod -owide

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -owide
NAME                      READY   STATUS    RESTARTS   AGE   IP             NODE      NOMINATED NODE   READINESS GATES
curl-pod                  1/1     Running   0          21m   172.20.1.218   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
webpod-5cd486cdc5-cz54t   1/1     Running   0          29m   172.20.0.60    k8s-w1    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
webpod-5cd486cdc5-x25s9   1/1     Running   0          28m   172.20.1.185   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;

# k8s-ctr uses the 172.20.1.0/24 range, k8s-w1 uses 172.20.0.0/24

# Pod IPs of webpod 1 and 2
export WEBPODIP1=$(kubectl get -l app=webpod pods --field-selector spec.nodeName=k8s-ctr -o jsonpath='{.items[0].status.podIP}')
export WEBPODIP2=$(kubectl get -l app=webpod pods --field-selector spec.nodeName=k8s-w1  -o jsonpath='{.items[0].status.podIP}')
echo $WEBPODIP1 $WEBPODIP2

# ping WEBPODIP2 from curl-pod
kubectl exec -it curl-pod -- ping $WEBPODIP2


(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- ping $WEBPODIP2
PING 172.20.0.60 (172.20.0.60) 56(84) bytes of data.
64 bytes from 172.20.0.60: icmp_seq=1 ttl=62 time=4.06 ms
64 bytes from 172.20.0.60: icmp_seq=2 ttl=62 time=3.71 ms
^C
--- 172.20.0.60 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 3.712/3.884/4.057/0.172 ms

# Check kernel routing
kubectl get no -owide
ip -c route
sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-w1 hostname
sshpass -p 'vagrant' ssh vagrant@k8s-w1 ip -c route

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get no -owide
NAME      STATUS   ROLES           AGE    VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
k8s-ctr   Ready    control-plane   127m   v1.33.2   192.168.10.100   &amp;lt;none&amp;gt;        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27
k8s-w1    Ready    &amp;lt;none&amp;gt;          122m   v1.33.2   192.168.10.101   &amp;lt;none&amp;gt;        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27

# k8s-ctr    192.168.10.100
# k8s-w1    192.168.10.101
(⎈|HomeLab:N/A) root@k8s-ctr:~# ip -c route
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.10.0.0/16 via 192.168.10.200 dev eth1 proto static
172.20.0.0/24 via 192.168.10.101 dev eth1 proto kernel # routed to k8s-w1
172.20.1.107 dev lxc624d897a501a proto kernel scope link
172.20.1.185 dev lxc23eb4002c53c proto kernel scope link
172.20.1.218 dev lxcba4acff7647e proto kernel scope link
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.100

(⎈|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-w1 hostname
Warning: Permanently added 'k8s-w1' (ED25519) to the list of known hosts.
k8s-w1
(⎈|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@k8s-w1 ip -c route
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.10.0.0/16 via 192.168.10.200 dev eth1 proto static
172.20.0.60 dev lxc1c2545263e45 proto kernel scope link
172.20.0.129 dev lxc7edd78f581de proto kernel scope link
172.20.0.196 dev lxc19b5c39e75c7 proto kernel scope link
172.20.0.224 dev lxc6c3c1638ee10 proto kernel scope link
172.20.0.228 dev lxcc58c441bcbb6 proto kernel scope link
172.20.0.244 dev lxcdda24b8df7da proto kernel scope link
172.20.1.0/24 via 192.168.10.100 dev eth1 proto kernel # routed to k8s-ctr
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.101

# Run ping again and inspect pod-to-pod traffic with tcpdump
kubectl exec -it curl-pod -- ping $WEBPODIP2

# Run tcpdump
tcpdump -i eth1 icmp

# Direct, unencapsulated communication is taking place
(⎈|HomeLab:N/A) root@k8s-ctr:~# echo $WEBPODIP1 $WEBPODIP2
172.20.1.185 172.20.0.60
(⎈|HomeLab:N/A) root@k8s-ctr:~# tcpdump -i eth1 icmp
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), snapshot length 262144 bytes
17:19:50.500859 IP 172.20.1.218 &amp;gt; 172.20.0.60: ICMP echo request, id 24, seq 9, length 64
17:19:50.503170 IP 172.20.0.60 &amp;gt; 172.20.1.218: ICMP echo reply, id 24, seq 9, length 64
17:19:51.502534 IP 172.20.1.218 &amp;gt; 172.20.0.60: ICMP echo request, id 24, seq 10, length 64
17:19:51.503919 IP 172.20.0.60 &amp;gt; 172.20.1.218: ICMP echo reply, id 24, seq 10, length 64
17:19:52.504428 IP 172.20.1.218 &amp;gt; 172.20.0.60: ICMP echo request, id 24, seq 11, length 64
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Having covered native routing, we now take a look at Masquerading.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;4. Masquerading&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;Masquerading&lt;/b&gt; is a common term in Linux networking: it means rewriting internal IP addresses to an IP address reachable from outside. It is essentially a form of &lt;b&gt;NAT (Network Address Translation)&lt;/b&gt;.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the figure below, the pod uses IP 10.10.10.1 and the node uses 192.168.1.1. When the pod communicates externally (google.com), the HTTP request shows the IPv4 source changed to 192.168.1.1 (masquerading) on its way to google.com.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;768&quot; data-origin-height=&quot;472&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/nwMhz/btsPF785iLL/jAg3BD6ukOxXtB7nAXPTK0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/nwMhz/btsPF785iLL/jAg3BD6ukOxXtB7nAXPTK0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/nwMhz/btsPF785iLL/jAg3BD6ukOxXtB7nAXPTK0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FnwMhz%2FbtsPF785iLL%2FjAg3BD6ukOxXtB7nAXPTK0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;768&quot; height=&quot;472&quot; data-origin-width=&quot;768&quot; data-origin-height=&quot;472&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://docs.cilium.io/en/stable/network/concepts/masquerading/&quot;&gt;https://docs.cilium.io/en/stable/network/concepts/masquerading/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Because node IP addresses are routable on the network, Cilium &lt;b&gt;masquerades&lt;/b&gt; the source IP address of all traffic leaving the cluster to the node IP.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The default behavior is to exclude all destinations within the IP allocation CIDR of the local node. The pod IP range can be excluded from masquerading with the &lt;code&gt;ipv4-native-routing-cidr: 10.0.0.0/8&lt;/code&gt; option (or &lt;code&gt;ipv6-native-routing-cidr: fd00::/100&lt;/code&gt; for IPv6 addresses); in that case, no destination within that CIDR is masqueraded.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Masquerading can be implemented eBPF-based (default) or iptables-based (legacy); the eBPF-based implementation is used by default.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# Check
kubectl exec -it -n kube-system ds/cilium -c cilium-agent  -- cilium status | grep Masquerading

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -c cilium-agent  -- cilium status | grep Masquerading
Masquerading:            BPF   [eth0, eth1]   172.20.0.0/16 [IPv4: Enabled, IPv6: Disabled]

# Check the ipv4-native-routing-cidr setting
cilium config view  | grep ipv4-native-routing-cidr

(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium config view  | grep ipv4-native-routing-cidr
ipv4-native-routing-cidr                          172.20.0.0/16&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Verifying the default Masquerading behavior&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;By default, all packets from pods to IP addresses outside the &lt;code&gt;ipv4-native-routing-cidr&lt;/code&gt; range are masqueraded; however, packets destined to other nodes (node IPs), as well as traffic from a pod to the external IP of a node, are not masqueraded.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# Check communication to a node IP
tcpdump -i eth1 icmp -nn

kubectl exec -it curl-pod -- ping 192.168.10.101

# Ping from curl-pod on k8s-ctr to k8s-w1
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get no -owide
NAME      STATUS   ROLES           AGE     VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
k8s-ctr   Ready    control-plane   3h31m   v1.33.2   192.168.10.100   &amp;lt;none&amp;gt;        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27
k8s-w1    Ready    &amp;lt;none&amp;gt;          3h25m   v1.33.2   192.168.10.101   &amp;lt;none&amp;gt;        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get po -owide
NAME                      READY   STATUS    RESTARTS   AGE    IP             NODE      NOMINATED NODE   READINESS GATES
curl-pod                  1/1     Running   0          108m   172.20.1.218   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
webpod-5cd486cdc5-cz54t   1/1     Running   0          117m   172.20.0.60    k8s-w1    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
webpod-5cd486cdc5-x25s9   1/1     Running   0          116m   172.20.1.185   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- ping 192.168.10.101
PING 192.168.10.101 (192.168.10.101) 56(84) bytes of data.
64 bytes from 192.168.10.101: icmp_seq=1 ttl=63 time=5.33 ms
64 bytes from 192.168.10.101: icmp_seq=2 ttl=63 time=1.14 ms
...

# Pod -&amp;gt; node IP is delivered directly (no masquerading)
(⎈|HomeLab:N/A) root@k8s-ctr:~# tcpdump -i eth1 -nn icmp
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), snapshot length 262144 bytes
18:36:44.199725 IP 172.20.1.218 &amp;gt; 192.168.10.101: ICMP echo request, id 30, seq 10, length 64
18:36:44.203845 IP 192.168.10.101 &amp;gt; 172.20.1.218: ICMP echo reply, id 30, seq 10, length 64
18:36:45.201781 IP 172.20.1.218 &amp;gt; 192.168.10.101: ICMP echo request, id 30, seq 11, length 64
18:36:45.203088 IP 192.168.10.101 &amp;gt; 172.20.1.218: ICMP echo reply, id 30, seq 11, length 64&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As mentioned earlier, the lab environment includes a VM called router that holds the 10.10.0.0/16 range on dummy interfaces.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let us check how communication to a node differs from communication to the router in this environment.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# router
ip -br -c -4 addr

root@router:~# ip -br -c a
lo               UNKNOWN        127.0.0.1/8 ::1/128
eth0             UP             10.0.2.15/24 metric 100 fd17:625c:f037:2:a00:27ff:fe6b:69c9/64 fe80::a00:27ff:fe6b:69c9/64
eth1             UP             192.168.10.200/24 fe80::a00:27ff:fed1:1a0d/64
loop1            UNKNOWN        10.10.1.200/24 fe80::4c64:a5ff:fe5d:2386/64
loop2            UNKNOWN        10.10.2.200/24 fe80::7cb5:ff:fefc:b543/64

# k8s-ctr
ip -c route | grep static

(⎈|HomeLab:N/A) root@k8s-ctr:~# ip -c route | grep static
10.10.0.0/16 via 192.168.10.200 dev eth1 proto static

# Use two terminals
[k8s-ctr] tcpdump -i eth1 -nn icmp
[router] tcpdump -i eth1 -nn icmp

# Ping the router eth1 address 192.168.10.200 &amp;gt;&amp;gt; check the source IP!
kubectl exec -it curl-pod -- ping 192.168.10.200
...

# Result: traffic leaves masqueraded as the node IP, 192.168.10.100
root@router:~# tcpdump -i eth1 -nn icmp
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), snapshot length 262144 bytes
18:46:16.251364 IP 192.168.10.100 &amp;gt; 192.168.10.200: ICMP echo request, id 33096, seq 1, length 64
18:46:16.251711 IP 192.168.10.200 &amp;gt; 192.168.10.100: ICMP echo reply, id 33096, seq 1, length 64
18:46:17.251146 IP 192.168.10.100 &amp;gt; 192.168.10.200: ICMP echo request, id 33096, seq 2, length 64
18:46:17.251198 IP 192.168.10.200 &amp;gt; 192.168.10.100: ICMP echo reply, id 33096, seq 2, length 64
18:46:18.252629 IP 192.168.10.100 &amp;gt; 192.168.10.200: ICMP echo request, id 33096, seq 3, length 64
18:46:18.252672 IP 192.168.10.200 &amp;gt; 192.168.10.100: ICMP echo reply, id 33096, seq 3, length 64


# Ping the router dummy interfaces &amp;gt;&amp;gt; check the source IP!
kubectl exec -it curl-pod -- ping 10.10.1.200
kubectl exec -it curl-pod -- ping 10.10.2.200

# Result: the source is the node IP
root@router:~# tcpdump -i eth1 -nn icmp
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), snapshot length 262144 bytes
18:57:02.146562 IP 192.168.10.100 &amp;gt; 10.10.1.200: ICMP echo request, id 53176, seq 1, length 64
18:57:02.146625 IP 10.10.1.200 &amp;gt; 192.168.10.100: ICMP echo reply, id 53176, seq 1, length 64
18:57:08.087465 IP 192.168.10.100 &amp;gt; 10.10.2.200: ICMP echo request, id 65311, seq 1, length 64
18:57:08.087599 IP 10.10.2.200 &amp;gt; 192.168.10.100: ICMP echo reply, id 65311, seq 1, length 64&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The results show that pod-to-pod and pod-to-node traffic is not masqueraded, while traffic to another VM on the same network (192.168.10.200) or to other ranges (10.10.1.200, 10.10.2.200) is masqueraded.&lt;/p&gt;
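&lt;p data-ke-size=&quot;size16&quot;&gt;The observed behavior can be summarized in a small sketch (an illustrative Python approximation using the addresses of this lab; not the actual logic Cilium runs):&lt;/p&gt;

```python
import ipaddress

# Illustrative approximation of the observed masquerading behavior
# (lab values; not Cilium code).
NATIVE_ROUTING_CIDR = ipaddress.ip_network('172.20.0.0/16')  # pod range
CLUSTER_NODE_IPS = {'192.168.10.100', '192.168.10.101'}      # k8s-ctr, k8s-w1
NODE_IP = '192.168.10.100'  # node hosting the pod

def source_ip_seen_by(dst_ip, pod_ip):
    # Pod-to-pod and pod-to-node traffic keeps the pod source IP;
    # anything else leaves masqueraded as the node IP.
    if ipaddress.ip_address(dst_ip) in NATIVE_ROUTING_CIDR:
        return pod_ip
    if dst_ip in CLUSTER_NODE_IPS:
        return pod_ip
    return NODE_IP

print(source_ip_seen_by('172.20.0.60', '172.20.1.218'))     # pod IP kept
print(source_ip_seen_by('192.168.10.200', '172.20.1.218'))  # masqueraded
```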
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Excluding ranges from masquerading with ip-masq-agent&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The eBPF-based ip-masq-agent lets you configure &lt;code&gt;nonMasqueradeCIDRs&lt;/code&gt;, &lt;code&gt;masqLinkLocal&lt;/code&gt;, and &lt;code&gt;masqLinkLocalIPv6&lt;/code&gt;. CIDRs listed under &lt;code&gt;nonMasqueradeCIDRs&lt;/code&gt; are excluded from masquerading.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For example, if pods must reach an internal corporate network without NAT, you can add those ranges to this setting.&lt;/p&gt;
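&lt;p data-ke-size=&quot;size16&quot;&gt;The exception check itself is a simple CIDR membership test. A minimal sketch, using the same two ranges configured later in this lab:&lt;/p&gt;

```python
import ipaddress

# Destinations inside any nonMasqueradeCIDRs entry skip SNAT.
# The CIDR values mirror the lab's ip-masq-agent configuration.
NON_MASQ_CIDRS = [ipaddress.ip_network(c) for c in ("10.10.1.0/24", "10.10.2.0/24")]

def is_non_masq(dst_ip):
    dst = ipaddress.ip_address(dst_ip)
    return any(dst in cidr for cidr in NON_MASQ_CIDRS)

print(is_non_masq("10.10.1.200"))    # True: exempt from masquerading
print(is_non_masq("192.168.10.200")) # False: still masqueraded
```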
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's verify this hands-on.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# Applying these values automatically restarts the cilium DaemonSet
helm upgrade cilium cilium/cilium --namespace kube-system --version 1.17.6 --reuse-values \
--set ipMasqAgent.enabled=true --set ipMasqAgent.config.nonMasqueradeCIDRs='{10.10.1.0/24,10.10.2.0/24}'

# Verify the setting
cilium config view  | grep -i ip-masq

(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium config view  | grep -i ip-masq
enable-ip-masq-agent                              true

# Confirm the ip-masq-agent ConfigMap was created
kubectl get cm -n kube-system ip-masq-agent -o yaml | yq

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get cm -n kube-system ip-masq-agent
NAME            DATA   AGE
ip-masq-agent   1      10m
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get cm -n kube-system ip-masq-agent -oyaml |yq
{
  &quot;apiVersion&quot;: &quot;v1&quot;,
  &quot;data&quot;: {
    &quot;config&quot;: &quot;{\&quot;nonMasqueradeCIDRs\&quot;:[\&quot;10.10.1.0/24\&quot;,\&quot;10.10.2.0/24\&quot;]}&quot;
  },
  &quot;kind&quot;: &quot;ConfigMap&quot;,
  &quot;metadata&quot;: {
    &quot;annotations&quot;: {
      &quot;meta.helm.sh/release-name&quot;: &quot;cilium&quot;,
      &quot;meta.helm.sh/release-namespace&quot;: &quot;kube-system&quot;
    },
    &quot;creationTimestamp&quot;: &quot;2025-08-02T10:16:23Z&quot;,
    &quot;labels&quot;: {
      &quot;app.kubernetes.io/managed-by&quot;: &quot;Helm&quot;
    },
    &quot;name&quot;: &quot;ip-masq-agent&quot;,
    &quot;namespace&quot;: &quot;kube-system&quot;,
    &quot;resourceVersion&quot;: &quot;27704&quot;,
    &quot;uid&quot;: &quot;b7facd6f-78bb-42c0-a91e-c95d9b32edcc&quot;
  }
}

# Verify the entries registered in the BPF map
kubectl -n kube-system exec ds/cilium -c cilium-agent -- cilium-dbg bpf ipmasq list
IP PREFIX/ADDRESS   


(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system exec ds/cilium -c cilium-agent -- cilium-dbg bpf ipmasq list
IP PREFIX/ADDRESS
10.10.1.0/24
10.10.2.0/24
169.254.0.0/16&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's rerun the connectivity test against the router's 10.10.1.200 and 10.10.2.200, which were masqueraded before.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Since the router has no routes for the PodCIDR ranges, add static routes first.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# Static routes on the router: one entry per node's PodCIDR
ip route add 172.20.1.0/24 via 192.168.10.100
ip route add 172.20.0.0/24 via 192.168.10.101
ip -c route | grep 172.20

# Verify
root@router:~# ip route add 172.20.1.0/24 via 192.168.10.100
root@router:~# ip route add 172.20.0.0/24 via 192.168.10.101
root@router:~# ip -c route | grep 172.20
172.20.0.0/24 via 192.168.10.101 dev eth1
172.20.1.0/24 via 192.168.10.100 dev eth1&lt;/code&gt;&lt;/pre&gt;
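&lt;p data-ke-size=&quot;size16&quot;&gt;Why these routes matter can be seen with a minimal longest-prefix-match lookup: without the PodCIDR entries, traffic to 172.20.1.x would fall through to the router's default route instead of returning via the node. The route table below mirrors the static routes configured above.&lt;/p&gt;

```python
import ipaddress

# Minimal longest-prefix-match routing lookup, mirroring the router's table.
ROUTES = {
    ipaddress.ip_network("0.0.0.0/0"): "default-gw",
    ipaddress.ip_network("172.20.0.0/24"): "192.168.10.101",
    ipaddress.ip_network("172.20.1.0/24"): "192.168.10.100",
}

def next_hop(dst_ip):
    dst = ipaddress.ip_address(dst_ip)
    matches = [net for net in ROUTES if dst in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return ROUTES[best]

print(next_hop("172.20.1.218"))  # 192.168.10.100 (back via the k8s-ctr node)
print(next_hop("8.8.8.8"))       # default-gw
```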
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Run the connectivity test.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# Use two terminals
[k8s-ctr] tcpdump -i eth1 -nn icmp
[router] tcpdump -i eth1 -nn icmp

# Ping the router's eth1 (192.168.10.200) and each dummy interface, then check the source IP
kubectl exec -it curl-pod -- ping -c 1 192.168.10.200
kubectl exec -it curl-pod -- ping -c 1 10.10.1.200
kubectl exec -it curl-pod -- ping -c 1 10.10.2.200

# Results
root@router:~# tcpdump -i eth1 -nn icmp
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), snapshot length 262144 bytes
# masqueraded
19:34:13.183112 IP 192.168.10.100 &amp;gt; 192.168.10.200: ICMP echo request, id 55683, seq 1, length 64
19:34:13.183161 IP 192.168.10.200 &amp;gt; 192.168.10.100: ICMP echo reply, id 55683, seq 1, length 64
# not masqueraded
19:34:19.933338 IP 172.20.1.218 &amp;gt; 10.10.1.200: ICMP echo request, id 82, seq 1, length 64
19:34:19.933378 IP 10.10.1.200 &amp;gt; 172.20.1.218: ICMP echo reply, id 82, seq 1, length 64
19:34:25.438848 IP 172.20.1.218 &amp;gt; 10.10.2.200: ICMP echo request, id 88, seq 1, length 64
19:34:25.438899 IP 10.10.2.200 &amp;gt; 172.20.1.218: ICMP echo reply, id 88, seq 1, length 64&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;We confirmed that the ranges configured in ip-masq-agent are no longer masqueraded.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Closing&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In this post we explored Cilium's IPAM, routing, and masquerading, and looked at how Cilium handles pod traffic.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the next post, we will look at how to use NodeLocalDNS with Cilium.&lt;/p&gt;</description>
      <category>Cilium</category>
      <category>cilium</category>
      <category>ipam</category>
      <category>kubernetes</category>
      <category>MASQUERADING</category>
      <category>routing</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/51</guid>
      <comments>https://a-person.tistory.com/51#entry51comment</comments>
      <pubDate>Sat, 2 Aug 2025 21:39:17 +0900</pubDate>
    </item>
    <item>
      <title>[2] Cilium Observability</title>
      <link>https://a-person.tistory.com/50</link>
<description>&lt;p data-ke-size=&quot;size16&quot;&gt;In this post, we will look at how Cilium can improve network observability.&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Contents&lt;/h2&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;Lab environment setup&lt;/li&gt;
&lt;li&gt;Introducing and installing Hubble&lt;/li&gt;
&lt;li&gt;Network observability with Hubble&lt;/li&gt;
&lt;li&gt;Integrating Cilium/Hubble with Prometheus&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;1. Lab environment setup&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;First, set up the lab environment with Vagrant.&lt;/p&gt;
&lt;pre class=&quot;awk&quot;&gt;&lt;code&gt;mkdir cilium-lab &amp;amp;&amp;amp; cd cilium-lab

curl -O https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/cilium-study/2w/Vagrantfile

vagrant up&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Once vagrant finishes, the nodes below are created with the Cilium CNI plugin already installed.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's look at the baseline environment first.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get no
NAME      STATUS   ROLES           AGE     VERSION
k8s-ctr   Ready    control-plane   11m     v1.33.2
k8s-w1    Ready    &amp;lt;none&amp;gt;          7m13s   v1.33.2
k8s-w2    Ready    &amp;lt;none&amp;gt;          2m30s   v1.33.2
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl cluster-info dump | grep -m 2 -E &quot;cluster-cidr|service-cluster-ip-range&quot;
                            &quot;--service-cluster-ip-range=10.96.0.0/16&quot;,
                            &quot;--cluster-cidr=10.244.0.0/16&quot;,
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl describe cm -n kube-system kubeadm-config
Name:         kubeadm-config
Namespace:    kube-system
Labels:       &amp;lt;none&amp;gt;
Annotations:  &amp;lt;none&amp;gt;

Data
====
ClusterConfiguration:
----
apiServer: {}
apiVersion: kubeadm.k8s.io/v1beta4
caCertificateValidityPeriod: 87600h0m0s
certificateValidityPeriod: 8760h0m0s
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
encryptionAlgorithm: RSA-2048
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.k8s.io
kind: ClusterConfiguration
kubernetesVersion: v1.33.2
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/16
proxy:
  disabled: true
scheduler: {}



BinaryData
====

Events:  &amp;lt;none&amp;gt;
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl describe cm -n kube-system kubelet-config
Name:         kubelet-config
Namespace:    kube-system
Labels:       &amp;lt;none&amp;gt;
Annotations:  kubeadm.kubernetes.io/component-config.hash: sha256:0ff07274ab31cc8c0f9d989e90179a90b6e9b633c8f3671993f44185a0791127

Data
====
kubelet:
----
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerRuntimeEndpoint: &quot;&quot;
cpuManagerReconcilePeriod: 0s
crashLoopBackOff: {}
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMaximumGCAge: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging:
  flushFrequency: 0
  options:
    json:
      infoBufferSize: &quot;0&quot;
    text:
      infoBufferSize: &quot;0&quot;
  verbosity: 0
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
resolvConf: /run/systemd/resolve/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s



BinaryData
====

Events:  &amp;lt;none&amp;gt;

# Each node's podCIDR matches --cluster-cidr=10.244.0.0/16 from cluster-info above
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{&quot;\t&quot;}{.spec.podCIDR}{&quot;\n&quot;}{end}'
k8s-ctr 10.244.0.0/24
k8s-w1  10.244.2.0/24
k8s-w2  10.244.1.0/24

# CiliumNode resources instead allocate per-node CIDRs from 172.20.0.0/16
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumnode -o json | grep podCIDRs -A2
                    &quot;podCIDRs&quot;: [
                        &quot;172.20.0.0/24&quot;
                    ],
--
                    &quot;podCIDRs&quot;: [
                        &quot;172.20.1.0/24&quot;
                    ],
--
                    &quot;podCIDRs&quot;: [
                        &quot;172.20.2.0/24&quot;
                    ],

# Pods are actually using the CiliumNode-defined ranges
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -A -owide
NAMESPACE     NAME                               READY   STATUS    RESTARTS        AGE   IP               NODE      NOMINATED NODE   READINESS GATES
kube-system   cilium-86lfc                       1/1     Running   0               37m   192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   cilium-envoy-j6wtm                 1/1     Running   0               33m   192.168.10.101   k8s-w1    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   cilium-envoy-lcrfc                 1/1     Running   0               37m   192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   cilium-envoy-tmrtv                 1/1     Running   0               28m   192.168.10.102   k8s-w2    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   cilium-lz7jx                       1/1     Running   0               33m   192.168.10.101   k8s-w1    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   cilium-operator-5bc66f5b9b-wqft9   1/1     Running   3 (6m31s ago)   37m   192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   cilium-v2fp4                       1/1     Running   0               28m   192.168.10.102   k8s-w2    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   coredns-674b8bbfcf-d87dt           1/1     Running   0               37m   172.20.0.126     k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   coredns-674b8bbfcf-gmrml           1/1     Running   0               37m   172.20.0.242     k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   etcd-k8s-ctr                       1/1     Running   0               37m   192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   kube-apiserver-k8s-ctr             1/1     Running   0               37m   192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   kube-controller-manager-k8s-ctr    1/1     Running   4 (6m32s ago)   37m   192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   kube-scheduler-k8s-ctr             1/1     Running   4 (6m31s ago)   37m   192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;

(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium status
    /&amp;macr;&amp;macr;\
 /&amp;macr;&amp;macr;\__/&amp;macr;&amp;macr;\    Cilium:             OK
 \__/&amp;macr;&amp;macr;\__/    Operator:           OK
 /&amp;macr;&amp;macr;\__/&amp;macr;&amp;macr;\    Envoy DaemonSet:    OK
 \__/&amp;macr;&amp;macr;\__/    Hubble Relay:       disabled
    \__/       ClusterMesh:        disabled

DaemonSet              cilium                   Desired: 3, Ready: 3/3, Available: 3/3
DaemonSet              cilium-envoy             Desired: 3, Ready: 3/3, Available: 3/3
Deployment             cilium-operator          Desired: 1, Ready: 1/1, Available: 1/1
Containers:            cilium                   Running: 3
                       cilium-envoy             Running: 3
                       cilium-operator          Running: 1
                       clustermesh-apiserver
                       hubble-relay
Cluster Pods:          2/2 managed by Cilium
Helm chart version:    1.17.6
Image versions         cilium             quay.io/cilium/cilium:v1.17.6@sha256:544de3d4fed7acba72758413812780a4972d47c39035f2a06d6145d8644a3353: 3
                       cilium-envoy       quay.io/cilium/cilium-envoy:v1.33.4-1752151664-7c2edb0b44cf95f326d628b837fcdd845102ba68@sha256:318eff387835ca2717baab42a84f35a83a5f9e7d519253df87269f80b9ff0171: 3
                       cilium-operator    quay.io/cilium/operator-generic:v1.17.6@sha256:91ac3bf7be7bed30e90218f219d4f3062a63377689ee7246062fa0cc3839d096: 1&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
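&lt;p data-ke-size=&quot;size16&quot;&gt;The dump above shows two competing pod ranges: kubeadm's &lt;code&gt;--cluster-cidr&lt;/code&gt; (10.244.0.0/16) and Cilium's cluster-pool (172.20.0.0/16). A quick membership check, using the coredns IP from the pod listing, confirms pods draw from the Cilium pool:&lt;/p&gt;

```python
import ipaddress

# Pod IPs come from Cilium's cluster-pool, not the kubeadm podSubnet.
KUBEADM_CIDR = ipaddress.ip_network("10.244.0.0/16")
CILIUM_POOL = ipaddress.ip_network("172.20.0.0/16")

coredns_ip = ipaddress.ip_address("172.20.0.126")  # from `kubectl get pod -A -owide`
print(coredns_ip in KUBEADM_CIDR)  # False
print(coredns_ip in CILIUM_POOL)   # True
```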
&lt;h2 data-ke-size=&quot;size26&quot;&gt;2. Introducing and installing Hubble&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As covered in the previous post, Cilium uses eBPF to deliver better performance and visibility in Kubernetes than traditional CNI plugins.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Cilium ships with a built-in observability component called Hubble.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;Hubble&lt;/b&gt; is a fully distributed networking and security observability platform built on top of Cilium and eBPF. It is designed to provide deep, transparent visibility into service-to-service communication and the behavior of the network infrastructure. Because it is eBPF-based, Hubble offers dynamically configurable visibility and surfaces detailed information while keeping performance overhead to a minimum.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For more on Hubble, see the documentation below.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://docs.cilium.io/en/stable/overview/intro/#intro&quot;&gt;https://docs.cilium.io/en/stable/overview/intro/#intro&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Hubble components&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://docs.cilium.io/en/stable/overview/component-overview/#hubble&quot;&gt;https://docs.cilium.io/en/stable/overview/component-overview/#hubble&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;Server&lt;/b&gt;: the &lt;code&gt;Hubble Server&lt;/code&gt; runs on each node and &lt;b&gt;collects eBPF-based visibility data from Cilium&lt;/b&gt;. For performance it is &lt;b&gt;embedded in the Cilium agent&lt;/b&gt;, exposing flow data and Prometheus metrics over a gRPC service.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;Relay&lt;/b&gt;: &lt;code&gt;hubble-relay&lt;/code&gt; is a standalone component that is &lt;b&gt;aware of all Hubble servers in the cluster&lt;/b&gt; and connects to each server's gRPC API to provide &lt;b&gt;cluster-wide visibility&lt;/b&gt;.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;Client (CLI)&lt;/b&gt;: the &lt;code&gt;hubble&lt;/code&gt; CLI is a command-line tool that connects to the gRPC API of &lt;code&gt;hubble-relay&lt;/code&gt; or a local server to query flow events.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;Graphical UI (GUI)&lt;/b&gt;: &lt;code&gt;hubble-ui&lt;/code&gt; is a graphical user interface that builds on relay-based visibility to display &lt;b&gt;service dependencies&lt;/b&gt; and &lt;b&gt;connectivity maps&lt;/b&gt;.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;By default, the Hubble API served by each Hubble Server is embedded in the Cilium agent and scoped to that node, so it provides node-level network insight. Deploying Hubble Relay, which connects to every Hubble Server, extends this to cluster-wide visibility.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;On top of that, the hubble CLI and Hubble UI are provided as user interfaces.&lt;/p&gt;
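&lt;p data-ke-size=&quot;size16&quot;&gt;As a taste of what the CLI exposes: &lt;code&gt;hubble observe -o json&lt;/code&gt; emits one flow object per line, which makes ad-hoc analysis easy. The field names below follow Hubble's flow format but should be treated as an assumed schema for illustration; the flow records here are made up.&lt;/p&gt;

```python
import json
from collections import Counter

# Three hypothetical newline-delimited flow records, as hubble observe -o json
# would emit them (schema assumed for illustration).
sample = """
{"flow": {"verdict": "FORWARDED", "source": {"namespace": "default"}}}
{"flow": {"verdict": "FORWARDED", "source": {"namespace": "kube-system"}}}
{"flow": {"verdict": "DROPPED", "source": {"namespace": "default"}}}
"""

# Count flows by verdict, e.g. to spot drops at a glance.
verdicts = Counter(
    json.loads(line)["flow"]["verdict"]
    for line in sample.strip().splitlines()
)
print(verdicts)  # Counter({'FORWARDED': 2, 'DROPPED': 1})
```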
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Installing Hubble&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Before installing Hubble, check the current state as follows.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# Cilium status
cilium status
cilium config view | grep -i hubble
kubectl get cm -n kube-system cilium-config -o json | jq

# Check secrets and listening ports
kubectl get secret -n kube-system | grep -iE 'cilium-ca|hubble'
ss -tnlp | grep -iE 'cilium|hubble' | tee before.txt

# Results
(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium status
    /&amp;macr;&amp;macr;\
 /&amp;macr;&amp;macr;\__/&amp;macr;&amp;macr;\    Cilium:             OK
 \__/&amp;macr;&amp;macr;\__/    Operator:           OK
 /&amp;macr;&amp;macr;\__/&amp;macr;&amp;macr;\    Envoy DaemonSet:    OK
 \__/&amp;macr;&amp;macr;\__/    Hubble Relay:       disabled
    \__/       ClusterMesh:        disabled

DaemonSet              cilium                   Desired: 3, Ready: 3/3, Available: 3/3
DaemonSet              cilium-envoy             Desired: 3, Ready: 3/3, Available: 3/3
Deployment             cilium-operator          Desired: 1, Ready: 1/1, Available: 1/1
Containers:            cilium                   Running: 3
                       cilium-envoy             Running: 3
                       cilium-operator          Running: 1
                       clustermesh-apiserver
                       hubble-relay
Cluster Pods:          2/2 managed by Cilium
Helm chart version:    1.17.6
Image versions         cilium             quay.io/cilium/cilium:v1.17.6@sha256:544de3d4fed7acba72758413812780a4972d47c39035f2a06d6145d8644a3353: 3
                       cilium-envoy       quay.io/cilium/cilium-envoy:v1.33.4-1752151664-7c2edb0b44cf95f326d628b837fcdd845102ba68@sha256:318eff387835ca2717baab42a84f35a83a5f9e7d519253df87269f80b9ff0171: 3
                       cilium-operator    quay.io/cilium/operator-generic:v1.17.6@sha256:91ac3bf7be7bed30e90218f219d4f3062a63377689ee7246062fa0cc3839d096: 1
(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium config view | grep -i hubble
enable-hubble                                     false
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get cm -n kube-system cilium-config -o json | jq
{
  &quot;apiVersion&quot;: &quot;v1&quot;,
  &quot;data&quot;: {
    &quot;agent-not-ready-taint-key&quot;: &quot;node.cilium.io/agent-not-ready&quot;,
    &quot;arping-refresh-period&quot;: &quot;30s&quot;,
    &quot;auto-direct-node-routes&quot;: &quot;true&quot;,
    &quot;bpf-distributed-lru&quot;: &quot;false&quot;,
    &quot;bpf-events-drop-enabled&quot;: &quot;true&quot;,
    &quot;bpf-events-policy-verdict-enabled&quot;: &quot;true&quot;,
    &quot;bpf-events-trace-enabled&quot;: &quot;true&quot;,
    &quot;bpf-lb-acceleration&quot;: &quot;disabled&quot;,
    &quot;bpf-lb-algorithm-annotation&quot;: &quot;false&quot;,
    &quot;bpf-lb-external-clusterip&quot;: &quot;false&quot;,
    &quot;bpf-lb-map-max&quot;: &quot;65536&quot;,
    &quot;bpf-lb-mode-annotation&quot;: &quot;false&quot;,
    &quot;bpf-lb-sock&quot;: &quot;false&quot;,
    &quot;bpf-lb-source-range-all-types&quot;: &quot;false&quot;,
    &quot;bpf-map-dynamic-size-ratio&quot;: &quot;0.0025&quot;,
    &quot;bpf-policy-map-max&quot;: &quot;16384&quot;,
    &quot;bpf-root&quot;: &quot;/sys/fs/bpf&quot;,
    &quot;cgroup-root&quot;: &quot;/run/cilium/cgroupv2&quot;,
    &quot;cilium-endpoint-gc-interval&quot;: &quot;5m0s&quot;,
    &quot;cluster-id&quot;: &quot;0&quot;,
    &quot;cluster-name&quot;: &quot;default&quot;,
    &quot;cluster-pool-ipv4-cidr&quot;: &quot;172.20.0.0/16&quot;,
    &quot;cluster-pool-ipv4-mask-size&quot;: &quot;24&quot;,
    &quot;clustermesh-enable-endpoint-sync&quot;: &quot;false&quot;,
    &quot;clustermesh-enable-mcs-api&quot;: &quot;false&quot;,
    &quot;cni-exclusive&quot;: &quot;true&quot;,
    &quot;cni-log-file&quot;: &quot;/var/run/cilium/cilium-cni.log&quot;,
    &quot;custom-cni-conf&quot;: &quot;false&quot;,
    &quot;datapath-mode&quot;: &quot;veth&quot;,
    &quot;debug&quot;: &quot;true&quot;,
    &quot;debug-verbose&quot;: &quot;&quot;,
    &quot;default-lb-service-ipam&quot;: &quot;lbipam&quot;,
    &quot;direct-routing-skip-unreachable&quot;: &quot;false&quot;,
    &quot;dnsproxy-enable-transparent-mode&quot;: &quot;true&quot;,
    &quot;dnsproxy-socket-linger-timeout&quot;: &quot;10&quot;,
    &quot;egress-gateway-reconciliation-trigger-interval&quot;: &quot;1s&quot;,
    &quot;enable-auto-protect-node-port-range&quot;: &quot;true&quot;,
    &quot;enable-bpf-clock-probe&quot;: &quot;false&quot;,
    &quot;enable-bpf-masquerade&quot;: &quot;true&quot;,
    &quot;enable-endpoint-health-checking&quot;: &quot;false&quot;,
    &quot;enable-endpoint-lockdown-on-policy-overflow&quot;: &quot;false&quot;,
    &quot;enable-endpoint-routes&quot;: &quot;true&quot;,
    &quot;enable-experimental-lb&quot;: &quot;false&quot;,
    &quot;enable-health-check-loadbalancer-ip&quot;: &quot;false&quot;,
    &quot;enable-health-check-nodeport&quot;: &quot;true&quot;,
    &quot;enable-health-checking&quot;: &quot;false&quot;,
    &quot;enable-hubble&quot;: &quot;false&quot;,
    &quot;enable-internal-traffic-policy&quot;: &quot;true&quot;,
    &quot;enable-ipv4&quot;: &quot;true&quot;,
    &quot;enable-ipv4-big-tcp&quot;: &quot;false&quot;,
    &quot;enable-ipv4-masquerade&quot;: &quot;true&quot;,
    &quot;enable-ipv6&quot;: &quot;false&quot;,
    &quot;enable-ipv6-big-tcp&quot;: &quot;false&quot;,
    &quot;enable-ipv6-masquerade&quot;: &quot;true&quot;,
    &quot;enable-k8s-networkpolicy&quot;: &quot;true&quot;,
    &quot;enable-k8s-terminating-endpoint&quot;: &quot;true&quot;,
    &quot;enable-l2-neigh-discovery&quot;: &quot;true&quot;,
    &quot;enable-l7-proxy&quot;: &quot;true&quot;,
    &quot;enable-lb-ipam&quot;: &quot;true&quot;,
    &quot;enable-local-redirect-policy&quot;: &quot;false&quot;,
    &quot;enable-masquerade-to-route-source&quot;: &quot;false&quot;,
    &quot;enable-metrics&quot;: &quot;true&quot;,
    &quot;enable-node-selector-labels&quot;: &quot;false&quot;,
    &quot;enable-non-default-deny-policies&quot;: &quot;true&quot;,
    &quot;enable-policy&quot;: &quot;default&quot;,
    &quot;enable-policy-secrets-sync&quot;: &quot;true&quot;,
    &quot;enable-runtime-device-detection&quot;: &quot;true&quot;,
    &quot;enable-sctp&quot;: &quot;false&quot;,
    &quot;enable-source-ip-verification&quot;: &quot;true&quot;,
    &quot;enable-svc-source-range-check&quot;: &quot;true&quot;,
    &quot;enable-tcx&quot;: &quot;true&quot;,
    &quot;enable-vtep&quot;: &quot;false&quot;,
    &quot;enable-well-known-identities&quot;: &quot;false&quot;,
    &quot;enable-xt-socket-fallback&quot;: &quot;true&quot;,
    &quot;envoy-access-log-buffer-size&quot;: &quot;4096&quot;,
    &quot;envoy-base-id&quot;: &quot;0&quot;,
    &quot;envoy-keep-cap-netbindservice&quot;: &quot;false&quot;,
    &quot;external-envoy-proxy&quot;: &quot;true&quot;,
    &quot;health-check-icmp-failure-threshold&quot;: &quot;3&quot;,
    &quot;http-retry-count&quot;: &quot;3&quot;,
    &quot;identity-allocation-mode&quot;: &quot;crd&quot;,
    &quot;identity-gc-interval&quot;: &quot;15m0s&quot;,
    &quot;identity-heartbeat-timeout&quot;: &quot;30m0s&quot;,
    &quot;install-no-conntrack-iptables-rules&quot;: &quot;true&quot;,
    &quot;ipam&quot;: &quot;cluster-pool&quot;,
    &quot;ipam-cilium-node-update-rate&quot;: &quot;15s&quot;,
    &quot;iptables-random-fully&quot;: &quot;false&quot;,
    &quot;ipv4-native-routing-cidr&quot;: &quot;172.20.0.0/16&quot;,
    &quot;k8s-require-ipv4-pod-cidr&quot;: &quot;false&quot;,
    &quot;k8s-require-ipv6-pod-cidr&quot;: &quot;false&quot;,
    &quot;kube-proxy-replacement&quot;: &quot;true&quot;,
    &quot;kube-proxy-replacement-healthz-bind-address&quot;: &quot;&quot;,
    &quot;max-connected-clusters&quot;: &quot;255&quot;,
    &quot;mesh-auth-enabled&quot;: &quot;true&quot;,
    &quot;mesh-auth-gc-interval&quot;: &quot;5m0s&quot;,
    &quot;mesh-auth-queue-size&quot;: &quot;1024&quot;,
    &quot;mesh-auth-rotated-identities-queue-size&quot;: &quot;1024&quot;,
    &quot;monitor-aggregation&quot;: &quot;medium&quot;,
    &quot;monitor-aggregation-flags&quot;: &quot;all&quot;,
    &quot;monitor-aggregation-interval&quot;: &quot;5s&quot;,
    &quot;nat-map-stats-entries&quot;: &quot;32&quot;,
    &quot;nat-map-stats-interval&quot;: &quot;30s&quot;,
    &quot;node-port-bind-protection&quot;: &quot;true&quot;,
    &quot;nodeport-addresses&quot;: &quot;&quot;,
    &quot;nodes-gc-interval&quot;: &quot;5m0s&quot;,
    &quot;operator-api-serve-addr&quot;: &quot;127.0.0.1:9234&quot;,
    &quot;operator-prometheus-serve-addr&quot;: &quot;:9963&quot;,
    &quot;policy-cidr-match-mode&quot;: &quot;&quot;,
    &quot;policy-secrets-namespace&quot;: &quot;cilium-secrets&quot;,
    &quot;policy-secrets-only-from-secrets-namespace&quot;: &quot;true&quot;,
    &quot;preallocate-bpf-maps&quot;: &quot;false&quot;,
    &quot;procfs&quot;: &quot;/host/proc&quot;,
    &quot;proxy-connect-timeout&quot;: &quot;2&quot;,
    &quot;proxy-idle-timeout-seconds&quot;: &quot;60&quot;,
    &quot;proxy-initial-fetch-timeout&quot;: &quot;30&quot;,
    &quot;proxy-max-concurrent-retries&quot;: &quot;128&quot;,
    &quot;proxy-max-connection-duration-seconds&quot;: &quot;0&quot;,
    &quot;proxy-max-requests-per-connection&quot;: &quot;0&quot;,
    &quot;proxy-xff-num-trusted-hops-egress&quot;: &quot;0&quot;,
    &quot;proxy-xff-num-trusted-hops-ingress&quot;: &quot;0&quot;,
    &quot;remove-cilium-node-taints&quot;: &quot;true&quot;,
    &quot;routing-mode&quot;: &quot;native&quot;,
    &quot;service-no-backend-response&quot;: &quot;reject&quot;,
    &quot;set-cilium-is-up-condition&quot;: &quot;true&quot;,
    &quot;set-cilium-node-taints&quot;: &quot;true&quot;,
    &quot;synchronize-k8s-nodes&quot;: &quot;true&quot;,
    &quot;tofqdns-dns-reject-response-code&quot;: &quot;refused&quot;,
    &quot;tofqdns-enable-dns-compression&quot;: &quot;true&quot;,
    &quot;tofqdns-endpoint-max-ip-per-hostname&quot;: &quot;1000&quot;,
    &quot;tofqdns-idle-connection-grace-period&quot;: &quot;0s&quot;,
    &quot;tofqdns-max-deferred-connection-deletes&quot;: &quot;10000&quot;,
    &quot;tofqdns-proxy-response-max-delay&quot;: &quot;100ms&quot;,
    &quot;tunnel-protocol&quot;: &quot;vxlan&quot;,
    &quot;tunnel-source-port-range&quot;: &quot;0-0&quot;,
    &quot;unmanaged-pod-watcher-interval&quot;: &quot;15&quot;,
    &quot;vtep-cidr&quot;: &quot;&quot;,
    &quot;vtep-endpoint&quot;: &quot;&quot;,
    &quot;vtep-mac&quot;: &quot;&quot;,
    &quot;vtep-mask&quot;: &quot;&quot;,
    &quot;write-cni-conf-when-ready&quot;: &quot;/host/etc/cni/net.d/05-cilium.conflist&quot;
  },
  &quot;kind&quot;: &quot;ConfigMap&quot;,
  &quot;metadata&quot;: {
    &quot;annotations&quot;: {
      &quot;meta.helm.sh/release-name&quot;: &quot;cilium&quot;,
      &quot;meta.helm.sh/release-namespace&quot;: &quot;kube-system&quot;
    },
    &quot;creationTimestamp&quot;: &quot;2025-07-26T10:31:51Z&quot;,
    &quot;labels&quot;: {
      &quot;app.kubernetes.io/managed-by&quot;: &quot;Helm&quot;
    },
    &quot;name&quot;: &quot;cilium-config&quot;,
    &quot;namespace&quot;: &quot;kube-system&quot;,
    &quot;resourceVersion&quot;: &quot;412&quot;,
    &quot;uid&quot;: &quot;2ec939f1-16fe-48a4-a536-5d990b4030f9&quot;
  }
}

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get secret -n kube-system | grep -iE 'cilium-ca|hubble'

# Only cilium-related ports are open at this point
(⎈|HomeLab:N/A) root@k8s-ctr:~# ss -tnlp | grep -iE 'cilium|hubble' | tee before.txt
LISTEN 0      4096          0.0.0.0:9964       0.0.0.0:*    users:((&quot;cilium-envoy&quot;,pid=5464,fd=25))
LISTEN 0      4096          0.0.0.0:9964       0.0.0.0:*    users:((&quot;cilium-envoy&quot;,pid=5464,fd=24))
LISTEN 0      4096        127.0.0.1:9879       0.0.0.0:*    users:((&quot;cilium-agent&quot;,pid=5643,fd=47))
LISTEN 0      4096        127.0.0.1:9878       0.0.0.0:*    users:((&quot;cilium-envoy&quot;,pid=5464,fd=27))
LISTEN 0      4096        127.0.0.1:9878       0.0.0.0:*    users:((&quot;cilium-envoy&quot;,pid=5464,fd=26))
LISTEN 0      4096        127.0.0.1:9891       0.0.0.0:*    users:((&quot;cilium-operator&quot;,pid=8103,fd=6))
LISTEN 0      4096        127.0.0.1:9890       0.0.0.0:*    users:((&quot;cilium-agent&quot;,pid=5643,fd=6))
LISTEN 0      4096        127.0.0.1:37563      0.0.0.0:*    users:((&quot;cilium-agent&quot;,pid=5643,fd=42))
LISTEN 0      4096        127.0.0.1:9234       0.0.0.0:*    users:((&quot;cilium-operator&quot;,pid=8103,fd=9))
LISTEN 0      4096                *:9963             *:*    users:((&quot;cilium-operator&quot;,pid=8103,fd=7))&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Hubble can be installed either with the Cilium CLI or with Helm.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;참고: &lt;a href=&quot;https://docs.cilium.io/en/stable/observability/hubble/setup/#enable-hubble-in-cilium&quot;&gt;https://docs.cilium.io/en/stable/observability/hubble/setup/#enable-hubble-in-cilium&lt;/a&gt;&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# Option 1: Helm - enable Hubble with Relay, the UI (exposed via NodePort 31234),
# static event export to /var/run/cilium/hubble/events.log, and metrics.
# Note: comments must not be placed after a trailing backslash or between
# continued lines, as they would break the line continuation.
helm upgrade cilium cilium/cilium --namespace kube-system --reuse-values \
--set hubble.enabled=true \
--set hubble.relay.enabled=true \
--set hubble.ui.enabled=true \
--set hubble.ui.service.type=NodePort \
--set hubble.ui.service.nodePort=31234 \
--set hubble.export.static.enabled=true \
--set hubble.export.static.filePath=/var/run/cilium/hubble/events.log \
--set prometheus.enabled=true \
--set operator.prometheus.enabled=true \
--set hubble.metrics.enableOpenMetrics=true \
--set hubble.metrics.enabled=&quot;{dns,drop,tcp,flow,port-distribution,icmp,httpV2:exemplars=true;labelsContext=source_ip\,source_namespace\,source_workload\,destination_ip\,destination_namespace\,destination_workload\,traffic_direction}&quot;

# Option 2: Cilium CLI - enable Hubble (optionally with the UI)
cilium hubble enable
cilium hubble enable --ui&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Here, we will install it with Helm.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Run the installation
(⎈|HomeLab:N/A) root@k8s-ctr:~# helm upgrade cilium cilium/cilium --namespace kube-system --reuse-values \
--set hubble.enabled=true \
--set hubble.relay.enabled=true \
--set hubble.ui.enabled=true \
--set hubble.ui.service.type=NodePort \
--set hubble.ui.service.nodePort=31234 \
--set hubble.export.static.enabled=true \
--set hubble.export.static.filePath=/var/run/cilium/hubble/events.log \
--set prometheus.enabled=true \
--set operator.prometheus.enabled=true \
--set hubble.metrics.enableOpenMetrics=true \
--set hubble.metrics.enabled=&quot;{dns,drop,tcp,flow,port-distribution,icmp,httpV2:exemplars=true;labelsContext=source_ip\,source_namespace\,source_workload\,destination_ip\,destination_namespace\,destination_workload\,traffic_direction}&quot;
Release &quot;cilium&quot; has been upgraded. Happy Helming!
NAME: cilium
LAST DEPLOYED: Sat Jul 26 21:02:57 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
You have successfully installed Cilium with Hubble Relay and Hubble UI.

Your release version is 1.17.6.

For any further help, visit https://docs.cilium.io/en/v1.17/gettinghelp

# cilium status now shows Hubble-related information
(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium status
    /&amp;macr;&amp;macr;\
 /&amp;macr;&amp;macr;\__/&amp;macr;&amp;macr;\    Cilium:             OK
 \__/&amp;macr;&amp;macr;\__/    Operator:           OK
 /&amp;macr;&amp;macr;\__/&amp;macr;&amp;macr;\    Envoy DaemonSet:    OK
 \__/&amp;macr;&amp;macr;\__/    Hubble Relay:       OK
    \__/       ClusterMesh:        disabled

DaemonSet              cilium                   Desired: 3, Ready: 3/3, Available: 3/3
DaemonSet              cilium-envoy             Desired: 3, Ready: 3/3, Available: 3/3
Deployment             cilium-operator          Desired: 1, Ready: 1/1, Available: 1/1
Deployment             hubble-relay             Desired: 1, Ready: 1/1, Available: 1/1
Deployment             hubble-ui                Desired: 1, Ready: 1/1, Available: 1/1
Containers:            cilium                   Running: 3
                       cilium-envoy             Running: 3
                       cilium-operator          Running: 1
                       clustermesh-apiserver
                       hubble-relay             Running: 1
                       hubble-ui                Running: 1
Cluster Pods:          4/4 managed by Cilium
Helm chart version:    1.17.6
Image versions         cilium             quay.io/cilium/cilium:v1.17.6@sha256:544de3d4fed7acba72758413812780a4972d47c39035f2a06d6145d8644a3353: 3
                       cilium-envoy       quay.io/cilium/cilium-envoy:v1.33.4-1752151664-7c2edb0b44cf95f326d628b837fcdd845102ba68@sha256:318eff387835ca2717baab42a84f35a83a5f9e7d519253df87269f80b9ff0171: 3
                       cilium-operator    quay.io/cilium/operator-generic:v1.17.6@sha256:91ac3bf7be7bed30e90218f219d4f3062a63377689ee7246062fa0cc3839d096: 1
                       hubble-relay       quay.io/cilium/hubble-relay:v1.17.6@sha256:7d17ec10b3d37341c18ca56165b2f29a715cb8ee81311fd07088d8bf68c01e60: 1
                       hubble-ui          quay.io/cilium/hubble-ui-backend:v0.13.2@sha256:a034b7e98e6ea796ed26df8f4e71f83fc16465a19d166eff67a03b822c0bfa15: 1
                       hubble-ui          quay.io/cilium/hubble-ui:v0.13.2@sha256:9e37c1296b802830834cc87342a9182ccbb71ffebb711971e849221bd9d59392: 1


# Check the Hubble settings in cilium config and in the ConfigMap
cilium config view | grep -i hubble
kubectl get cm -n kube-system cilium-config -o json | grep -i hubble

(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium config view | grep -i hubble
enable-hubble                                     true
enable-hubble-open-metrics                        true
hubble-disable-tls                                false
hubble-export-allowlist
hubble-export-denylist
hubble-export-fieldmask
hubble-export-file-max-backups                    5
hubble-export-file-max-size-mb                    10
hubble-export-file-path                           /var/run/cilium/hubble/events.log
hubble-listen-address                             :4244
hubble-metrics                                    dns drop tcp flow port-distribution icmp httpV2:exemplars=true;labelsContext=source_ip,source_namespace,source_workload,destination_ip,destination_namespace,destination_workload,traffic_direction
hubble-metrics-server                             :9965
hubble-metrics-server-enable-tls                  false
hubble-socket-path                                /var/run/cilium/hubble.sock
hubble-tls-cert-file                              /var/lib/cilium/tls/hubble/server.crt
hubble-tls-client-ca-files                        /var/lib/cilium/tls/hubble/client-ca.crt
hubble-tls-key-file                               /var/lib/cilium/tls/hubble/server.key
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get cm -n kube-system cilium-config -o json | grep -i hubble
        &quot;enable-hubble&quot;: &quot;true&quot;,
        &quot;enable-hubble-open-metrics&quot;: &quot;true&quot;,
        &quot;hubble-disable-tls&quot;: &quot;false&quot;,
        &quot;hubble-export-allowlist&quot;: &quot;&quot;,
        &quot;hubble-export-denylist&quot;: &quot;&quot;,
        &quot;hubble-export-fieldmask&quot;: &quot;&quot;,
        &quot;hubble-export-file-max-backups&quot;: &quot;5&quot;,
        &quot;hubble-export-file-max-size-mb&quot;: &quot;10&quot;,
        &quot;hubble-export-file-path&quot;: &quot;/var/run/cilium/hubble/events.log&quot;,
        &quot;hubble-listen-address&quot;: &quot;:4244&quot;,
        &quot;hubble-metrics&quot;: &quot;dns drop tcp flow port-distribution icmp httpV2:exemplars=true;labelsContext=source_ip,source_namespace,source_workload,destination_ip,destination_namespace,destination_workload,traffic_direction&quot;,
        &quot;hubble-metrics-server&quot;: &quot;:9965&quot;,
        &quot;hubble-metrics-server-enable-tls&quot;: &quot;false&quot;,
        &quot;hubble-socket-path&quot;: &quot;/var/run/cilium/hubble.sock&quot;,
        &quot;hubble-tls-cert-file&quot;: &quot;/var/lib/cilium/tls/hubble/server.crt&quot;,
        &quot;hubble-tls-client-ca-files&quot;: &quot;/var/lib/cilium/tls/hubble/client-ca.crt&quot;,
        &quot;hubble-tls-key-file&quot;: &quot;/var/lib/cilium/tls/hubble/server.key&quot;,
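
# (Sketch, sample line assumed) extract a single key's value from the ConfigMap JSON without jq
LINE='        "hubble-metrics-server": ":9965",'
printf '%s\n' "$LINE" | sed -n 's|.*: "\(.*\)",|\1|p'
:9965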

kubectl get secret -n kube-system | grep -iE 'cilium-ca|hubble'

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get secret -n kube-system | grep -iE 'cilium-ca|hubble'
cilium-ca                      Opaque                          2      2m37s
hubble-relay-client-certs      kubernetes.io/tls               3      2m37s
hubble-server-certs            kubernetes.io/tls               3      2m37s

# Enabling Hubble requires the TCP port 4244 to be open on all nodes running Cilium.
ss -tnlp | grep -iE 'cilium|hubble' | tee after.txt
vi -d before.txt after.txt # compare the two

# The following ports were added on cilium-agent (Hubble is embedded in the Cilium agent)
LISTEN 0      4096                *:9962             *:*    users:((&quot;cilium-agent&quot;,pid=9318,fd=7))
LISTEN 0      4096                *:9965             *:*    users:((&quot;cilium-agent&quot;,pid=9318,fd=39))
LISTEN 0      4096                *:4244             *:*    users:((&quot;cilium-agent&quot;,pid=9318,fd=10))


# Check pod and service information
kubectl get pod -n kube-system -l k8s-app=hubble-relay
kubectl get svc,ep -n kube-system hubble-relay


(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -n kube-system -l k8s-app=hubble-relay
NAME                           READY   STATUS    RESTARTS   AGE
hubble-relay-5dcd46f5c-nqx4n   1/1     Running   0          5m
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc,ep -n kube-system hubble-relay
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/hubble-relay   ClusterIP   10.96.100.252   &amp;lt;none&amp;gt;        80/TCP    5m6s

NAME                     ENDPOINTS           AGE
endpoints/hubble-relay   172.20.1.240:4245   5m6s


# hubble-relay pulls flows from :4244 on every node via the hubble-peer service (ClusterIP :443)
kubectl get cm -n kube-system
kubectl describe cm -n kube-system hubble-relay-config

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl describe cm -n kube-system hubble-relay-config
Name:         hubble-relay-config
Namespace:    kube-system
Labels:       app.kubernetes.io/managed-by=Helm
Annotations:  meta.helm.sh/release-name: cilium
              meta.helm.sh/release-namespace: kube-system

Data
====
config.yaml:
----
cluster-name: default
peer-service: &quot;hubble-peer.kube-system.svc.cluster.local.:443&quot;
listen-address: :4245
gops: true
gops-port: &quot;9893&quot;
retry-timeout:
sort-buffer-len-max:
sort-buffer-drain-timeout:
tls-hubble-client-cert-file: /var/lib/hubble-relay/tls/client.crt
tls-hubble-client-key-file: /var/lib/hubble-relay/tls/client.key
tls-hubble-server-ca-files: /var/lib/hubble-relay/tls/hubble-server-ca.crt

disable-server-tls: true



BinaryData
====

Events:  &amp;lt;none&amp;gt;

# Check hubble-peer (its endpoints are port 4244 on each node)
kubectl get svc,ep -n kube-system hubble-peer

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc,ep -n kube-system hubble-peer
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME                  TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
service/hubble-peer   ClusterIP   10.96.45.88   &amp;lt;none&amp;gt;        443/TCP   10m

NAME                    ENDPOINTS                                                     AGE
endpoints/hubble-peer   192.168.10.100:4244,192.168.10.101:4244,192.168.10.102:4244   10m


# Check the Hubble UI service
kubectl get svc,ep -n kube-system hubble-ui

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc,ep -n kube-system hubble-ui
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME                TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/hubble-ui   NodePort   10.96.63.111   &amp;lt;none&amp;gt;        80:31234/TCP   10m

NAME                  ENDPOINTS           AGE
endpoints/hubble-ui   172.20.2.230:8081   10m

# Determine the Hubble UI web address
NODEIP=$(ip -4 addr show eth1 | grep -oP '(?&amp;lt;=inet\s)\d+(\.\d+){3}')
echo -e &quot;http://$NODEIP:31234&quot;

(⎈|HomeLab:N/A) root@k8s-ctr:~# NODEIP=$(ip -4 addr show eth1 | grep -oP '(?&amp;lt;=inet\s)\d+(\.\d+){3}')
echo -e &quot;http://$NODEIP:31234&quot;
http://192.168.10.100:31234&lt;/code&gt;&lt;/pre&gt;
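&lt;p data-ke-size=&quot;size16&quot;&gt;The NODEIP extraction above relies on grep's Perl lookbehind (&lt;code&gt;-P&lt;/code&gt;), which not every grep build supports. A portable sketch of the same parsing with sed (sample &lt;code&gt;ip&lt;/code&gt; output assumed):&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# Sample 'ip -4 addr show eth1' line (assumption)
SAMPLE='    inet 192.168.10.100/24 brd 192.168.10.255 scope global eth1'

# Take the address after 'inet' and drop the /prefix length
NODEIP=$(printf '%s\n' "$SAMPLE" | sed -n 's|.*inet \([0-9.]*\)/.*|\1|p')
printf 'http://%s:31234\n' "$NODEIP"
# http://192.168.10.100:31234&lt;/code&gt;&lt;/pre&gt;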
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Opening the address printed by the command above shows the Hubble UI as below.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;2240&quot; data-origin-height=&quot;1093&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bBnuZW/btsPCk77jW6/BFVtn2FHQokeFsjI0S3udk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bBnuZW/btsPCk77jW6/BFVtn2FHQokeFsjI0S3udk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bBnuZW/btsPCk77jW6/BFVtn2FHQokeFsjI0S3udk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbBnuZW%2FbtsPCk77jW6%2FBFVtn2FHQokeFsjI0S3udk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2240&quot; height=&quot;1093&quot; data-origin-width=&quot;2240&quot; data-origin-height=&quot;1093&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Next, install the Hubble CLI.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# Installation on the Linux lab environment
HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
HUBBLE_ARCH=amd64
if [ &quot;$(uname -m)&quot; = &quot;aarch64&quot; ]; then HUBBLE_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-${HUBBLE_ARCH}.tar.gz{,.sha256sum}
sudo tar xzvfC hubble-linux-${HUBBLE_ARCH}.tar.gz /usr/local/bin
which hubble
hubble status

(⎈|HomeLab:N/A) root@k8s-ctr:~# which hubble
/usr/local/bin/hubble

# Fails because no Hubble Relay address was specified
(⎈|HomeLab:N/A) root@k8s-ctr:~# hubble status
failed getting status: rpc error: code = Unavailable desc = connection error: desc = &quot;transport: Error while dialing: dial tcp 127.0.0.1:4245: connect: connection refused&quot;

# Port-forward so that Hubble Relay is reachable at 127.0.0.1:4245
cilium hubble port-forward&amp;amp;

(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium hubble port-forward&amp;amp;
[1] 11205
(⎈|HomeLab:N/A) root@k8s-ctr:~# ℹ️  Hubble Relay is available at 127.0.0.1:4245

# Now you can validate that you can access the Hubble API via the installed CLI
hubble status
Healthcheck (via localhost:4245): Ok
Current/Max Flows: 12,285/12,285 (100.00%)
Flows/s: 41.20

# Check the default Hubble API server address
hubble config view 

(⎈|HomeLab:N/A) root@k8s-ctr:~# hubble status
Healthcheck (via localhost:4245): Ok
Current/Max Flows: 12,285/12,285 (100.00%)
Flows/s: 37.05
Connected Nodes: 3/3


# You can also query the flow API and look for flows
kubectl get ciliumendpoints.cilium.io -n kube-system # SECURITY IDENTITY
hubble observe
hubble observe -h
hubble observe -f

# hubble observe shows flow information, much like tcpdump
(⎈|HomeLab:N/A) root@k8s-ctr:~# hubble observe
Jul 26 12:35:47.068: 127.0.0.1:58854 (world) &amp;lt;&amp;gt; kube-system/hubble-ui-76d4965bb6-4sw6n (ID:779) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:47.070: 127.0.0.1:58854 (world) &amp;lt;&amp;gt; kube-system/hubble-ui-76d4965bb6-4sw6n (ID:779) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:47.073: 127.0.0.1:8090 (world) &amp;lt;&amp;gt; kube-system/hubble-ui-76d4965bb6-4sw6n (ID:779) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:47.075: 127.0.0.1:8090 (world) &amp;lt;&amp;gt; kube-system/hubble-ui-76d4965bb6-4sw6n (ID:779) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:47.075: 192.168.10.100:51650 (kube-apiserver) &amp;lt;- kube-system/hubble-ui-76d4965bb6-4sw6n:8081 (ID:779) to-network FORWARDED (TCP Flags: ACK, PSH)
Jul 26 12:35:48.061: 127.0.0.1:8090 (world) &amp;lt;&amp;gt; kube-system/hubble-ui-76d4965bb6-4sw6n (ID:779) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:48.061: 127.0.0.1:58868 (world) &amp;lt;&amp;gt; kube-system/hubble-ui-76d4965bb6-4sw6n (ID:779) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:48.061: 127.0.0.1:8090 (world) &amp;lt;&amp;gt; kube-system/hubble-ui-76d4965bb6-4sw6n (ID:779) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:48.063: 127.0.0.1:58868 (world) &amp;lt;&amp;gt; kube-system/hubble-ui-76d4965bb6-4sw6n (ID:779) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:48.064: 127.0.0.1:58868 (world) &amp;lt;&amp;gt; kube-system/hubble-ui-76d4965bb6-4sw6n (ID:779) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:48.066: 127.0.0.1:58868 (world) &amp;lt;&amp;gt; kube-system/hubble-ui-76d4965bb6-4sw6n (ID:779) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:48.067: 127.0.0.1:58868 (world) &amp;lt;&amp;gt; kube-system/hubble-ui-76d4965bb6-4sw6n (ID:779) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:48.315: 192.168.10.102:51698 (host) -&amp;gt; 192.168.10.100:6443 (kube-apiserver) to-network FORWARDED (TCP Flags: ACK)
Jul 26 12:35:49.088: 192.168.10.102:37624 (host) -&amp;gt; 192.168.10.100:6443 (kube-apiserver) to-network FORWARDED (TCP Flags: ACK)
Jul 26 12:35:50.268: 10.0.2.15:32840 (host) &amp;lt;&amp;gt; kube-system/hubble-relay-5dcd46f5c-nqx4n (ID:61891) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:50.273: 10.0.2.15:32840 (host) -&amp;gt; kube-system/hubble-relay-5dcd46f5c-nqx4n:4222 (ID:61891) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Jul 26 12:35:50.277: 10.0.2.15:32840 (host) &amp;lt;&amp;gt; kube-system/hubble-relay-5dcd46f5c-nqx4n (ID:61891) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:50.278: 10.0.2.15:32840 (host) &amp;lt;&amp;gt; kube-system/hubble-relay-5dcd46f5c-nqx4n (ID:61891) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:50.278: 10.0.2.15:32840 (host) &amp;lt;- kube-system/hubble-relay-5dcd46f5c-nqx4n:4222 (ID:61891) to-stack FORWARDED (TCP Flags: ACK, PSH)
Jul 26 12:35:50.281: 10.0.2.15:32840 (host) &amp;lt;&amp;gt; kube-system/hubble-relay-5dcd46f5c-nqx4n (ID:61891) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:50.281: 10.0.2.15:32840 (host) &amp;lt;&amp;gt; kube-system/hubble-relay-5dcd46f5c-nqx4n (ID:61891) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:50.285: 10.0.2.15:32840 (host) -&amp;gt; kube-system/hubble-relay-5dcd46f5c-nqx4n:4222 (ID:61891) to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Jul 26 12:35:50.286: 10.0.2.15:32840 (host) &amp;lt;- kube-system/hubble-relay-5dcd46f5c-nqx4n:4222 (ID:61891) to-stack FORWARDED (TCP Flags: ACK, FIN)
Jul 26 12:35:50.286: 10.0.2.15:32840 (host) -&amp;gt; kube-system/hubble-relay-5dcd46f5c-nqx4n:4222 (ID:61891) to-endpoint FORWARDED (TCP Flags: ACK)
Jul 26 12:35:51.073: 127.0.0.1:59654 (world) &amp;lt;&amp;gt; 192.168.10.100 (host) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:51.318: 127.0.0.1:8080 (world) &amp;lt;&amp;gt; kube-system/coredns-674b8bbfcf-d87dt (ID:3889) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:51.318: 127.0.0.1:50010 (world) &amp;lt;&amp;gt; kube-system/coredns-674b8bbfcf-d87dt (ID:3889) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:51.321: 127.0.0.1:8080 (world) &amp;lt;&amp;gt; kube-system/coredns-674b8bbfcf-d87dt (ID:3889) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:51.321: 127.0.0.1:8080 (world) &amp;lt;&amp;gt; kube-system/coredns-674b8bbfcf-d87dt (ID:3889) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:51.321: 127.0.0.1:8080 (world) &amp;lt;&amp;gt; kube-system/coredns-674b8bbfcf-d87dt (ID:3889) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:51.324: 127.0.0.1:8080 (world) &amp;lt;&amp;gt; kube-system/coredns-674b8bbfcf-d87dt (ID:3889) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:51.324: 127.0.0.1:50020 (world) &amp;lt;&amp;gt; kube-system/coredns-674b8bbfcf-gmrml (ID:3889) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:51.324: 127.0.0.1:50020 (world) &amp;lt;&amp;gt; kube-system/coredns-674b8bbfcf-gmrml (ID:3889) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:51.324: 127.0.0.1:8080 (world) &amp;lt;&amp;gt; kube-system/coredns-674b8bbfcf-gmrml (ID:3889) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:51.324: 127.0.0.1:8080 (world) &amp;lt;&amp;gt; kube-system/coredns-674b8bbfcf-gmrml (ID:3889) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:51.332: 127.0.0.1:50020 (world) &amp;lt;&amp;gt; kube-system/coredns-674b8bbfcf-gmrml (ID:3889) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:51.332: 127.0.0.1:8080 (world) &amp;lt;&amp;gt; kube-system/coredns-674b8bbfcf-gmrml (ID:3889) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:51.332: 127.0.0.1:8080 (world) &amp;lt;&amp;gt; kube-system/coredns-674b8bbfcf-gmrml (ID:3889) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:51.332: 127.0.0.1:50010 (world) &amp;lt;&amp;gt; kube-system/coredns-674b8bbfcf-d87dt (ID:3889) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:51.332: 127.0.0.1:50010 (world) &amp;lt;&amp;gt; kube-system/coredns-674b8bbfcf-d87dt (ID:3889) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:51.333: 127.0.0.1:8080 (world) &amp;lt;&amp;gt; kube-system/coredns-674b8bbfcf-gmrml (ID:3889) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:51.336: 127.0.0.1:50010 (world) &amp;lt;&amp;gt; kube-system/coredns-674b8bbfcf-d87dt (ID:3889) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:51.336: 127.0.0.1:50010 (world) &amp;lt;&amp;gt; kube-system/coredns-674b8bbfcf-d87dt (ID:3889) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:51.336: 127.0.0.1:50020 (world) &amp;lt;&amp;gt; kube-system/coredns-674b8bbfcf-gmrml (ID:3889) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:52.049: 127.0.0.1:41934 (world) &amp;lt;&amp;gt; kube-system/hubble-relay-5dcd46f5c-nqx4n (ID:61891) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:52.056: 127.0.0.1:41934 (world) &amp;lt;&amp;gt; kube-system/hubble-relay-5dcd46f5c-nqx4n (ID:61891) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:52.057: 127.0.0.1:41934 (world) &amp;lt;&amp;gt; kube-system/hubble-relay-5dcd46f5c-nqx4n (ID:61891) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:52.059: 127.0.0.1:41934 (world) &amp;lt;&amp;gt; kube-system/hubble-relay-5dcd46f5c-nqx4n (ID:61891) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:52.060: 127.0.0.1:8090 (world) &amp;lt;&amp;gt; kube-system/hubble-ui-76d4965bb6-4sw6n (ID:779) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:52.060: 127.0.0.1:58878 (world) &amp;lt;&amp;gt; kube-system/hubble-ui-76d4965bb6-4sw6n (ID:779) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:52.061: 127.0.0.1:41934 (world) &amp;lt;&amp;gt; kube-system/hubble-relay-5dcd46f5c-nqx4n (ID:61891) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:52.063: 127.0.0.1:8090 (world) &amp;lt;&amp;gt; kube-system/hubble-ui-76d4965bb6-4sw6n (ID:779) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:52.065: 127.0.0.1:58878 (world) &amp;lt;&amp;gt; kube-system/hubble-ui-76d4965bb6-4sw6n (ID:779) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:52.065: 127.0.0.1:58878 (world) &amp;lt;&amp;gt; kube-system/hubble-ui-76d4965bb6-4sw6n (ID:779) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:52.069: 127.0.0.1:58878 (world) &amp;lt;&amp;gt; kube-system/hubble-ui-76d4965bb6-4sw6n (ID:779) pre-xlate-rev TRACED (TCP)
Jul 26 12:35:52.079: 10.0.2.15:43342 (host) -&amp;gt; kube-system/hubble-relay-5dcd46f5c-nqx4n:4222 (ID:61891) to-endpoint FORWARDED (TCP Flags: SYN)
Jul 26 12:35:52.079: 10.0.2.15:43342 (host) &amp;lt;- kube-system/hubble-relay-5dcd46f5c-nqx4n:4222 (ID:61891) to-stack FORWARDED (TCP Flags: SYN, ACK)
Jul 26 12:35:52.079: 10.0.2.15:43342 (host) -&amp;gt; kube-system/hubble-relay-5dcd46f5c-nqx4n:4222 (ID:61891) to-endpoint FORWARDED (TCP Flags: ACK)
Jul 26 12:35:52.087: 10.0.2.15:43342 (host) -&amp;gt; kube-system/hubble-relay-5dcd46f5c-nqx4n:4222 (ID:61891) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Jul 26 12:35:52.094: kube-system/hubble-relay-5dcd46f5c-nqx4n:34040 (ID:61891) -&amp;gt; 192.168.10.101:4244 (host) to-stack FORWARDED (TCP Flags: ACK, PSH)&lt;/code&gt;&lt;/pre&gt;
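&lt;p data-ke-size=&quot;size16&quot;&gt;Since &lt;code&gt;hubble observe&lt;/code&gt; prints one flow per line with the verdict (FORWARDED, TRACED, DROPPED, ...) near the end, plain text tools can summarize the output. A small sketch that tallies verdicts (two sample lines assumed, arrows abbreviated):&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# Two sample flow lines in 'hubble observe' format (assumption; abbreviated)
FLOWS='Jul 26 12:35:48.315: 192.168.10.102:51698 (host) to 192.168.10.100:6443 (kube-apiserver) to-network FORWARDED (TCP Flags: ACK)
Jul 26 12:35:50.268: 10.0.2.15:32840 (host) pre-xlate-rev TRACED (TCP)'

# Count each verdict
printf '%s\n' "$FLOWS" | grep -o -E 'FORWARDED|TRACED|DROPPED' | sort | uniq -c&lt;/code&gt;&lt;/pre&gt;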
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For the exercises that follow, let's set up shortcuts for convenient access to the cilium commands.&lt;/p&gt;
&lt;pre class=&quot;reasonml&quot;&gt;&lt;code&gt;# Cilium pod name on each node
export CILIUMPOD0=$(kubectl get -l k8s-app=cilium pods -n kube-system --field-selector spec.nodeName=k8s-ctr -o jsonpath='{.items[0].metadata.name}')
export CILIUMPOD1=$(kubectl get -l k8s-app=cilium pods -n kube-system --field-selector spec.nodeName=k8s-w1  -o jsonpath='{.items[0].metadata.name}')
export CILIUMPOD2=$(kubectl get -l k8s-app=cilium pods -n kube-system --field-selector spec.nodeName=k8s-w2  -o jsonpath='{.items[0].metadata.name}')
echo $CILIUMPOD0 $CILIUMPOD1 $CILIUMPOD2

# Define aliases
alias c0=&quot;kubectl exec -it $CILIUMPOD0 -n kube-system -c cilium-agent -- cilium&quot;
alias c1=&quot;kubectl exec -it $CILIUMPOD1 -n kube-system -c cilium-agent -- cilium&quot;
alias c2=&quot;kubectl exec -it $CILIUMPOD2 -n kube-system -c cilium-agent -- cilium&quot;

alias c0bpf=&quot;kubectl exec -it $CILIUMPOD0 -n kube-system -c cilium-agent -- bpftool&quot;
alias c1bpf=&quot;kubectl exec -it $CILIUMPOD1 -n kube-system -c cilium-agent -- bpftool&quot;
alias c2bpf=&quot;kubectl exec -it $CILIUMPOD2 -n kube-system -c cilium-agent -- bpftool&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;3. Hubble을 이용한 Network Observability&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To explore Network Observability with Hubble, let's deploy a sample application.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://docs.cilium.io/en/stable/gettingstarted/demo/&quot;&gt;https://docs.cilium.io/en/stable/gettingstarted/demo/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Below is the structure of the application. Each application is identified by its labels, and a deathstar Service fronts the pods labeled org=empire, class=deathstar. The grey boxes indicate the number of pods. tiefighter and xwing are standalone pods.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1294&quot; data-origin-height=&quot;920&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bRIKzO/btsPCkfYN1N/U2KpKyiljZ4k3XcpJEnIT1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bRIKzO/btsPCkfYN1N/U2KpKyiljZ4k3XcpJEnIT1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bRIKzO/btsPCkfYN1N/U2KpKyiljZ4k3XcpJEnIT1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbRIKzO%2FbtsPCkfYN1N%2FU2KpKyiljZ4k3XcpJEnIT1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1294&quot; height=&quot;920&quot; data-origin-width=&quot;1294&quot; data-origin-height=&quot;920&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://docs.cilium.io/en/stable/gettingstarted/demo/&quot;&gt;https://docs.cilium.io/en/stable/gettingstarted/demo/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Deploy the sample application as below and inspect the resulting resources.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Deploy the sample application
kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/1.17.6/examples/minikube/http-sw-app.yaml

# Check the pod labels
kubectl get pod --show-labels

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod --show-labels
NAME                        READY   STATUS    RESTARTS   AGE   LABELS
deathstar-8c4c77fb7-4ngqn   1/1     Running   0          44s   app.kubernetes.io/name=deathstar,class=deathstar,org=empire,pod-template-hash=8c4c77fb7
deathstar-8c4c77fb7-6sxwm   1/1     Running   0          44s   app.kubernetes.io/name=deathstar,class=deathstar,org=empire,pod-template-hash=8c4c77fb7
tiefighter                  1/1     Running   0          44s   app.kubernetes.io/name=tiefighter,class=tiefighter,org=empire
xwing                       1/1     Running   0          44s   app.kubernetes.io/name=xwing,class=xwing,org=alliance

kubectl get deploy,svc,ep deathstar

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get deploy,svc,ep deathstar
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/deathstar   2/2     2            2           59s

NAME                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/deathstar   ClusterIP   10.96.152.26   &amp;lt;none&amp;gt;        80/TCP    60s

NAME                  ENDPOINTS                         AGE
endpoints/deathstar   172.20.1.207:80,172.20.2.197:80   59s

# Check Cilium endpoints and identities
kubectl get ciliumendpoints.cilium.io -A
kubectl get ciliumidentities.cilium.io

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumendpoints.cilium.io -A
NAMESPACE     NAME                           SECURITY IDENTITY   ENDPOINT STATE   IPV4           IPV6
default       deathstar-8c4c77fb7-4ngqn      35263               ready            172.20.2.197
default       deathstar-8c4c77fb7-6sxwm      35263               ready            172.20.1.207
default       tiefighter                     39100               ready            172.20.1.53
default       xwing                          39228               ready            172.20.2.61
kube-system   coredns-674b8bbfcf-d87dt       3889                ready            172.20.0.126
kube-system   coredns-674b8bbfcf-gmrml       3889                ready            172.20.0.242
kube-system   hubble-relay-5dcd46f5c-nqx4n   61891               ready            172.20.1.240
kube-system   hubble-ui-76d4965bb6-4sw6n     779                 ready            172.20.2.230
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumidentities.cilium.io
NAME    NAMESPACE     AGE
35263   default       79s
3889    kube-system   134m
39100   default       79s
39228   default       78s
61891   kube-system   48m
779     kube-system   47m&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Next, inspect the output of &lt;code&gt;cilium endpoint list&lt;/code&gt; below.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;First, note that &lt;code&gt;POLICY (ingress)&lt;/code&gt; and &lt;code&gt;POLICY (egress)&lt;/code&gt; enforcement is currently Disabled.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;You can also see the &lt;code&gt;Endpoint&lt;/code&gt; and &lt;code&gt;Identity&lt;/code&gt; columns. Cilium treats each pod as an &lt;code&gt;endpoint&lt;/code&gt;, while pods are grouped into an &lt;code&gt;Identity&lt;/code&gt; based on their labels: pods with different &lt;code&gt;endpoint&lt;/code&gt;s but the same labels share the same &lt;code&gt;identity&lt;/code&gt; value.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# Although this targets the DaemonSet, it shows the output of just one of its pods (not the endpoints of every node)
# To list endpoints per node, query each node's cilium agent individually
kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium endpoint list

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium endpoint list
ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                  IPv6   IPv4           STATUS
           ENFORCEMENT        ENFORCEMENT
794        Disabled           Disabled          35263      k8s:app.kubernetes.io/name=deathstar                                                172.20.2.197   ready
                                                           k8s:class=deathstar
                                                           k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
                                                           k8s:io.cilium.k8s.policy.cluster=default
                                                           k8s:io.cilium.k8s.policy.serviceaccount=default
                                                           k8s:io.kubernetes.pod.namespace=default
                                                           k8s:org=empire
1055       Disabled           Disabled          1          reserved:host                                                                                      ready
2525       Disabled           Disabled          779        k8s:app.kubernetes.io/name=hubble-ui                                                172.20.2.230   ready
                                                           k8s:app.kubernetes.io/part-of=cilium
                                                           k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system
                                                           k8s:io.cilium.k8s.policy.cluster=default
                                                           k8s:io.cilium.k8s.policy.serviceaccount=hubble-ui
                                                           k8s:io.kubernetes.pod.namespace=kube-system
                                                           k8s:k8s-app=hubble-ui
3429       Disabled           Disabled          39228      k8s:app.kubernetes.io/name=xwing                                                    172.20.2.61    ready
                                                           k8s:class=xwing
                                                           k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
                                                           k8s:io.cilium.k8s.policy.cluster=default
                                                           k8s:io.cilium.k8s.policy.serviceaccount=default
                                                           k8s:io.kubernetes.pod.namespace=default
                                                           k8s:org=alliance&lt;/code&gt;&lt;/pre&gt;
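&lt;p data-ke-size=&quot;size16&quot;&gt;As described above, the identity is derived from a pod's label set rather than from the pod itself, so pods whose labels normalize to the same set necessarily share one identity. A minimal sketch of that idea (label values taken from the sample app; the normalization is illustrative, not Cilium's real allocation):&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# Two deathstar replicas carry identical labels; xwing differs (sample data)
POD_A_LABELS='class=deathstar org=empire'
POD_B_LABELS='class=deathstar org=empire'
POD_C_LABELS='class=xwing org=alliance'

# Normalize a label set into a sorted, comma-joined key;
# equal keys would map to the same identity
key() { printf '%s\n' $1 | sort | paste -sd, - ; }
key "$POD_A_LABELS"   # class=deathstar,org=empire
key "$POD_B_LABELS"   # class=deathstar,org=empire
key "$POD_C_LABELS"   # class=xwing,org=alliance&lt;/code&gt;&lt;/pre&gt;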
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Because no access policy has been defined yet, every pod can reach every other pod.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's run some access tests and use the Hubble CLI to identify each request and inspect the flows.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# Note the IDENTITY values for xwing, tiefighter, and deathstar from the output below
c1 endpoint list | grep -iE 'xwing|tiefighter|deathstar'
c2 endpoint list | grep -iE 'xwing|tiefighter|deathstar'
XWINGID=39228
TIEFIGHTERID=39100
DEATHSTARID=35263


# Prepare monitoring: one dedicated terminal
hubble observe -f

# hubble observe -f alone produces too many logs, so filter by identity as shown below
hubble observe -f --from-identity $XWINGID
hubble observe -f --protocol tcp --from-identity $XWINGID
hubble observe -f --protocol tcp --from-identity $DEATHSTARID


# Call attempt 1
kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
while true; do kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing ; sleep 5 ; done


# Request
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
Ship landed

# hubble observe
(⎈|HomeLab:N/A) root@k8s-ctr:~# hubble observe -f --from-identity $XWINGID
Jul 26 13:33:23.942: default/xwing (ID:39228) &amp;lt;&amp;gt; 10.96.0.10:53 (world) pre-xlate-fwd TRACED (UDP)
Jul 26 13:33:23.942: default/xwing (ID:39228) &amp;lt;&amp;gt; kube-system/coredns-674b8bbfcf-d87dt:53 (ID:3889) post-xlate-fwd TRANSLATED (UDP)
Jul 26 13:33:23.944: default/xwing:35455 (ID:39228) -&amp;gt; kube-system/coredns-674b8bbfcf-d87dt:53 (ID:3889) to-network FORWARDED (UDP)
Jul 26 13:33:23.968: default/xwing (ID:39228) &amp;lt;&amp;gt; 10.96.0.10:53 (world) pre-xlate-fwd TRACED (UDP)
Jul 26 13:33:23.970: default/xwing (ID:39228) &amp;lt;&amp;gt; kube-system/coredns-674b8bbfcf-d87dt:53 (ID:3889) post-xlate-fwd TRANSLATED (UDP)
Jul 26 13:33:23.970: default/xwing:37453 (ID:39228) -&amp;gt; kube-system/coredns-674b8bbfcf-d87dt:53 (ID:3889) to-network FORWARDED (UDP)
Jul 26 13:33:23.993: default/xwing (ID:39228) &amp;lt;&amp;gt; 10.96.0.10:53 (world) pre-xlate-fwd TRACED (UDP)
Jul 26 13:33:23.993: default/xwing (ID:39228) &amp;lt;&amp;gt; kube-system/coredns-674b8bbfcf-gmrml:53 (ID:3889) post-xlate-fwd TRANSLATED (UDP)
Jul 26 13:33:23.993: default/xwing:58708 (ID:39228) -&amp;gt; kube-system/coredns-674b8bbfcf-gmrml:53 (ID:3889) to-network FORWARDED (UDP)
Jul 26 13:33:24.007: default/xwing (ID:39228) &amp;lt;&amp;gt; 10.96.0.10:53 (world) pre-xlate-fwd TRACED (UDP)
Jul 26 13:33:24.007: default/xwing (ID:39228) &amp;lt;&amp;gt; kube-system/coredns-674b8bbfcf-d87dt:53 (ID:3889) post-xlate-fwd TRANSLATED (UDP)
Jul 26 13:33:24.008: default/xwing:40677 (ID:39228) -&amp;gt; kube-system/coredns-674b8bbfcf-d87dt:53 (ID:3889) to-network FORWARDED (UDP)
Jul 26 13:33:24.027: default/xwing (ID:39228) &amp;lt;&amp;gt; 10.96.152.26:80 (world) pre-xlate-fwd TRACED (TCP)
Jul 26 13:33:24.027: default/xwing (ID:39228) &amp;lt;&amp;gt; default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) post-xlate-fwd TRANSLATED (TCP)
Jul 26 13:33:24.027: default/xwing:49206 (ID:39228) -&amp;gt; default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) to-network FORWARDED (TCP Flags: SYN)
Jul 26 13:33:24.036: default/xwing:49206 (ID:39228) -&amp;gt; default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) to-network FORWARDED (TCP Flags: ACK)
Jul 26 13:33:24.045: default/xwing:49206 (ID:39228) -&amp;gt; default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) to-network FORWARDED (TCP Flags: ACK, PSH)
Jul 26 13:33:24.062: default/xwing:49206 (ID:39228) -&amp;gt; default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) to-network FORWARDED (TCP Flags: ACK, FIN)
Jul 26 13:33:24.348: default/xwing:35455 (ID:39228) -&amp;gt; kube-system/coredns-674b8bbfcf-d87dt:53 (ID:3889) to-endpoint FORWARDED (UDP)
Jul 26 13:33:24.360: default/xwing:35455 (ID:39228) &amp;lt;&amp;gt; kube-system/coredns-674b8bbfcf-d87dt (ID:3889) pre-xlate-rev TRACED (UDP)
Jul 26 13:33:24.360: default/xwing:35455 (ID:39228) &amp;lt;&amp;gt; kube-system/coredns-674b8bbfcf-d87dt (ID:3889) pre-xlate-rev TRACED (UDP)
Jul 26 13:33:24.376: default/xwing:37453 (ID:39228) -&amp;gt; kube-system/coredns-674b8bbfcf-d87dt:53 (ID:3889) to-endpoint FORWARDED (UDP)
Jul 26 13:33:24.376: default/xwing:37453 (ID:39228) &amp;lt;&amp;gt; kube-system/coredns-674b8bbfcf-d87dt (ID:3889) pre-xlate-rev TRACED (UDP)
Jul 26 13:33:24.382: default/xwing:37453 (ID:39228) &amp;lt;&amp;gt; kube-system/coredns-674b8bbfcf-d87dt (ID:3889) pre-xlate-rev TRACED (UDP)
Jul 26 13:33:24.403: default/xwing:58708 (ID:39228) -&amp;gt; kube-system/coredns-674b8bbfcf-gmrml:53 (ID:3889) to-endpoint FORWARDED (UDP)
Jul 26 13:33:24.404: default/xwing:58708 (ID:39228) &amp;lt;&amp;gt; kube-system/coredns-674b8bbfcf-gmrml (ID:3889) pre-xlate-rev TRACED (UDP)
Jul 26 13:33:24.404: default/xwing:58708 (ID:39228) &amp;lt;&amp;gt; kube-system/coredns-674b8bbfcf-gmrml (ID:3889) pre-xlate-rev TRACED (UDP)
Jul 26 13:33:24.413: default/xwing:40677 (ID:39228) -&amp;gt; kube-system/coredns-674b8bbfcf-d87dt:53 (ID:3889) to-endpoint FORWARDED (UDP)
Jul 26 13:33:24.416: default/xwing:40677 (ID:39228) &amp;lt;&amp;gt; kube-system/coredns-674b8bbfcf-d87dt (ID:3889) pre-xlate-rev TRACED (UDP)
Jul 26 13:33:24.416: default/xwing:40677 (ID:39228) &amp;lt;&amp;gt; kube-system/coredns-674b8bbfcf-d87dt (ID:3889) pre-xlate-rev TRACED (UDP)
Jul 26 13:33:24.597: default/xwing:49206 (ID:39228) -&amp;gt; default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) to-endpoint FORWARDED (TCP Flags: SYN)
Jul 26 13:33:24.602: default/xwing:49206 (ID:39228) -&amp;gt; default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) to-endpoint FORWARDED (TCP Flags: ACK)
Jul 26 13:33:24.606: default/xwing:49206 (ID:39228) &amp;lt;&amp;gt; default/deathstar-8c4c77fb7-6sxwm (ID:35263) pre-xlate-rev TRACED (TCP)
Jul 26 13:33:24.606: default/xwing:49206 (ID:39228) &amp;lt;&amp;gt; default/deathstar-8c4c77fb7-6sxwm (ID:35263) pre-xlate-rev TRACED (TCP)
Jul 26 13:33:24.607: default/xwing:49206 (ID:39228) &amp;lt;&amp;gt; default/deathstar-8c4c77fb7-6sxwm (ID:35263) pre-xlate-rev TRACED (TCP)
Jul 26 13:33:24.612: default/xwing:49206 (ID:39228) -&amp;gt; default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Jul 26 13:33:24.615: default/xwing:49206 (ID:39228) &amp;lt;&amp;gt; default/deathstar-8c4c77fb7-6sxwm (ID:35263) pre-xlate-rev TRACED (TCP)
Jul 26 13:33:24.616: default/xwing:49206 (ID:39228) &amp;lt;&amp;gt; default/deathstar-8c4c77fb7-6sxwm (ID:35263) pre-xlate-rev TRACED (TCP)
Jul 26 13:33:24.631: default/xwing:49206 (ID:39228) -&amp;gt; default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Jul 26 13:33:24.643: default/xwing:49206 (ID:39228) -&amp;gt; default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) to-endpoint FORWARDED (TCP Flags: ACK)
Jul 26 13:33:24.074: default/xwing:49206 (ID:39228) -&amp;gt; default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) to-network FORWARDED (TCP Flags: ACK)

# -&amp;gt; Allowed


# Call attempt 2
kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
while true; do kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing ; sleep 5 ; done

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
Ship landed

(⎈|HomeLab:N/A) root@k8s-ctr:~# hubble observe -f --protocol tcp --from-identity $DEATHSTARID
Jul 26 13:37:11.202: default/tiefighter:56900 (ID:39100) &amp;lt;- default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) to-endpoint FORWARDED (TCP Flags: SYN, ACK)
Jul 26 13:37:11.204: default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) &amp;lt;&amp;gt; default/tiefighter (ID:39100) pre-xlate-rev TRACED (TCP)
Jul 26 13:37:11.208: default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) &amp;lt;&amp;gt; default/tiefighter (ID:39100) pre-xlate-rev TRACED (TCP)
Jul 26 13:37:11.215: default/tiefighter:56900 (ID:39100) &amp;lt;- default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Jul 26 13:37:11.242: default/tiefighter:56900 (ID:39100) &amp;lt;- default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) to-endpoint FORWARDED (TCP Flags: ACK, FIN)

# -&amp;gt; Allowed&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Checking the same information in the Hubble UI confirms that both requests succeeded.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;2194&quot; data-origin-height=&quot;1228&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/oGNEQ/btsPB09R9bQ/cK3bfklkjKnkeLOUfPgoK0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/oGNEQ/btsPB09R9bQ/cK3bfklkjKnkeLOUfPgoK0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/oGNEQ/btsPB09R9bQ/cK3bfklkjKnkeLOUfPgoK0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FoGNEQ%2FbtsPB09R9bQ%2FcK3bfklkjKnkeLOUfPgoK0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2194&quot; height=&quot;1228&quot; data-origin-width=&quot;2194&quot; data-origin-height=&quot;1228&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now let&#39;s apply a Cilium Network Policy to block traffic as shown below, and then see how it appears in Hubble.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1308&quot; data-origin-height=&quot;926&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bWjyvT/btsPz1CLtZ2/LYuCZacqNHumUtXqvDguc1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bWjyvT/btsPz1CLtZ2/LYuCZacqNHumUtXqvDguc1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bWjyvT/btsPz1CLtZ2/LYuCZacqNHumUtXqvDguc1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbWjyvT%2FbtsPz1CLtZ2%2FLYuCZacqNHumUtXqvDguc1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1308&quot; height=&quot;926&quot; data-origin-width=&quot;1308&quot; data-origin-height=&quot;926&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;출처: &lt;a href=&quot;https://docs.cilium.io/en/stable/gettingstarted/demo/#apply-an-l3-l4-policy&quot;&gt;https://docs.cilium.io/en/stable/gettingstarted/demo/#apply-an-l3-l4-policy&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A CiliumNetworkPolicy uses an &quot;endpointSelector&quot; to identify, from pod labels, the sources and destinations the policy applies to.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the policy below, an &lt;code&gt;ingress&lt;/code&gt; allow rule is applied to the endpoints selected by &lt;code&gt;endpointSelector&lt;/code&gt;, and any source not permitted by &lt;code&gt;fromEndpoints&lt;/code&gt; is blocked.&lt;/p&gt;
&lt;pre class=&quot;dts&quot;&gt;&lt;code&gt;# CiliumNetworkPolicy
## The policy below allows traffic on TCP port 80 from any pod labeled (org=empire) to pods labeled (org=empire, class=deathstar).
apiVersion: &quot;cilium.io/v2&quot;
kind: CiliumNetworkPolicy
metadata:
  name: &quot;rule1&quot;
spec:
  description: &quot;L3-L4 policy to restrict deathstar access to empire ships only&quot;
  endpointSelector:
    matchLabels:
      org: empire
      class: deathstar
  ingress:
  - fromEndpoints:
    - matchLabels:
        org: empire
    toPorts:
    - ports:
      - port: &quot;80&quot;
        protocol: TCP&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Apply the Cilium Network Policy.&lt;/p&gt;
&lt;pre class=&quot;haskell&quot;&gt;&lt;code&gt;# Apply
kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/1.17.6/examples/minikube/sw_l3_l4_policy.yaml

# Monitor
hubble observe -f --type drop

# Call attempt 1
kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing --connect-timeout 2

# Monitor
hubble observe -f --protocol tcp --from-identity $DEATHSTARID

# Call attempt 2
kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing


# Call results
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing --connect-timeout 2
command terminated with exit code 28
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
Ship landed

# Monitoring results
(⎈|HomeLab:N/A) root@k8s-ctr:~# hubble observe -f --type drop
Jul 26 13:43:27.098: default/xwing:33982 (ID:39228) &amp;lt;&amp;gt; default/deathstar-8c4c77fb7-2d4dk:80 (ID:35263) Policy denied DROPPED (TCP Flags: SYN)
Jul 26 13:43:28.150: default/xwing:33982 (ID:39228) &amp;lt;&amp;gt; default/deathstar-8c4c77fb7-2d4dk:80 (ID:35263) Policy denied DROPPED (TCP Flags: SYN)

# -&amp;gt; Failed

(⎈|HomeLab:N/A) root@k8s-ctr:~# hubble observe -f --protocol tcp --from-identity $DEATHSTARID
Jul 26 13:43:50.120: default/tiefighter:45114 (ID:39100) &amp;lt;- default/deathstar-8c4c77fb7-2d4dk:80 (ID:35263) to-endpoint FORWARDED (TCP Flags: SYN, ACK)
Jul 26 13:43:50.134: default/deathstar-8c4c77fb7-2d4dk:80 (ID:35263) &amp;lt;&amp;gt; default/tiefighter (ID:39100) pre-xlate-rev TRACED (TCP)
Jul 26 13:43:50.134: default/deathstar-8c4c77fb7-2d4dk:80 (ID:35263) &amp;lt;&amp;gt; default/tiefighter (ID:39100) pre-xlate-rev TRACED (TCP)
Jul 26 13:43:50.143: default/tiefighter:45114 (ID:39100) &amp;lt;- default/deathstar-8c4c77fb7-2d4dk:80 (ID:35263) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Jul 26 13:43:50.151: default/tiefighter:45114 (ID:39100) &amp;lt;- default/deathstar-8c4c77fb7-2d4dk:80 (ID:35263) to-endpoint FORWARDED (TCP Flags: ACK, FIN)

# -&amp;gt; Succeeded&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The dropped flows are also visible in the Hubble UI.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;2189&quot; data-origin-height=&quot;1233&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/NHk39/btsPA0CMlTp/0iZewOs4IK5YSWKB2IekdK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/NHk39/btsPA0CMlTp/0iZewOs4IK5YSWKB2IekdK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/NHk39/btsPA0CMlTp/0iZewOs4IK5YSWKB2IekdK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FNHk39%2FbtsPA0CMlTp%2F0iZewOs4IK5YSWKB2IekdK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2189&quot; height=&quot;1233&quot; data-origin-width=&quot;2189&quot; data-origin-height=&quot;1233&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Next, let&#39;s create a Cilium L7 policy and observe its effect through Hubble.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As shown in the figure below, we will allow tiefighter to POST to /v1/request-landing, but deny PUT requests to /v1/exhaust-port.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1354&quot; data-origin-height=&quot;968&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/w0opc/btsPzVJiAmS/6sLTep4vxKdW6Y6ILIr9K0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/w0opc/btsPzVJiAmS/6sLTep4vxKdW6Y6ILIr9K0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/w0opc/btsPzVJiAmS/6sLTep4vxKdW6Y6ILIr9K0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fw0opc%2FbtsPzVJiAmS%2F6sLTep4vxKdW6Y6ILIr9K0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1354&quot; height=&quot;968&quot; data-origin-width=&quot;1354&quot; data-origin-height=&quot;968&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;출처: &lt;a href=&quot;https://docs.cilium.io/en/stable/gettingstarted/demo/#apply-and-test-http-aware-l7-policy&quot;&gt;https://docs.cilium.io/en/stable/gettingstarted/demo/#apply-and-test-http-aware-l7-policy&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Cilium is built on eBPF and can enforce traffic control in the kernel. L7 policies, however, must be processed in user space, so they are handled by cilium-envoy, which is deployed as a DaemonSet on each node. Because of this, L7 policies can incur some performance degradation.&lt;/p&gt;
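&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The enforcement point is also visible in the flow logs: L3/L4 drops enforced in eBPF are reported as &quot;Policy denied DROPPED&quot;, while L7 drops made by cilium-envoy appear as &quot;http-request DROPPED&quot;. As a minimal sketch (assuming the flows were captured to a local file; the sample lines are abridged from the logs in this post), the two cases can be separated with grep:&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# Save a few sample hubble flows to a file (abridged from the output in this post)
printf '%s\n' \
  'default/xwing:57458 Policy denied DROPPED (TCP Flags: SYN)' \
  'default/tiefighter:52896 http-request DROPPED (HTTP/1.1 PUT /v1/exhaust-port)' &amp;gt; flows.txt

# Drops enforced in the kernel (eBPF, L3/L4)
grep -c 'Policy denied DROPPED' flows.txt

# Drops enforced in user space (cilium-envoy, L7)
grep -c 'http-request DROPPED' flows.txt&lt;/code&gt;&lt;/pre&gt;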
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For the flow of this enforcement, refer to the L7 Policy portion of the diagram below.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1626&quot; data-origin-height=&quot;783&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/dnUMDy/dJMb82r0AO4/XJLvB4PDkdxQsxSa6I6kXK/tfile.svg&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/dnUMDy/dJMb82r0AO4/XJLvB4PDkdxQsxSa6I6kXK/tfile.svg&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/dnUMDy/dJMb82r0AO4/XJLvB4PDkdxQsxSa6I6kXK/tfile.svg&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FdnUMDy%2FdJMb82r0AO4%2FXJLvB4PDkdxQsxSa6I6kXK%2Ftfile.svg&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1626&quot; height=&quot;783&quot; data-origin-width=&quot;1626&quot; data-origin-height=&quot;783&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;출처: &lt;a href=&quot;https://docs.cilium.io/en/stable/network/ebpf/lifeofapacket/#ingress-to-endpoint&quot;&gt;https://docs.cilium.io/en/stable/network/ebpf/lifeofapacket/#ingress-to-endpoint&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now let&#39;s verify with a test call and hubble observe.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# Monitor &amp;gt;&amp;gt; Layer 3/4 cannot see application-level state
hubble observe -f --protocol tcp --from-identity $DEATHSTARID

# The app exposes a maintenance API that should never be called
kubectl exec tiefighter -- curl -s -XPUT deathstar.default.svc.cluster.local/v1/exhaust-port


# Call result
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec tiefighter -- curl -s -XPUT deathstar.default.svc.cluster.local/v1/exhaust-port
Panic: deathstar exploded

goroutine 1 [running]:
main.HandleGarbage(0x2080c3f50, 0x2, 0x4, 0x425c0, 0x5, 0xa)
        /code/src/github.com/empire/deathstar/
        temp/main.go:9 +0x64
main.main()
        /code/src/github.com/empire/deathstar/
        temp/main.go:5 +0x85

# Monitoring result
(⎈|HomeLab:N/A) root@k8s-ctr:~# hubble observe -f --protocol tcp --from-identity $DEATHSTARID
Jul 26 14:13:39.319: default/tiefighter:51052 (ID:39100) &amp;lt;- default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) to-endpoint FORWARDED (TCP Flags: SYN, ACK)
Jul 26 14:13:39.319: default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) &amp;lt;&amp;gt; default/tiefighter (ID:39100) pre-xlate-rev TRACED (TCP)
Jul 26 14:13:39.319: default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) &amp;lt;&amp;gt; default/tiefighter (ID:39100) pre-xlate-rev TRACED (TCP)
Jul 26 14:13:39.332: default/tiefighter:51052 (ID:39100) &amp;lt;- default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Jul 26 14:13:39.343: default/tiefighter:51052 (ID:39100) &amp;lt;- default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) to-endpoint FORWARDED (TCP Flags: ACK, FIN)&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Create the L7 policy as follows.&lt;/p&gt;
&lt;pre class=&quot;haskell&quot;&gt;&lt;code&gt;# Update the existing rule1 policy; the allowed L7 calls are specified in detail under rules
apiVersion: &quot;cilium.io/v2&quot;
kind: CiliumNetworkPolicy
metadata:
  name: &quot;rule1&quot;
spec:
  description: &quot;L7 policy to restrict access to specific HTTP call&quot;
  endpointSelector:
    matchLabels:
      org: empire
      class: deathstar
  ingress:
  - fromEndpoints:
    - matchLabels:
        org: empire
    toPorts:
    - ports:
      - port: &quot;80&quot;
        protocol: TCP
      rules:
        http:
        - method: &quot;POST&quot;
          path: &quot;/v1/request-landing&quot;

# Apply
kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/1.17.6/examples/minikube/sw_l3_l4_l7_policy.yaml


# Request
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
Ship landed

#-&amp;gt; Allowed

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec tiefighter -- curl -s -XPUT deathstar.default.svc.cluster.local/v1/exhaust-port
Access denied

#-&amp;gt; Denied

## Monitoring results
(⎈|HomeLab:N/A) root@k8s-ctr:~# hubble observe -f --protocol tcp --from-identity $DEATHSTARID
Jul 26 14:15:12.782: default/tiefighter:53412 (ID:39100) &amp;lt;- default/deathstar-8c4c77fb7-2d4dk:80 (ID:35263) to-endpoint FORWARDED (TCP Flags: SYN, ACK)
Jul 26 14:15:12.782: default/deathstar-8c4c77fb7-2d4dk:80 (ID:35263) &amp;lt;&amp;gt; default/tiefighter (ID:39100) pre-xlate-rev TRACED (TCP)
Jul 26 14:15:12.786: default/deathstar-8c4c77fb7-2d4dk:80 (ID:35263) &amp;lt;&amp;gt; default/tiefighter (ID:39100) pre-xlate-rev TRACED (TCP)
Jul 26 14:15:12.808: 10.0.2.15:55760 (host) &amp;lt;- default/deathstar-8c4c77fb7-2d4dk:80 (ID:35263) to-stack FORWARDED (TCP Flags: SYN, ACK)
Jul 26 14:15:12.827: 10.0.2.15:55760 (host) &amp;lt;- default/deathstar-8c4c77fb7-2d4dk:80 (ID:35263) to-stack FORWARDED (TCP Flags: ACK, PSH)
Jul 26 14:15:12.830: default/tiefighter:53412 (ID:39100) &amp;lt;- default/deathstar-8c4c77fb7-2d4dk:80 (ID:35263) http-response FORWARDED (HTTP/1.1 200 33ms (POST http://deathstar.default.svc.cluster.local/v1/request-landing))
Jul 26 14:15:12.837: default/tiefighter:53412 (ID:39100) &amp;lt;- default/deathstar-8c4c77fb7-2d4dk:80 (ID:35263) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Jul 26 14:15:12.843: default/tiefighter:53412 (ID:39100) &amp;lt;- default/deathstar-8c4c77fb7-2d4dk:80 (ID:35263) to-endpoint FORWARDED (TCP Flags: ACK, FIN)

#-&amp;gt; Allowed

Jul 26 14:15:28.124: 10.0.2.15:55760 (host) &amp;lt;- default/deathstar-8c4c77fb7-2d4dk:80 (ID:35263) to-stack FORWARDED (TCP Flags: ACK)
Jul 26 14:15:41.796: default/tiefighter:59826 (ID:39100) &amp;lt;- default/deathstar-8c4c77fb7-2d4dk:80 (ID:35263) to-endpoint FORWARDED (TCP Flags: SYN, ACK)
Jul 26 14:15:41.796: default/deathstar-8c4c77fb7-2d4dk:80 (ID:35263) &amp;lt;&amp;gt; default/tiefighter (ID:39100) pre-xlate-rev TRACED (TCP)
Jul 26 14:15:41.796: default/deathstar-8c4c77fb7-2d4dk:80 (ID:35263) &amp;lt;&amp;gt; default/tiefighter (ID:39100) pre-xlate-rev TRACED (TCP)
Jul 26 14:15:41.804: default/tiefighter:59826 (ID:39100) &amp;lt;- default/deathstar-8c4c77fb7-2d4dk:80 (ID:35263) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Jul 26 14:15:41.804: default/tiefighter:59826 (ID:39100) &amp;lt;- default/deathstar-8c4c77fb7-2d4dk:80 (ID:35263) http-response FORWARDED (HTTP/1.1 403 0ms (PUT http://deathstar.default.svc.cluster.local/v1/exhaust-port))
Jul 26 14:15:41.807: default/tiefighter:59826 (ID:39100) &amp;lt;- default/deathstar-8c4c77fb7-2d4dk:80 (ID:35263) to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Jul 26 14:15:42.835: 10.0.2.15:55760 (host) &amp;lt;- default/deathstar-8c4c77fb7-2d4dk:80 (ID:35263) to-stack FORWARDED (TCP Flags: ACK, FIN)

#-&amp;gt; With --from-identity, even the denied request shows up here as FORWARDED

# Viewing from the target pod's perspective shows the DROPPED verdict
(⎈|HomeLab:N/A) root@k8s-ctr:~# hubble observe -f --pod deathstar
Jul 26 14:22:19.197: default/tiefighter (ID:39100) &amp;lt;&amp;gt; default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) post-xlate-fwd TRANSLATED (TCP)
Jul 26 14:22:19.200: default/tiefighter:38694 (ID:39100) -&amp;gt; default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) policy-verdict:L3-L4 INGRESS ALLOWED (TCP Flags: SYN)
Jul 26 14:22:19.201: default/tiefighter:38694 (ID:39100) -&amp;gt; default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) to-proxy FORWARDED (TCP Flags: SYN)
Jul 26 14:22:19.201: default/tiefighter:38694 (ID:39100) &amp;lt;- default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) to-endpoint FORWARDED (TCP Flags: SYN, ACK)
Jul 26 14:22:19.202: default/tiefighter:38694 (ID:39100) -&amp;gt; default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) to-proxy FORWARDED (TCP Flags: ACK)
Jul 26 14:22:19.213: default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) &amp;lt;&amp;gt; default/tiefighter (ID:39100) pre-xlate-rev TRACED (TCP)
Jul 26 14:22:19.213: default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) &amp;lt;&amp;gt; default/tiefighter (ID:39100) pre-xlate-rev TRACED (TCP)
Jul 26 14:22:19.217: default/tiefighter:38694 (ID:39100) -&amp;gt; default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) to-proxy FORWARDED (TCP Flags: ACK, PSH)
Jul 26 14:22:19.223: default/tiefighter:38694 (ID:39100) -&amp;gt; default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) http-request FORWARDED (HTTP/1.1 POST http://deathstar.default.svc.cluster.local/v1/request-landing)
Jul 26 14:22:19.230: 10.0.2.15:37226 (ID:39100) -&amp;gt; default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) to-endpoint FORWARDED (TCP Flags: SYN)
Jul 26 14:22:19.232: 10.0.2.15:37226 (host) &amp;lt;- default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) to-stack FORWARDED (TCP Flags: SYN, ACK)
Jul 26 14:22:19.232: 10.0.2.15:37226 (ID:39100) -&amp;gt; default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) to-endpoint FORWARDED (TCP Flags: ACK)
Jul 26 14:22:19.234: 10.0.2.15:37226 (host) &amp;lt;&amp;gt; default/deathstar-8c4c77fb7-6sxwm (ID:35263) pre-xlate-rev TRACED (TCP)
Jul 26 14:22:19.239: 10.0.2.15:37226 (host) &amp;lt;&amp;gt; default/deathstar-8c4c77fb7-6sxwm (ID:35263) pre-xlate-rev TRACED (TCP)
Jul 26 14:22:19.239: 10.0.2.15:37226 (host) &amp;lt;&amp;gt; default/deathstar-8c4c77fb7-6sxwm (ID:35263) pre-xlate-rev TRACED (TCP)
Jul 26 14:22:19.243: 10.0.2.15:37226 (ID:39100) -&amp;gt; default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Jul 26 14:22:19.251: 10.0.2.15:37226 (host) &amp;lt;&amp;gt; default/deathstar-8c4c77fb7-6sxwm (ID:35263) pre-xlate-rev TRACED (TCP)
Jul 26 14:22:19.251: 10.0.2.15:37226 (host) &amp;lt;&amp;gt; default/deathstar-8c4c77fb7-6sxwm (ID:35263) pre-xlate-rev TRACED (TCP)
Jul 26 14:22:19.258: 10.0.2.15:37226 (host) &amp;lt;- default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) to-stack FORWARDED (TCP Flags: ACK, PSH)
Jul 26 14:22:19.261: default/tiefighter:38694 (ID:39100) &amp;lt;- default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) http-response FORWARDED (HTTP/1.1 200 37ms (POST http://deathstar.default.svc.cluster.local/v1/request-landing))
Jul 26 14:22:19.263: default/tiefighter:38694 (ID:39100) &amp;lt;- default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Jul 26 14:22:19.282: default/tiefighter:38694 (ID:39100) -&amp;gt; default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) to-proxy FORWARDED (TCP Flags: ACK, FIN)
Jul 26 14:22:19.283: default/tiefighter:38694 (ID:39100) &amp;lt;- default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) to-endpoint FORWARDED (TCP Flags: ACK, FIN)

# -&amp;gt; Allowed

Jul 26 14:22:29.493: default/tiefighter (ID:39100) &amp;lt;&amp;gt; default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) post-xlate-fwd TRANSLATED (TCP)
Jul 26 14:22:29.494: default/tiefighter:52896 (ID:39100) -&amp;gt; default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) policy-verdict:L3-L4 INGRESS ALLOWED (TCP Flags: SYN)
Jul 26 14:22:29.496: default/tiefighter:52896 (ID:39100) -&amp;gt; default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) to-proxy FORWARDED (TCP Flags: SYN)
Jul 26 14:22:29.496: default/tiefighter:52896 (ID:39100) &amp;lt;- default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) to-endpoint FORWARDED (TCP Flags: SYN, ACK)
Jul 26 14:22:29.501: default/tiefighter:52896 (ID:39100) -&amp;gt; default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) to-proxy FORWARDED (TCP Flags: ACK)
Jul 26 14:22:29.501: default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) &amp;lt;&amp;gt; default/tiefighter (ID:39100) pre-xlate-rev TRACED (TCP)
Jul 26 14:22:29.505: default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) &amp;lt;&amp;gt; default/tiefighter (ID:39100) pre-xlate-rev TRACED (TCP)
Jul 26 14:22:29.507: default/tiefighter:52896 (ID:39100) -&amp;gt; default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) to-proxy FORWARDED (TCP Flags: ACK, PSH)
Jul 26 14:22:29.514: default/tiefighter:52896 (ID:39100) -&amp;gt; default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) http-request DROPPED (HTTP/1.1 PUT http://deathstar.default.svc.cluster.local/v1/exhaust-port)
Jul 26 14:22:29.514: default/tiefighter:52896 (ID:39100) &amp;lt;- default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) http-response FORWARDED (HTTP/1.1 403 0ms (PUT http://deathstar.default.svc.cluster.local/v1/exhaust-port))
Jul 26 14:22:29.517: default/tiefighter:52896 (ID:39100) &amp;lt;- default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Jul 26 14:22:29.520: default/tiefighter:52896 (ID:39100) -&amp;gt; default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) to-proxy FORWARDED (TCP Flags: ACK, FIN)
Jul 26 14:22:29.521: default/tiefighter:52896 (ID:39100) &amp;lt;- default/deathstar-8c4c77fb7-6sxwm:80 (ID:35263) to-endpoint FORWARDED (TCP Flags: ACK, FIN)

# -&amp;gt; Denied

# Request from xwing
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing --connect-timeout 2
command terminated with exit code 28

# Monitoring result: an L4-level drop is logged differently.
(⎈|HomeLab:N/A) root@k8s-ctr:~# hubble observe -f --pod deathstar
Jul 26 14:25:21.757: default/xwing (ID:39228) &amp;lt;&amp;gt; default/deathstar-8c4c77fb7-2d4dk:80 (ID:35263) post-xlate-fwd TRANSLATED (TCP)
Jul 26 14:25:22.111: default/xwing:57458 (ID:39228) &amp;lt;&amp;gt; default/deathstar-8c4c77fb7-2d4dk:80 (ID:35263) policy-verdict:none INGRESS DENIED (TCP Flags: SYN)
Jul 26 14:25:22.111: default/xwing:57458 (ID:39228) &amp;lt;&amp;gt; default/deathstar-8c4c77fb7-2d4dk:80 (ID:35263) Policy denied DROPPED (TCP Flags: SYN)
Jul 26 14:25:23.126: default/xwing:57458 (ID:39228) &amp;lt;&amp;gt; default/deathstar-8c4c77fb7-2d4dk:80 (ID:35263) policy-verdict:none INGRESS DENIED (TCP Flags: SYN)
Jul 26 14:25:23.126: default/xwing:57458 (ID:39228) &amp;lt;&amp;gt; default/deathstar-8c4c77fb7-2d4dk:80 (ID:35263) Policy denied DROPPED (TCP Flags: SYN)
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The allow/deny logs for L7 requests are also visible in the Hubble UI.&lt;/p&gt;
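&lt;p data-ke-size=&quot;size16&quot;&gt;The same summary the UI gives can also be recovered from captured CLI output. As a rough sketch (assuming the hubble observe output above was saved to a file; the sample lines are abridged), extracting the status codes from the http-response lines shows the allowed 200 and the denied 403:&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# Sample http-response flows (abridged from the output above)
printf '%s\n' \
  'http-response FORWARDED (HTTP/1.1 200 33ms (POST /v1/request-landing))' \
  'http-response FORWARDED (HTTP/1.1 403 0ms (PUT /v1/exhaust-port))' &amp;gt; l7.txt

# Extract the HTTP status code from each response flow
grep -o 'HTTP/1.1 [0-9]*' l7.txt&lt;/code&gt;&lt;/pre&gt;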
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;2202&quot; data-origin-height=&quot;1227&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bR7oGI/btsPCqtIJQK/q2kxh3XW1mRyIqNnvnXXb0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bR7oGI/btsPCqtIJQK/q2kxh3XW1mRyIqNnvnXXb0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bR7oGI/btsPCqtIJQK/q2kxh3XW1mRyIqNnvnXXb0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbR7oGI%2FbtsPCqtIJQK%2Fq2kxh3XW1mRyIqNnvnXXb0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2202&quot; height=&quot;1227&quot; data-origin-width=&quot;2202&quot; data-origin-height=&quot;1227&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In this exercise, we explored network visibility using the Hubble CLI and UI.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Before moving on, let's delete the resources created for this exercise.&lt;/p&gt;
&lt;pre class=&quot;awk&quot;&gt;&lt;code&gt;# Delete resources before the next exercise
kubectl delete -f https://raw.githubusercontent.com/cilium/cilium/1.17.6/examples/minikube/http-sw-app.yaml
kubectl delete cnp rule1

# Verify deletion
kubectl get cnp&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;4. Integrating Cilium/Hubble with Prometheus&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Finally, to examine Cilium and Hubble metrics, let's integrate them with Prometheus.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;We'll install Prometheus and Grafana in the cluster using the add-on provided by Cilium.&lt;/p&gt;
&lt;pre class=&quot;sql&quot;&gt;&lt;code&gt;# Install Prometheus and Grafana
kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/1.17.6/examples/kubernetes/addons/prometheus/monitoring-example.yaml

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/1.17.6/examples/kubernetes/addons/prometheus/monitoring-example.yaml
namespace/cilium-monitoring created
serviceaccount/prometheus-k8s created
configmap/grafana-config created
configmap/grafana-cilium-dashboard created
configmap/grafana-cilium-operator-dashboard created
configmap/grafana-hubble-dashboard created
configmap/grafana-hubble-l7-http-metrics-by-workload created
configmap/prometheus created
clusterrole.rbac.authorization.k8s.io/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
service/grafana created
service/prometheus created
deployment.apps/grafana created
deployment.apps/prometheus created

# Verify the installation
kubectl get deploy,pod,svc,ep -n cilium-monitoring

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get deploy,pod,svc,ep -n cilium-monitoring
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/grafana      1/1     1            1           66s
deployment.apps/prometheus   1/1     1            1           66s

NAME                              READY   STATUS    RESTARTS   AGE
pod/grafana-5c69859d9-hjlr5       1/1     Running   0          66s
pod/prometheus-6fc896bc5d-v9fqx   1/1     Running   0          66s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/grafana      ClusterIP   10.96.82.74     &amp;lt;none&amp;gt;        3000/TCP   66s
service/prometheus   ClusterIP   10.96.222.237   &amp;lt;none&amp;gt;        9090/TCP   66s

NAME                   ENDPOINTS           AGE
endpoints/grafana      172.20.2.97:3000    66s
endpoints/prometheus   172.20.2.119:9090   66s

kubectl get cm -n cilium-monitoring

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get cm -n cilium-monitoring
NAME                                         DATA   AGE
grafana-cilium-dashboard                     1      92s
grafana-cilium-operator-dashboard            1      91s
grafana-config                               3      92s
grafana-hubble-dashboard                     1      91s
grafana-hubble-l7-http-metrics-by-workload   1      91s
kube-root-ca.crt                             1      92s
prometheus                                   1      91s

# ConfigMaps that inject the Grafana dashboards for Cilium and Hubble are created, as shown below.
kubectl describe cm -n cilium-monitoring grafana-cilium-dashboard
kubectl describe cm -n cilium-monitoring grafana-hubble-dashboard

# Expose the services as NodePort for access.
kubectl patch svc -n cilium-monitoring prometheus -p '{&quot;spec&quot;: {&quot;type&quot;: &quot;NodePort&quot;, &quot;ports&quot;: [{&quot;port&quot;: 9090, &quot;targetPort&quot;: 9090, &quot;nodePort&quot;: 30001}]}}'
kubectl patch svc -n cilium-monitoring grafana -p '{&quot;spec&quot;: {&quot;type&quot;: &quot;NodePort&quot;, &quot;ports&quot;: [{&quot;port&quot;: 3000, &quot;targetPort&quot;: 3000, &quot;nodePort&quot;: 30002}]}}'

# Check the access URLs
# http://192.168.10.100:30001  # prometheus
# http://192.168.10.100:30002  # grafana&lt;/code&gt;&lt;/pre&gt;
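&lt;p data-ke-size=&quot;size16&quot;&gt;The payloads passed to &lt;code&gt;kubectl patch&lt;/code&gt; above are plain JSON merge patches. As a side note, the same payload can be built programmatically; a minimal sketch using the port and nodePort values from the commands above:&lt;/p&gt;
&lt;pre class=&quot;python&quot;&gt;&lt;code&gt;import json

def nodeport_patch(port, node_port):
    &quot;&quot;&quot;Build the JSON merge patch used with kubectl patch svc.&quot;&quot;&quot;
    patch = {
        &quot;spec&quot;: {
            &quot;type&quot;: &quot;NodePort&quot;,
            &quot;ports&quot;: [{&quot;port&quot;: port, &quot;targetPort&quot;: port, &quot;nodePort&quot;: node_port}],
        }
    }
    return json.dumps(patch)

# Patch payload for the prometheus service above
print(nodeport_patch(9090, 30001))&lt;/code&gt;&lt;/pre&gt;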
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;When you log in to Grafana, the dashboards are already provisioned, as shown below.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;2213&quot; data-origin-height=&quot;968&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/1fcIb/btsPBHvV4ej/q3GLJZubkLKee7tYkE6amk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/1fcIb/btsPBHvV4ej/q3GLJZubkLKee7tYkE6amk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/1fcIb/btsPBHvV4ej/q3GLJZubkLKee7tYkE6amk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2F1fcIb%2FbtsPBHvV4ej%2Fq3GLJZubkLKee7tYkE6amk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2213&quot; height=&quot;968&quot; data-origin-width=&quot;2213&quot; data-origin-height=&quot;968&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;By default, Cilium and Hubble do not expose metrics. For this exercise, the following options were set to enable them.&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;code&gt;prometheus.enabled=true&lt;/code&gt;: Enables metrics for &lt;code&gt;cilium-agent&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;operator.prometheus.enabled=true&lt;/code&gt;: Enables metrics for &lt;code&gt;cilium-operator&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;hubble.metrics.enabled&lt;/code&gt;: Enables the provided list of Hubble metrics.&lt;/li&gt;
&lt;/ul&gt;
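&lt;p data-ke-size=&quot;size16&quot;&gt;For reference, when Cilium is installed via Helm, these switches map to chart values. A minimal values fragment might look like the following (the Hubble metric list here is an example selection, not the exact one used in this post):&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Sketch: Helm chart values enabling the options above.
# The hubble.metrics.enabled list is an example, not the list used in this post.
prometheus:
  enabled: true
operator:
  prometheus:
    enabled: true
hubble:
  metrics:
    enabled:
      - dns
      - drop
      - tcp
      - flow
      - icmp
      - http&lt;/code&gt;&lt;/pre&gt;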
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;With these options enabled, the following ports are in the LISTEN state on each node.&lt;/p&gt;
&lt;pre class=&quot;lisp&quot;&gt;&lt;code&gt;(⎈|HomeLab:N/A) root@k8s-ctr:~# ss -tnlp | grep -E '9962|9963|9965'
LISTEN 0      4096                *:9963             *:*    users:((&quot;cilium-operator&quot;,pid=12643,fd=7)) # cilium-operator metrics
LISTEN 0      4096                *:9962             *:*    users:((&quot;cilium-agent&quot;,pid=9318,fd=7)) # cilium-agent metrics
LISTEN 0      4096                *:9965             *:*    users:((&quot;cilium-agent&quot;,pid=9318,fd=39)) # hubble metrics&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;You can also see the annotations that configure Prometheus metric scraping.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl describe pod -n kube-system -l k8s-app=cilium | grep prometheus
                      prometheus.io/port: 9962
                      prometheus.io/scrape: true
                      prometheus.io/port: 9962
                      prometheus.io/scrape: true
                      prometheus.io/port: 9962
                      prometheus.io/scrape: true
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl describe pod -n kube-system -l name=cilium-operator | grep prometheus
Annotations:          prometheus.io/port: 9963
                      prometheus.io/scrape: true
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl describe svc -n kube-system hubble-metrics |grep prometheus
                          prometheus.io/port: 9965
                          prometheus.io/scrape: true&lt;/code&gt;&lt;/pre&gt;
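&lt;p data-ke-size=&quot;size16&quot;&gt;Prometheus uses these annotations during Kubernetes service discovery to decide which endpoints to scrape. A rough sketch of that selection logic, with hypothetical pod data, looks like this:&lt;/p&gt;
&lt;pre class=&quot;python&quot;&gt;&lt;code&gt;# Sketch of annotation-based scrape-target selection (hypothetical pod data).
pods = [
    {&quot;name&quot;: &quot;cilium-abc12&quot;, &quot;ip&quot;: &quot;192.168.10.101&quot;,
     &quot;annotations&quot;: {&quot;prometheus.io/scrape&quot;: &quot;true&quot;, &quot;prometheus.io/port&quot;: &quot;9962&quot;}},
    {&quot;name&quot;: &quot;coredns-xyz&quot;, &quot;ip&quot;: &quot;172.20.0.5&quot;, &quot;annotations&quot;: {}},
]

def scrape_targets(pods):
    &quot;&quot;&quot;Return ip:port targets for pods annotated for scraping.&quot;&quot;&quot;
    targets = []
    for pod in pods:
        ann = pod[&quot;annotations&quot;]
        if ann.get(&quot;prometheus.io/scrape&quot;) == &quot;true&quot;:
            targets.append(pod[&quot;ip&quot;] + &quot;:&quot; + ann[&quot;prometheus.io/port&quot;])
    return targets

print(scrape_targets(pods))  # ['192.168.10.101:9962']&lt;/code&gt;&lt;/pre&gt;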
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;These metrics can then be viewed on the Grafana dashboards.&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Cilium Metrics&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;2190&quot; data-origin-height=&quot;1095&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/B3j16/btsPBkOw95M/XyKvkbCZh4dClfKAI6o7y0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/B3j16/btsPBkOw95M/XyKvkbCZh4dClfKAI6o7y0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/B3j16/btsPBkOw95M/XyKvkbCZh4dClfKAI6o7y0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FB3j16%2FbtsPBkOw95M%2FXyKvkbCZh4dClfKAI6o7y0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2190&quot; height=&quot;1095&quot; data-origin-width=&quot;2190&quot; data-origin-height=&quot;1095&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Hubble&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;2205&quot; data-origin-height=&quot;1057&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/vMZSR/btsPBlNqCQy/FbrPrkiSu6dbY9hCWomyF0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/vMZSR/btsPBlNqCQy/FbrPrkiSu6dbY9hCWomyF0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/vMZSR/btsPBlNqCQy/FbrPrkiSu6dbY9hCWomyF0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FvMZSR%2FbtsPBlNqCQy%2FFbrPrkiSu6dbY9hCWomyF0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2205&quot; height=&quot;1057&quot; data-origin-width=&quot;2205&quot; data-origin-height=&quot;1057&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Wrapping Up&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In this post, we examined observability in Cilium and explored Hubble. We also installed and integrated Prometheus and Grafana to expose and monitor Cilium and Hubble metrics.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the next post, we'll take a detailed look at pod communication in Cilium.&lt;/p&gt;</description>
      <category>Cilium</category>
      <category>cilium</category>
      <category>grafana</category>
      <category>Hubble</category>
      <category>kubernetes</category>
      <category>prometheus</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/50</guid>
      <comments>https://a-person.tistory.com/50#entry50comment</comments>
      <pubDate>Sun, 27 Jul 2025 00:13:25 +0900</pubDate>
    </item>
    <item>
      <title>[1-2] Checking the Cilium Environment and Communication</title>
      <link>https://a-person.tistory.com/49</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;In this post, we'll take a look at the Cilium CNI plugin.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;First, we'll set up the lab environment and install Flannel to see how the networking Kubernetes provides is actually implemented. Then, after some background on Cilium, we'll install it and observe through hands-on exercises how the environment changes.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This post organizes what I learned from the guide provided while participating in the Cilium study run by CloudNet.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Continuing from the previous post, we'll pick up from &lt;code&gt;5. Checking the Cilium Environment&lt;/code&gt;.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;5. Checking the Cilium Environment&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's install the Cilium CLI and inspect the Cilium environment.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Install the Cilium CLI
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
if [ &quot;$(uname -m)&quot; = &quot;aarch64&quot; ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz &amp;gt;/dev/null 2&amp;gt;&amp;amp;1
tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz

# Check Cilium status
(⎈|HomeLab:N/A) root@k8s-ctr:~# which cilium
/usr/local/bin/cilium
(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium status
    /&amp;macr;&amp;macr;\
 /&amp;macr;&amp;macr;\__/&amp;macr;&amp;macr;\    Cilium:             OK
 \__/&amp;macr;&amp;macr;\__/    Operator:           OK
 /&amp;macr;&amp;macr;\__/&amp;macr;&amp;macr;\    Envoy DaemonSet:    OK
 \__/&amp;macr;&amp;macr;\__/    Hubble Relay:       disabled
    \__/       ClusterMesh:        disabled

DaemonSet              cilium                   Desired: 3, Ready: 3/3, Available: 3/3
DaemonSet              cilium-envoy             Desired: 3, Ready: 3/3, Available: 3/3
Deployment             cilium-operator          Desired: 2, Ready: 2/2, Available: 2/2
Containers:            cilium                   Running: 3
                       cilium-envoy             Running: 3
                       cilium-operator          Running: 2
                       clustermesh-apiserver
                       hubble-relay
Cluster Pods:          5/5 managed by Cilium
Helm chart version:    1.17.5
Image versions         cilium             quay.io/cilium/cilium:v1.17.5@sha256:baf8541723ee0b72d6c489c741c81a6fdc5228940d66cb76ef5ea2ce3c639ea6: 3
                       cilium-envoy       quay.io/cilium/cilium-envoy:v1.32.6-1749271279-0864395884b263913eac200ee2048fd985f8e626@sha256:9f69e290a7ea3d4edf9192acd81694089af048ae0d8a67fb63bd62dc1d72203e: 3
                       cilium-operator    quay.io/cilium/operator-generic:v1.17.5@sha256:f954c97eeb1b47ed67d08cc8fb4108fb829f869373cbb3e698a7f8ef1085b09e: 2
(⎈|HomeLab:N/A) root@k8s-ctr:~# cilium config view
agent-not-ready-taint-key                         node.cilium.io/agent-not-ready
arping-refresh-period                             30s
auto-direct-node-routes                           true
bpf-distributed-lru                               false
bpf-events-drop-enabled                           true
bpf-events-policy-verdict-enabled                 true
bpf-events-trace-enabled                          true
bpf-lb-acceleration                               disabled
bpf-lb-algorithm-annotation                       false
bpf-lb-external-clusterip                         false
bpf-lb-map-max                                    65536
bpf-lb-mode-annotation                            false
bpf-lb-sock                                       false
bpf-lb-source-range-all-types                     false
bpf-map-dynamic-size-ratio                        0.0025
bpf-policy-map-max                                16384
bpf-root                                          /sys/fs/bpf
cgroup-root                                       /run/cilium/cgroupv2
cilium-endpoint-gc-interval                       5m0s
cluster-id                                        0
cluster-name                                      default
cluster-pool-ipv4-cidr                            172.20.0.0/16
cluster-pool-ipv4-mask-size                       24
clustermesh-enable-endpoint-sync                  false
clustermesh-enable-mcs-api                        false
cni-exclusive                                     true
cni-log-file                                      /var/run/cilium/cilium-cni.log
custom-cni-conf                                   false
datapath-mode                                     veth
debug                                             false
debug-verbose
default-lb-service-ipam                           lbipam
direct-routing-skip-unreachable                   false
dnsproxy-enable-transparent-mode                  true
dnsproxy-socket-linger-timeout                    10
egress-gateway-reconciliation-trigger-interval    1s
enable-auto-protect-node-port-range               true
enable-bpf-clock-probe                            false
enable-bpf-masquerade                             true
enable-endpoint-health-checking                   true
enable-endpoint-lockdown-on-policy-overflow       false
enable-endpoint-routes                            true
enable-experimental-lb                            false
enable-health-check-loadbalancer-ip               false
enable-health-check-nodeport                      true
enable-health-checking                            true
enable-hubble                                     true
enable-internal-traffic-policy                    true
enable-ipv4                                       true
enable-ipv4-big-tcp                               false
enable-ipv4-masquerade                            true
enable-ipv6                                       false
enable-ipv6-big-tcp                               false
enable-ipv6-masquerade                            true
enable-k8s-networkpolicy                          true
enable-k8s-terminating-endpoint                   true
enable-l2-neigh-discovery                         true
enable-l7-proxy                                   true
enable-lb-ipam                                    true
enable-local-redirect-policy                      false
enable-masquerade-to-route-source                 false
enable-metrics                                    true
enable-node-selector-labels                       false
enable-non-default-deny-policies                  true
enable-policy                                     default
enable-policy-secrets-sync                        true
enable-runtime-device-detection                   true
enable-sctp                                       false
enable-source-ip-verification                     true
enable-svc-source-range-check                     true
enable-tcx                                        true
enable-vtep                                       false
enable-well-known-identities                      false
enable-xt-socket-fallback                         true
envoy-access-log-buffer-size                      4096
envoy-base-id                                     0
envoy-keep-cap-netbindservice                     false
external-envoy-proxy                              true
health-check-icmp-failure-threshold               3
http-retry-count                                  3
hubble-disable-tls                                false
hubble-export-file-max-backups                    5
hubble-export-file-max-size-mb                    10
hubble-listen-address                             :4244
hubble-socket-path                                /var/run/cilium/hubble.sock
hubble-tls-cert-file                              /var/lib/cilium/tls/hubble/server.crt
hubble-tls-client-ca-files                        /var/lib/cilium/tls/hubble/client-ca.crt
hubble-tls-key-file                               /var/lib/cilium/tls/hubble/server.key
identity-allocation-mode                          crd
identity-gc-interval                              15m0s
identity-heartbeat-timeout                        30m0s
install-no-conntrack-iptables-rules               true
ipam                                              cluster-pool
ipam-cilium-node-update-rate                      15s
iptables-random-fully                             false
ipv4-native-routing-cidr                          172.20.0.0/16
k8s-require-ipv4-pod-cidr                         false
k8s-require-ipv6-pod-cidr                         false
kube-proxy-replacement                            true
kube-proxy-replacement-healthz-bind-address
max-connected-clusters                            255
mesh-auth-enabled                                 true
mesh-auth-gc-interval                             5m0s
mesh-auth-queue-size                              1024
mesh-auth-rotated-identities-queue-size           1024
monitor-aggregation                               medium
monitor-aggregation-flags                         all
monitor-aggregation-interval                      5s
nat-map-stats-entries                             32
nat-map-stats-interval                            30s
node-port-bind-protection                         true
nodeport-addresses
nodes-gc-interval                                 5m0s
operator-api-serve-addr                           127.0.0.1:9234
operator-prometheus-serve-addr                    :9963
policy-cidr-match-mode
policy-secrets-namespace                          cilium-secrets
policy-secrets-only-from-secrets-namespace        true
preallocate-bpf-maps                              false
procfs                                            /host/proc
proxy-connect-timeout                             2
proxy-idle-timeout-seconds                        60
proxy-initial-fetch-timeout                       30
proxy-max-concurrent-retries                      128
proxy-max-connection-duration-seconds             0
proxy-max-requests-per-connection                 0
proxy-xff-num-trusted-hops-egress                 0
proxy-xff-num-trusted-hops-ingress                0
remove-cilium-node-taints                         true
routing-mode                                      native
service-no-backend-response                       reject
set-cilium-is-up-condition                        true
set-cilium-node-taints                            true
synchronize-k8s-nodes                             true
tofqdns-dns-reject-response-code                  refused
tofqdns-enable-dns-compression                    true
tofqdns-endpoint-max-ip-per-hostname              1000
tofqdns-idle-connection-grace-period              0s
tofqdns-max-deferred-connection-deletes           10000
tofqdns-proxy-response-max-delay                  100ms
tunnel-protocol                                   vxlan
tunnel-source-port-range                          0-0
unmanaged-pod-watcher-interval                    15
vtep-cidr
vtep-endpoint
vtep-mac
vtep-mask
write-cni-conf-when-ready                         /host/etc/cni/net.d/05-cilium.conflist
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get cm -n kube-system cilium-config -o json | jq
{
  &quot;apiVersion&quot;: &quot;v1&quot;,
  &quot;data&quot;: {
    &quot;agent-not-ready-taint-key&quot;: &quot;node.cilium.io/agent-not-ready&quot;,
    &quot;arping-refresh-period&quot;: &quot;30s&quot;,
    &quot;auto-direct-node-routes&quot;: &quot;true&quot;,
    &quot;bpf-distributed-lru&quot;: &quot;false&quot;,
    &quot;bpf-events-drop-enabled&quot;: &quot;true&quot;,
    &quot;bpf-events-policy-verdict-enabled&quot;: &quot;true&quot;,
    &quot;bpf-events-trace-enabled&quot;: &quot;true&quot;,
    &quot;bpf-lb-acceleration&quot;: &quot;disabled&quot;,
    &quot;bpf-lb-algorithm-annotation&quot;: &quot;false&quot;,
    &quot;bpf-lb-external-clusterip&quot;: &quot;false&quot;,
    &quot;bpf-lb-map-max&quot;: &quot;65536&quot;,
    &quot;bpf-lb-mode-annotation&quot;: &quot;false&quot;,
    &quot;bpf-lb-sock&quot;: &quot;false&quot;,
    &quot;bpf-lb-source-range-all-types&quot;: &quot;false&quot;,
    &quot;bpf-map-dynamic-size-ratio&quot;: &quot;0.0025&quot;,
    &quot;bpf-policy-map-max&quot;: &quot;16384&quot;,
    &quot;bpf-root&quot;: &quot;/sys/fs/bpf&quot;,
    &quot;cgroup-root&quot;: &quot;/run/cilium/cgroupv2&quot;,
    &quot;cilium-endpoint-gc-interval&quot;: &quot;5m0s&quot;,
    &quot;cluster-id&quot;: &quot;0&quot;,
    &quot;cluster-name&quot;: &quot;default&quot;,
    &quot;cluster-pool-ipv4-cidr&quot;: &quot;172.20.0.0/16&quot;,
    &quot;cluster-pool-ipv4-mask-size&quot;: &quot;24&quot;,
    &quot;clustermesh-enable-endpoint-sync&quot;: &quot;false&quot;,
    &quot;clustermesh-enable-mcs-api&quot;: &quot;false&quot;,
    &quot;cni-exclusive&quot;: &quot;true&quot;,
    &quot;cni-log-file&quot;: &quot;/var/run/cilium/cilium-cni.log&quot;,
    &quot;custom-cni-conf&quot;: &quot;false&quot;,
    &quot;datapath-mode&quot;: &quot;veth&quot;,
    &quot;debug&quot;: &quot;false&quot;,
    &quot;debug-verbose&quot;: &quot;&quot;,
    &quot;default-lb-service-ipam&quot;: &quot;lbipam&quot;,
    &quot;direct-routing-skip-unreachable&quot;: &quot;false&quot;,
    &quot;dnsproxy-enable-transparent-mode&quot;: &quot;true&quot;,
    &quot;dnsproxy-socket-linger-timeout&quot;: &quot;10&quot;,
    &quot;egress-gateway-reconciliation-trigger-interval&quot;: &quot;1s&quot;,
    &quot;enable-auto-protect-node-port-range&quot;: &quot;true&quot;,
    &quot;enable-bpf-clock-probe&quot;: &quot;false&quot;,
    &quot;enable-bpf-masquerade&quot;: &quot;true&quot;,
    &quot;enable-endpoint-health-checking&quot;: &quot;true&quot;,
    &quot;enable-endpoint-lockdown-on-policy-overflow&quot;: &quot;false&quot;,
    &quot;enable-endpoint-routes&quot;: &quot;true&quot;,
    &quot;enable-experimental-lb&quot;: &quot;false&quot;,
    &quot;enable-health-check-loadbalancer-ip&quot;: &quot;false&quot;,
    &quot;enable-health-check-nodeport&quot;: &quot;true&quot;,
    &quot;enable-health-checking&quot;: &quot;true&quot;,
    &quot;enable-hubble&quot;: &quot;true&quot;,
    &quot;enable-internal-traffic-policy&quot;: &quot;true&quot;,
    &quot;enable-ipv4&quot;: &quot;true&quot;,
    &quot;enable-ipv4-big-tcp&quot;: &quot;false&quot;,
    &quot;enable-ipv4-masquerade&quot;: &quot;true&quot;,
    &quot;enable-ipv6&quot;: &quot;false&quot;,
    &quot;enable-ipv6-big-tcp&quot;: &quot;false&quot;,
    &quot;enable-ipv6-masquerade&quot;: &quot;true&quot;,
    &quot;enable-k8s-networkpolicy&quot;: &quot;true&quot;,
    &quot;enable-k8s-terminating-endpoint&quot;: &quot;true&quot;,
    &quot;enable-l2-neigh-discovery&quot;: &quot;true&quot;,
    &quot;enable-l7-proxy&quot;: &quot;true&quot;,
    &quot;enable-lb-ipam&quot;: &quot;true&quot;,
    &quot;enable-local-redirect-policy&quot;: &quot;false&quot;,
    &quot;enable-masquerade-to-route-source&quot;: &quot;false&quot;,
    &quot;enable-metrics&quot;: &quot;true&quot;,
    &quot;enable-node-selector-labels&quot;: &quot;false&quot;,
    &quot;enable-non-default-deny-policies&quot;: &quot;true&quot;,
    &quot;enable-policy&quot;: &quot;default&quot;,
    &quot;enable-policy-secrets-sync&quot;: &quot;true&quot;,
    &quot;enable-runtime-device-detection&quot;: &quot;true&quot;,
    &quot;enable-sctp&quot;: &quot;false&quot;,
    &quot;enable-source-ip-verification&quot;: &quot;true&quot;,
    &quot;enable-svc-source-range-check&quot;: &quot;true&quot;,
    &quot;enable-tcx&quot;: &quot;true&quot;,
    &quot;enable-vtep&quot;: &quot;false&quot;,
    &quot;enable-well-known-identities&quot;: &quot;false&quot;,
    &quot;enable-xt-socket-fallback&quot;: &quot;true&quot;,
    &quot;envoy-access-log-buffer-size&quot;: &quot;4096&quot;,
    &quot;envoy-base-id&quot;: &quot;0&quot;,
    &quot;envoy-keep-cap-netbindservice&quot;: &quot;false&quot;,
    &quot;external-envoy-proxy&quot;: &quot;true&quot;,
    &quot;health-check-icmp-failure-threshold&quot;: &quot;3&quot;,
    &quot;http-retry-count&quot;: &quot;3&quot;,
    &quot;hubble-disable-tls&quot;: &quot;false&quot;,
    &quot;hubble-export-file-max-backups&quot;: &quot;5&quot;,
    &quot;hubble-export-file-max-size-mb&quot;: &quot;10&quot;,
    &quot;hubble-listen-address&quot;: &quot;:4244&quot;,
    &quot;hubble-socket-path&quot;: &quot;/var/run/cilium/hubble.sock&quot;,
    &quot;hubble-tls-cert-file&quot;: &quot;/var/lib/cilium/tls/hubble/server.crt&quot;,
    &quot;hubble-tls-client-ca-files&quot;: &quot;/var/lib/cilium/tls/hubble/client-ca.crt&quot;,
    &quot;hubble-tls-key-file&quot;: &quot;/var/lib/cilium/tls/hubble/server.key&quot;,
    &quot;identity-allocation-mode&quot;: &quot;crd&quot;,
    &quot;identity-gc-interval&quot;: &quot;15m0s&quot;,
    &quot;identity-heartbeat-timeout&quot;: &quot;30m0s&quot;,
    &quot;install-no-conntrack-iptables-rules&quot;: &quot;true&quot;,
    &quot;ipam&quot;: &quot;cluster-pool&quot;,
    &quot;ipam-cilium-node-update-rate&quot;: &quot;15s&quot;,
    &quot;iptables-random-fully&quot;: &quot;false&quot;,
    &quot;ipv4-native-routing-cidr&quot;: &quot;172.20.0.0/16&quot;,
    &quot;k8s-require-ipv4-pod-cidr&quot;: &quot;false&quot;,
    &quot;k8s-require-ipv6-pod-cidr&quot;: &quot;false&quot;,
    &quot;kube-proxy-replacement&quot;: &quot;true&quot;,
    &quot;kube-proxy-replacement-healthz-bind-address&quot;: &quot;&quot;,
    &quot;max-connected-clusters&quot;: &quot;255&quot;,
    &quot;mesh-auth-enabled&quot;: &quot;true&quot;,
    &quot;mesh-auth-gc-interval&quot;: &quot;5m0s&quot;,
    &quot;mesh-auth-queue-size&quot;: &quot;1024&quot;,
    &quot;mesh-auth-rotated-identities-queue-size&quot;: &quot;1024&quot;,
    &quot;monitor-aggregation&quot;: &quot;medium&quot;,
    &quot;monitor-aggregation-flags&quot;: &quot;all&quot;,
    &quot;monitor-aggregation-interval&quot;: &quot;5s&quot;,
    &quot;nat-map-stats-entries&quot;: &quot;32&quot;,
    &quot;nat-map-stats-interval&quot;: &quot;30s&quot;,
    &quot;node-port-bind-protection&quot;: &quot;true&quot;,
    &quot;nodeport-addresses&quot;: &quot;&quot;,
    &quot;nodes-gc-interval&quot;: &quot;5m0s&quot;,
    &quot;operator-api-serve-addr&quot;: &quot;127.0.0.1:9234&quot;,
    &quot;operator-prometheus-serve-addr&quot;: &quot;:9963&quot;,
    &quot;policy-cidr-match-mode&quot;: &quot;&quot;,
    &quot;policy-secrets-namespace&quot;: &quot;cilium-secrets&quot;,
    &quot;policy-secrets-only-from-secrets-namespace&quot;: &quot;true&quot;,
    &quot;preallocate-bpf-maps&quot;: &quot;false&quot;,
    &quot;procfs&quot;: &quot;/host/proc&quot;,
    &quot;proxy-connect-timeout&quot;: &quot;2&quot;,
    &quot;proxy-idle-timeout-seconds&quot;: &quot;60&quot;,
    &quot;proxy-initial-fetch-timeout&quot;: &quot;30&quot;,
    &quot;proxy-max-concurrent-retries&quot;: &quot;128&quot;,
    &quot;proxy-max-connection-duration-seconds&quot;: &quot;0&quot;,
    &quot;proxy-max-requests-per-connection&quot;: &quot;0&quot;,
    &quot;proxy-xff-num-trusted-hops-egress&quot;: &quot;0&quot;,
    &quot;proxy-xff-num-trusted-hops-ingress&quot;: &quot;0&quot;,
    &quot;remove-cilium-node-taints&quot;: &quot;true&quot;,
    &quot;routing-mode&quot;: &quot;native&quot;,
    &quot;service-no-backend-response&quot;: &quot;reject&quot;,
    &quot;set-cilium-is-up-condition&quot;: &quot;true&quot;,
    &quot;set-cilium-node-taints&quot;: &quot;true&quot;,
    &quot;synchronize-k8s-nodes&quot;: &quot;true&quot;,
    &quot;tofqdns-dns-reject-response-code&quot;: &quot;refused&quot;,
    &quot;tofqdns-enable-dns-compression&quot;: &quot;true&quot;,
    &quot;tofqdns-endpoint-max-ip-per-hostname&quot;: &quot;1000&quot;,
    &quot;tofqdns-idle-connection-grace-period&quot;: &quot;0s&quot;,
    &quot;tofqdns-max-deferred-connection-deletes&quot;: &quot;10000&quot;,
    &quot;tofqdns-proxy-response-max-delay&quot;: &quot;100ms&quot;,
    &quot;tunnel-protocol&quot;: &quot;vxlan&quot;,
    &quot;tunnel-source-port-range&quot;: &quot;0-0&quot;,
    &quot;unmanaged-pod-watcher-interval&quot;: &quot;15&quot;,
    &quot;vtep-cidr&quot;: &quot;&quot;,
    &quot;vtep-endpoint&quot;: &quot;&quot;,
    &quot;vtep-mac&quot;: &quot;&quot;,
    &quot;vtep-mask&quot;: &quot;&quot;,
    &quot;write-cni-conf-when-ready&quot;: &quot;/host/etc/cni/net.d/05-cilium.conflist&quot;
  },
  &quot;kind&quot;: &quot;ConfigMap&quot;,
  &quot;metadata&quot;: {
    &quot;annotations&quot;: {
      &quot;meta.helm.sh/release-name&quot;: &quot;cilium&quot;,
      &quot;meta.helm.sh/release-namespace&quot;: &quot;kube-system&quot;
    },
    &quot;creationTimestamp&quot;: &quot;2025-07-18T14:36:16Z&quot;,
    &quot;labels&quot;: {
      &quot;app.kubernetes.io/managed-by&quot;: &quot;Helm&quot;
    },
    &quot;name&quot;: &quot;cilium-config&quot;,
    &quot;namespace&quot;: &quot;kube-system&quot;,
    &quot;resourceVersion&quot;: &quot;124258&quot;,
    &quot;uid&quot;: &quot;0e05248a-2db6-43aa-9332-b964915de26f&quot;
  }
}

# For reference, you can change settings with commands like the following.
cilium config set debug true &amp;amp;&amp;amp; watch kubectl get pod -A
cilium config view | grep -i debug


# cilium daemon = cilium-dbg
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -n kube-system -c cilium-agent -it ds/cilium -- cilium-dbg config
##### Read-write configurations #####
ConntrackAccounting               : Disabled
ConntrackLocal                    : Disabled
Debug                             : Disabled
DebugLB                           : Disabled
DropNotification                  : Enabled
MonitorAggregationLevel           : Medium
PolicyAccounting                  : Enabled
PolicyAuditMode                   : Disabled
PolicyTracing                     : Disabled
PolicyVerdictNotification         : Enabled
SourceIPVerification              : Enabled
TraceNotification                 : Enabled
MonitorNumPages                   : 64
PolicyEnforcement                 : default

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -n kube-system -c cilium-agent -it ds/cilium -- cilium-dbg status --verbose
KVStore:                Disabled
Kubernetes:             Ok         1.33 (v1.33.2) [linux/amd64]
Kubernetes APIs:        [&quot;EndpointSliceOrEndpoint&quot;, &quot;cilium/v2::CiliumClusterwideNetworkPolicy&quot;, &quot;cilium/v2::CiliumEndpoint&quot;, &quot;cilium/v2::CiliumNetworkPolicy&quot;, &quot;cilium/v2::CiliumNode&quot;, &quot;cilium/v2alpha1::CiliumCIDRGroup&quot;, &quot;core/v1::Namespace&quot;, &quot;core/v1::Pods&quot;, &quot;core/v1::Service&quot;, &quot;networking.k8s.io/v1::NetworkPolicy&quot;]
KubeProxyReplacement:   True   [eth0    10.0.2.15 fd17:625c:f037:2:a00:27ff:fe6b:69c9 fe80::a00:27ff:fe6b:69c9, eth1   192.168.10.101 fe80::a00:27ff:fe6d:8e42 (Direct Routing)]
Host firewall:          Disabled
SRv6:                   Disabled
CNI Chaining:           none
CNI Config file:        successfully wrote CNI configuration file to /host/etc/cni/net.d/05-cilium.conflist
Cilium:                 Ok   1.17.5 (v1.17.5-69aab28c)
NodeMonitor:            Listening for events on 2 CPUs with 64x4096 of shared memory
Cilium health daemon:   Ok
IPAM:                   IPv4: 5/254 allocated from 172.20.0.0/24,
Allocated addresses:
  172.20.0.157 (kube-system/coredns-674b8bbfcf-kw84j)
  172.20.0.178 (health)
  172.20.0.243 (kube-system/coredns-674b8bbfcf-2xwg2)
  172.20.0.89 (default/webpod-6c6d676d8c-m6x7k)
  172.20.0.95 (router)
IPv4 BIG TCP:           Disabled
IPv6 BIG TCP:           Disabled
BandwidthManager:       Disabled
Routing:                Network: Native   Host: BPF
Attach Mode:            TCX
Device Mode:            veth
Masquerading:           BPF   [eth0, eth1]   172.20.0.0/16 [IPv4: Enabled, IPv6: Disabled]
Clock Source for BPF:   ktime
Controller Status:      37/37 healthy
  Name                                                  Last success   Last error   Count   Message
  cilium-health-ep                                      5s ago         never        0       no error
  ct-map-pressure                                       5s ago         never        0       no error
  daemon-validate-config                                39s ago        never        0       no error
  dns-garbage-collector-job                             12s ago        never        0       no error
  endpoint-162-regeneration-recovery                    never          never        0       no error
  endpoint-235-regeneration-recovery                    never          never        0       no error
  endpoint-2913-regeneration-recovery                   never          never        0       no error
  endpoint-349-regeneration-recovery                    never          never        0       no error
  endpoint-703-regeneration-recovery                    never          never        0       no error
  endpoint-gc                                           4m13s ago      never        0       no error
  endpoint-periodic-regeneration                        1m13s ago      never        0       no error
  ep-bpf-prog-watchdog                                  5s ago         never        0       no error
  ipcache-inject-labels                                 9s ago         never        0       no error
  k8s-heartbeat                                         12s ago        never        0       no error
  link-cache                                            12s ago        never        0       no error
  local-identity-checkpoint                             28m59s ago     never        0       no error
  node-neighbor-link-updater                            5s ago         never        0       no error
  proxy-ports-checkpoint                                29m9s ago      never        0       no error
  resolve-identity-162                                  4m8s ago       never        0       no error
  resolve-identity-235                                  4m3s ago       never        0       no error
  resolve-identity-2913                                 3m48s ago      never        0       no error
  resolve-identity-349                                  4m9s ago       never        0       no error
  resolve-identity-703                                  4m3s ago       never        0       no error
  resolve-labels-default/webpod-6c6d676d8c-m6x7k        8m48s ago      never        0       no error
  resolve-labels-kube-system/coredns-674b8bbfcf-2xwg2   29m3s ago      never        0       no error
  resolve-labels-kube-system/coredns-674b8bbfcf-kw84j   29m3s ago      never        0       no error
  sync-lb-maps-with-k8s-services                        29m9s ago      never        0       no error
  sync-policymap-162                                    13m55s ago     never        0       no error
  sync-policymap-235                                    13m54s ago     never        0       no error
  sync-policymap-2913                                   8m48s ago      never        0       no error
  sync-policymap-349                                    13m56s ago     never        0       no error
  sync-policymap-703                                    13m55s ago     never        0       no error
  sync-to-k8s-ciliumendpoint (235)                      2s ago         never        0       no error
  sync-to-k8s-ciliumendpoint (2913)                     5s ago         never        0       no error
  sync-to-k8s-ciliumendpoint (703)                      2s ago         never        0       no error
  sync-utime                                            8s ago         never        0       no error
  write-cni-file                                        29m13s ago     never        0       no error
Proxy Status:            OK, ip 172.20.0.95, 0 redirects active on ports 10000-20000, Envoy: external
Global Identity Range:   min 256, max 65535
Hubble:                  Ok   Current/Max Flows: 4095/4095 (100.00%), Flows/s: 26.26   Metrics: Disabled
KubeProxyReplacement Details:
  Status:                 True
  Socket LB:              Enabled
  Socket LB Tracing:      Enabled
  Socket LB Coverage:     Full
  Devices:                eth0    10.0.2.15 fd17:625c:f037:2:a00:27ff:fe6b:69c9 fe80::a00:27ff:fe6b:69c9, eth1   192.168.10.101 fe80::a00:27ff:fe6d:8e42 (Direct Routing)
  Mode:                   SNAT
  Backend Selection:      Random
  Session Affinity:       Enabled
  Graceful Termination:   Enabled
  NAT46/64 Support:       Disabled
  XDP Acceleration:       Disabled
  Services:
  - ClusterIP:      Enabled
  - NodePort:       Enabled (Range: 30000-32767)
  - LoadBalancer:   Enabled
  - externalIPs:    Enabled
  - HostPort:       Enabled
  Annotations:
  - service.cilium.io/node
  - service.cilium.io/src-ranges-policy
  - service.cilium.io/type
BPF Maps:   dynamic sizing: on (ratio: 0.002500)
  Name                          Size
  Auth                          524288
  Non-TCP connection tracking   65536
  TCP connection tracking       131072
  Endpoint policy               65535
  IP cache                      512000
  IPv4 masquerading agent       16384
  IPv6 masquerading agent       16384
  IPv4 fragmentation            8192
  IPv4 service                  65536
  IPv6 service                  65536
  IPv4 service backend          65536
  IPv6 service backend          65536
  IPv4 service reverse NAT      65536
  IPv6 service reverse NAT      65536
  Metrics                       1024
  Ratelimit metrics             64
  NAT                           131072
  Neighbor table                131072
  Global policy                 16384
  Session affinity              65536
  Sock reverse NAT              65536
  Tunnel                        65536
Encryption:       Disabled
Cluster health:   3/3 reachable   (2025-07-18T15:04:46Z)
Name              IP              Node   Endpoints
  k8s-w1 (localhost):
    Host connectivity to 192.168.10.101:
      ICMP to stack:   OK, RTT=22.277984ms
      HTTP to agent:   OK, RTT=33.90763ms
    Endpoint connectivity to 172.20.0.178:
      ICMP to stack:   OK, RTT=1.595806ms
      HTTP to agent:   OK, RTT=4.287581ms
  k8s-ctr:
    Host connectivity to 192.168.10.100:
      ICMP to stack:   OK, RTT=4.257795ms
      HTTP to agent:   OK, RTT=11.35249ms
    Endpoint connectivity to 172.20.2.230:
      ICMP to stack:   OK, RTT=6.307589ms
      HTTP to agent:   OK, RTT=4.986504ms
  k8s-w2:
    Host connectivity to 192.168.10.102:
      ICMP to stack:   OK, RTT=1.68667ms
      HTTP to agent:   OK, RTT=2.677373ms
    Endpoint connectivity to 172.20.1.164:
      ICMP to stack:   OK, RTT=3.207928ms
      HTTP to agent:   OK, RTT=5.125123ms
Modules Health:
      agent
      ├── controlplane
      │   ├── auth
      │   │   ├── observer-job-auth-gc-identity-events            [OK] OK (2.842&amp;micro;s) [4] (7m50s, x1)
      │   │   ├── observer-job-auth-request-authentication        [OK] Primed (29m, x1)
      │   │   └── timer-job-auth-gc-cleanup                       [OK] OK (17.018&amp;micro;s) (4m13s, x1)
      │   ├── bgp-control-plane
      │   │   └── job-diffstore-events                            [OK] Running (29m, x2)
      │   ├── ciliumenvoyconfig
      │   │   └── experimental
      │   │       ├── job-reconcile                               [OK] OK, 0 object(s) (29m, x2)
      │   │       └── job-refresh                                 [OK] Next refresh in 30m0s (29m, x1)
      │   ├── daemon
      │   │   ├──                                                 [OK] daemon-validate-config (39s, x29)
      │   │   ├── ep-bpf-prog-watchdog
      │   │   │   └── ep-bpf-prog-watchdog                        [OK] ep-bpf-prog-watchdog (5s, x59)
      │   │   └── job-sync-hostips                                [OK] Synchronized (9s, x31)
      │   ├── dynamic-lifecycle-manager
      │   │   ├── job-reconcile                                   [OK] OK, 0 object(s) (29m, x2)
      │   │   └── job-refresh                                     [OK] Next refresh in 30m0s (29m, x1)
      │   ├── enabled-features
      │   │   └── job-update-config-metric                        [OK] Waiting for agent config (29m, x1)
      │   ├── endpoint-manager
      │   │   ├── cilium-endpoint-162 (/)
      │   │   │   ├── datapath-regenerate                         [OK] Endpoint regeneration successful (73s, x17)
      │   │   │   └── policymap-sync                              [OK] sync-policymap-162 (13m, x2)
      │   │   ├── cilium-endpoint-235 (kube-system/coredns-674b8bbfcf-kw84j)
      │   │   │   ├── cep-k8s-sync                                [OK] sync-to-k8s-ciliumendpoint (235) (2s, x176)
      │   │   │   ├── datapath-regenerate                         [OK] Endpoint regeneration successful (73s, x16)
      │   │   │   └── policymap-sync                              [OK] sync-policymap-235 (13m, x2)
      │   │   ├── cilium-endpoint-2913 (default/webpod-6c6d676d8c-m6x7k)
      │   │   │   ├── cep-k8s-sync                                [OK] sync-to-k8s-ciliumendpoint (2913) (5s, x54)
      │   │   │   ├── datapath-regenerate                         [OK] Endpoint regeneration successful (73s, x5)
      │   │   │   └── policymap-sync                              [OK] sync-policymap-2913 (8m48s, x1)
      │   │   ├── cilium-endpoint-349 (/)
      │   │   │   ├── datapath-regenerate                         [OK] Endpoint regeneration successful (73s, x17)
      │   │   │   └── policymap-sync                              [OK] sync-policymap-349 (13m, x2)
      │   │   ├── cilium-endpoint-703 (kube-system/coredns-674b8bbfcf-2xwg2)
      │   │   │   ├── cep-k8s-sync                                [OK] sync-to-k8s-ciliumendpoint (703) (2s, x176)
      │   │   │   ├── datapath-regenerate                         [OK] Endpoint regeneration successful (73s, x16)
      │   │   │   └── policymap-sync                              [OK] sync-policymap-703 (13m, x2)
      │   │   └── endpoint-gc                                     [OK] endpoint-gc (4m13s, x6)
      │   ├── envoy-proxy
      │   │   ├── observer-job-k8s-secrets-resource-events-cilium-secrets    [OK] Primed (29m, x1)
      │   │   └── timer-job-version-check                         [OK] OK (5.893811ms) (4m9s, x1)
      │   ├── hubble
      │   │   └── job-hubble                                      [OK] Running (29m, x1)
      │   ├── identity
      │   │   └── timer-job-id-alloc-update-policy-maps           [OK] OK (5.555409ms) (7m50s, x1)
      │   ├── l2-announcer
      │   │   └── job-l2-announcer-lease-gc                       [OK] Running (29m, x1)
      │   ├── nat-stats
      │   │   └── timer-job-nat-stats                             [OK] OK (1.146975ms) (9s, x1)
      │   ├── node-manager
      │   │   ├── background-sync                                 [OK] Node validation successful (29s, x21)
      │   │   ├── neighbor-link-updater
      │   │   │   ├── k8s-ctr                                     [OK] Node neighbor link update successful (55s, x23)
      │   │   │   └── k8s-w2                                      [OK] Node neighbor link update successful (25s, x22)
      │   │   ├── node-checkpoint-writer                          [OK] node checkpoint written (27m, x3)
      │   │   ├── nodes-add                                       [OK] Node adds successful (28m, x3)
      │   │   └── nodes-update                                    [OK] Node updates successful (28m, x4)
      │   ├── policy
      │   │   └── observer-job-policy-importer                    [OK] Primed (29m, x1)
      │   ├── service-manager
      │   │   ├── job-health-check-event-watcher                  [OK] Waiting for health check events (29m, x1)
      │   │   └── job-service-reconciler                          [OK] 3 NodePort frontend addresses (29m, x1)
      │   ├── service-resolver
      │   │   └── job-service-reloader-initializer                [OK] Running (29m, x1)
      │   └── stale-endpoint-cleanup
      │       └── job-endpoint-cleanup                            [OK] Running (29m, x1)
      ├── datapath
      │   ├── agent-liveness-updater
      │   │   └── timer-job-agent-liveness-updater                [OK] OK (69.346&amp;micro;s) (0s, x1)
      │   ├── iptables
      │   │   ├── ipset
      │   │   │   ├── job-ipset-init-finalizer                    [OK] Running (29m, x1)
      │   │   │   ├── job-reconcile                               [OK] OK, 0 object(s) (29m, x3)
      │   │   │   └── job-refresh                                 [OK] Next refresh in 30m0s (29m, x1)
      │   │   └── job-iptables-reconciliation-loop                [OK] iptables rules full reconciliation completed (29m, x1)
      │   ├── l2-responder
      │   │   └── job-l2-responder-reconciler                     [OK] Running (29m, x1)
      │   ├── maps
      │   │   └── bwmap
      │   │       └── timer-job-pressure-metric-throttle          [OK] OK (2.505&amp;micro;s) (9s, x1)
      │   ├── mtu
      │   │   ├── job-endpoint-mtu-updater                        [OK] Endpoint MTU updated (29m, x1)
      │   │   └── job-mtu-updater                                 [OK] MTU updated (1500) (29m, x1)
      │   ├── node-address
      │   │   └── job-node-address-update                         [OK] 172.20.0.95 (primary), fe80::2c6e:e7ff:fe46:b9c3 (primary) (29m, x1)
      │   ├── orchestrator
      │   │   └── job-reinitialize                                [OK] OK (28m, x2)
      │   └── sysctl
      │       ├── job-reconcile                                   [OK] OK, 16 object(s) (8m52s, x40)
      │       └── job-refresh                                     [OK] Next refresh in 9m39.13838503s (8m52s, x1)
      └── infra
          ├── k8s-synced-crdsync
          │   └── job-sync-crds                                   [OK] Running (29m, x1)
          ├── metrics
          │   ├── job-collect                                     [OK] Sampled 24 metrics in 3.530564ms, next collection at 2025-07-18 15:07:33.389126136 +0000 UTC m=+1806.946221479 (4m9s, x1)
          │   └── timer-job-cleanup                               [OK] Primed (29m, x1)
          └── shell
              └── job-listener                                    [OK] Listening on /var/run/cilium/shell.sock (29m, x1)
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's check the actual interface configuration on the nodes.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;(⎈|HomeLab:N/A) root@k8s-ctr:~# ip -c addr
1: lo: &amp;lt;LOOPBACK,UP,LOWER_UP&amp;gt; mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: eth0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
    altname enp0s3
    inet 10.0.2.15/24 metric 100 brd 10.0.2.255 scope global dynamic eth0
       valid_lft 76631sec preferred_lft 76631sec
    inet6 fd17:625c:f037:2:a00:27ff:fe6b:69c9/64 scope global dynamic mngtmpaddr noprefixroute
       valid_lft 86026sec preferred_lft 14026sec
    inet6 fe80::a00:27ff:fe6b:69c9/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:8f:41:1f brd ff:ff:ff:ff:ff:ff
    altname enp0s8
    inet 192.168.10.100/24 brd 192.168.10.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe8f:411f/64 scope link
       valid_lft forever preferred_lft forever
7: cilium_net@cilium_host: &amp;lt;BROADCAST,MULTICAST,NOARP,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether aa:17:85:8a:b8:37 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::a817:85ff:fe8a:b837/64 scope link
       valid_lft forever preferred_lft forever
8: cilium_host@cilium_net: &amp;lt;BROADCAST,MULTICAST,NOARP,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether de:9b:6c:2e:37:30 brd ff:ff:ff:ff:ff:ff
    inet 172.20.2.243/32 scope global cilium_host
       valid_lft forever preferred_lft forever
    inet6 fe80::dc9b:6cff:fe2e:3730/64 scope link
       valid_lft forever preferred_lft forever
10: lxc_health@if9: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 12:c1:f0:5f:26:f3 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::10c1:f0ff:fe5f:26f3/64 scope link
       valid_lft forever preferred_lft forever
12: lxc016a621522d5@if11: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether f6:f1:82:c0:5f:34 brd ff:ff:ff:ff:ff:ff link-netns cni-0e3d3059-2216-292c-fe61-8273101bd4d8
    inet6 fe80::f4f1:82ff:fec0:5f34/64 scope link
       valid_lft forever preferred_lft forever&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Interfaces such as cilium_net, cilium_host, and lxc_health have been added, and veth interfaces of the lxc*@if* form are visible as well.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;First, cilium_net and cilium_host are set up as follows. A cilium-agent runs on each host and handles local IPAM. It creates a veth pair, cilium_host &amp;lt;---&amp;gt; cilium_net, and assigns the first IP of the node's PodCIDR to cilium_host, which then serves as the gateway for that CIDR.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;When a pod starts, the CNI plugin allocates an IP, creates a veth pair, and configures the IP and gateway inside the pod.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The result is the layout shown below, but unlike other CNIs there is no OVS or Linux bridge, and no separate ARP entries either. All communication is handled by BPF: BPF programs are attached between the components and executed to forward the traffic.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;550&quot; data-origin-height=&quot;440&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/wbXUk/btsPp7oJ56A/kTSymzJ8yvJnBsP5TxuxhK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/wbXUk/btsPp7oJ56A/kTSymzJ8yvJnBsP5TxuxhK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/wbXUk/btsPp7oJ56A/kTSymzJ8yvJnBsP5TxuxhK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FwbXUk%2FbtsPp7oJ56A%2FkTSymzJ8yvJnBsP5TxuxhK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;550&quot; height=&quot;440&quot; data-origin-width=&quot;550&quot; data-origin-height=&quot;440&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://arthurchiao.art/blog/ctrip-network-arch-evolution/&quot;&gt;https://arthurchiao.art/blog/ctrip-network-arch-evolution/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The lxc_health interface corresponds to the health address in the 172.20.0.0/24 range shown in the &lt;code&gt;cilium-dbg status --verbose&lt;/code&gt; output. Cilium nodes form a full mesh in which every node knows every other node, and cluster-wide health status is determined by health-checking each node; this interface is what those probes use.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;Name              IP              Node   Endpoints
  k8s-w1 (localhost):
    Host connectivity to 192.168.10.101:
      ICMP to stack:   OK, RTT=22.277984ms
      HTTP to agent:   OK, RTT=33.90763ms
    Endpoint connectivity to 172.20.0.178:
      ICMP to stack:   OK, RTT=1.595806ms
      HTTP to agent:   OK, RTT=4.287581ms
  k8s-ctr:
    Host connectivity to 192.168.10.100:
      ICMP to stack:   OK, RTT=4.257795ms
      HTTP to agent:   OK, RTT=11.35249ms
    Endpoint connectivity to 172.20.2.230:
      ICMP to stack:   OK, RTT=6.307589ms
      HTTP to agent:   OK, RTT=4.986504ms
  k8s-w2:
    Host connectivity to 192.168.10.102:
      ICMP to stack:   OK, RTT=1.68667ms
      HTTP to agent:   OK, RTT=2.677373ms
    Endpoint connectivity to 172.20.1.164:
      ICMP to stack:   OK, RTT=3.207928ms
      HTTP to agent:   OK, RTT=5.125123ms
      ..

# The health endpoint also appears in the cilium endpoint list
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium-dbg endpoint list | grep health
162        Disabled           Disabled          4          reserved:health                                                                     172.20.0.178   ready

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium-dbg status --all-addresses
KVStore:                Disabled
Kubernetes:             Ok         1.33 (v1.33.2) [linux/amd64]
Kubernetes APIs:        [&quot;EndpointSliceOrEndpoint&quot;, &quot;cilium/v2::CiliumClusterwideNetworkPolicy&quot;, &quot;cilium/v2::CiliumEndpoint&quot;, &quot;cilium/v2::CiliumNetworkPolicy&quot;, &quot;cilium/v2::CiliumNode&quot;, &quot;cilium/v2alpha1::CiliumCIDRGroup&quot;, &quot;core/v1::Namespace&quot;, &quot;core/v1::Pods&quot;, &quot;core/v1::Service&quot;, &quot;networking.k8s.io/v1::NetworkPolicy&quot;]
KubeProxyReplacement:   True   [eth0    fe80::a00:27ff:fe6b:69c9 10.0.2.15 fd17:625c:f037:2:a00:27ff:fe6b:69c9, eth1   192.168.10.101 fe80::a00:27ff:fe6d:8e42 (Direct Routing)]
Host firewall:          Disabled
SRv6:                   Disabled
CNI Chaining:           none
CNI Config file:        successfully wrote CNI configuration file to /host/etc/cni/net.d/05-cilium.conflist
Cilium:                 Ok   1.17.5 (v1.17.5-69aab28c)
NodeMonitor:            Listening for events on 2 CPUs with 64x4096 of shared memory
Cilium health daemon:   Ok
IPAM:                   IPv4: 5/254 allocated from 172.20.0.0/24,
Allocated addresses:
  172.20.0.157 (kube-system/coredns-674b8bbfcf-kw84j)
  172.20.0.178 (health)  # &amp;lt;-- health interface
  172.20.0.243 (kube-system/coredns-674b8bbfcf-2xwg2)
  172.20.0.89 (default/webpod-6c6d676d8c-m6x7k)
  172.20.0.95 (router)
IPv4 BIG TCP:            Disabled
IPv6 BIG TCP:            Disabled
BandwidthManager:        Disabled
Routing:                 Network: Native   Host: BPF
Attach Mode:             TCX
Device Mode:             veth
Masquerading:            BPF   [eth0, eth1]   172.20.0.0/16 [IPv4: Enabled, IPv6: Disabled]
Controller Status:       37/37 healthy
Proxy Status:            OK, ip 172.20.0.95, 0 redirects active on ports 10000-20000, Envoy: external
Global Identity Range:   min 256, max 65535
Hubble:                  Ok              Current/Max Flows: 4095/4095 (100.00%), Flows/s: 5.51   Metrics: Disabled
Encryption:              Disabled
Cluster health:          3/3 reachable   (2025-07-19T05:24:46Z)
Name                     IP              Node   Endpoints
Modules Health:          Stopped(0) Degraded(0) OK(61)&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For more detail on Cilium health checks, see the following document.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://arthurchiao.art/blog/cilium-code-health-probe/&quot;&gt;https://arthurchiao.art/blog/cilium-code-health-probe/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's check the routing table. Native routing is in use and &lt;code&gt;autoDirectNodeRoutes=true&lt;/code&gt; is set, so you can see that a route to each node's PodCIDR has been installed.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;(⎈|HomeLab:N/A) root@k8s-ctr:~# ip -c route | grep 172.20 | grep eth1
172.20.0.0/24 via 192.168.10.101 dev eth1 proto kernel
172.20.1.0/24 via 192.168.10.102 dev eth1 proto kernel

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get no -owide
NAME      STATUS   ROLES           AGE     VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
k8s-ctr   Ready    control-plane   5d19h   v1.33.2   192.168.10.100   &amp;lt;none&amp;gt;        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27
k8s-w1    Ready    &amp;lt;none&amp;gt;          5d19h   v1.33.2   192.168.10.101   &amp;lt;none&amp;gt;        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27
k8s-w2    Ready    &amp;lt;none&amp;gt;          5d19h   v1.33.2   192.168.10.102   &amp;lt;none&amp;gt;        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get po -owide
NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE      NOMINATED NODE   READINESS GATES
curl-pod                  1/1     Running   0          14h   172.20.2.94   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
webpod-6c6d676d8c-m6x7k   1/1     Running   0          14h   172.20.0.89   k8s-w1    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
webpod-6c6d676d8c-mjf7m   1/1     Running   0          14h   172.20.1.33   k8s-w2    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;&lt;/code&gt;&lt;/pre&gt;
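Those per-node /24 routes fully determine which node hosts a given pod IP. As a quick cross-check against the `kubectl get po -owide` output, here is a minimal bash sketch with the CIDR-to-node mapping hard-coded from the route table above (values are specific to this lab, not a general tool):

```shell
#!/usr/bin/env bash
# Map a pod IP to the node hosting it, using the per-node /24 PodCIDR
# routes shown above (lab-specific values).
node_for_pod_ip() {
  case "$(echo "$1" | cut -d. -f1-3)" in
    172.20.0) echo "k8s-w1" ;;   # 172.20.0.0/24 via 192.168.10.101
    172.20.1) echo "k8s-w2" ;;   # 172.20.1.0/24 via 192.168.10.102
    172.20.2) echo "k8s-ctr" ;;  # local PodCIDR on the control plane
    *)        echo "unknown" ;;
  esac
}

node_for_pod_ip 172.20.0.89   # webpod-6c6d676d8c-m6x7k -> k8s-w1
node_for_pod_ip 172.20.2.94   # curl-pod -> k8s-ctr
```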
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;One more thing to check: every pod that is not on hostNetwork has a CiliumEndpoint. These pods are wired through lxc interfaces, and because &lt;code&gt;endpointRoutes.enabled=true&lt;/code&gt; is set, a route is installed for each of them as well.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumendpoints -A
NAMESPACE     NAME                       SECURITY IDENTITY   ENDPOINT STATE   IPV4           IPV6
default       curl-pod                   63464               ready            172.20.2.94
default       webpod-6c6d676d8c-m6x7k    55697               ready            172.20.0.89
default       webpod-6c6d676d8c-mjf7m    55697               ready            172.20.1.33
kube-system   coredns-674b8bbfcf-2xwg2   53192               ready            172.20.0.243
kube-system   coredns-674b8bbfcf-kw84j   53192               ready            172.20.0.157

(⎈|HomeLab:N/A) root@k8s-ctr:~# ip -c route | grep lxc
172.20.2.94 dev lxc016a621522d5 proto kernel scope link
172.20.2.230 dev lxc_health proto kernel scope link&lt;/code&gt;&lt;/pre&gt;
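The one-route-per-endpoint pattern is easy to pick out mechanically: endpoint routes are host-scope entries whose device is an lxc* veth. A small sketch that filters such routes out of a routing table dump (sample lines copied from the output above; on a live node you would pipe `ip route` in instead):

```shell
#!/usr/bin/env bash
# Filter per-endpoint routes (one per lxc* veth) out of a routing
# table dump. Sample lines copied from the `ip route` output above.
routes='172.20.0.0/24 via 192.168.10.101 dev eth1 proto kernel
172.20.2.94 dev lxc016a621522d5 proto kernel scope link
172.20.2.230 dev lxc_health proto kernel scope link'

echo "$routes" | awk '$2 == "dev" && $3 ~ /^lxc/ { print $1, $3 }'
# -> 172.20.2.94 lxc016a621522d5
# -> 172.20.2.230 lxc_health
```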
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Finally, let's look at the information available through the cilium commands.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Note that the cilium CLI installed on an admin host, the cilium command run inside the DaemonSet on each node, and cilium-dbg are not the same thing.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The externally installed &lt;code&gt;cilium&lt;/code&gt; CLI is used for tasks such as checking the state of the Cilium cluster, changing its configuration, and upgrades. The &lt;code&gt;cilium&lt;/code&gt; command inside the DaemonSet is typically used to check the state of that node's Cilium agent and to inspect BPF maps, endpoints, and so on from that node's point of view. cilium-dbg can be thought of as the more debugging-oriented tool.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Set the variables below and check the Cilium information available on each node.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# Cilium pod names
export CILIUMPOD0=$(kubectl get -l k8s-app=cilium pods -n kube-system --field-selector spec.nodeName=k8s-ctr -o jsonpath='{.items[0].metadata.name}')
export CILIUMPOD1=$(kubectl get -l k8s-app=cilium pods -n kube-system --field-selector spec.nodeName=k8s-w1  -o jsonpath='{.items[0].metadata.name}')
export CILIUMPOD2=$(kubectl get -l k8s-app=cilium pods -n kube-system --field-selector spec.nodeName=k8s-w2  -o jsonpath='{.items[0].metadata.name}')
echo $CILIUMPOD0 $CILIUMPOD1 $CILIUMPOD2

# Define aliases
alias c0=&quot;kubectl exec -it $CILIUMPOD0 -n kube-system -c cilium-agent -- cilium&quot;
alias c1=&quot;kubectl exec -it $CILIUMPOD1 -n kube-system -c cilium-agent -- cilium&quot;
alias c2=&quot;kubectl exec -it $CILIUMPOD2 -n kube-system -c cilium-agent -- cilium&quot;

alias c0bpf=&quot;kubectl exec -it $CILIUMPOD0 -n kube-system -c cilium-agent -- bpftool&quot;
alias c1bpf=&quot;kubectl exec -it $CILIUMPOD1 -n kube-system -c cilium-agent -- bpftool&quot;
alias c2bpf=&quot;kubectl exec -it $CILIUMPOD2 -n kube-system -c cilium-agent -- bpftool&quot;


# endpoint
(⎈|HomeLab:N/A) root@k8s-ctr:~# c0 endpoint list
ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                              IPv6   IPv4           STATUS
           ENFORCEMENT        ENFORCEMENT
400        Disabled           Disabled          4          reserved:health                                                                 172.20.2.230   ready
1621       Disabled           Disabled          63464      k8s:app=curl                                                                    172.20.2.94    ready
                                                           k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default                             
                                                           k8s:io.cilium.k8s.policy.cluster=default                                                           
                                                           k8s:io.cilium.k8s.policy.serviceaccount=default                                                    
                                                           k8s:io.kubernetes.pod.namespace=default                                                            
3459       Disabled           Disabled          1          k8s:node-role.kubernetes.io/control-plane                                                      ready
                                                           k8s:node.kubernetes.io/exclude-from-external-load-balancers                                        
                                                           reserved:host                                                 
# c0 endpoint get &amp;lt;id&amp;gt;
(⎈|HomeLab:N/A) root@k8s-ctr:~# c0 endpoint get 1621
[
  {
    &quot;id&quot;: 1621,
    &quot;spec&quot;: {
      &quot;label-configuration&quot;: {},
      &quot;options&quot;: {
        &quot;ConntrackAccounting&quot;: &quot;Disabled&quot;,
        &quot;ConntrackLocal&quot;: &quot;Disabled&quot;,
        &quot;Debug&quot;: &quot;Disabled&quot;,
        &quot;DebugLB&quot;: &quot;Disabled&quot;,
        &quot;DebugPolicy&quot;: &quot;Disabled&quot;,
        &quot;DropNotification&quot;: &quot;Enabled&quot;,
        &quot;MonitorAggregationLevel&quot;: &quot;Medium&quot;,
        &quot;PolicyAccounting&quot;: &quot;Enabled&quot;,
        &quot;PolicyAuditMode&quot;: &quot;Disabled&quot;,
        &quot;PolicyVerdictNotification&quot;: &quot;Enabled&quot;,
        &quot;SourceIPVerification&quot;: &quot;Enabled&quot;,
        &quot;TraceNotification&quot;: &quot;Enabled&quot;
      }
    },
    &quot;status&quot;: {
      &quot;controllers&quot;: [
        {
          &quot;configuration&quot;: {
            &quot;error-retry&quot;: true,
            &quot;error-retry-base&quot;: &quot;2s&quot;,
            &quot;interval&quot;: &quot;1s&quot;
          },
          &quot;name&quot;: &quot;endpoint-1621-regeneration-recovery&quot;,
          &quot;status&quot;: {
            &quot;last-failure-timestamp&quot;: &quot;0001-01-01T00:00:00.000Z&quot;,
            &quot;last-success-timestamp&quot;: &quot;0001-01-01T00:00:00.000Z&quot;
          },
          &quot;uuid&quot;: &quot;a52ea6f2-652f-4757-8680-4a687d0f6376&quot;
        },
... 

# c0 endpoint log &amp;lt;id&amp;gt;
(⎈|HomeLab:N/A) root@k8s-ctr:~# c0 endpoint log 1621
Timestamp              Status   State                   Message
2025-07-19T07:12:44Z   OK       ready                   Successfully regenerated endpoint program (Reason: periodic endpoint regeneration)
2025-07-19T07:12:44Z   OK       ready                   Completed endpoint regeneration with no pending regeneration requests
2025-07-19T07:12:44Z   OK       regenerating            Regenerating endpoint: periodic endpoint regeneration
2025-07-19T07:12:44Z   OK       waiting-to-regenerate   Triggering endpoint regeneration due to periodic endpoint regeneration
2025-07-19T07:10:44Z   OK       ready                   Successfully regenerated endpoint program (Reason: periodic endpoint regeneration)
2025-07-19T07:10:44Z   OK       ready                   Completed endpoint regeneration with no pending regeneration requests
2025-07-19T07:10:44Z   OK       regenerating            Regenerating endpoint: periodic endpoint regeneration
...

## Enable debugging output on the cilium-dbg monitor for this endpoint
c1 endpoint config &amp;lt;id&amp;gt; Debug=true
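
## The debug option can be switched back off, and the current setting verified,
## in the same way (&amp;lt;id&amp;gt; is a placeholder; substitute a real endpoint ID)
c1 endpoint config &amp;lt;id&amp;gt; Debug=false
c1 endpoint config &amp;lt;id&amp;gt; | grep Debug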


# monitor : the node-local cilium command can monitor traffic by endpoint ID.
# c1 monitor
(⎈|HomeLab:N/A) root@k8s-ctr:~# c1 monitor
Listening for events on 2 CPUs with 64x4096 of shared memory
Press Ctrl-C to quit
time=&quot;2025-07-19T07:13:08.693252977Z&quot; level=info msg=&quot;Initializing dissection cache...&quot; subsys=monitor
-&amp;gt; network flow 0x467e89fe , identity host-&amp;gt;unknown state established ifindex eth1 orig-ip 192.168.10.101: 192.168.10.101:47636 -&amp;gt; 192.168.10.100:6443 tcp ACK
-&amp;gt; endpoint 162 flow 0xe359a112 , identity host-&amp;gt;health state established ifindex lxc_health orig-ip 10.0.2.15: 10.0.2.15:50108 -&amp;gt; 172.20.0.178:4240 tcp ACK
-&amp;gt; stack flow 0x3dc83a6b , identity health-&amp;gt;host state reply ifindex 0 orig-ip 0.0.0.0: 172.20.0.178:4240 -&amp;gt; 10.0.2.15:50108 tcp ACK
-&amp;gt; network flow 0x941ecd38 , identity host-&amp;gt;unknown state established ifindex eth1 orig-ip 192.168.10.101: 192.168.10.101:33218 -&amp;gt; 192.168.10.100:6443 tcp ACK
^C
Received an interrupt, disconnecting from monitor...

(⎈|HomeLab:N/A) root@k8s-ctr:~# c1 monitor -v
Listening for events on 2 CPUs with 64x4096 of shared memory
Press Ctrl-C to quit
CPU 01: [pre-xlate-rev] cgroup_id: 31139 sock_cookie: 93312, dst [127.0.0.1]:8080 tcp
CPU 01: [pre-xlate-rev] cgroup_id: 31139 sock_cookie: 93312, dst [127.0.0.1]:8080 tcp
CPU 01: [pre-xlate-rev] cgroup_id: 31139 sock_cookie: 93312, dst [127.0.0.1]:8080 tcp
CPU 01: [pre-xlate-rev] cgroup_id: 31139 sock_cookie: 96621, dst [127.0.0.1]:37478 tcp
CPU 01: [pre-xlate-rev] cgroup_id: 31139 sock_cookie: 96621, dst [127.0.0.1]:37478 tcp
CPU 01: [pre-xlate-rev] cgroup_id: 31139 sock_cookie: 96621, dst [127.0.0.1]:37478 tcp
CPU 01: [pre-xlate-rev] cgroup_id: 31139 sock_cookie: 93312, dst [127.0.0.1]:8080 tcp
CPU 01: [pre-xlate-rev] cgroup_id: 31139 sock_cookie: 93312, dst [127.0.0.1]:8080 tcp
CPU 01: [pre-xlate-rev] cgroup_id: 31139 sock_cookie: 96621, dst [127.0.0.1]:37478 tcp
CPU 01: [pre-xlate-rev] cgroup_id: 31139 sock_cookie: 96621, dst [127.0.0.1]:37478 tcp
CPU 01: [pre-xlate-rev] cgroup_id: 31067 sock_cookie: 93313, dst [127.0.0.1]:37486 tcp
CPU 01: [pre-xlate-rev] cgroup_id: 31067 sock_cookie: 96622, dst [127.0.0.1]:8080 tcp
CPU 01: [pre-xlate-rev] cgroup_id: 31067 sock_cookie: 96622, dst [127.0.0.1]:8080 tcp
CPU 01: [pre-xlate-rev] cgroup_id: 31067 sock_cookie: 96622, dst [127.0.0.1]:8080 tcp
CPU 01: [pre-xlate-rev] cgroup_id: 31067 sock_cookie: 96622, dst [127.0.0.1]:8080 tcp
CPU 01: [pre-xlate-rev] cgroup_id: 31067 sock_cookie: 93313, dst [127.0.0.1]:37486 tcp
CPU 01: [pre-xlate-rev] cgroup_id: 31067 sock_cookie: 93313, dst [127.0.0.1]:37486 tcp
CPU 01: [pre-xlate-rev] cgroup_id: 31067 sock_cookie: 96622, dst [127.0.0.1]:8080 tcp
CPU 01: [pre-xlate-rev] cgroup_id: 31067 sock_cookie: 93313, dst [127.0.0.1]:37486 tcp
CPU 01: [pre-xlate-rev] cgroup_id: 31067 sock_cookie: 93313, dst [127.0.0.1]:37486 tcp
^C
Received an interrupt, disconnecting from monitor...

(⎈|HomeLab:N/A) root@k8s-ctr:~# c1 monitor -v -v
Listening for events on 2 CPUs with 64x4096 of shared memory
Press Ctrl-C to quit
------------------------------------------------------------------------------
time=&quot;2025-07-19T07:13:32.741985494Z&quot; level=info msg=&quot;Initializing dissection cache...&quot; subsys=monitor
Ethernet        {Contents=[..14..] Payload=[..54..] SrcMAC=08:00:27:6d:8e:42 DstMAC=08:00:27:8f:41:1f EthernetType=IPv4 Length=0}
IPv4    {Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=23074 Flags=DF FragOffset=0 TTL=64 Protocol=TCP Checksum=26266 SrcIP=192.168.10.101 DstIP=172.20.2.230 Options=[] Padding=[]}
TCP     {Contents=[..32..] Payload=[] SrcPort=35882 DstPort=4240 Seq=4118593165 Ack=299035729 DataOffset=8 FIN=false SYN=false RST=false PSH=false ACK=true URG=false ECE=false CWR=false NS=false Window=502 Checksum=31278 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:3658499571/800153387 0xda1045f32fb15f2b)] Padding=[] Multipath=false}
CPU 01: MARK 0x62940d61 FROM 349 to-network: 66 bytes (66 captured), state established, interface eth1, , identity host-&amp;gt;unknown, orig-ip 192.168.10.101
------------------------------------------------------------------------------
Ethernet        {Contents=[..14..] Payload=[..118..] SrcMAC=02:4d:41:f0:66:b2 DstMAC=1e:e0:81:b2:25:81 EthernetType=IPv4 Length=0}
IPv4    {Contents=[..20..] Payload=[..98..] Version=4 IHL=5 TOS=0 Length=158 Id=50875 Flags=DF FragOffset=0 TTL=63 Protocol=TCP Checksum=64650 SrcIP=192.168.10.100 DstIP=172.20.0.243 Options=[] Padding=[]}
TCP     {Contents=[..32..] Payload=[..66..] SrcPort=6443(sun-sr-https) DstPort=43252 Seq=1858951582 Ack=2089994716 DataOffset=8 FIN=false SYN=false RST=false PSH=true ACK=true URG=false ECE=false CWR=false NS=false Window=543 Checksum=3855 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:1378041042/2778563329 0x52233cd2a59d8301)] Padding=[] Multipath=false}
  Packet has been truncated
CPU 01: MARK 0x0 FROM 703 to-endpoint: 172 bytes (128 captured), state reply, interface lxca73c3d5d01af, , identity kube-apiserver-&amp;gt;53192, orig-ip 192.168.10.100, to endpoint 703
------------------------------------------------------------------------------
Ethernet        {Contents=[..14..] Payload=[..54..] SrcMAC=08:00:27:6d:8e:42 DstMAC=08:00:27:8f:41:1f EthernetType=IPv4 Length=0}
IPv4    {Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=1716 Flags=DF FragOffset=0 TTL=63 Protocol=TCP Checksum=48380 SrcIP=172.20.0.243 DstIP=192.168.10.100 Options=[] Padding=[]}
TCP     {Contents=[..32..] Payload=[] SrcPort=43252 DstPort=6443(sun-sr-https) Seq=2089994716 Ack=1858951688 DataOffset=8 FIN=false SYN=false RST=false PSH=false ACK=true URG=false ECE=false CWR=false NS=false Window=868 Checksum=30778 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:2778581330/1378041042 0xa59dc95252233cd2)] Padding=[] Multipath=false}
CPU 01: MARK 0x2fbf3494 FROM 703 to-network: 66 bytes (66 captured), state established, interface eth1, , identity 53192-&amp;gt;kube-apiserver, orig-ip 0.0.0.0
------------------------------------------------------------------------------
Ethernet        {Contents=[..14..] Payload=[..54..] SrcMAC=5a:fc:8d:8c:b1:6a DstMAC=6e:13:17:b1:bb:69 EthernetType=IPv4 Length=0}
IPv4    {Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=967 Flags=DF FragOffset=0 TTL=63 Protocol=TCP Checksum=49192 SrcIP=192.168.10.102 DstIP=172.20.0.178 Options=[] Padding=[]}
TCP     {Contents=[..32..] Payload=[] SrcPort=37072 DstPort=4240 Seq=4056607986 Ack=1404168453 DataOffset=8 FIN=false SYN=false RST=false PSH=false ACK=true URG=false ECE=false CWR=false NS=false Window=502 Checksum=40580 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:2447361289/888346044 0x91dfc50934f315bc)] Padding=[] Multipath=false}
CPU 01: MARK 0x0 FROM 162 to-endpoint: 66 bytes (66 captured), state established, interface lxc_health, , identity remote-node-&amp;gt;health, orig-ip 192.168.10.102, to endpoint 162
------------------------------------------------------------------------------
Ethernet        {Contents=[..14..] Payload=[..54..] SrcMAC=08:00:27:6d:8e:42 DstMAC=08:00:27:99:d7:56 EthernetType=IPv4 Length=0}
IPv4    {Contents=[..20..] Payload=[..32..] Version=4 IHL=5 TOS=0 Length=52 Id=50595 Flags=DF FragOffset=0 TTL=63 Protocol=TCP Checksum=65099 SrcIP=172.20.0.178 DstIP=192.168.10.102 Options=[] Padding=[]}
TCP     {Contents=[..32..] Payload=[] SrcPort=4240 DstPort=37072 Seq=1404168453 Ack=4056607987 DataOffset=8 FIN=false SYN=false RST=false PSH=false ACK=true URG=false ECE=false CWR=false NS=false Window=509 Checksum=30715 Urgent=0 Options=[TCPOption(NOP:), TCPOption(NOP:), TCPOption(Timestamps:888376357/2447346174 0x34f38c2591df89fe)] Padding=[] Multipath=false}
CPU 01: MARK 0xe62cf042 FROM 162 to-network: 66 bytes (66 captured), state reply, interface eth1, , identity health-&amp;gt;remote-node, orig-ip 0.0.0.0
^C
Received an interrupt, disconnecting from monitor...

## Filter for only the events related to endpoint
c1 monitor --related-to=&amp;lt;id&amp;gt;

## Show notifications only for dropped packet events
c1 monitor --type drop

## Don&amp;rsquo;t dissect packet payload, display payload in hex information
c1 monitor -v -v --hex

## Layer7
c1 monitor -v --type l7


# Manage IP addresses and associated information - IP List
# c0 ip list
(⎈|HomeLab:N/A) root@k8s-ctr:~# c0 ip list
IP                  IDENTITY                                                                     SOURCE
0.0.0.0/0           reserved:world
10.0.2.15/32        reserved:host
                    reserved:kube-apiserver
172.20.0.89/32      k8s:app=webpod                                                               custom-resource
                    k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
                    k8s:io.cilium.k8s.policy.cluster=default
                    k8s:io.cilium.k8s.policy.serviceaccount=default
                    k8s:io.kubernetes.pod.namespace=default
172.20.0.95/32      reserved:remote-node
172.20.0.157/32     k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system   custom-resource
                    k8s:io.cilium.k8s.policy.cluster=default
                    k8s:io.cilium.k8s.policy.serviceaccount=coredns
                    k8s:io.kubernetes.pod.namespace=kube-system
                    k8s:k8s-app=kube-dns
172.20.0.178/32     reserved:health
172.20.0.243/32     k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system   custom-resource
                    k8s:io.cilium.k8s.policy.cluster=default
                    k8s:io.cilium.k8s.policy.serviceaccount=coredns
                    k8s:io.kubernetes.pod.namespace=kube-system
                    k8s:k8s-app=kube-dns
172.20.1.17/32      reserved:remote-node
172.20.1.33/32      k8s:app=webpod                                                               custom-resource
                    k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
                    k8s:io.cilium.k8s.policy.cluster=default
                    k8s:io.cilium.k8s.policy.serviceaccount=default
                    k8s:io.kubernetes.pod.namespace=default
172.20.1.164/32     reserved:health
172.20.2.94/32      k8s:app=curl                                                                 custom-resource
                    k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
                    k8s:io.cilium.k8s.policy.cluster=default
                    k8s:io.cilium.k8s.policy.serviceaccount=default
                    k8s:io.kubernetes.pod.namespace=default
172.20.2.230/32     reserved:health
172.20.2.243/32     reserved:host
                    reserved:kube-apiserver
192.168.10.100/32   reserved:host
                    reserved:kube-apiserver
192.168.10.101/32   reserved:remote-node
192.168.10.102/32   reserved:remote-node

# IDENTITY : 1(host), 2(world), 4(health), 6(remote-node); each pod gets its own ID
# c0 ip list -n
(⎈|HomeLab:N/A) root@k8s-ctr:~# c0 ip list -n
IP                  IDENTITY   SOURCE
0.0.0.0/0           2
10.0.2.15/32        1
172.20.0.89/32      55697      custom-resource
172.20.0.95/32      6
172.20.0.157/32     53192      custom-resource
172.20.0.178/32     4
172.20.0.243/32     53192      custom-resource
172.20.1.17/32      6
172.20.1.33/32      55697      custom-resource
172.20.1.164/32     4
172.20.2.94/32      63464      custom-resource
172.20.2.230/32     4
172.20.2.243/32     1
192.168.10.100/32   1
192.168.10.101/32   6
192.168.10.102/32   6

# Retrieve information about an identity
# c0 identity list
# Cilium identifies pods with a security identity derived from their labels. A NetworkPolicy selects pods via endpointSelector; that selection is resolved to an identity, and traffic is then allowed based on that identity.
(⎈|HomeLab:N/A) root@k8s-ctr:~# c0 identity list
ID      LABELS
1       reserved:host
        reserved:kube-apiserver
2       reserved:world
3       reserved:unmanaged
4       reserved:health
5       reserved:init
6       reserved:remote-node
7       reserved:kube-apiserver
        reserved:remote-node
8       reserved:ingress
9       reserved:world-ipv4
10      reserved:world-ipv6
53192   k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system
        k8s:io.cilium.k8s.policy.cluster=default
        k8s:io.cilium.k8s.policy.serviceaccount=coredns
        k8s:io.kubernetes.pod.namespace=kube-system
        k8s:k8s-app=kube-dns
55697   k8s:app=webpod
        k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
        k8s:io.cilium.k8s.policy.cluster=default
        k8s:io.cilium.k8s.policy.serviceaccount=default
        k8s:io.kubernetes.pod.namespace=default
63464   k8s:app=curl
        k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
        k8s:io.cilium.k8s.policy.cluster=default
        k8s:io.cilium.k8s.policy.serviceaccount=default
        k8s:io.kubernetes.pod.namespace=default


# Identities referenced by this node's endpoints
# c0 identity list --endpoints
(⎈|HomeLab:N/A) root@k8s-ctr:~# c0 identity list --endpoints
ID      LABELS                                                                   REFCOUNT
1       k8s:node-role.kubernetes.io/control-plane                                1
        k8s:node.kubernetes.io/exclude-from-external-load-balancers
        reserved:host
4       reserved:health                                                          1
63464   k8s:app=curl                                                             1
        k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
        k8s:io.cilium.k8s.policy.cluster=default
        k8s:io.cilium.k8s.policy.serviceaccount=default
        k8s:io.kubernetes.pod.namespace=default

# View and change endpoint configuration
c0 endpoint config &amp;lt;endpoint-id&amp;gt;

# Show endpoint details
c0 endpoint get &amp;lt;endpoint-id&amp;gt;

# Show endpoint logs
c0 endpoint log &amp;lt;endpoint-id&amp;gt;

# BPF-related checks
# Show bpf filesystem mount details
# c0 bpf fs show
(⎈|HomeLab:N/A) root@k8s-ctr:~# c0 bpf fs show
MountID:          1001
ParentID:         982
Mounted State:    true
MountPoint:       /sys/fs/bpf
MountOptions:     rw,relatime
OptionFields:     [master:11]
FilesystemType:   bpf
MountSource:      bpf
SuperOptions:     rw,mode=700

# Inspect the bpf mount directory
# tree /sys/fs/bpf
(⎈|HomeLab:N/A) root@k8s-ctr:~# tree /sys/fs/bpf
/sys/fs/bpf
├── cilium
│   ├── devices
│   │   ├── cilium_host
│   │   │   └── links
│   │   │       ├── cil_from_host
│   │   │       └── cil_to_host
│   │   ├── cilium_net
│   │   │   └── links
│   │   │       └── cil_to_host
│   │   ├── eth0
│   │   │   └── links
│   │   │       ├── cil_from_netdev
│   │   │       └── cil_to_netdev
│   │   └── eth1
│   │       └── links
│   │           ├── cil_from_netdev
│   │           └── cil_to_netdev
│   ├── endpoints
│   │   ├── 1621
│   │   │   └── links
│   │   │       ├── cil_from_container
│   │   │       └── cil_to_container
│   │   └── 400
│   │       └── links
│   │           ├── cil_from_container
│   │           └── cil_to_container
│   └── socketlb
│       └── links
│           └── cgroup
│               ├── cil_sock4_connect
│               ├── cil_sock4_getpeername
│               ├── cil_sock4_post_bind
│               ├── cil_sock4_recvmsg
│               ├── cil_sock4_sendmsg
│               ├── cil_sock6_connect
│               ├── cil_sock6_getpeername
│               ├── cil_sock6_post_bind
│               ├── cil_sock6_recvmsg
│               └── cil_sock6_sendmsg
└── tc
    └── globals
        ├── cilium_auth_map
        ├── cilium_call_policy
        ├── cilium_calls_00400
        ├── cilium_calls_01621
        ├── cilium_calls_hostns_03459
        ├── cilium_calls_netdev_00002
        ├── cilium_calls_netdev_00003
        ├── cilium_calls_netdev_00007
        ├── cilium_ct4_global
        ├── cilium_ct_any4_global
        ├── cilium_egresscall_policy
        ├── cilium_events
        ├── cilium_ipcache
        ├── cilium_ipv4_frag_datagrams
        ├── cilium_l2_responder_v4
        ├── cilium_lb4_affinity
        ├── cilium_lb4_backends_v3
        ├── cilium_lb4_reverse_nat
        ├── cilium_lb4_reverse_sk
        ├── cilium_lb4_services_v2
        ├── cilium_lb4_source_range
        ├── cilium_lb_affinity_match
        ├── cilium_lxc
        ├── cilium_metrics
        ├── cilium_node_map
        ├── cilium_node_map_v2
        ├── cilium_nodeport_neigh4
        ├── cilium_policy_v2_00400
        ├── cilium_policy_v2_01621
        ├── cilium_policy_v2_03459
        ├── cilium_ratelimit
        ├── cilium_ratelimit_metrics
        ├── cilium_runtime_config
        ├── cilium_signals
        ├── cilium_skip_lb4
        └── cilium_snat_v4_external

21 directories, 57 files
(⎈|HomeLab:N/A) root@k8s-ctr:~#

# Get list of loadbalancer services
c0 service list
c1 service list
c2 service list

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1      &amp;lt;none&amp;gt;        443/TCP   5d21h
webpod       ClusterIP   10.96.39.159   &amp;lt;none&amp;gt;        80/TCP    3d16h
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get endpointslices
NAME           ADDRESSTYPE   PORTS   ENDPOINTS                 AGE
kubernetes     IPv4          6443    192.168.10.100            5d21h
webpod-7sbh8   IPv4          80      172.20.1.33,172.20.0.89   3d16h

# Cilium also tracks Service and EndpointSlice information.
(⎈|HomeLab:N/A) root@k8s-ctr:~# c0 service list
ID   Frontend                Service Type   Backend
1    10.96.0.1:443/TCP       ClusterIP      1 =&amp;gt; 192.168.10.100:6443/TCP (active)
2    10.96.39.159:80/TCP     ClusterIP      1 =&amp;gt; 172.20.0.89:80/TCP (active)
                                            2 =&amp;gt; 172.20.1.33:80/TCP (active)
3    10.96.190.173:443/TCP   ClusterIP      1 =&amp;gt; 192.168.10.100:4244/TCP (active)
4    10.96.0.10:53/UDP       ClusterIP      1 =&amp;gt; 172.20.0.243:53/UDP (active)
                                            2 =&amp;gt; 172.20.0.157:53/UDP (active)
5    10.96.0.10:53/TCP       ClusterIP      1 =&amp;gt; 172.20.0.243:53/TCP (active)
                                            2 =&amp;gt; 172.20.0.157:53/TCP (active)
6    10.96.0.10:9153/TCP     ClusterIP      1 =&amp;gt; 172.20.0.243:9153/TCP (active)
                                            2 =&amp;gt; 172.20.0.157:9153/TCP (active)

## Or you can get the loadbalancer information using bpf list
c0 bpf lb list
c1 bpf lb list
c2 bpf lb list

(⎈|HomeLab:N/A) root@k8s-ctr:~# c0 bpf lb list
SERVICE ADDRESS             BACKEND ADDRESS (REVNAT_ID) (SLOT)
10.96.0.10:53/UDP (2)       172.20.0.157:53/UDP (4) (2)
10.96.39.159:80/TCP (2)     172.20.1.33:80/TCP (2) (2)
10.96.0.1:443/TCP (0)       0.0.0.0:0 (1) (0) [ClusterIP, non-routable]
10.96.0.10:53/TCP (1)       172.20.0.243:53/TCP (5) (1)
10.96.0.10:9153/TCP (1)     172.20.0.243:9153/TCP (6) (1)
10.96.0.10:53/UDP (1)       172.20.0.243:53/UDP (4) (1)
10.96.0.10:53/UDP (0)       0.0.0.0:0 (4) (0) [ClusterIP, non-routable]
10.96.190.173:443/TCP (1)   192.168.10.100:4244/TCP (3) (1)
10.96.39.159:80/TCP (0)     0.0.0.0:0 (2) (0) [ClusterIP, non-routable]
10.96.190.173:443/TCP (0)   0.0.0.0:0 (3) (0) [ClusterIP, InternalLocal, non-routable]
10.96.0.10:9153/TCP (2)     172.20.0.157:9153/TCP (6) (2)
10.96.0.1:443/TCP (1)       192.168.10.100:6443/TCP (1) (1)
10.96.39.159:80/TCP (1)     172.20.0.89:80/TCP (2) (1)
10.96.0.10:53/TCP (0)       0.0.0.0:0 (5) (0) [ClusterIP, non-routable]
10.96.0.10:9153/TCP (0)     0.0.0.0:0 (6) (0) [ClusterIP, non-routable]
10.96.0.10:53/TCP (2)       172.20.0.157:53/TCP (5) (2)

## List reverse NAT entries
c1 bpf lb list --revnat
c2 bpf lb list --revnat

(⎈|HomeLab:N/A) root@k8s-ctr:~# c1 bpf lb list --revnat
ID   BACKEND ADDRESS (REVNAT_ID) (SLOT)
5    10.96.0.10:53
1    10.96.0.1:443
6    10.96.0.10:53
2    10.96.39.159:80
4    10.96.0.10:9153
3    10.96.190.173:443

# List connection tracking entries
c0 bpf ct list global
c1 bpf ct list global
c2 bpf ct list global

# Flush connection tracking entries
c0 bpf ct flush
c1 bpf ct flush
c2 bpf ct flush


# List all NAT mapping entries
c0 bpf nat list
c1 bpf nat list
c2 bpf nat list

# Flush all NAT mapping entries
c0 bpf nat flush
c1 bpf nat flush
c2 bpf nat flush

# Manage the IPCache mappings for IP/CIDR &amp;lt;-&amp;gt; Identity
c0 bpf ipcache list

# Display cgroup metadata maintained by Cilium
c0 cgroups list
c1 cgroups list
c2 cgroups list

# Inspect BPF map info and entries
# List all open BPF maps
c0 map list
c1 map list --verbose
c2 map list --verbose

(⎈|HomeLab:N/A) root@k8s-ctr:~# c0 map list
Name                       Num entries   Num errors   Cache enabled
cilium_ratelimit_metrics   0             0            true
cilium_lb4_reverse_nat     6             0            true
cilium_lb_affinity_match   0             0            true
cilium_lb4_source_range    0             0            true
cilium_lxc                 5             0            true
cilium_lb4_reverse_sk      2             0            true
cilium_lb4_services_v2     16            0            true
cilium_policy_v2_00400     3             0            true
cilium_policy_v2_01621     3             0            true
cilium_lb4_backends_v3     10            0            true
cilium_runtime_config      256           0            true
cilium_lb4_affinity        0             0            true
cilium_policy_v2_03459     2             0            true
cilium_ipcache             16            0            true
cilium_ratelimit           0             0            true
cilium_skip_lb4            0             0            false
cilium_l2_responder_v4     0             0            false
cilium_node_map            0             0            false
cilium_node_map_v2         0             0            false
cilium_auth_map            0             0            false
cilium_metrics             0             0            false

c1 map events cilium_lb4_services_v2
c1 map events cilium_lb4_reverse_nat
c1 map events cilium_lxc
c1 map events cilium_ipcache


# List all metrics
c1 metrics list


# List contents of a policy BPF map : Dump all policy maps
c0 bpf policy get --all
c1 bpf policy get --all -n
c2 bpf policy get --all -n


# Dump StateDB contents as JSON
c0 statedb dump


# Inspect the agent's internal StateDB tables (e.g. network devices)
c0 shell -- db/show devices
c1 shell -- db/show devices
c2 shell -- db/show devices
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For details on each command, see the Cilium cheat sheet below.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;참고: &lt;a href=&quot;https://file.notion.so/f/f/a6af158e-5b0f-4e31-9d12-0d0b2805956a/5f872eba-dc1e-43e3-b8d9-5f7078132612/Isovalent_-_Cilium_Cheat_Sheet.pdf?table=block&amp;amp;id=22f50aec-5edf-80bb-b216-d7be0c3bb7a8&amp;amp;spaceId=a6af158e-5b0f-4e31-9d12-0d0b2805956a&amp;amp;expirationTimestamp=1752969600000&amp;amp;signature=DK1G0bT1YndvKzJVpHFb614i4E3R3fnyg2MIu1HKhE8&amp;amp;downloadName=Isovalent+-+Cilium+Cheat+Sheet.pdf&quot;&gt;https://file.notion.so/f/f/a6af158e-5b0f-4e31-9d12-0d0b2805956a/5f872eba-dc1e-43e3-b8d9-5f7078132612/Isovalent_-_Cilium_Cheat_Sheet.pdf?table=block&amp;amp;id=22f50aec-5edf-80bb-b216-d7be0c3bb7a8&amp;amp;spaceId=a6af158e-5b0f-4e31-9d12-0d0b2805956a&amp;amp;expirationTimestamp=1752969600000&amp;amp;signature=DK1G0bT1YndvKzJVpHFb614i4E3R3fnyg2MIu1HKhE8&amp;amp;downloadName=Isovalent+-+Cilium+Cheat+Sheet.pdf&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;6. Verifying Cilium Communication&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now let's examine how Cilium handles traffic and see how it differs from before.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;First, let's check communication between pods deployed on different nodes.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The flow from a pod to a pod on another node (egress from endpoint) is shown in the diagram below.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1386&quot; data-origin-height=&quot;758&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/tplsB/btsPrc3RQb0/jNRWo51lZ46KJb2CZZfAd0/tfile.svg&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/tplsB/btsPrc3RQb0/jNRWo51lZ46KJb2CZZfAd0/tfile.svg&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/tplsB/btsPrc3RQb0/jNRWo51lZ46KJb2CZZfAd0/tfile.svg&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FtplsB%2FbtsPrc3RQb0%2FjNRWo51lZ46KJb2CZZfAd0%2Ftfile.svg&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1386&quot; height=&quot;758&quot; data-origin-width=&quot;1386&quot; data-origin-height=&quot;758&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;span style=&quot;color: #333333; text-align: start;&quot;&gt;Source:&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;/span&gt; &lt;a href=&quot;https://docs.cilium.io/en/stable/network/ebpf/lifeofapacket/#egress-from-endpoint&quot;&gt;https://docs.cilium.io/en/stable/network/ebpf/lifeofapacket/#egress-from-endpoint&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the legend at the top left, &lt;code&gt;bpf_lxc&lt;/code&gt; is the Cilium component attached at the Pod's interface, and the red boxes such as &lt;code&gt;TC@Endpoint&lt;/code&gt; are kernel hook points. BPF programs are attached at these hook points to control the flow of traffic.&lt;/p&gt;
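&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As a rough check of these hook points, bpftool inside the agent container can list the BPF programs attached to network devices. This is a sketch using the &lt;code&gt;c0bpf&lt;/code&gt; alias defined earlier; program names and output vary by environment and Cilium version.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# TC/XDP programs attached to the node's network devices
c0bpf net show

# Loaded BPF programs; names such as cil_from_container correspond to the hook points above
c0bpf prog show | grep cil_
&lt;/code&gt;&lt;/pre&gt;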
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Below is the flow into a pod from the ingress perspective (Ingress to Endpoint).&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1626&quot; data-origin-height=&quot;783&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cMVaRP/btsPqJunV4F/aurSXC9N8I4kJh0w5OSNkk/tfile.svg&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cMVaRP/btsPqJunV4F/aurSXC9N8I4kJh0w5OSNkk/tfile.svg&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cMVaRP/btsPqJunV4F/aurSXC9N8I4kJh0w5OSNkk/tfile.svg&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcMVaRP%2FbtsPqJunV4F%2FaurSXC9N8I4kJh0w5OSNkk%2Ftfile.svg&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1626&quot; height=&quot;783&quot; data-origin-width=&quot;1626&quot; data-origin-height=&quot;783&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://docs.cilium.io/en/stable/network/ebpf/lifeofapacket/#egress-from-endpoint&quot;&gt;https://docs.cilium.io/en/stable/network/ebpf/lifeofapacket/#egress-from-endpoint&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's verify this in the lab environment as follows.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# Check endpoint information
kubectl get pod -owide
kubectl get svc,ep webpod
WEBPOD1IP=172.20.0.89

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -owide
NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE      NOMINATED NODE   READINESS GATES
curl-pod                  1/1     Running   0          16h   172.20.2.94   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
webpod-6c6d676d8c-m6x7k   1/1     Running   0          16h   172.20.0.89   k8s-w1    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
webpod-6c6d676d8c-mjf7m   1/1     Running   0          16h   172.20.1.33   k8s-w2    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc,ep webpod
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/webpod   ClusterIP   10.96.39.159   &amp;lt;none&amp;gt;        80/TCP    3d16h

NAME               ENDPOINTS                       AGE
endpoints/webpod   172.20.0.89:80,172.20.1.33:80   3d16h
(⎈|HomeLab:N/A) root@k8s-ctr:~# WEBPOD1IP=172.20.0.89

# BPF maps: shows where traffic destined for a target pod should be sent
c0 map get cilium_ipcache
c0 map get cilium_ipcache | grep $WEBPOD1IP

(⎈|HomeLab:N/A) root@k8s-ctr:~# c0 map get cilium_ipcache
Key                 Value                                                                    State   Error
192.168.10.102/32   identity=6 encryptkey=0 tunnelendpoint=0.0.0.0 flags=&amp;lt;none&amp;gt;              sync
172.20.2.243/32     identity=1 encryptkey=0 tunnelendpoint=0.0.0.0 flags=&amp;lt;none&amp;gt;              sync
172.20.1.33/32      identity=55697 encryptkey=0 tunnelendpoint=192.168.10.102 flags=&amp;lt;none&amp;gt;   sync
172.20.2.94/32      identity=63464 encryptkey=0 tunnelendpoint=0.0.0.0 flags=&amp;lt;none&amp;gt;          sync
172.20.1.17/32      identity=6 encryptkey=0 tunnelendpoint=192.168.10.102 flags=&amp;lt;none&amp;gt;       sync
172.20.0.95/32      identity=6 encryptkey=0 tunnelendpoint=192.168.10.101 flags=&amp;lt;none&amp;gt;       sync
172.20.0.178/32     identity=4 encryptkey=0 tunnelendpoint=192.168.10.101 flags=&amp;lt;none&amp;gt;       sync
172.20.0.89/32      identity=55697 encryptkey=0 tunnelendpoint=192.168.10.101 flags=&amp;lt;none&amp;gt;   sync
172.20.0.157/32     identity=53192 encryptkey=0 tunnelendpoint=192.168.10.101 flags=&amp;lt;none&amp;gt;   sync
172.20.2.230/32     identity=4 encryptkey=0 tunnelendpoint=0.0.0.0 flags=&amp;lt;none&amp;gt;              sync
10.0.2.15/32        identity=1 encryptkey=0 tunnelendpoint=0.0.0.0 flags=&amp;lt;none&amp;gt;              sync
192.168.10.100/32   identity=1 encryptkey=0 tunnelendpoint=0.0.0.0 flags=&amp;lt;none&amp;gt;              sync
172.20.0.243/32     identity=53192 encryptkey=0 tunnelendpoint=192.168.10.101 flags=&amp;lt;none&amp;gt;   sync
172.20.1.164/32     identity=4 encryptkey=0 tunnelendpoint=192.168.10.102 flags=&amp;lt;none&amp;gt;       sync
0.0.0.0/0           identity=2 encryptkey=0 tunnelendpoint=0.0.0.0 flags=&amp;lt;none&amp;gt;              sync
192.168.10.101/32   identity=6 encryptkey=0 tunnelendpoint=0.0.0.0 flags=&amp;lt;none&amp;gt;              sync

# To reach webpod, the tunnelendpoint resolves to worker node 1
(⎈|HomeLab:N/A) root@k8s-ctr:~# c0 map get cilium_ipcache | grep $WEBPOD1IP
172.20.0.89/32      identity=55697 encryptkey=0 tunnelendpoint=192.168.10.101 flags=&amp;lt;none&amp;gt;   sync


# Set the LXC variable to curl-pod's veth interface on the host
LXC=&amp;lt;most recently created lxc interface name on k8s-ctr&amp;gt;
LXC=lxc016a621522d5

(⎈|HomeLab:N/A) root@k8s-ctr:~# ip -br -c a
lo               UNKNOWN        127.0.0.1/8 ::1/128
eth0             UP             10.0.2.15/24 metric 100 fd17:625c:f037:2:a00:27ff:fe6b:69c9/64 fe80::a00:27ff:fe6b:69c9/64
eth1             UP             192.168.10.100/24 fe80::a00:27ff:fe8f:411f/64
cilium_net@cilium_host UP             fe80::a817:85ff:fe8a:b837/64
cilium_host@cilium_net UP             172.20.2.243/32 fe80::dc9b:6cff:fe2e:3730/64
lxc_health@if9   UP             fe80::10c1:f0ff:fe5f:26f3/64
lxc016a621522d5@if11 UP             fe80::f4f1:82ff:fec0:5f34/64

# Node&amp;rsquo;s eBPF programs
## list of eBPF programs
c0bpf net show
c0bpf net show | grep $LXC 

## Use bpftool prog show id to view additional information about a program, including a list of attached eBPF maps:
c0bpf prog show id &amp;lt;출력된 prog id 입력&amp;gt;
c0bpf prog show id 1584
c0bpf map list

# The tc section lists the attached BPF programs
(⎈|HomeLab:N/A) root@k8s-ctr:~# c0bpf net show
xdp:

tc:
eth0(2) tcx/ingress cil_from_netdev prog_id 1393 link_id 18
eth0(2) tcx/egress cil_to_netdev prog_id 1395 link_id 19
eth1(3) tcx/ingress cil_from_netdev prog_id 1401 link_id 20
eth1(3) tcx/egress cil_to_netdev prog_id 1404 link_id 21
cilium_net(7) tcx/ingress cil_to_host prog_id 1379 link_id 17
cilium_host(8) tcx/ingress cil_to_host prog_id 1365 link_id 15
cilium_host(8) tcx/egress cil_from_host prog_id 1370 link_id 16
lxc_health(10) tcx/ingress cil_from_container prog_id 1417 link_id 13
lxc_health(10) tcx/egress cil_to_container prog_id 1419 link_id 14
lxc016a621522d5(12) tcx/ingress cil_from_container prog_id 1423 link_id 22
lxc016a621522d5(12) tcx/egress cil_to_container prog_id 1426 link_id 23

flow_dissector:

netfilter:

# The BPF program IDs attached to the LXC interface are shown (1423, 1426)
(⎈|HomeLab:N/A) root@k8s-ctr:~# c0bpf net show | grep $LXC
lxc016a621522d5(12) tcx/ingress cil_from_container prog_id 1423 link_id 22
lxc016a621522d5(12) tcx/egress cil_to_container prog_id 1426 link_id 23

# Check the maps associated with the BPF program
(⎈|HomeLab:N/A) root@k8s-ctr:~# c0bpf prog show id 1426
1426: sched_cls  name cil_to_container  tag 0b3125767ba1861c  gpl
        loaded_at 2025-07-18T14:59:09+0000  uid 0
        xlated 1448B  jited 928B  memlock 4096B  map_ids 209,104,208
        btf_id 362

# Inspect the maps
(⎈|HomeLab:N/A) root@k8s-ctr:~# c0bpf map list
..
104: percpu_hash  name cilium_metrics  flags 0x1
        key 8B  value 16B  max_entries 1024  memlock 19024B
..
208: prog_array  name cilium_calls_01  flags 0x0
        key 4B  value 4B  max_entries 50  memlock 720B
        owner_prog_type sched_cls  owner jited
209: array  name .rodata.config  flags 0x480
        key 4B  value 64B  max_entries 1  memlock 8192B
        btf_id 354  frozen
..&lt;/code&gt;&lt;/pre&gt;
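&lt;p data-ke-size=&quot;size16&quot;&gt;The ipcache lookup above can also be scripted. Below is a minimal sketch that extracts the tunnel endpoint for a pod IP from &lt;code&gt;cilium_ipcache&lt;/code&gt;-style output; the two sample lines are hardcoded from the lab output above, and on a live node you would pipe in the real &lt;code&gt;c0 map get cilium_ipcache&lt;/code&gt; output instead.&lt;/p&gt;

```shell
# Sketch: find which node a pod IP is reached through, from cilium_ipcache output.
# The two sample lines are hardcoded from the lab output above; on a live node,
# replace the printf with the real map dump (c0 map get cilium_ipcache).
POD_IP="172.20.0.89"
printf '%s\n' \
  '172.20.1.33/32 identity=55697 encryptkey=0 tunnelendpoint=192.168.10.102' \
  '172.20.0.89/32 identity=55697 encryptkey=0 tunnelendpoint=192.168.10.101' |
grep "^$POD_IP/32" |
grep -o 'tunnelendpoint=[0-9.]*' |
cut -d= -f2
# prints 192.168.10.101 (k8s-w1), matching the grep output above
```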
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's inspect the pod traffic with &lt;code&gt;ngrep&lt;/code&gt;. &lt;code&gt;ngrep&lt;/code&gt; is a packet-capture tool similar to &lt;code&gt;tcpdump&lt;/code&gt;, with grep-style pattern matching on packet payloads.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Check endpoint information
kubectl get pod -owide
kubectl get svc,ep webpod
WEBPOD1IP=172.20.0.89

# Currently deployed resources
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get po -owide
NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE      NOMINATED NODE   READINESS GATES
curl-pod                  1/1     Running   0          16h   172.20.2.94   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
webpod-6c6d676d8c-m6x7k   1/1     Running   0          16h   172.20.0.89   k8s-w1    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
webpod-6c6d676d8c-mjf7m   1/1     Running   0          16h   172.20.1.33   k8s-w2    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;

# Open a terminal on each worker (vagrant ssh k8s-w1, vagrant ssh k8s-w2) and run:
ngrep -tW byline -d eth1 '' 'tcp port 80'

# [k8s-ctr] Send a curl request from curl-pod
kubectl exec -it curl-pod -- curl $WEBPOD1IP

# Check the output in each terminal:
# the pod source and destination IPs appear on the other node's server NIC
# 172.20.2.94:55554 -&amp;gt; 172.20.0.89:80 is observed -&amp;gt; Native Routing
root@k8s-w1:~# ngrep -tW byline -d eth1 '' 'tcp port 80'
interface: eth1 (192.168.10.0/255.255.255.0)
filter: ( tcp port 80 ) and ((ip || ip6) || (vlan &amp;amp;&amp;amp; (ip || ip6)))
####
T 2025/07/19 17:03:54.152732 172.20.2.94:55554 -&amp;gt; 172.20.0.89:80 [AP] #4
GET / HTTP/1.1.
Host: 172.20.0.89.
User-Agent: curl/8.14.1.
Accept: */*.
.

##
T 2025/07/19 17:03:54.189768 172.20.0.89:80 -&amp;gt; 172.20.2.94:55554 [AP] #6
HTTP/1.1 200 OK.
Date: Sat, 19 Jul 2025 08:03:54 GMT.
Content-Length: 207.
Content-Type: text/plain; charset=utf-8.
.
Hostname: webpod-6c6d676d8c-m6x7k
IP: 127.0.0.1
IP: ::1
IP: 172.20.0.89
IP: fe80::4c15:82ff:fe93:b97e
RemoteAddr: 172.20.2.94:55554
GET / HTTP/1.1.
Host: 172.20.0.89.
User-Agent: curl/8.14.1.
Accept: */*.
.

####
...&lt;/code&gt;&lt;/pre&gt;
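&lt;p data-ke-size=&quot;size16&quot;&gt;Since &lt;code&gt;c0 status&lt;/code&gt; reports Direct Routing in this lab, each remote PodCIDR is an ordinary kernel route via the owning node's eth1 IP rather than a tunnel device. Below is a small sketch of reading those routes; the two route lines are hypothetical samples shaped like this lab's &lt;code&gt;ip route&lt;/code&gt; output on k8s-ctr.&lt;/p&gt;

```shell
# Sketch: in native (direct) routing mode, remote PodCIDRs appear as plain
# kernel routes via the owning node's eth1 IP. The sample lines below are
# hypothetical; on a live node run: ip route | grep 172.20
printf '%s\n' \
  '172.20.0.0/24 via 192.168.10.101 dev eth1' \
  '172.20.1.0/24 via 192.168.10.102 dev eth1' |
awk '{ print "PodCIDR " $1 " is reached via node " $3 }'
# prints:
# PodCIDR 172.20.0.0/24 is reached via node 192.168.10.101
# PodCIDR 172.20.1.0/24 is reached via node 192.168.10.102
```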
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now let's look at pod -&amp;gt; service communication.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Service traffic in Cilium is best understood as socket-based load balancing.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;897&quot; data-origin-height=&quot;378&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bRqz0h/btsProbVU1v/wqPrFoVHZI71XtEKU4BB1k/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bRqz0h/btsProbVU1v/wqPrFoVHZI71XtEKU4BB1k/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bRqz0h/btsProbVU1v/wqPrFoVHZI71XtEKU4BB1k/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbRqz0h%2FbtsProbVU1v%2FwqPrFoVHZI71XtEKU4BB1k%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;897&quot; height=&quot;378&quot; data-origin-width=&quot;897&quot; data-origin-height=&quot;378&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://cilium.io/blog/2019/08/20/cilium-16/&quot;&gt;https://cilium.io/blog/2019/08/20/cilium-16/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;With network-based load balancing (left), iptables sits in the path to the backend and rewrites the destination via DNAT: the client sends 10.0.0.1-&amp;gt;192.168.0.1, and DNAT translates it to 10.0.0.1-&amp;gt;10.0.0.2.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Socket-based load balancing, in contrast, performs client-side load balancing: the destination endpoint is resolved up front, so the client communicates directly as 10.0.0.1-&amp;gt;10.0.0.2.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As the reference blog below illustrates, the BPF program (bpf_sock) already knows the real destination, so the intermediate translation step is skipped and the connection goes straight to the backend. Because this happens before the pod's eth0 interface, running &lt;code&gt;tcpdump&lt;/code&gt; on eth0 never shows the ClusterIP.&lt;/p&gt;
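&lt;p data-ke-size=&quot;size16&quot;&gt;The idea can be sketched as a toy script: the hook replaces the ClusterIP with a backend address before any packet exists, which is why captures inside the pod only ever show the backend. The addresses below are this lab's webpod service, and picking the first backend merely stands in for Cilium's real per-connection selection logic.&lt;/p&gt;

```shell
# Toy sketch of socket-level LB: the connect() destination is rewritten to a
# backend before any packet is created. Addresses are this lab's webpod
# service; taking the first backend stands in for Cilium's real selection.
CLUSTER_IP="10.96.39.159:80"
BACKENDS="172.20.0.89:80 172.20.1.33:80"
set -- $BACKENDS            # word-split the backend list into $1, $2, ...
echo "app called connect($CLUSTER_IP)"
echo "socket actually connects to $1"
# prints:
# app called connect(10.96.39.159:80)
# socket actually connects to 172.20.0.89:80
```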
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;937&quot; data-origin-height=&quot;477&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/edsGUI/btsPpqpn9CI/uqUZobs4wTShHzo9twZRfk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/edsGUI/btsPpqpn9CI/uqUZobs4wTShHzo9twZRfk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/edsGUI/btsPpqpn9CI/uqUZobs4wTShHzo9twZRfk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FedsGUI%2FbtsPpqpn9CI%2FuqUZobs4wTShHzo9twZRfk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;937&quot; height=&quot;477&quot; data-origin-width=&quot;937&quot; data-origin-height=&quot;477&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;출처: &lt;a href=&quot;https://velog.io/@haruband/K8SCilium-Socket-Based-LoadBalancing-%EA%B8%B0%EB%B2%95&quot;&gt;https://velog.io/@haruband/K8SCilium-Socket-Based-LoadBalancing-%EA%B8%B0%EB%B2%95&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;With kube-proxy replacement enabled, socket LB is turned on by default.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;(⎈|HomeLab:N/A) root@k8s-ctr:~# c0 status --verbose
KVStore:                Disabled
...
KubeProxyReplacement Details:
  Status:                 True
  Socket LB:              Enabled
  Socket LB Tracing:      Enabled
  Socket LB Coverage:     Full
  Devices:                eth0    10.0.2.15 fd17:625c:f037:2:a00:27ff:fe6b:69c9 fe80::a00:27ff:fe6b:69c9, eth1   192.168.10.100 fe80::a00:27ff:fe8f:411f (Direct Routing)
  Mode:                   SNAT
...&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's confirm this directly with &lt;code&gt;tcpdump&lt;/code&gt; in the curl pod.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Current state
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc -A
NAMESPACE     NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes     ClusterIP   10.96.0.1       &amp;lt;none&amp;gt;        443/TCP                  5d22h
default       webpod         ClusterIP   10.96.39.159    &amp;lt;none&amp;gt;        80/TCP                   3d16h
kube-system   cilium-envoy   ClusterIP   None            &amp;lt;none&amp;gt;        9964/TCP                 17h
kube-system   hubble-peer    ClusterIP   10.96.190.173   &amp;lt;none&amp;gt;        443/TCP                  17h
kube-system   kube-dns       ClusterIP   10.96.0.10      &amp;lt;none&amp;gt;        53/UDP,53/TCP,9153/TCP   5d22h

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get po -A -owide
NAMESPACE     NAME                               READY   STATUS    RESTARTS       AGE     IP               NODE      NOMINATED NODE   READINESS GATES
default       curl-pod                           1/1     Running   0              17h     172.20.2.94      k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
default       webpod-6c6d676d8c-m6x7k            1/1     Running   0              17h     172.20.0.89      k8s-w1    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
default       webpod-6c6d676d8c-mjf7m            1/1     Running   0              17h     172.20.1.33      k8s-w2    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   cilium-envoy-c2j8j                 1/1     Running   0              17h     192.168.10.102   k8s-w2    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   cilium-envoy-hfdjd                 1/1     Running   0              17h     192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   cilium-envoy-s5cfw                 1/1     Running   0              17h     192.168.10.101   k8s-w1    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   cilium-fbqsn                       1/1     Running   0              17h     192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   cilium-k6r7m                       1/1     Running   0              17h     192.168.10.102   k8s-w2    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   cilium-operator-865bc7f457-qgk29   1/1     Running   2 (92m ago)    17h     192.168.10.102   k8s-w2    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   cilium-operator-865bc7f457-v5qt2   1/1     Running   2 (3h6m ago)   17h     192.168.10.101   k8s-w1    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   cilium-t225k                       1/1     Running   0              17h     192.168.10.101   k8s-w1    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   coredns-674b8bbfcf-2xwg2           1/1     Running   0              17h     172.20.0.243     k8s-w1    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   coredns-674b8bbfcf-kw84j           1/1     Running   0              17h     172.20.0.157     k8s-w1    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;

# Run curl
kubectl exec -it curl-pod -- curl webpod

# New terminal: capture with tcpdump while the pod connects to the SVC (ClusterIP)
# 172.20.2.94.41136 &amp;gt; 172.20.0.89.80 directly: the ClusterIP is swapped for the endpoint at the socket level
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec curl-pod -- tcpdump -enni any -q
tcpdump: WARNING: any: That device doesn't support promiscuous mode
(Promiscuous mode not supported on the &quot;any&quot; device)
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
# CoreDNS query to resolve webpod (the DNS ClusterIP has likewise been rewritten to the CoreDNS pod IP 172.20.0.157)
08:22:39.344439 eth0  Out ifindex 11 82:36:23:1f:43:be 172.20.2.94.38129 &amp;gt; 
172.20.0.157.53: UDP, length 73
08:22:39.345719 eth0  Out ifindex 11 82:36:23:1f:43:be 172.20.2.94.38129 &amp;gt; 172.20.0.157.53: UDP, length 73
08:22:39.356508 eth0  In  ifindex 11 f6:f1:82:c0:5f:34 172.20.0.157.53 &amp;gt; 172.20.2.94.38129: UDP, length 121
08:22:39.359864 eth0  In  ifindex 11 f6:f1:82:c0:5f:34 172.20.0.157.53 &amp;gt; 172.20.2.94.38129: UDP, length 166
# Traffic to webpod1 (no ClusterIP appears in this exchange; it is direct)
08:22:39.366063 eth0  Out ifindex 11 82:36:23:1f:43:be 172.20.2.94.41136 &amp;gt; 172.20.0.89.80: tcp 0
08:22:39.371877 eth0  In  ifindex 11 f6:f1:82:c0:5f:34 172.20.0.89.80 &amp;gt; 172.20.2.94.41136: tcp 0
08:22:39.372905 eth0  Out ifindex 11 82:36:23:1f:43:be 172.20.2.94.41136 &amp;gt; 172.20.0.89.80: tcp 0
08:22:39.372990 eth0  Out ifindex 11 82:36:23:1f:43:be 172.20.2.94.41136 &amp;gt; 172.20.0.89.80: tcp 70
08:22:39.380045 eth0  In  ifindex 11 f6:f1:82:c0:5f:34 172.20.0.89.80 &amp;gt; 172.20.2.94.41136: tcp 0
08:22:39.408389 eth0  In  ifindex 11 f6:f1:82:c0:5f:34 172.20.0.89.80 &amp;gt; 172.20.2.94.41136: tcp 320
08:22:39.408948 eth0  Out ifindex 11 82:36:23:1f:43:be 172.20.2.94.41136 &amp;gt; 172.20.0.89.80: tcp 0
08:22:39.412495 eth0
...&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The tcpdump taken in the client pod shows requests going straight to the pod IP, with no ClusterIP involved.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;That concludes the hands-on portion; clean up the lab environment with the command below.&lt;/p&gt;
&lt;pre class=&quot;1c&quot;&gt;&lt;code&gt;vagrant destroy -f &amp;amp;&amp;amp; rm -rf .vagrant&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Closing&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This post summarizes what I learned while participating in the &lt;code&gt;Cilium study&lt;/code&gt; run by CloudNet, based on the guide they provided.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the next post we will look at the observability features Cilium provides.&lt;/p&gt;</description>
      <category>Cilium</category>
      <category>cilium</category>
      <category>kubernetes</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/49</guid>
      <comments>https://a-person.tistory.com/49#entry49comment</comments>
      <pubDate>Sat, 19 Jul 2025 18:44:05 +0900</pubDate>
    </item>
    <item>
      <title>[1-1] Cilium Overview and Installation</title>
      <link>https://a-person.tistory.com/48</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;In this post we take a look at the Cilium CNI plugin.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;We first set up a lab environment and install Flannel to see how the networking Kubernetes provides is implemented. Then, after some background on Cilium, we install it and examine through hands-on exercises how the environment changes.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This post summarizes what I learned while participating in the &lt;code&gt;Cilium study&lt;/code&gt; run by CloudNet, based on the guide they provided.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Table of Contents&lt;/h2&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;Lab environment setup&lt;/li&gt;
&lt;li&gt;Installing Flannel&lt;/li&gt;
&lt;li&gt;Introducing Cilium&lt;/li&gt;
&lt;li&gt;Installing Cilium&lt;/li&gt;
&lt;li&gt;Exploring the Cilium environment&lt;/li&gt;
&lt;li&gt;Verifying Cilium communication&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;1. Lab Environment Setup&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This lab was run on Windows 11 with VirtualBox and Vagrant installed.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;VirtualBox: &lt;a href=&quot;https://www.virtualbox.org/wiki/Downloads&quot;&gt;https://www.virtualbox.org/wiki/Downloads&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Vagrant: &lt;a href=&quot;https://developer.hashicorp.com/vagrant/install#windows&quot;&gt;https://developer.hashicorp.com/vagrant/install#windows&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Next, download the &lt;code&gt;Vagrantfile&lt;/code&gt; needed for the lab and bring the environment up with &lt;code&gt;vagrant up&lt;/code&gt;. The &lt;code&gt;Vagrantfile&lt;/code&gt; is the specification of the Vagrant environment to run.&lt;/p&gt;
&lt;pre class=&quot;awk&quot;&gt;&lt;code&gt;mkdir cilium-lab &amp;amp;&amp;amp; cd cilium-lab

curl -O https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/cilium-study/1w/Vagrantfile

vagrant up&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Once the command completes and the environment is fully provisioned, the three VMs should be in the running state as shown below.&lt;/p&gt;
&lt;pre class=&quot;applescript&quot;&gt;&lt;code&gt;PS C:\Users\chuir\projects\cilium-lab\w1&amp;gt; vagrant status
Current machine states:

k8s-ctr                   running (virtualbox)
k8s-w1                    running (virtualbox)
k8s-w2                    running (virtualbox)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;br /&gt;Looking at the &lt;code&gt;Vagrantfile&lt;/code&gt;, it creates each VM and runs a per-node shell script; the Kubernetes components are installed while those scripts run.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Afterwards you can confirm that a control plane and two worker nodes have been created.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get no -owide
NAME      STATUS     ROLES           AGE    VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
k8s-ctr   NotReady   control-plane   2d4h   v1.33.2   192.168.10.100   &amp;lt;none&amp;gt;        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27
k8s-w1    NotReady   &amp;lt;none&amp;gt;          2d4h   v1.33.2   10.0.2.15        &amp;lt;none&amp;gt;        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27
k8s-w2    NotReady   &amp;lt;none&amp;gt;          2d3h   v1.33.2   10.0.2.15        &amp;lt;none&amp;gt;        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get po -A
NAMESPACE     NAME                              READY   STATUS    RESTARTS      AGE
kube-system   coredns-674b8bbfcf-bqxft          0/1     Pending   0             2d4h
kube-system   coredns-674b8bbfcf-dnghp          0/1     Pending   0             2d4h
kube-system   etcd-k8s-ctr                      1/1     Running   0             2d4h
kube-system   kube-apiserver-k8s-ctr            1/1     Running   0             2d4h
kube-system   kube-controller-manager-k8s-ctr   1/1     Running   8 (38h ago)   2d4h
kube-system   kube-proxy-6kmfl                  1/1     Running   0             2d4h
kube-system   kube-proxy-drbwf                  1/1     Running   0             2d4h
kube-system   kube-proxy-kqpf7                  1/1     Running   0             2d4h
kube-system   kube-scheduler-k8s-ctr            1/1     Running   8 (38h ago)   2d4h&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Each Vagrant-created VM has &lt;code&gt;eth0&lt;/code&gt; for NAT when external traffic is needed, and &lt;code&gt;eth1&lt;/code&gt; on the 192.168.10.0/24 network attached to the host PC's gateway interface. Note, however, that although the worker nodes have completed kubeadm join, their node IP shows eth0's address (10.0.2.15).&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Because every node reports the same IP, the cluster cannot form correctly while this address is used. To fix it, set the kubelet --node-ip argument on each node and restart kubelet.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For reference, a Vagrant-created VM can be accessed with &lt;code&gt;vagrant ssh &amp;lt;node-name&amp;gt;&lt;/code&gt; from the directory containing the Vagrantfile.&lt;/p&gt;
&lt;pre class=&quot;crystal&quot;&gt;&lt;code&gt;# Change the INTERNAL-IP
NODEIP=$(ip -4 addr show eth1 | grep -oP '(?&amp;lt;=inet\s)\d+(\.\d+){3}')
sed -i &quot;s/^\(KUBELET_KUBEADM_ARGS=\&quot;\)/\1--node-ip=${NODEIP} /&quot; /var/lib/kubelet/kubeadm-flags.env
systemctl daemon-reexec &amp;amp;&amp;amp; systemctl restart kubelet&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Then confirm that the node IPs have changed.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get no -owide
NAME      STATUS     ROLES           AGE    VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
k8s-ctr   NotReady   control-plane   2d5h   v1.33.2   192.168.10.100   &amp;lt;none&amp;gt;        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27
k8s-w1    NotReady   &amp;lt;none&amp;gt;          2d5h   v1.33.2   192.168.10.101   &amp;lt;none&amp;gt;        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27
k8s-w2    NotReady   &amp;lt;none&amp;gt;          2d5h   v1.33.2   192.168.10.102   &amp;lt;none&amp;gt;        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;2. Installing Flannel&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Because no CNI plugin is installed yet, the nodes' &lt;code&gt;STATUS&lt;/code&gt; is &lt;code&gt;NotReady&lt;/code&gt; and the coredns pods remain &lt;code&gt;Pending&lt;/code&gt;.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl cluster-info dump | grep -m 2 -E &quot;cluster-cidr|service-cluster-ip-range&quot;
                            &quot;--service-cluster-ip-range=10.96.0.0/16&quot;,
                            &quot;--cluster-cidr=10.244.0.0/16&quot;,
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -n kube-system -l k8s-app=kube-dns -owide
NAME                       READY   STATUS    RESTARTS   AGE    IP       NODE     NOMINATED NODE   READINESS GATES
coredns-674b8bbfcf-bqxft   0/1     Pending   0          2d5h   &amp;lt;none&amp;gt;   &amp;lt;none&amp;gt;   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
coredns-674b8bbfcf-dnghp   0/1     Pending   0          2d5h   &amp;lt;none&amp;gt;   &amp;lt;none&amp;gt;   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Since the nodes are &lt;code&gt;NotReady&lt;/code&gt;, they carry a taint, and describing the coredns pods shows they cannot be scheduled.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  25h (x30 over 38h)  default-scheduler  0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
  Warning  FailedScheduling  17m (x6 over 17h)   default-scheduler  0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.&lt;/code&gt;&lt;/pre&gt;
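&lt;p data-ke-size=&quot;size16&quot;&gt;The taint named in those events can be listed per node. Below is a hedged sketch: the kubectl one-liner in the comment is one way to query it, and the printf output is a hypothetical sample mimicking this cluster before a CNI is installed.&lt;/p&gt;

```shell
# On the cluster you could list each node's taints with:
#   kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints[*].key
# Sample output mimicking this lab before a CNI plugin is installed:
printf '%s\n' \
  'k8s-ctr   node.kubernetes.io/not-ready' \
  'k8s-w1    node.kubernetes.io/not-ready' \
  'k8s-w2    node.kubernetes.io/not-ready'
```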
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To see the basic communication path and have a point of comparison with Cilium, let's install the Flannel CNI plugin.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;First, record the baseline configuration so we can compare it after installing Flannel.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# Only lo, eth0 and eth1 exist
(⎈|HomeLab:N/A) root@k8s-ctr:~# ip -c link
1: lo: &amp;lt;LOOPBACK,UP,LOWER_UP&amp;gt; mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
    altname enp0s3
3: eth1: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:8f:41:1f brd ff:ff:ff:ff:ff:ff
    altname enp0s8
# No additional routes
(⎈|HomeLab:N/A) root@k8s-ctr:~# ip -c route
default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
10.0.2.3 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.100
# No bridges
(⎈|HomeLab:N/A) root@k8s-ctr:~# brctl show
(⎈|HomeLab:N/A) root@k8s-ctr:~# iptables-save
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 00:11:35 2025
*mangle
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:KUBE-IPTABLES-HINT - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
COMMIT
# Completed on Wed Jul 16 00:11:35 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 00:11:35 2025
*filter
:INPUT ACCEPT [5106429:946518575]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [5096701:937261419]
:KUBE-EXTERNAL-SERVICES - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-FORWARD - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-PROXY-FIREWALL - [0:0]
:KUBE-SERVICES - [0:0]
-A INPUT -m conntrack --ctstate NEW -m comment --comment &quot;kubernetes load balancer firewall&quot; -j KUBE-PROXY-FIREWALL
-A INPUT -m comment --comment &quot;kubernetes health check service ports&quot; -j KUBE-NODEPORTS
-A INPUT -m conntrack --ctstate NEW -m comment --comment &quot;kubernetes externally-visible service portals&quot; -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A FORWARD -m conntrack --ctstate NEW -m comment --comment &quot;kubernetes load balancer firewall&quot; -j KUBE-PROXY-FIREWALL
-A FORWARD -m comment --comment &quot;kubernetes forwarding rules&quot; -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment &quot;kubernetes service portals&quot; -j KUBE-SERVICES
-A FORWARD -m conntrack --ctstate NEW -m comment --comment &quot;kubernetes externally-visible service portals&quot; -j KUBE-EXTERNAL-SERVICES
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment &quot;kubernetes load balancer firewall&quot; -j KUBE-PROXY-FIREWALL
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment &quot;kubernetes service portals&quot; -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment &quot;block incoming localnet connections&quot; -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
-A KUBE-FORWARD -m conntrack --ctstate INVALID -m nfacct --nfacct-name  ct_state_invalid_dropped_pkts -j DROP
-A KUBE-FORWARD -m comment --comment &quot;kubernetes forwarding rules&quot; -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -m comment --comment &quot;kubernetes forwarding conntrack rule&quot; -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment &quot;kube-system/kube-dns:metrics has no endpoints&quot; -m tcp --dport 9153 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment &quot;kube-system/kube-dns:dns has no endpoints&quot; -m udp --dport 53 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment &quot;kube-system/kube-dns:dns-tcp has no endpoints&quot; -m tcp --dport 53 -j REJECT --reject-with icmp-port-unreachable
COMMIT
# Completed on Wed Jul 16 00:11:35 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 00:11:35 2025
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-SEP-ETI7FUQQE3BS2IXE - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
-A PREROUTING -m comment --comment &quot;kubernetes service portals&quot; -j KUBE-SERVICES
-A OUTPUT -m comment --comment &quot;kubernetes service portals&quot; -j KUBE-SERVICES
-A POSTROUTING -m comment --comment &quot;kubernetes postrouting rules&quot; -j KUBE-POSTROUTING
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m mark ! --mark 0x4000/0x4000 -j RETURN
-A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0
-A KUBE-POSTROUTING -m comment --comment &quot;kubernetes service traffic requiring SNAT&quot; -j MASQUERADE --random-fully
-A KUBE-SEP-ETI7FUQQE3BS2IXE -s 192.168.10.100/32 -m comment --comment &quot;default/kubernetes:https&quot; -j KUBE-MARK-MASQ
-A KUBE-SEP-ETI7FUQQE3BS2IXE -p tcp -m comment --comment &quot;default/kubernetes:https&quot; -m tcp -j DNAT --to-destination 192.168.10.100:6443
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment &quot;default/kubernetes:https cluster IP&quot; -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -m comment --comment &quot;kubernetes service nodeports; NOTE: this must be the last rule in this chain&quot; -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-NPX46M4PTMTKRN6Y ! -s 10.244.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment &quot;default/kubernetes:https cluster IP&quot; -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment &quot;default/kubernetes:https -&amp;gt; 192.168.10.100:6443&quot; -j KUBE-SEP-ETI7FUQQE3BS2IXE
COMMIT
# Completed on Wed Jul 16 00:11:35 2025
(⎈|HomeLab:N/A) root@k8s-ctr:~# iptables -t nat -S
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N KUBE-KUBELET-CANARY
-N KUBE-MARK-MASQ
-N KUBE-NODEPORTS
-N KUBE-POSTROUTING
-N KUBE-PROXY-CANARY
-N KUBE-SEP-ETI7FUQQE3BS2IXE
-N KUBE-SERVICES
-N KUBE-SVC-NPX46M4PTMTKRN6Y
-A PREROUTING -m comment --comment &quot;kubernetes service portals&quot; -j KUBE-SERVICES
-A OUTPUT -m comment --comment &quot;kubernetes service portals&quot; -j KUBE-SERVICES
-A POSTROUTING -m comment --comment &quot;kubernetes postrouting rules&quot; -j KUBE-POSTROUTING
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m mark ! --mark 0x4000/0x4000 -j RETURN
-A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0
-A KUBE-POSTROUTING -m comment --comment &quot;kubernetes service traffic requiring SNAT&quot; -j MASQUERADE --random-fully
-A KUBE-SEP-ETI7FUQQE3BS2IXE -s 192.168.10.100/32 -m comment --comment &quot;default/kubernetes:https&quot; -j KUBE-MARK-MASQ
-A KUBE-SEP-ETI7FUQQE3BS2IXE -p tcp -m comment --comment &quot;default/kubernetes:https&quot; -m tcp -j DNAT --to-destination 192.168.10.100:6443
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment &quot;default/kubernetes:https cluster IP&quot; -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -m comment --comment &quot;kubernetes service nodeports; NOTE: this must be the last rule in this chain&quot; -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-NPX46M4PTMTKRN6Y ! -s 10.244.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment &quot;default/kubernetes:https cluster IP&quot; -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment &quot;default/kubernetes:https -&amp;gt; 192.168.10.100:6443&quot; -j KUBE-SEP-ETI7FUQQE3BS2IXE
(⎈|HomeLab:N/A) root@k8s-ctr:~# iptables -t filter -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-N KUBE-EXTERNAL-SERVICES
-N KUBE-FIREWALL
-N KUBE-FORWARD
-N KUBE-KUBELET-CANARY
-N KUBE-NODEPORTS
-N KUBE-PROXY-CANARY
-N KUBE-PROXY-FIREWALL
-N KUBE-SERVICES
-A INPUT -m conntrack --ctstate NEW -m comment --comment &quot;kubernetes load balancer firewall&quot; -j KUBE-PROXY-FIREWALL
-A INPUT -m comment --comment &quot;kubernetes health check service ports&quot; -j KUBE-NODEPORTS
-A INPUT -m conntrack --ctstate NEW -m comment --comment &quot;kubernetes externally-visible service portals&quot; -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A FORWARD -m conntrack --ctstate NEW -m comment --comment &quot;kubernetes load balancer firewall&quot; -j KUBE-PROXY-FIREWALL
-A FORWARD -m comment --comment &quot;kubernetes forwarding rules&quot; -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment &quot;kubernetes service portals&quot; -j KUBE-SERVICES
-A FORWARD -m conntrack --ctstate NEW -m comment --comment &quot;kubernetes externally-visible service portals&quot; -j KUBE-EXTERNAL-SERVICES
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment &quot;kubernetes load balancer firewall&quot; -j KUBE-PROXY-FIREWALL
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment &quot;kubernetes service portals&quot; -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment &quot;block incoming localnet connections&quot; -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
-A KUBE-FORWARD -m conntrack --ctstate INVALID -m nfacct --nfacct-name  ct_state_invalid_dropped_pkts -j DROP
-A KUBE-FORWARD -m comment --comment &quot;kubernetes forwarding rules&quot; -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -m comment --comment &quot;kubernetes forwarding conntrack rule&quot; -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment &quot;kube-system/kube-dns:metrics has no endpoints&quot; -m tcp --dport 9153 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment &quot;kube-system/kube-dns:dns has no endpoints&quot; -m udp --dport 53 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment &quot;kube-system/kube-dns:dns-tcp has no endpoints&quot; -m tcp --dport 53 -j REJECT --reject-with icmp-port-unreachable
(⎈|HomeLab:N/A) root@k8s-ctr:~# iptables -t mangle -S
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N KUBE-IPTABLES-HINT
-N KUBE-KUBELET-CANARY
-N KUBE-PROXY-CANARY
(⎈|HomeLab:N/A) root@k8s-ctr:~# tree /etc/cni/net.d/
/etc/cni/net.d/

0 directories, 0 files
(⎈|HomeLab:N/A) root@k8s-ctr:~#&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Install Flannel with Helm. Run the steps below on the control plane node.&lt;/p&gt;
&lt;pre class=&quot;sql&quot;&gt;&lt;code&gt;# The namespace must be created manually first to avoid a helm error
kubectl create ns kube-flannel
kubectl label --overwrite ns kube-flannel pod-security.kubernetes.io/enforce=privileged

helm repo add flannel https://flannel-io.github.io/flannel/
helm repo list
helm search repo flannel
helm show values flannel/flannel

# Specify the NIC that carries k8s traffic (set --iface=eth1)
cat &amp;lt;&amp;lt; EOF &amp;gt; flannel-values.yaml
podCidr: &quot;10.244.0.0/16&quot;

flannel:
  args:
  - &quot;--ip-masq&quot;
  - &quot;--kube-subnet-mgr&quot;
  - &quot;--iface=eth1&quot;  
EOF

# Install the chart with helm
(⎈|HomeLab:N/A) root@k8s-ctr:~# helm install flannel --namespace kube-flannel flannel/flannel -f flannel-values.yaml
NAME: flannel
LAST DEPLOYED: Wed Jul 16 00:15:06 2025
NAMESPACE: kube-flannel
STATUS: deployed
REVISION: 1
TEST SUITE: None

(⎈|HomeLab:N/A) root@k8s-ctr:~# helm list -A
NAME    NAMESPACE       REVISION        UPDATED                                 STATUS          CHART           APP VERSION
flannel kube-flannel    1               2025-07-16 00:15:06.577593785 +0900 KST deployed        flannel-v0.27.1 v0.27.1

# Check the init containers: install-cni-plugin, install-cni
(⎈|HomeLab:N/A) root@k8s-ctr:~# kc describe pod -n kube-flannel -l app=flannel
&amp;lt;omitted&amp;gt;
Init Containers:
  install-cni-plugin:
    Container ID:  containerd://78cc66330b94d3db3bca8621539ff8bee2c384083da03f245d52bee899d2e39f
    Image:         ghcr.io/flannel-io/flannel-cni-plugin:v1.7.1-flannel1
    Image ID:      ghcr.io/flannel-io/flannel-cni-plugin@sha256:cb3176a2c9eae5fa0acd7f45397e706eacb4577dac33cad89f93b775ff5611df
    Port:          &amp;lt;none&amp;gt;
    Host Port:     &amp;lt;none&amp;gt;
    Command:
      cp
    Args:
      -f
      /flannel
      /opt/cni/bin/flannel
&amp;lt;omitted&amp;gt;
Containers:
  kube-flannel:
    Container ID:  containerd://93ed6dd7c34c6e0e71b0c1556dd52ff65636cc9bbac610690e4a3f55c6c7aa51
    Image:         ghcr.io/flannel-io/flannel:v0.27.1
    Image ID:      ghcr.io/flannel-io/flannel@sha256:0c95c822b690f83dc827189d691015f92ab7e249e238876b56442b580c492d85
    Port:          &amp;lt;none&amp;gt;
    Host Port:     &amp;lt;none&amp;gt;
    Command:
      /opt/bin/flanneld
      --ip-masq
      --kube-subnet-mgr
      --iface=eth1


# The flannel CNI binary has been added
(⎈|HomeLab:N/A) root@k8s-ctr:~# tree /opt/cni/bin/
/opt/cni/bin/
├── bandwidth
├── bridge
├── dhcp
├── dummy
├── firewall
├── flannel
├── host-device
├── host-local
├── ipvlan
├── LICENSE
├── loopback
├── macvlan
├── portmap
├── ptp
├── README.md
├── sbr
├── static
├── tap
├── tuning
├── vlan
└── vrf

1 directory, 21 files
(⎈|HomeLab:N/A) root@k8s-ctr:~# tree /etc/cni/net.d/
/etc/cni/net.d/
└── 10-flannel.conflist

1 directory, 1 file
(⎈|HomeLab:N/A) root@k8s-ctr:~# cat /etc/cni/net.d/10-flannel.conflist | jq
{
  &quot;name&quot;: &quot;cbr0&quot;,
  &quot;cniVersion&quot;: &quot;0.3.1&quot;,
  &quot;plugins&quot;: [
    {
      &quot;type&quot;: &quot;flannel&quot;,
      &quot;delegate&quot;: {
        &quot;hairpinMode&quot;: true,
        &quot;isDefaultGateway&quot;: true
      }
    },
    {
      &quot;type&quot;: &quot;portmap&quot;,
      &quot;capabilities&quot;: {
        &quot;portMappings&quot;: true
      }
    }
  ]
}
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl describe cm -n kube-flannel kube-flannel-cfg
Name:         kube-flannel-cfg
Namespace:    kube-flannel
Labels:       app=flannel
              app.kubernetes.io/managed-by=Helm
              tier=node
Annotations:  meta.helm.sh/release-name: flannel
              meta.helm.sh/release-namespace: kube-flannel

Data
====
cni-conf.json:
----
{
  &quot;name&quot;: &quot;cbr0&quot;,
  &quot;cniVersion&quot;: &quot;0.3.1&quot;,
  &quot;plugins&quot;: [
    {
      &quot;type&quot;: &quot;flannel&quot;,
      &quot;delegate&quot;: {
        &quot;hairpinMode&quot;: true,
        &quot;isDefaultGateway&quot;: true
      }
    },
    {
      &quot;type&quot;: &quot;portmap&quot;,
      &quot;capabilities&quot;: {
        &quot;portMappings&quot;: true
      }
    }
  ]
}


net-conf.json:
----
{
  &quot;Network&quot;: &quot;10.244.0.0/16&quot;,
  &quot;Backend&quot;: {
    &quot;Type&quot;: &quot;vxlan&quot;
  }
}



BinaryData
====

Events:  &amp;lt;none&amp;gt;

&lt;/code&gt;&lt;/pre&gt;
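&lt;p data-ke-size=&quot;size16&quot;&gt;Besides the CNI config above, flanneld writes its per-node subnet lease to /run/flannel/subnet.env, which the flannel CNI plugin reads before delegating to the bridge plugin. You can confirm the lease on a node as below (the values shown are what we would expect for this lab's 10.244.0.0/16 network, not captured output):&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# Each node gets a different FLANNEL_SUBNET (its podCIDR)
cat /run/flannel/subnet.env
# FLANNEL_NETWORK=10.244.0.0/16
# FLANNEL_SUBNET=10.244.0.1/24
# FLANNEL_MTU=1450
# FLANNEL_IPMASQ=true&lt;/code&gt;&lt;/pre&gt;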
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;After the installation, run the following commands on a node to verify what has changed.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# Compared with before the install -&amp;gt; a flannel.1 interface has been added
(⎈|HomeLab:N/A) root@k8s-ctr:~# ip -c link
1: lo: &amp;lt;LOOPBACK,UP,LOWER_UP&amp;gt; mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
    altname enp0s3
3: eth1: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:8f:41:1f brd ff:ff:ff:ff:ff:ff
    altname enp0s8
4: flannel.1: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default
    link/ether 22:21:40:78:7f:bd brd ff:ff:ff:ff:ff:ff

# Routes added for each node's podCIDR
(⎈|HomeLab:N/A) root@k8s-ctr:~# ip -c route | grep 10.244.
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink
(⎈|HomeLab:N/A) root@k8s-ctr:~# brctl show

# Check the following on a node where pods are running
root@k8s-w2:~# ip -c route |grep 10.244
10.244.0.0/24 via 10.244.0.0 dev flannel.1 onlink
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink
10.244.2.0/24 dev cni0 proto kernel scope link src 10.244.2.1

# Once a pod is created, its veth interface is attached to the cni0 bridge
root@k8s-w2:~# brctl show
bridge name     bridge id               STP enabled     interfaces
cni0            8000.c29ee9c135b8       no              veth63c703f8
                                                        veth81fbb526

(⎈|HomeLab:N/A) root@k8s-ctr:~# iptables-save
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 00:21:51 2025
*mangle
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:KUBE-IPTABLES-HINT - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
COMMIT
# Completed on Wed Jul 16 00:21:51 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 00:21:51 2025
*filter
:INPUT ACCEPT [5223288:1009002872]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [5209058:963011309]
:FLANNEL-FWD - [0:0]
:KUBE-EXTERNAL-SERVICES - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-FORWARD - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-PROXY-FIREWALL - [0:0]
:KUBE-SERVICES - [0:0]
-A INPUT -m conntrack --ctstate NEW -m comment --comment &quot;kubernetes load balancer firewall&quot; -j KUBE-PROXY-FIREWALL
-A INPUT -m comment --comment &quot;kubernetes health check service ports&quot; -j KUBE-NODEPORTS
-A INPUT -m conntrack --ctstate NEW -m comment --comment &quot;kubernetes externally-visible service portals&quot; -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A FORWARD -m conntrack --ctstate NEW -m comment --comment &quot;kubernetes load balancer firewall&quot; -j KUBE-PROXY-FIREWALL
-A FORWARD -m comment --comment &quot;kubernetes forwarding rules&quot; -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment &quot;kubernetes service portals&quot; -j KUBE-SERVICES
-A FORWARD -m conntrack --ctstate NEW -m comment --comment &quot;kubernetes externally-visible service portals&quot; -j KUBE-EXTERNAL-SERVICES
-A FORWARD -m comment --comment &quot;flanneld forward&quot; -j FLANNEL-FWD
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment &quot;kubernetes load balancer firewall&quot; -j KUBE-PROXY-FIREWALL
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment &quot;kubernetes service portals&quot; -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A FLANNEL-FWD -s 10.244.0.0/16 -m comment --comment &quot;flanneld forward&quot; -j ACCEPT
-A FLANNEL-FWD -d 10.244.0.0/16 -m comment --comment &quot;flanneld forward&quot; -j ACCEPT
-A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment &quot;block incoming localnet connections&quot; -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
-A KUBE-FORWARD -m conntrack --ctstate INVALID -m nfacct --nfacct-name  ct_state_invalid_dropped_pkts -j DROP
-A KUBE-FORWARD -m comment --comment &quot;kubernetes forwarding rules&quot; -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -m comment --comment &quot;kubernetes forwarding conntrack rule&quot; -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
COMMIT
# Completed on Wed Jul 16 00:21:51 2025
# Generated by iptables-save v1.8.10 (nf_tables) on Wed Jul 16 00:21:51 2025
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:FLANNEL-POSTRTG - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-SEP-CLAGU7VMF4VCXE4X - [0:0]
:KUBE-SEP-DLP2S2N3HX5UKLVP - [0:0]
:KUBE-SEP-ETI7FUQQE3BS2IXE - [0:0]
:KUBE-SEP-H7FN6LU3RSH6CC2T - [0:0]
:KUBE-SEP-TCIZBYBD3WWXNWF5 - [0:0]
:KUBE-SEP-TFTZVOJFQDTMM5AB - [0:0]
:KUBE-SEP-ZHICQ2ODADGCY7DS - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-JD5MR3NA4I4DYORP - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
-A PREROUTING -m comment --comment &quot;kubernetes service portals&quot; -j KUBE-SERVICES
-A OUTPUT -m comment --comment &quot;kubernetes service portals&quot; -j KUBE-SERVICES
-A POSTROUTING -m comment --comment &quot;kubernetes postrouting rules&quot; -j KUBE-POSTROUTING
-A POSTROUTING -m comment --comment &quot;flanneld masq&quot; -j FLANNEL-POSTRTG
-A FLANNEL-POSTRTG -m mark --mark 0x4000/0x4000 -m comment --comment &quot;flanneld masq&quot; -j RETURN
-A FLANNEL-POSTRTG -s 10.244.0.0/24 -d 10.244.0.0/16 -m comment --comment &quot;flanneld masq&quot; -j RETURN
-A FLANNEL-POSTRTG -s 10.244.0.0/16 -d 10.244.0.0/24 -m comment --comment &quot;flanneld masq&quot; -j RETURN
-A FLANNEL-POSTRTG ! -s 10.244.0.0/16 -d 10.244.0.0/24 -m comment --comment &quot;flanneld masq&quot; -j RETURN
-A FLANNEL-POSTRTG -s 10.244.0.0/16 ! -d 224.0.0.0/4 -m comment --comment &quot;flanneld masq&quot; -j MASQUERADE --random-fully
-A FLANNEL-POSTRTG ! -s 10.244.0.0/16 -d 10.244.0.0/16 -m comment --comment &quot;flanneld masq&quot; -j MASQUERADE --random-fully
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m mark ! --mark 0x4000/0x4000 -j RETURN
-A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0
-A KUBE-POSTROUTING -m comment --comment &quot;kubernetes service traffic requiring SNAT&quot; -j MASQUERADE --random-fully
-A KUBE-SEP-CLAGU7VMF4VCXE4X -s 10.244.2.2/32 -m comment --comment &quot;kube-system/kube-dns:metrics&quot; -j KUBE-MARK-MASQ
-A KUBE-SEP-CLAGU7VMF4VCXE4X -p tcp -m comment --comment &quot;kube-system/kube-dns:metrics&quot; -m tcp -j DNAT --to-destination 10.244.2.2:9153
-A KUBE-SEP-DLP2S2N3HX5UKLVP -s 10.244.2.3/32 -m comment --comment &quot;kube-system/kube-dns:dns-tcp&quot; -j KUBE-MARK-MASQ
-A KUBE-SEP-DLP2S2N3HX5UKLVP -p tcp -m comment --comment &quot;kube-system/kube-dns:dns-tcp&quot; -m tcp -j DNAT --to-destination 10.244.2.3:53
-A KUBE-SEP-ETI7FUQQE3BS2IXE -s 192.168.10.100/32 -m comment --comment &quot;default/kubernetes:https&quot; -j KUBE-MARK-MASQ
-A KUBE-SEP-ETI7FUQQE3BS2IXE -p tcp -m comment --comment &quot;default/kubernetes:https&quot; -m tcp -j DNAT --to-destination 192.168.10.100:6443
-A KUBE-SEP-H7FN6LU3RSH6CC2T -s 10.244.2.2/32 -m comment --comment &quot;kube-system/kube-dns:dns-tcp&quot; -j KUBE-MARK-MASQ
-A KUBE-SEP-H7FN6LU3RSH6CC2T -p tcp -m comment --comment &quot;kube-system/kube-dns:dns-tcp&quot; -m tcp -j DNAT --to-destination 10.244.2.2:53
-A KUBE-SEP-TCIZBYBD3WWXNWF5 -s 10.244.2.2/32 -m comment --comment &quot;kube-system/kube-dns:dns&quot; -j KUBE-MARK-MASQ
-A KUBE-SEP-TCIZBYBD3WWXNWF5 -p udp -m comment --comment &quot;kube-system/kube-dns:dns&quot; -m udp -j DNAT --to-destination 10.244.2.2:53
-A KUBE-SEP-TFTZVOJFQDTMM5AB -s 10.244.2.3/32 -m comment --comment &quot;kube-system/kube-dns:metrics&quot; -j KUBE-MARK-MASQ
-A KUBE-SEP-TFTZVOJFQDTMM5AB -p tcp -m comment --comment &quot;kube-system/kube-dns:metrics&quot; -m tcp -j DNAT --to-destination 10.244.2.3:9153
-A KUBE-SEP-ZHICQ2ODADGCY7DS -s 10.244.2.3/32 -m comment --comment &quot;kube-system/kube-dns:dns&quot; -j KUBE-MARK-MASQ
-A KUBE-SEP-ZHICQ2ODADGCY7DS -p udp -m comment --comment &quot;kube-system/kube-dns:dns&quot; -m udp -j DNAT --to-destination 10.244.2.3:53
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment &quot;default/kubernetes:https cluster IP&quot; -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment &quot;kube-system/kube-dns:metrics cluster IP&quot; -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment &quot;kube-system/kube-dns:dns cluster IP&quot; -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment &quot;kube-system/kube-dns:dns-tcp cluster IP&quot; -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -m comment --comment &quot;kubernetes service nodeports; NOTE: this must be the last rule in this chain&quot; -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-ERIFXISQEP7F7OF4 ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment &quot;kube-system/kube-dns:dns-tcp cluster IP&quot; -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment &quot;kube-system/kube-dns:dns-tcp -&amp;gt; 10.244.2.2:53&quot; -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-H7FN6LU3RSH6CC2T
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment &quot;kube-system/kube-dns:dns-tcp -&amp;gt; 10.244.2.3:53&quot; -j KUBE-SEP-DLP2S2N3HX5UKLVP
-A KUBE-SVC-JD5MR3NA4I4DYORP ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment &quot;kube-system/kube-dns:metrics cluster IP&quot; -m tcp --dport 9153 -j KUBE-MARK-MASQ
-A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment &quot;kube-system/kube-dns:metrics -&amp;gt; 10.244.2.2:9153&quot; -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-CLAGU7VMF4VCXE4X
-A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment &quot;kube-system/kube-dns:metrics -&amp;gt; 10.244.2.3:9153&quot; -j KUBE-SEP-TFTZVOJFQDTMM5AB
-A KUBE-SVC-NPX46M4PTMTKRN6Y ! -s 10.244.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment &quot;default/kubernetes:https cluster IP&quot; -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment &quot;default/kubernetes:https -&amp;gt; 192.168.10.100:6443&quot; -j KUBE-SEP-ETI7FUQQE3BS2IXE
-A KUBE-SVC-TCOU7JCQXEZGVUNU ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p udp -m comment --comment &quot;kube-system/kube-dns:dns cluster IP&quot; -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment &quot;kube-system/kube-dns:dns -&amp;gt; 10.244.2.2:53&quot; -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-TCIZBYBD3WWXNWF5
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment &quot;kube-system/kube-dns:dns -&amp;gt; 10.244.2.3:53&quot; -j KUBE-SEP-ZHICQ2ODADGCY7DS
COMMIT
# Completed on Wed Jul 16 00:21:51 2025
(⎈|HomeLab:N/A) root@k8s-ctr:~# iptables -t nat -S
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N FLANNEL-POSTRTG
-N KUBE-KUBELET-CANARY
-N KUBE-MARK-MASQ
-N KUBE-NODEPORTS
-N KUBE-POSTROUTING
-N KUBE-PROXY-CANARY
-N KUBE-SEP-CLAGU7VMF4VCXE4X
-N KUBE-SEP-DLP2S2N3HX5UKLVP
-N KUBE-SEP-ETI7FUQQE3BS2IXE
-N KUBE-SEP-H7FN6LU3RSH6CC2T
-N KUBE-SEP-TCIZBYBD3WWXNWF5
-N KUBE-SEP-TFTZVOJFQDTMM5AB
-N KUBE-SEP-ZHICQ2ODADGCY7DS
-N KUBE-SERVICES
-N KUBE-SVC-ERIFXISQEP7F7OF4
-N KUBE-SVC-JD5MR3NA4I4DYORP
-N KUBE-SVC-NPX46M4PTMTKRN6Y
-N KUBE-SVC-TCOU7JCQXEZGVUNU
-A PREROUTING -m comment --comment &quot;kubernetes service portals&quot; -j KUBE-SERVICES
-A OUTPUT -m comment --comment &quot;kubernetes service portals&quot; -j KUBE-SERVICES
-A POSTROUTING -m comment --comment &quot;kubernetes postrouting rules&quot; -j KUBE-POSTROUTING
-A POSTROUTING -m comment --comment &quot;flanneld masq&quot; -j FLANNEL-POSTRTG
-A FLANNEL-POSTRTG -m mark --mark 0x4000/0x4000 -m comment --comment &quot;flanneld masq&quot; -j RETURN
-A FLANNEL-POSTRTG -s 10.244.0.0/24 -d 10.244.0.0/16 -m comment --comment &quot;flanneld masq&quot; -j RETURN
-A FLANNEL-POSTRTG -s 10.244.0.0/16 -d 10.244.0.0/24 -m comment --comment &quot;flanneld masq&quot; -j RETURN
-A FLANNEL-POSTRTG ! -s 10.244.0.0/16 -d 10.244.0.0/24 -m comment --comment &quot;flanneld masq&quot; -j RETURN
-A FLANNEL-POSTRTG -s 10.244.0.0/16 ! -d 224.0.0.0/4 -m comment --comment &quot;flanneld masq&quot; -j MASQUERADE --random-fully
-A FLANNEL-POSTRTG ! -s 10.244.0.0/16 -d 10.244.0.0/16 -m comment --comment &quot;flanneld masq&quot; -j MASQUERADE --random-fully
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m mark ! --mark 0x4000/0x4000 -j RETURN
-A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0
-A KUBE-POSTROUTING -m comment --comment &quot;kubernetes service traffic requiring SNAT&quot; -j MASQUERADE --random-fully
-A KUBE-SEP-CLAGU7VMF4VCXE4X -s 10.244.2.2/32 -m comment --comment &quot;kube-system/kube-dns:metrics&quot; -j KUBE-MARK-MASQ
-A KUBE-SEP-CLAGU7VMF4VCXE4X -p tcp -m comment --comment &quot;kube-system/kube-dns:metrics&quot; -m tcp -j DNAT --to-destination 10.244.2.2:9153
-A KUBE-SEP-DLP2S2N3HX5UKLVP -s 10.244.2.3/32 -m comment --comment &quot;kube-system/kube-dns:dns-tcp&quot; -j KUBE-MARK-MASQ
-A KUBE-SEP-DLP2S2N3HX5UKLVP -p tcp -m comment --comment &quot;kube-system/kube-dns:dns-tcp&quot; -m tcp -j DNAT --to-destination 10.244.2.3:53
-A KUBE-SEP-ETI7FUQQE3BS2IXE -s 192.168.10.100/32 -m comment --comment &quot;default/kubernetes:https&quot; -j KUBE-MARK-MASQ
-A KUBE-SEP-ETI7FUQQE3BS2IXE -p tcp -m comment --comment &quot;default/kubernetes:https&quot; -m tcp -j DNAT --to-destination 192.168.10.100:6443
-A KUBE-SEP-H7FN6LU3RSH6CC2T -s 10.244.2.2/32 -m comment --comment &quot;kube-system/kube-dns:dns-tcp&quot; -j KUBE-MARK-MASQ
-A KUBE-SEP-H7FN6LU3RSH6CC2T -p tcp -m comment --comment &quot;kube-system/kube-dns:dns-tcp&quot; -m tcp -j DNAT --to-destination 10.244.2.2:53
-A KUBE-SEP-TCIZBYBD3WWXNWF5 -s 10.244.2.2/32 -m comment --comment &quot;kube-system/kube-dns:dns&quot; -j KUBE-MARK-MASQ
-A KUBE-SEP-TCIZBYBD3WWXNWF5 -p udp -m comment --comment &quot;kube-system/kube-dns:dns&quot; -m udp -j DNAT --to-destination 10.244.2.2:53
-A KUBE-SEP-TFTZVOJFQDTMM5AB -s 10.244.2.3/32 -m comment --comment &quot;kube-system/kube-dns:metrics&quot; -j KUBE-MARK-MASQ
-A KUBE-SEP-TFTZVOJFQDTMM5AB -p tcp -m comment --comment &quot;kube-system/kube-dns:metrics&quot; -m tcp -j DNAT --to-destination 10.244.2.3:9153
-A KUBE-SEP-ZHICQ2ODADGCY7DS -s 10.244.2.3/32 -m comment --comment &quot;kube-system/kube-dns:dns&quot; -j KUBE-MARK-MASQ
-A KUBE-SEP-ZHICQ2ODADGCY7DS -p udp -m comment --comment &quot;kube-system/kube-dns:dns&quot; -m udp -j DNAT --to-destination 10.244.2.3:53
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment &quot;default/kubernetes:https cluster IP&quot; -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment &quot;kube-system/kube-dns:metrics cluster IP&quot; -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment &quot;kube-system/kube-dns:dns cluster IP&quot; -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment &quot;kube-system/kube-dns:dns-tcp cluster IP&quot; -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -m comment --comment &quot;kubernetes service nodeports; NOTE: this must be the last rule in this chain&quot; -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-ERIFXISQEP7F7OF4 ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment &quot;kube-system/kube-dns:dns-tcp cluster IP&quot; -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment &quot;kube-system/kube-dns:dns-tcp -&amp;gt; 10.244.2.2:53&quot; -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-H7FN6LU3RSH6CC2T
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment &quot;kube-system/kube-dns:dns-tcp -&amp;gt; 10.244.2.3:53&quot; -j KUBE-SEP-DLP2S2N3HX5UKLVP
-A KUBE-SVC-JD5MR3NA4I4DYORP ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment &quot;kube-system/kube-dns:metrics cluster IP&quot; -m tcp --dport 9153 -j KUBE-MARK-MASQ
-A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment &quot;kube-system/kube-dns:metrics -&amp;gt; 10.244.2.2:9153&quot; -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-CLAGU7VMF4VCXE4X
-A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment &quot;kube-system/kube-dns:metrics -&amp;gt; 10.244.2.3:9153&quot; -j KUBE-SEP-TFTZVOJFQDTMM5AB
-A KUBE-SVC-NPX46M4PTMTKRN6Y ! -s 10.244.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment &quot;default/kubernetes:https cluster IP&quot; -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment &quot;default/kubernetes:https -&amp;gt; 192.168.10.100:6443&quot; -j KUBE-SEP-ETI7FUQQE3BS2IXE
-A KUBE-SVC-TCOU7JCQXEZGVUNU ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p udp -m comment --comment &quot;kube-system/kube-dns:dns cluster IP&quot; -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment &quot;kube-system/kube-dns:dns -&amp;gt; 10.244.2.2:53&quot; -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-TCIZBYBD3WWXNWF5
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment &quot;kube-system/kube-dns:dns -&amp;gt; 10.244.2.3:53&quot; -j KUBE-SEP-ZHICQ2ODADGCY7DS
(⎈|HomeLab:N/A) root@k8s-ctr:~# iptables -t filter -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-N FLANNEL-FWD
-N KUBE-EXTERNAL-SERVICES
-N KUBE-FIREWALL
-N KUBE-FORWARD
-N KUBE-KUBELET-CANARY
-N KUBE-NODEPORTS
-N KUBE-PROXY-CANARY
-N KUBE-PROXY-FIREWALL
-N KUBE-SERVICES
-A INPUT -m conntrack --ctstate NEW -m comment --comment &quot;kubernetes load balancer firewall&quot; -j KUBE-PROXY-FIREWALL
-A INPUT -m comment --comment &quot;kubernetes health check service ports&quot; -j KUBE-NODEPORTS
-A INPUT -m conntrack --ctstate NEW -m comment --comment &quot;kubernetes externally-visible service portals&quot; -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A FORWARD -m conntrack --ctstate NEW -m comment --comment &quot;kubernetes load balancer firewall&quot; -j KUBE-PROXY-FIREWALL
-A FORWARD -m comment --comment &quot;kubernetes forwarding rules&quot; -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment &quot;kubernetes service portals&quot; -j KUBE-SERVICES
-A FORWARD -m conntrack --ctstate NEW -m comment --comment &quot;kubernetes externally-visible service portals&quot; -j KUBE-EXTERNAL-SERVICES
-A FORWARD -m comment --comment &quot;flanneld forward&quot; -j FLANNEL-FWD
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment &quot;kubernetes load balancer firewall&quot; -j KUBE-PROXY-FIREWALL
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment &quot;kubernetes service portals&quot; -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A FLANNEL-FWD -s 10.244.0.0/16 -m comment --comment &quot;flanneld forward&quot; -j ACCEPT
-A FLANNEL-FWD -d 10.244.0.0/16 -m comment --comment &quot;flanneld forward&quot; -j ACCEPT
-A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment &quot;block incoming localnet connections&quot; -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
-A KUBE-FORWARD -m conntrack --ctstate INVALID -m nfacct --nfacct-name  ct_state_invalid_dropped_pkts -j DROP
-A KUBE-FORWARD -m comment --comment &quot;kubernetes forwarding rules&quot; -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -m comment --comment &quot;kubernetes forwarding conntrack rule&quot; -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT&lt;/code&gt;&lt;/pre&gt;
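&lt;p data-ke-size=&quot;size16&quot;&gt;Reading the FLANNEL-POSTRTG chain above in order: already-marked traffic and pod-to-pod traffic inside 10.244.0.0/16 RETURN without NAT, and only pod traffic leaving the cluster network (excluding multicast) is masqueraded. This is the effect of the --ip-masq flag we set in flannel-values.yaml. The per-rule packet counters make this easy to observe:&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# Show packet/byte counters for each masquerade decision rule
iptables -t nat -L FLANNEL-POSTRTG -v -n --line-numbers&lt;/code&gt;&lt;/pre&gt;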
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
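&lt;p data-ke-size=&quot;size16&quot;&gt;As a quick cross-node connectivity check, you can ping between two throwaway pods (a minimal sketch: pod1/pod2 and the busybox image are arbitrary examples, and the target IP must be replaced with the address kubectl reports):&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# Start two throwaway pods; the scheduler will normally spread them across workers
kubectl run pod1 --image=busybox --restart=Never -- sleep 3600
kubectl run pod2 --image=busybox --restart=Never -- sleep 3600
kubectl get pod -owide          # note each pod's 10.244.x.y address and node

# Ping pod2 from pod1 across the flannel.1 VXLAN overlay
kubectl exec pod1 -- ping -c 2 &amp;lt;pod2-IP&amp;gt;

# Clean up
kubectl delete pod pod1 pod2&lt;/code&gt;&lt;/pre&gt;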
&lt;p data-ke-size=&quot;size16&quot;&gt;Pod networking is now working: the nodes have transitioned to the Ready state, and the coredns pods are confirmed to be running normally.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get no -owide
NAME      STATUS   ROLES           AGE    VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
k8s-ctr   Ready    control-plane   2d5h   v1.33.2   192.168.10.100   &amp;lt;none&amp;gt;        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27
k8s-w1    Ready    &amp;lt;none&amp;gt;          2d5h   v1.33.2   192.168.10.101   &amp;lt;none&amp;gt;        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27
k8s-w2    Ready    &amp;lt;none&amp;gt;          2d5h   v1.33.2   192.168.10.102   &amp;lt;none&amp;gt;        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get po -A -owide
NAMESPACE      NAME                              READY   STATUS    RESTARTS      AGE    IP               NODE      NOMINATED NODE   READINESS GATES
kube-flannel   kube-flannel-ds-2r8lg             1/1     Running   0             53s    192.168.10.101   k8s-w1    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-flannel   kube-flannel-ds-5xn5l             1/1     Running   0             53s    192.168.10.102   k8s-w2    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-flannel   kube-flannel-ds-hjcnq             1/1     Running   0             53s    192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system    coredns-674b8bbfcf-bqxft          1/1     Running   0             2d5h   10.244.2.2       k8s-w2    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system    coredns-674b8bbfcf-dnghp          1/1     Running   0             2d5h   10.244.2.3       k8s-w2    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system    etcd-k8s-ctr                      1/1     Running   0             2d5h   192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system    kube-apiserver-k8s-ctr            1/1     Running   0             2d5h   192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system    kube-controller-manager-k8s-ctr   1/1     Running   8 (39h ago)   2d5h   192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system    kube-proxy-6kmfl                  1/1     Running   0             2d5h   192.168.10.101   k8s-w1    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system    kube-proxy-drbwf                  1/1     Running   0             2d5h   192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system    kube-proxy-kqpf7                  1/1     Running   0             2d5h   192.168.10.102   k8s-w2    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system    kube-scheduler-k8s-ctr            1/1     Running   8 (39h ago)   2d5h   192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Deploy a sample application to test connectivity.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Deploy the sample application
cat &amp;lt;&amp;lt; EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webpod
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webpod
  template:
    metadata:
      labels:
        app: webpod
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - webpod
            topologyKey: &quot;kubernetes.io/hostname&quot;
      containers:
      - name: webpod
        image: traefik/whoami
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: webpod
  labels:
    app: webpod
spec:
  selector:
    app: webpod
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
EOF


# Deploy a curl-pod on the k8s-ctr node
cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: curl-pod
  labels:
    app: curl
spec:
  nodeName: k8s-ctr
  containers:
    - name: curl
      image: alpine/curl
      command: [&quot;sleep&quot;, &quot;36000&quot;]
EOF&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The pods have been deployed.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get po -owide
NAME                      READY   STATUS    RESTARTS   AGE   IP           NODE      NOMINATED NODE   READINESS GATES
curl-pod                  1/1     Running   0          36s   10.244.0.2   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
webpod-697b545f57-j28gf   1/1     Running   0          38s   10.244.2.4   k8s-w2    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
webpod-697b545f57-r528h   1/1     Running   0          38s   10.244.1.2   k8s-w1    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc -owide
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE    SELECTOR
kubernetes   ClusterIP   10.96.0.1      &amp;lt;none&amp;gt;        443/TCP   2d5h   &amp;lt;none&amp;gt;
webpod       ClusterIP   10.96.39.159   &amp;lt;none&amp;gt;        80/TCP    58s    app=webpod&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Verify pod-to-pod and service connectivity.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -l app=webpod -owide
NAME                      READY   STATUS    RESTARTS   AGE    IP           NODE     NOMINATED NODE   READINESS GATES
webpod-697b545f57-j28gf   1/1     Running   0          113s   10.244.2.4   k8s-w2   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
webpod-697b545f57-r528h   1/1     Running   0          113s   10.244.1.2   k8s-w1   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
(⎈|HomeLab:N/A) root@k8s-ctr:~# POD1IP=10.244.1.2
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl $POD1IP
Hostname: webpod-697b545f57-r528h
IP: 127.0.0.1
IP: ::1
IP: 10.244.1.2
IP: fe80::14a6:c5ff:feec:b067
RemoteAddr: 10.244.0.2:35470
GET / HTTP/1.1
Host: 10.244.1.2
User-Agent: curl/8.14.1
Accept: */*

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc,ep webpod
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/webpod   ClusterIP   10.96.39.159   &amp;lt;none&amp;gt;        80/TCP    2m24s

NAME               ENDPOINTS                     AGE
endpoints/webpod   10.244.1.2:80,10.244.2.4:80   2m23s
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl webpod
Hostname: webpod-697b545f57-r528h
IP: 127.0.0.1
IP: ::1
IP: 10.244.1.2
IP: fe80::14a6:c5ff:feec:b067
RemoteAddr: 10.244.0.2:46252
GET / HTTP/1.1
Host: webpod
User-Agent: curl/8.14.1
Accept: */*
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A Kubernetes Service behaves like a load balancer. Inspecting the iptables rules for the Service's cluster IP shows that the corresponding rules have been registered. Most CNI plugins rely on iptables to make this traffic flow work.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;(⎈|HomeLab:N/A) root@k8s-ctr:~# SVCIP=$(kubectl get svc webpod -o jsonpath=&quot;{.spec.clusterIP}&quot;)
(⎈|HomeLab:N/A) root@k8s-ctr:~# iptables -t nat -S | grep $SVCIP
-A KUBE-SERVICES -d 10.96.39.159/32 -p tcp -m comment --comment &quot;default/webpod cluster IP&quot; -m tcp --dport 80 -j KUBE-SVC-CNZCPOCNCNOROALA
-A KUBE-SVC-CNZCPOCNCNOROALA ! -s 10.244.0.0/16 -d 10.96.39.159/32 -p tcp -m comment --comment &quot;default/webpod cluster IP&quot; -m tcp --dport 80 -j KUBE-MARK-MASQ&lt;/code&gt;&lt;/pre&gt;
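&lt;p data-ke-size=&quot;size16&quot;&gt;Further down the chain (not shown above), kube-proxy programs one KUBE-SEP rule per endpoint using the iptables statistic match, assigning rule i a probability of 1/(n-i) so that each of the n endpoints receives an even 1/n share of new connections. The Python sketch below (illustrative only; the endpoint addresses are the webpod pod IPs from the outputs above) reproduces that selection logic:&lt;/p&gt;

```python
import random

random.seed(42)  # deterministic run for illustration

def kube_proxy_probabilities(n):
    """Per-rule match probabilities kube-proxy programs for n endpoints.

    Rule i is only reached if rules 0..i-1 did not match, so a per-rule
    probability of 1/(n-i) yields a uniform 1/n selection overall.
    """
    return [1.0 / (n - i) for i in range(n)]

def pick_endpoint(endpoints):
    # Walk the rules in order, as iptables does; the last rule always matches.
    for ep, p in zip(endpoints, kube_proxy_probabilities(len(endpoints))):
        if random.random() < p:
            return ep
    return endpoints[-1]

counts = {"10.244.1.2:80": 0, "10.244.2.4:80": 0}
for _ in range(10000):
    counts[pick_endpoint(list(counts))] += 1
# Each endpoint receives roughly half of the 10000 connections.
```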
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Because a new set of iptables rules is added every time a Service is created, the characteristics of iptables introduce inefficiency as the cluster grows and the number of services and pods increases.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In small clusters, iptables-based CNI plugins generally cause no problems, but performance is known to degrade as the cluster scales.&lt;/p&gt;
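&lt;p data-ke-size=&quot;size16&quot;&gt;The root cause is the lookup model: iptables evaluates a flat rule list sequentially for each packet, so the worst-case cost grows linearly with the number of services, whereas an eBPF datapath resolves the service in a single hash-map lookup. A rough Python model of this difference (an illustration of the asymptotics, not a benchmark):&lt;/p&gt;

```python
def iptables_lookup_cost(num_services, target_index):
    """Rules inspected before the target service's rule matches
    in a sequential scan of the KUBE-SERVICES chain."""
    assert 0 <= target_index < num_services
    return target_index + 1

def ebpf_lookup_cost(service_map, cluster_ip):
    """A hash-map lookup touches one entry regardless of map size."""
    _ = service_map[cluster_ip]
    return 1

# 200 hypothetical ClusterIP services
services = {f"10.96.0.{i}": f"svc-{i}" for i in range(200)}
worst_iptables = iptables_lookup_cost(len(services), len(services) - 1)
ebpf = ebpf_lookup_cost(services, "10.96.0.199")
# worst_iptables grows with the service count (200 here); ebpf stays at 1.
```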
&lt;p data-ke-size=&quot;size16&quot;&gt;The chart below shows performance degrading as the number of nodes and pods grows.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1106&quot; data-origin-height=&quot;618&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cA4Ccl/btsPqzSOKYf/7N8O4ujQPviSuJf8frfhqk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cA4Ccl/btsPqzSOKYf/7N8O4ujQPviSuJf8frfhqk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cA4Ccl/btsPqzSOKYf/7N8O4ujQPviSuJf8frfhqk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcA4Ccl%2FbtsPqzSOKYf%2F7N8O4ujQPviSuJf8frfhqk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1106&quot; height=&quot;618&quot; data-origin-width=&quot;1106&quot; data-origin-height=&quot;618&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://www.youtube.com/watch?v=cKPW67D7X10&quot;&gt;https://www.youtube.com/watch?v=cKPW67D7X10&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;3. Introduction to Cilium&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Cilium is a CNI plugin that provides pod networking and security based on eBPF.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;eBPF (extended Berkeley Packet Filter) is a technology that lets BPF programs be attached to various kernel events without modifying the kernel, so that user-defined code runs dynamically whenever those events fire.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The figure below contrasts conventional container networking with Cilium's eBPF-based container networking.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;On the left, traffic is handled through iptables, whereas Cilium runs eBPF programs at hooks in the Linux kernel and can therefore process traffic more efficiently.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1008&quot; data-origin-height=&quot;565&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/5UMHU/btsPrp2ZJIC/1LYkjqGiGhjMnwckcZI4Sk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/5UMHU/btsPrp2ZJIC/1LYkjqGiGhjMnwckcZI4Sk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/5UMHU/btsPrp2ZJIC/1LYkjqGiGhjMnwckcZI4Sk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2F5UMHU%2FbtsPrp2ZJIC%2F1LYkjqGiGhjMnwckcZI4Sk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1008&quot; height=&quot;565&quot; data-origin-width=&quot;1008&quot; data-origin-height=&quot;565&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;Source: &lt;/span&gt;&lt;span&gt;&lt;a href=&quot;https://cilium.io/blog/2021/05/11/cni-benchmark/&quot;&gt;https://cilium.io/blog/2021/05/11/cni-benchmark/&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Cilium's main components are as follows.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1619&quot; data-origin-height=&quot;1443&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/yClPI/btsPqAKW8e8/F8ygzVK7nL71zCLP1hFyW1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/yClPI/btsPqAKW8e8/F8ygzVK7nL71zCLP1hFyW1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/yClPI/btsPqAKW8e8/F8ygzVK7nL71zCLP1hFyW1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FyClPI%2FbtsPqAKW8e8%2FF8ygzVK7nL71zCLP1hFyW1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1619&quot; height=&quot;1443&quot; data-origin-width=&quot;1619&quot; data-origin-height=&quot;1443&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;Source: &lt;/span&gt;&lt;span&gt;&lt;a href=&quot;https://docs.cilium.io/en/stable/overview/component-overview/&quot;&gt;https://docs.cilium.io/en/stable/overview/component-overview/&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Cilium &lt;b&gt;Operator&lt;/b&gt; : manages tasks that should be handled once per cluster rather than once per node.&lt;/li&gt;
&lt;li&gt;Cilium &lt;b&gt;Agent&lt;/b&gt; : runs as a DaemonSet; driven by configuration from the Kubernetes API, it handles network configuration, network policy, service load balancing, and monitoring, and manages the eBPF programs.&lt;/li&gt;
&lt;li&gt;Cilium &lt;b&gt;Client&lt;/b&gt; (CLI) : Cilium's command-line tool; it can inspect state by accessing the eBPF maps directly.&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Hubble&lt;/b&gt; : a network and security observability platform, composed of a Server, a Relay, a Client, and a Graphical UI.&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Data Store&lt;/b&gt; : stores and propagates state between Cilium Agents; one of two backends can be chosen (Kubernetes CRDs or a key-value store).&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's take a closer look at how Cilium's CNI plugin handles IPAM and routing.&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Cilium IPAM modes&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Cilium's IPAM (IP Address Management, i.e. how pods are assigned IPs) can operate in Kubernetes Host Scope, Cluster Scope (the default), or Multi-Pool (beta) mode.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In &lt;b&gt;Kubernetes Host Scope&lt;/b&gt; mode, Kubernetes assigns a PodCIDR to each node. The Cilium agent delays its startup until a PodCIDR has been assigned to the &lt;code&gt;v1.Node&lt;/code&gt; object, after which the host-scope allocator assigns pod IPs from that PodCIDR.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1488&quot; data-origin-height=&quot;454&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bFjxtR/btsPqNiXv7Z/QbIoXj22mJWwqLE6e3iB1K/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bFjxtR/btsPqNiXv7Z/QbIoXj22mJWwqLE6e3iB1K/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bFjxtR/btsPqNiXv7Z/QbIoXj22mJWwqLE6e3iB1K/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbFjxtR%2FbtsPqNiXv7Z%2FQbIoXj22mJWwqLE6e3iB1K%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1488&quot; height=&quot;454&quot; data-origin-width=&quot;1488&quot; data-origin-height=&quot;454&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://docs.cilium.io/en/stable/network/concepts/ipam/kubernetes/&quot;&gt;https://docs.cilium.io/en/stable/network/concepts/ipam/kubernetes/&lt;/a&gt;&lt;/p&gt;
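&lt;p data-ke-size=&quot;size16&quot;&gt;The host-scope allocation described above can be sketched as handing out free addresses from the node's PodCIDR. The toy allocator below is a simplified illustration, not Cilium's actual implementation (it reserves the network address and the conventional .1 gateway), using the 10.244.1.0/24 range that k8s-w1 received in this lab:&lt;/p&gt;

```python
import ipaddress

class HostScopeAllocator:
    """Toy per-node pod IP allocator over a node's PodCIDR."""

    def __init__(self, pod_cidr):
        net = ipaddress.ip_network(pod_cidr)
        # Skip the network address and the conventional .1 gateway address.
        self.free = list(net.hosts())[1:]
        self.allocated = {}

    def allocate(self, pod):
        ip = self.free.pop(0)
        self.allocated[pod] = ip
        return str(ip)

    def release(self, pod):
        # Returned addresses go to the back of the free list.
        self.free.append(self.allocated.pop(pod))

alloc = HostScopeAllocator("10.244.1.0/24")
first = alloc.allocate("webpod-697b545f57-r528h")
# first == "10.244.1.2", the same pod IP seen on k8s-w1 earlier.
```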
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In &lt;b&gt;Cluster Scope&lt;/b&gt; IPAM mode, each node is likewise given a per-node PodCIDR, and pod IPs are assigned by the host-scope allocator in that node's Cilium agent. The difference is that while Kubernetes Host Scope records the per-node PodCIDR on the &lt;code&gt;v1.Node&lt;/code&gt; resource, in Cluster Scope the Cilium Operator records it on the v2.CiliumNode resource.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;714&quot; data-origin-height=&quot;175&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bSu9cK/btsPqzeb5Su/K2EN1IjBRZ3C3kK4k0rZg0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bSu9cK/btsPqzeb5Su/K2EN1IjBRZ3C3kK4k0rZg0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bSu9cK/btsPqzeb5Su/K2EN1IjBRZ3C3kK4k0rZg0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbSu9cK%2FbtsPqzeb5Su%2FK2EN1IjBRZ3C3kK4k0rZg0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;714&quot; height=&quot;175&quot; data-origin-width=&quot;714&quot; data-origin-height=&quot;175&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://docs.cilium.io/en/stable/network/concepts/ipam/cluster-pool/&quot;&gt;https://docs.cilium.io/en/stable/network/concepts/ipam/cluster-pool/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;Multi-Pool&lt;/b&gt; is currently (as of 2025-07-16) a beta feature that supports allocating PodCIDRs from multiple IPAM pools. This makes it possible to assign different IP ranges on the same node and to add PodCIDRs dynamically. Per the documentation, Cluster Scope can also assign different IP ranges but cannot add them dynamically, whereas Multi-Pool lets a pod request a specific range by declaring it in the pod spec, e.g. &lt;code&gt;ipam.cilium.io/ip-pool: mars&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1672&quot; data-origin-height=&quot;680&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bhlPwg/btsPqFyGmPu/q5rQA3KzrePmptcA6Vtx40/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bhlPwg/btsPqFyGmPu/q5rQA3KzrePmptcA6Vtx40/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bhlPwg/btsPqFyGmPu/q5rQA3KzrePmptcA6Vtx40/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbhlPwg%2FbtsPqFyGmPu%2Fq5rQA3KzrePmptcA6Vtx40%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1672&quot; height=&quot;680&quot; data-origin-width=&quot;1672&quot; data-origin-height=&quot;680&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://docs.cilium.io/en/stable/network/concepts/ipam/multi-pool/&quot;&gt;https://docs.cilium.io/en/stable/network/concepts/ipam/multi-pool/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A description of each IPAM mode can be found in the article below.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://isovalent.com/blog/post/overcoming-kubernetes-ip-address-exhaustion-with-cilium/&quot;&gt;https://isovalent.com/blog/post/overcoming-kubernetes-ip-address-exhaustion-with-cilium/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In addition, each cloud provider offers its own IPAM that can be used with Cilium; see the document below.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://docs.cilium.io/en/stable/network/concepts/ipam/&quot;&gt;https://docs.cilium.io/en/stable/network/concepts/ipam/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Note that the labs that follow use the default mode, Cluster Scope.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Cilium routing modes&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Cilium's routing modes (how routing is provided for pod-to-pod traffic) are Encapsulation mode and Native Routing mode.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1939&quot; data-origin-height=&quot;826&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bqL4SX/btsPpzfdBuf/2yvE9V2mRUR6oYBGnB2aqk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bqL4SX/btsPpzfdBuf/2yvE9V2mRUR6oYBGnB2aqk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bqL4SX/btsPpzfdBuf/2yvE9V2mRUR6oYBGnB2aqk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbqL4SX%2FbtsPpzfdBuf%2F2yvE9V2mRUR6oYBGnB2aqk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1939&quot; height=&quot;826&quot; data-origin-width=&quot;1939&quot; data-origin-height=&quot;826&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: Cilium (this image is no longer used in the current docs)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Encapsulation mode builds tunnels between nodes using a UDP-based encapsulation protocol, either VXLAN or Geneve.&lt;/p&gt;
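&lt;p data-ke-size=&quot;size16&quot;&gt;A practical consequence of encapsulation is the MTU overhead: with an IPv4 underlay, VXLAN adds 50 bytes per packet, which is why VXLAN devices (for example the flannel.1 interface with MTU 1450 seen elsewhere in this post) advertise a tunnel MTU 50 bytes below the 1500-byte link MTU. The arithmetic:&lt;/p&gt;

```python
# VXLAN overhead relative to the inner IP packet, IPv4 underlay:
inner_ethernet = 14  # encapsulated inner Ethernet header
vxlan_header = 8     # VXLAN header (flags + VNI)
outer_udp = 8        # outer UDP header
outer_ipv4 = 20      # outer IPv4 header

overhead = inner_ethernet + vxlan_header + outer_udp + outer_ipv4
tunnel_mtu = 1500 - overhead
print(overhead, tunnel_mtu)  # 50 1450
```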
&lt;p data-ke-size=&quot;size16&quot;&gt;The figure below illustrates Native Routing mode: each node's pod IPs are routed directly, without encapsulation, which is more efficient.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;882&quot; data-origin-height=&quot;375&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/CgZVo/btsPp0i6bmm/D8GhIAmO7a2LF6NFENVaTk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/CgZVo/btsPp0i6bmm/D8GhIAmO7a2LF6NFENVaTk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/CgZVo/btsPp0i6bmm/D8GhIAmO7a2LF6NFENVaTk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FCgZVo%2FbtsPp0i6bmm%2FD8GhIAmO7a2LF6NFENVaTk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;882&quot; height=&quot;375&quot; data-origin-width=&quot;882&quot; data-origin-height=&quot;375&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://docs.cilium.io/en/stable/network/concepts/routing/&quot;&gt;https://docs.cilium.io/en/stable/network/concepts/routing/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;4. Installing Cilium&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Cilium has OS-level requirements: most notably, the Linux kernel must be version 5.4 or later, and the kernel options required for BPF must be enabled.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For the detailed requirements, see the document below.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://docs.cilium.io/en/stable/operations/system_requirements/&quot;&gt;https://docs.cilium.io/en/stable/operations/system_requirements/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The installation below follows this document:&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://docs.cilium.io/en/stable/network/kubernetes/kubeproxy-free/&quot;&gt;https://docs.cilium.io/en/stable/network/kubernetes/kubeproxy-free/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;First, remove the Flannel CNI installed earlier.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# Remove Flannel
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get po -A -owide
NAMESPACE      NAME                              READY   STATUS    RESTARTS       AGE     IP               NODE      NOMINATED NODE   READINESS GATES
default        curl-pod                          1/1     Running   5 (8h ago)     2d22h   10.244.0.2       k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
default        webpod-697b545f57-j28gf           1/1     Running   0              2d22h   10.244.2.4       k8s-w2    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
default        webpod-697b545f57-r528h           1/1     Running   0              2d22h   10.244.1.2       k8s-w1    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-flannel   kube-flannel-ds-2r8lg             1/1     Running   0              2d22h   192.168.10.101   k8s-w1    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-flannel   kube-flannel-ds-5xn5l             1/1     Running   0              2d22h   192.168.10.102   k8s-w2    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-flannel   kube-flannel-ds-hjcnq             1/1     Running   0              2d22h   192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system    coredns-674b8bbfcf-bqxft          1/1     Running   0              5d4h    10.244.2.2       k8s-w2    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system    coredns-674b8bbfcf-dnghp          1/1     Running   0              5d4h    10.244.2.3       k8s-w2    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system    etcd-k8s-ctr                      1/1     Running   0              5d4h    192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system    kube-apiserver-k8s-ctr            1/1     Running   0              5d4h    192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system    kube-controller-manager-k8s-ctr   1/1     Running   27 (45m ago)   5d4h    192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system    kube-proxy-6kmfl                  1/1     Running   0              5d4h    192.168.10.101   k8s-w1    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system    kube-proxy-drbwf                  1/1     Running   0              5d4h    192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system    kube-proxy-kqpf7                  1/1     Running   0              5d4h    192.168.10.102   k8s-w2    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system    kube-scheduler-k8s-ctr            1/1     Running   31 (45m ago)   5d4h    192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
(⎈|HomeLab:N/A) root@k8s-ctr:~# helm uninstall -n kube-flannel flannel
release &quot;flannel&quot; uninstalled
(⎈|HomeLab:N/A) root@k8s-ctr:~# helm list -A
NAME    NAMESPACE       REVISION        UPDATED STATUS  CHART   APP VERSION

# Delete the namespace
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get all -n kube-flannel
NAME                        READY   STATUS        RESTARTS   AGE
pod/kube-flannel-ds-2r8lg   1/1     Terminating   0          2d22h
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl delete ns kube-flannel
namespace &quot;kube-flannel&quot; deleted

# Check pods (Flannel is removed, but pod IPs still show the old PodCIDR)
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -A -owide
NAMESPACE     NAME                              READY   STATUS    RESTARTS       AGE     IP               NODE      NOMINATED NODE   READINESS GATES
default       curl-pod                          1/1     Running   5 (8h ago)     2d22h   10.244.0.2       k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
default       webpod-697b545f57-j28gf           1/1     Running   0              2d22h   10.244.2.4       k8s-w2    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
default       webpod-697b545f57-r528h           1/1     Running   0              2d22h   10.244.1.2       k8s-w1    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   coredns-674b8bbfcf-bqxft          1/1     Running   0              5d4h    10.244.2.2       k8s-w2    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   coredns-674b8bbfcf-dnghp          1/1     Running   0              5d4h    10.244.2.3       k8s-w2    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   etcd-k8s-ctr                      1/1     Running   0              5d4h    192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   kube-apiserver-k8s-ctr            1/1     Running   0              5d4h    192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   kube-controller-manager-k8s-ctr   1/1     Running   27 (46m ago)   5d4h    192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   kube-proxy-6kmfl                  1/1     Running   0              5d4h    192.168.10.101   k8s-w1    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   kube-proxy-drbwf                  1/1     Running   0              5d4h    192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   kube-proxy-kqpf7                  1/1     Running   0              5d4h    192.168.10.102   k8s-w2    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   kube-scheduler-k8s-ctr            1/1     Running   31 (46m ago)   5d4h    192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;

# The interfaces are also still present.
(⎈|HomeLab:N/A) root@k8s-ctr:~# ip -c link
1: lo: &amp;lt;LOOPBACK,UP,LOWER_UP&amp;gt; mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:6b:69:c9 brd ff:ff:ff:ff:ff:ff
    altname enp0s3
3: eth1: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:8f:41:1f brd ff:ff:ff:ff:ff:ff
    altname enp0s8
4: flannel.1: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default
    link/ether 22:21:40:78:7f:bd brd ff:ff:ff:ff:ff:ff
5: cni0: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1450 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether be:e2:9b:bc:fb:b6 brd ff:ff:ff:ff:ff:ff
6: veth01f07be9@if2: &amp;lt;BROADCAST,MULTICAST,UP,LOWER_UP&amp;gt; mtu 1450 qdisc noqueue master cni0 state UP mode DEFAULT group default qlen 1000
    link/ether c6:3e:36:55:a2:63 brd ff:ff:ff:ff:ff:ff link-netns cni-7d860ae2-7c43-61c3-17fb-8f77adcf27d7

# Delete the interfaces on each node to clear the leftover configuration.
ip link del flannel.1
ip link del cni0

for i in w1 w2 ; do echo &quot;&amp;gt;&amp;gt; node : k8s-$i &amp;lt;&amp;lt;&quot;; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i sudo ip link del flannel.1 ; echo; done
for i in w1 w2 ; do echo &quot;&amp;gt;&amp;gt; node : k8s-$i &amp;lt;&amp;lt;&quot;; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i sudo ip link del cni0 ; echo; done

# Verify removal
ip -c link
for i in w1 w2 ; do echo &quot;&amp;gt;&amp;gt; node : k8s-$i &amp;lt;&amp;lt;&quot;; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i ip -c link ; echo; done

brctl show
for i in w1 w2 ; do echo &quot;&amp;gt;&amp;gt; node : k8s-$i &amp;lt;&amp;lt;&quot;; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i brctl show ; echo; done

ip -c route
for i in w1 w2 ; do echo &quot;&amp;gt;&amp;gt; node : k8s-$i &amp;lt;&amp;lt;&quot;; sshpass -p 'vagrant' ssh -o StrictHostKeyChecking=no vagrant@k8s-$i ip -c route ; echo; done
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Next, delete kube-proxy so that Cilium can serve as the kube-proxy replacement.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl -n kube-system delete ds kube-proxy
kubectl -n kube-system delete cm kube-proxy
daemonset.apps &quot;kube-proxy&quot; deleted
configmap &quot;kube-proxy&quot; deleted

# The IPs of already-deployed pods remain unchanged
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -A -owide
NAMESPACE     NAME                              READY   STATUS    RESTARTS       AGE     IP               NODE      NOMINATED NODE   READINESS GATES
default       curl-pod                          1/1     Running   5 (8h ago)     2d23h   10.244.0.2       k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
default       webpod-697b545f57-j28gf           1/1     Running   0              2d23h   10.244.2.4       k8s-w2    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
default       webpod-697b545f57-r528h           1/1     Running   0              2d23h   10.244.1.2       k8s-w1    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   coredns-674b8bbfcf-bqxft          0/1     Running   1 (78s ago)    5d4h    10.244.2.2       k8s-w2    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   coredns-674b8bbfcf-dnghp          0/1     Running   1 (78s ago)    5d4h    10.244.2.3       k8s-w2    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   etcd-k8s-ctr                      1/1     Running   0              5d4h    192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   kube-apiserver-k8s-ctr            1/1     Running   0              5d4h    192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   kube-controller-manager-k8s-ctr   1/1     Running   27 (73m ago)   5d4h    192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   kube-scheduler-k8s-ctr            1/1     Running   31 (73m ago)   5d4h    192.168.10.100   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
(⎈|HomeLab:N/A) root@k8s-ctr:~#

# With the CNI plugin removed, pod communication no longer works
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl -m 2 webpod
curl: (28) Resolving timed out after 2001 milliseconds
command terminated with exit code 28

# The Service iptables rules also remain in place
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1      &amp;lt;none&amp;gt;        443/TCP   5d4h
webpod       ClusterIP   10.96.39.159   &amp;lt;none&amp;gt;        80/TCP    2d23h
(⎈|HomeLab:N/A) root@k8s-ctr:~# iptables-save |grep 10.96.39.159
-A KUBE-SERVICES -d 10.96.39.159/32 -p tcp -m comment --comment &quot;default/webpod cluster IP&quot; -m tcp --dport 80 -j KUBE-SVC-CNZCPOCNCNOROALA
-A KUBE-SVC-CNZCPOCNCNOROALA ! -s 10.244.0.0/16 -d 10.96.39.159/32 -p tcp -m comment --comment &quot;default/webpod cluster IP&quot; -m tcp --dport 80 -j KUBE-MARK-MASQ

# Node status still shows Ready (?)
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get no
NAME      STATUS   ROLES           AGE    VERSION
k8s-ctr   Ready    control-plane   5d4h   v1.33.2
k8s-w1    Ready    &amp;lt;none&amp;gt;          5d4h   v1.33.2
k8s-w2    Ready    &amp;lt;none&amp;gt;          5d4h   v1.33.2
(⎈|HomeLab:N/A) root@k8s-ctr:~#&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As we can see, deleting Flannel does not clear its iptables rules or related state. Flush those iptables rules on every node with the commands below.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# Run on each node with root permissions:
(⎈|HomeLab:N/A) root@k8s-ctr:~# iptables-save | grep -v KUBE | grep -v FLANNEL | iptables-restore
(⎈|HomeLab:N/A) root@k8s-ctr:~# iptables-save |grep 10.96.39.159

(⎈|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@k8s-w1 &quot;sudo iptables-save | grep -v KUBE | grep -v FLANNEL | sudo iptables-restore&quot;
(⎈|HomeLab:N/A) root@k8s-ctr:~# sshpass -p 'vagrant' ssh vagrant@k8s-w2 &quot;sudo iptables-save | grep -v KUBE | grep -v FLANNEL | sudo iptables-restore&quot;
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -owide
NAME                      READY   STATUS    RESTARTS     AGE     IP           NODE      NOMINATED NODE   READINESS GATES
curl-pod                  1/1     Running   5 (9h ago)   2d23h   10.244.0.2   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
webpod-697b545f57-j28gf   1/1     Running   0            2d23h   10.244.2.4   k8s-w2    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
webpod-697b545f57-r528h   1/1     Running   0            2d23h   10.244.1.2   k8s-w1    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;&lt;/code&gt;&lt;/pre&gt;
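&lt;p data-ke-size=&quot;size16&quot;&gt;The pipeline above works by round-tripping the ruleset: iptables-save dumps all rules as text, grep -v drops every line mentioning KUBE or FLANNEL, and iptables-restore rewrites the tables with what remains. A self-contained illustration on a few made-up sample rules (not real cluster output):&lt;/p&gt;
```shell
# Illustration only: how the grep -v filters decide what survives.
# These three lines are made-up samples in iptables-save format.
sample_rules='-A KUBE-SERVICES -d 10.96.39.159/32 -j KUBE-SVC-CNZCPOCNCNOROALA
-A FLANNEL-POSTRTG -j MASQUERADE
-A INPUT -i lo -j ACCEPT'

# Only the rule that mentions neither KUBE nor FLANNEL is kept,
# so unrelated host rules (like the loopback accept) are preserved.
echo "$sample_rules" | grep -v KUBE | grep -v FLANNEL
# prints: -A INPUT -i lo -j ACCEPT
```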
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Checking the IPAM (PodCIDR) ranges assigned to pods on each node, the CIDRs defined under Flannel are still in place.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# kube-controller-manager, started with --allocate-node-cidrs=true, auto-assigns each node's CIDR
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{&quot;\t&quot;}{.spec.podCIDR}{&quot;\n&quot;}{end}'
k8s-ctr 10.244.0.0/24
k8s-w1  10.244.1.0/24
k8s-w2  10.244.2.0/24

kubectl get pod -owide

# Flannel uses the PodCIDR defined on kube-controller-manager (--cluster-cidr)
kubectl describe pod -n kube-system kube-controller-manager-k8s-ctr
...
    Command:
      kube-controller-manager
      --allocate-node-cidrs=true
      --cluster-cidr=10.244.0.0/16
      --service-cluster-ip-range=10.96.0.0/16
...&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now let's install Cilium.&lt;/p&gt;
&lt;pre class=&quot;sql&quot;&gt;&lt;code&gt;# Install Cilium with Helm
(⎈|HomeLab:N/A) root@k8s-ctr:~# helm repo add cilium https://helm.cilium.io/
&quot;cilium&quot; has been added to your repositories

# Cilium installation (key options)
# kubeProxyReplacement=true : act as the kube-proxy replacement
# routingMode=native : native routing mode
# autoDirectNodeRoutes=true : install routes to the other nodes automatically
# ipam.mode=&quot;cluster-pool&quot; : IPAM mode
# ipam.operator.clusterPoolIPv4PodCIDRList={&quot;172.20.0.0/16&quot;} : PodCIDR definition
# ipv4NativeRoutingCIDR=172.20.0.0/16 : in native routing this usually matches the PodCIDR; if unset, traffic to pods on other nodes is SNATed
# installNoConntrackIptablesRules=true : do not install conntrack-related iptables rules
(⎈|HomeLab:N/A) root@k8s-ctr:~# helm install cilium cilium/cilium --version 1.17.5 --namespace kube-system \
--set k8sServiceHost=192.168.10.100 --set k8sServicePort=6443 \
--set kubeProxyReplacement=true \
--set routingMode=native \
--set autoDirectNodeRoutes=true \
--set ipam.mode=&quot;cluster-pool&quot; \
--set ipam.operator.clusterPoolIPv4PodCIDRList={&quot;172.20.0.0/16&quot;} \
--set ipv4NativeRoutingCIDR=172.20.0.0/16 \
--set endpointRoutes.enabled=true \
--set installNoConntrackIptablesRules=true \
--set bpf.masquerade=true \
--set ipv6.enabled=false
NAME: cilium
LAST DEPLOYED: Fri Jul 18 23:36:09 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
You have successfully installed Cilium with Hubble.

Your release version is 1.17.5.

For any further help, visit https://docs.cilium.io/en/v1.17/gettinghelp

# Verify
(⎈|HomeLab:N/A) root@k8s-ctr:~# helm get values cilium -n kube-system
USER-SUPPLIED VALUES:
autoDirectNodeRoutes: true
bpf:
  masquerade: true
endpointRoutes:
  enabled: true
installNoConntrackIptablesRules: true
ipam:
  mode: cluster-pool
  operator:
    clusterPoolIPv4PodCIDRList:
    - 172.20.0.0/16
ipv4NativeRoutingCIDR: 172.20.0.0/16
ipv6:
  enabled: false
k8sServiceHost: 192.168.10.100
k8sServicePort: 6443
kubeProxyReplacement: true
routingMode: native
(⎈|HomeLab:N/A) root@k8s-ctr:~# helm list -A
NAME    NAMESPACE       REVISION        UPDATED                                 STATUS          CHART           APP VERSION
cilium  kube-system     1               2025-07-18 23:36:09.978048833 +0900 KST deployed        cilium-1.17.5   1.17.5
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get crd
NAME                                         CREATED AT
ciliumcidrgroups.cilium.io                   2025-07-18T14:36:31Z
ciliumclusterwidenetworkpolicies.cilium.io   2025-07-18T14:36:32Z
ciliumendpoints.cilium.io                    2025-07-18T14:36:32Z
ciliumexternalworkloads.cilium.io            2025-07-18T14:36:31Z
ciliumidentities.cilium.io                   2025-07-18T14:36:31Z
ciliuml2announcementpolicies.cilium.io       2025-07-18T14:36:32Z
ciliumloadbalancerippools.cilium.io          2025-07-18T14:36:31Z
ciliumnetworkpolicies.cilium.io              2025-07-18T14:36:32Z
ciliumnodeconfigs.cilium.io                  2025-07-18T14:36:31Z
ciliumnodes.cilium.io                        2025-07-18T14:36:31Z
ciliumpodippools.cilium.io                   2025-07-18T14:36:31Z

# Run cilium-dbg through the daemonset (shows detailed status attributes)
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium-dbg status --verbose
KVStore:                Disabled
Kubernetes:             Ok         1.33 (v1.33.2) [linux/amd64]
Kubernetes APIs:        [&quot;EndpointSliceOrEndpoint&quot;, &quot;cilium/v2::CiliumClusterwideNetworkPolicy&quot;, &quot;cilium/v2::CiliumEndpoint&quot;, &quot;cilium/v2::CiliumNetworkPolicy&quot;, &quot;cilium/v2::CiliumNode&quot;, &quot;cilium/v2alpha1::CiliumCIDRGroup&quot;, &quot;core/v1::Namespace&quot;, &quot;core/v1::Pods&quot;, &quot;core/v1::Service&quot;, &quot;networking.k8s.io/v1::NetworkPolicy&quot;]
KubeProxyReplacement:   True   [eth0    10.0.2.15 fd17:625c:f037:2:a00:27ff:fe6b:69c9 fe80::a00:27ff:fe6b:69c9, eth1   192.168.10.101 fe80::a00:27ff:fe6d:8e42 (Direct Routing)]
Host firewall:          Disabled
SRv6:                   Disabled
CNI Chaining:           none
CNI Config file:        successfully wrote CNI configuration file to /host/etc/cni/net.d/05-cilium.conflist
Cilium:                 Ok   1.17.5 (v1.17.5-69aab28c)
NodeMonitor:            Listening for events on 2 CPUs with 64x4096 of shared memory
Cilium health daemon:   Ok
IPAM:                   IPv4: 4/254 allocated from 172.20.0.0/24,
Allocated addresses:
  172.20.0.157 (kube-system/coredns-674b8bbfcf-kw84j)
  172.20.0.178 (health)
  172.20.0.243 (kube-system/coredns-674b8bbfcf-2xwg2)
  172.20.0.95 (router)
IPv4 BIG TCP:           Disabled
IPv6 BIG TCP:           Disabled
BandwidthManager:       Disabled
Routing:                Network: Native   Host: BPF
Attach Mode:            TCX
Device Mode:            veth
Masquerading:           BPF   [eth0, eth1]   172.20.0.0/16 [IPv4: Enabled, IPv6: Disabled]
Clock Source for BPF:   ktime
Controller Status:      32/32 healthy
  Name                                                  Last success   Last error   Count   Message
  cilium-health-ep                                      50s ago        never        0       no error
  ct-map-pressure                                       23s ago        never        0       no error
  daemon-validate-config                                35s ago        never        0       no error
  dns-garbage-collector-job                             56s ago        never        0       no error
  endpoint-162-regeneration-recovery                    never          never        0       no error
  endpoint-235-regeneration-recovery                    never          never        0       no error
  endpoint-349-regeneration-recovery                    never          never        0       no error
  endpoint-703-regeneration-recovery                    never          never        0       no error
  endpoint-gc                                           1m58s ago      never        0       no error
  endpoint-periodic-regeneration                        58s ago        never        0       no error
  ep-bpf-prog-watchdog                                  23s ago        never        0       no error
  ipcache-inject-labels                                 54s ago        never        0       no error
  k8s-heartbeat                                         27s ago        never        0       no error
  link-cache                                            12s ago        never        0       no error
  local-identity-checkpoint                             16m43s ago     never        0       no error
  node-neighbor-link-updater                            3s ago         never        0       no error
  proxy-ports-checkpoint                                16m54s ago     never        0       no error
  resolve-identity-162                                  1m52s ago      never        0       no error
  resolve-identity-235                                  1m47s ago      never        0       no error
  resolve-identity-349                                  1m53s ago      never        0       no error
  resolve-identity-703                                  1m47s ago      never        0       no error
  resolve-labels-kube-system/coredns-674b8bbfcf-2xwg2   16m47s ago     never        0       no error
  resolve-labels-kube-system/coredns-674b8bbfcf-kw84j   16m47s ago     never        0       no error
  sync-lb-maps-with-k8s-services                        16m53s ago     never        0       no error
  sync-policymap-162                                    1m40s ago      never        0       no error
  sync-policymap-235                                    1m39s ago      never        0       no error
  sync-policymap-349                                    1m41s ago      never        0       no error
  sync-policymap-703                                    1m39s ago      never        0       no error
  sync-to-k8s-ciliumendpoint (235)                      7s ago         never        0       no error
  sync-to-k8s-ciliumendpoint (703)                      7s ago         never        0       no error
  sync-utime                                            53s ago        never        0       no error
  write-cni-file                                        16m57s ago     never        0       no error
Proxy Status:            OK, ip 172.20.0.95, 0 redirects active on ports 10000-20000, Envoy: external
Global Identity Range:   min 256, max 65535
Hubble:                  Ok   Current/Max Flows: 4095/4095 (100.00%), Flows/s: 25.83   Metrics: Disabled
KubeProxyReplacement Details:
  Status:                 True
  Socket LB:              Enabled
  Socket LB Tracing:      Enabled
  Socket LB Coverage:     Full
  Devices:                eth0    10.0.2.15 fd17:625c:f037:2:a00:27ff:fe6b:69c9 fe80::a00:27ff:fe6b:69c9, eth1   192.168.10.101 fe80::a00:27ff:fe6d:8e42 (Direct Routing)
  Mode:                   SNAT
  Backend Selection:      Random
  Session Affinity:       Enabled
  Graceful Termination:   Enabled
  NAT46/64 Support:       Disabled
  XDP Acceleration:       Disabled
  Services:
  - ClusterIP:      Enabled
  - NodePort:       Enabled (Range: 30000-32767)
  - LoadBalancer:   Enabled
  - externalIPs:    Enabled
  - HostPort:       Enabled
  Annotations:
  - service.cilium.io/node
  - service.cilium.io/src-ranges-policy
  - service.cilium.io/type
BPF Maps:   dynamic sizing: on (ratio: 0.002500)
  Name                          Size
  Auth                          524288
  Non-TCP connection tracking   65536
  TCP connection tracking       131072
  Endpoint policy               65535
  IP cache                      512000
  IPv4 masquerading agent       16384
  IPv6 masquerading agent       16384
  IPv4 fragmentation            8192
  IPv4 service                  65536
  IPv6 service                  65536
  IPv4 service backend          65536
  IPv6 service backend          65536
  IPv4 service reverse NAT      65536
  IPv6 service reverse NAT      65536
  Metrics                       1024
  Ratelimit metrics             64
  NAT                           131072
  Neighbor table                131072
  Global policy                 16384
  Session affinity              65536
  Sock reverse NAT              65536
  Tunnel                        65536
Encryption:       Disabled
Cluster health:   3/3 reachable   (2025-07-18T14:52:46Z)
Name              IP              Node   Endpoints
  k8s-w1 (localhost):
    Host connectivity to 192.168.10.101:
      ICMP to stack:   OK, RTT=1.395272ms
      HTTP to agent:   OK, RTT=1.53953ms
    Endpoint connectivity to 172.20.0.178:
      ICMP to stack:   OK, RTT=2.627302ms
      HTTP to agent:   OK, RTT=673.026&amp;micro;s
  k8s-ctr:
    Host connectivity to 192.168.10.100:
      ICMP to stack:   OK, RTT=4.301411ms
      HTTP to agent:   OK, RTT=4.939448ms
    Endpoint connectivity to 172.20.2.230:
      ICMP to stack:   OK, RTT=7.537182ms
      HTTP to agent:   OK, RTT=12.533939ms
  k8s-w2:
    Host connectivity to 192.168.10.102:
      ICMP to stack:   OK, RTT=1.280707ms
      HTTP to agent:   OK, RTT=9.840675ms
    Endpoint connectivity to 172.20.1.164:
      ICMP to stack:   OK, RTT=4.689173ms
      HTTP to agent:   OK, RTT=8.773393ms
Modules Health:
      agent
      ├── controlplane
      │   ├── auth
      │   │   ├── observer-job-auth-gc-identity-events            [OK] Primed (16m, x1)
      │   │   ├── observer-job-auth-request-authentication        [OK] Primed (16m, x1)
      │   │   └── timer-job-auth-gc-cleanup                       [OK] OK (23.072&amp;micro;s) (117s, x1)
      │   ├── bgp-control-plane
      │   │   └── job-diffstore-events                            [OK] Running (16m, x2)
      │   ├── ciliumenvoyconfig
      │   │   └── experimental
      │   │       ├── job-reconcile                               [OK] OK, 0 object(s) (16m, x2)
      │   │       └── job-refresh                                 [OK] Next refresh in 30m0s (16m, x1)
      │   ├── daemon
      │   │   ├──                                                 [OK] daemon-validate-config (35s, x17)
      │   │   ├── ep-bpf-prog-watchdog
      │   │   │   └── ep-bpf-prog-watchdog                        [OK] ep-bpf-prog-watchdog (23s, x34)
      │   │   └── job-sync-hostips                                [OK] Synchronized (54s, x18)
      │   ├── dynamic-lifecycle-manager
      │   │   ├── job-reconcile                                   [OK] OK, 0 object(s) (16m, x2)
      │   │   └── job-refresh                                     [OK] Next refresh in 30m0s (16m, x1)
      │   ├── enabled-features
      │   │   └── job-update-config-metric                        [OK] Waiting for agent config (16m, x1)
      │   ├── endpoint-manager
      │   │   ├── cilium-endpoint-162 (/)
      │   │   │   ├── datapath-regenerate                         [OK] Endpoint regeneration successful (58s, x11)
      │   │   │   └── policymap-sync                              [OK] sync-policymap-162 (100s, x2)
      │   │   ├── cilium-endpoint-235 (kube-system/coredns-674b8bbfcf-kw84j)
      │   │   │   ├── cep-k8s-sync                                [OK] sync-to-k8s-ciliumendpoint (235) (7s, x102)
      │   │   │   ├── datapath-regenerate                         [OK] Endpoint regeneration successful (58s, x10)
      │   │   │   └── policymap-sync                              [OK] sync-policymap-235 (99s, x2)
      │   │   ├── cilium-endpoint-349 (/)
      │   │   │   ├── datapath-regenerate                         [OK] Endpoint regeneration successful (58s, x11)
      │   │   │   └── policymap-sync                              [OK] sync-policymap-349 (101s, x2)
      │   │   ├── cilium-endpoint-703 (kube-system/coredns-674b8bbfcf-2xwg2)
      │   │   │   ├── cep-k8s-sync                                [OK] sync-to-k8s-ciliumendpoint (703) (7s, x102)
      │   │   │   ├── datapath-regenerate                         [OK] Endpoint regeneration successful (58s, x10)
      │   │   │   └── policymap-sync                              [OK] sync-policymap-703 (99s, x2)
      │   │   └── endpoint-gc                                     [OK] endpoint-gc (118s, x4)
      │   ├── envoy-proxy
      │   │   ├── observer-job-k8s-secrets-resource-events-cilium-secrets    [OK] Primed (16m, x1)
      │   │   └── timer-job-version-check                         [OK] OK (14.469707ms) (114s, x1)
      │   ├── hubble
      │   │   └── job-hubble                                      [OK] Running (16m, x1)
      │   ├── identity
      │   │   └── timer-job-id-alloc-update-policy-maps           [OK] OK (213.51&amp;micro;s) (16m, x1)
      │   ├── l2-announcer
      │   │   └── job-l2-announcer-lease-gc                       [OK] Running (16m, x1)
      │   ├── nat-stats
      │   │   └── timer-job-nat-stats                             [OK] OK (2.504969ms) (24s, x1)
      │   ├── node-manager
      │   │   ├── background-sync                                 [OK] Node validation successful (43s, x12)
      │   │   ├── neighbor-link-updater
      │   │   │   ├── k8s-ctr                                     [OK] Node neighbor link update successful (63s, x14)
      │   │   │   └── k8s-w2                                      [OK] Node neighbor link update successful (43s, x13)
      │   │   ├── node-checkpoint-writer                          [OK] node checkpoint written (14m, x3)
      │   │   ├── nodes-add                                       [OK] Node adds successful (16m, x3)
      │   │   └── nodes-update                                    [OK] Node updates successful (16m, x4)
      │   ├── policy
      │   │   └── observer-job-policy-importer                    [OK] Primed (16m, x1)
      │   ├── service-manager
      │   │   ├── job-health-check-event-watcher                  [OK] Waiting for health check events (16m, x1)
      │   │   └── job-service-reconciler                          [OK] 3 NodePort frontend addresses (16m, x1)
      │   ├── service-resolver
      │   │   └── job-service-reloader-initializer                [OK] Running (16m, x1)
      │   └── stale-endpoint-cleanup
      │       └── job-endpoint-cleanup                            [OK] Running (16m, x1)
      ├── datapath
      │   ├── agent-liveness-updater
      │   │   └── timer-job-agent-liveness-updater                [OK] OK (49.469&amp;micro;s) (0s, x1)
      │   ├── iptables
      │   │   ├── ipset
      │   │   │   ├── job-ipset-init-finalizer                    [OK] Running (16m, x1)
      │   │   │   ├── job-reconcile                               [OK] OK, 0 object(s) (16m, x3)
      │   │   │   └── job-refresh                                 [OK] Next refresh in 30m0s (16m, x1)
      │   │   └── job-iptables-reconciliation-loop                [OK] iptables rules full reconciliation completed (16m, x1)
      │   ├── l2-responder
      │   │   └── job-l2-responder-reconciler                     [OK] Running (16m, x1)
      │   ├── maps
      │   │   └── bwmap
      │   │       └── timer-job-pressure-metric-throttle          [OK] OK (3.398&amp;micro;s) (24s, x1)
      │   ├── mtu
      │   │   ├── job-endpoint-mtu-updater                        [OK] Endpoint MTU updated (16m, x1)
      │   │   └── job-mtu-updater                                 [OK] MTU updated (1500) (16m, x1)
      │   ├── node-address
      │   │   └── job-node-address-update                         [OK] 172.20.0.95 (primary), fe80::2c6e:e7ff:fe46:b9c3 (primary) (16m, x1)
      │   ├── orchestrator
      │   │   └── job-reinitialize                                [OK] OK (16m, x2)
      │   └── sysctl
      │       ├── job-reconcile                                   [OK] OK, 16 object(s) (6m37s, x26)
      │       └── job-refresh                                     [OK] Next refresh in 9m39.139547251s (6m37s, x1)
      └── infra
          ├── k8s-synced-crdsync
          │   └── job-sync-crds                                   [OK] Running (16m, x1)
          ├── metrics
          │   ├── job-collect                                     [OK] Sampled 24 metrics in 2.110086ms, next collection at 2025-07-18 14:57:33.388855106 +0000 UTC m=+1206.945950411 (114s, x1)
          │   └── timer-job-cleanup                               [OK] Primed (16m, x1)
          └── shell
              └── job-listener                                    [OK] Listening on /var/run/cilium/shell.sock (16m, x1)


# Check iptables on the nodes
iptables -t nat -S
for i in w1 w2 ; do echo &quot;&amp;gt;&amp;gt; node : k8s-$i &amp;lt;&amp;lt;&quot;; sshpass -p 'vagrant' ssh vagrant@k8s-$i sudo iptables -t nat -S ; echo; done

iptables-save
for i in w1 w2 ; do echo &quot;&amp;gt;&amp;gt; node : k8s-$i &amp;lt;&amp;lt;&quot;; sshpass -p 'vagrant' ssh vagrant@k8s-$i sudo iptables-save ; echo; done
&lt;/code&gt;&lt;/pre&gt;
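&lt;p data-ke-size=&quot;size16&quot;&gt;For reference, the same installation can be driven by a values file instead of repeated --set flags. The sketch below simply mirrors the USER-SUPPLIED VALUES shown above, and would be passed as helm install cilium cilium/cilium --version 1.17.5 -n kube-system -f values.yaml:&lt;/p&gt;
```yaml
# values.yaml - equivalent to the --set flags used above (sketch)
k8sServiceHost: 192.168.10.100
k8sServicePort: 6443
kubeProxyReplacement: true
routingMode: native
autoDirectNodeRoutes: true
ipam:
  mode: cluster-pool
  operator:
    clusterPoolIPv4PodCIDRList:
      - 172.20.0.0/16
ipv4NativeRoutingCIDR: 172.20.0.0/16
endpointRoutes:
  enabled: true
installNoConntrackIptablesRules: true
bpf:
  masquerade: true
ipv6:
  enabled: false
```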
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;After the installation completes, check the PodCIDR and IPAM assignments.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# Check pod IPs (unchanged so far)
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -owide
NAME                      READY   STATUS    RESTARTS     AGE     IP           NODE      NOMINATED NODE   READINESS GATES
curl-pod                  1/1     Running   5 (9h ago)   2d23h   10.244.0.2   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
webpod-697b545f57-j28gf   1/1     Running   0            2d23h   10.244.2.4   k8s-w2    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
webpod-697b545f57-r528h   1/1     Running   0            2d23h   10.244.1.2   k8s-w1    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;

# The ciliumnodes resources show each node's new PodCIDR
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumnodes
NAME      CILIUMINTERNALIP   INTERNALIP       AGE
k8s-ctr   172.20.2.243       192.168.10.100   19m
k8s-w1    172.20.0.95        192.168.10.101   19m
k8s-w2    172.20.1.17        192.168.10.102   19m
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumnodes -o json | grep podCIDRs -A2
                    &quot;podCIDRs&quot;: [
                        &quot;172.20.2.0/24&quot;
                    ],
--
                    &quot;podCIDRs&quot;: [
                        &quot;172.20.0.0/24&quot;
                    ],
--
                    &quot;podCIDRs&quot;: [
                        &quot;172.20.1.0/24&quot;
                    ],

# A rollout restart picks up new IPs -&amp;gt; the Cilium agent now handles IPAM.
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl rollout restart deployment webpod
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -owide
NAME                      READY   STATUS    RESTARTS     AGE     IP            NODE      NOMINATED NODE   READINESS GATES
curl-pod                  1/1     Running   5 (9h ago)   2d23h   10.244.0.2    k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
webpod-6c6d676d8c-m6x7k   1/1     Running   0            12s     172.20.0.89   k8s-w1    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
webpod-6c6d676d8c-mjf7m   1/1     Running   0            18s     172.20.1.33   k8s-w2    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;

# Deploy the curl-pod pod on the k8s-ctr node
kubectl delete pod curl-pod --grace-period=0

cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: curl-pod
  labels:
    app: curl
spec:
  nodeName: k8s-ctr
  containers:
  - name: curl
    image: nicolaka/netshoot
    command: [&quot;tail&quot;]
    args: [&quot;-f&quot;, &quot;/dev/null&quot;]
  terminationGracePeriodSeconds: 0
EOF

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -owide
NAME                      READY   STATUS    RESTARTS   AGE     IP            NODE      NOMINATED NODE   READINESS GATES
curl-pod                  1/1     Running   0          67s     172.20.2.94   k8s-ctr   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
webpod-6c6d676d8c-m6x7k   1/1     Running   0          2m5s    172.20.0.89   k8s-w1    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
webpod-6c6d676d8c-mjf7m   1/1     Running   0          2m11s   172.20.1.33   k8s-w2    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get ciliumendpoints
NAME                      SECURITY IDENTITY   ENDPOINT STATE   IPV4          IPV6
curl-pod                  63464               ready            172.20.2.94
webpod-6c6d676d8c-m6x7k   55697               ready            172.20.0.89
webpod-6c6d676d8c-mjf7m   55697               ready            172.20.1.33
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n kube-system ds/cilium -c cilium-agent -- cilium-dbg endpoint list
ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                  IPv6   IPv4           STATUS
           ENFORCEMENT        ENFORCEMENT
162        Disabled           Disabled          4          reserved:health                                                                     172.20.0.178   ready
235        Disabled           Disabled          53192      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system          172.20.0.157   ready
                                                           k8s:io.cilium.k8s.policy.cluster=default                                                           
                                                           k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                    
                                                           k8s:io.kubernetes.pod.namespace=kube-system                                                        
                                                           k8s:k8s-app=kube-dns                                                                               
349        Disabled           Disabled          1          reserved:host                                                                                      ready
703        Disabled           Disabled          53192      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system          172.20.0.243   ready
                                                           k8s:io.cilium.k8s.policy.cluster=default                                                           
                                                           k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                    
                                                           k8s:io.kubernetes.pod.namespace=kube-system                                                        
                                                           k8s:k8s-app=kube-dns                                                                               
2913       Disabled           Disabled          55697      k8s:app=webpod                                                                      172.20.0.89    ready
                                                           k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default                             
                                                           k8s:io.cilium.k8s.policy.cluster=default                                                           
                                                           k8s:io.cilium.k8s.policy.serviceaccount=default                                                    
                                                           k8s:io.kubernetes.pod.namespace=default                                                            

# Connectivity check -&amp;gt; communication works even without iptables rules!
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it curl-pod -- curl webpod | grep Hostname
Hostname: webpod-6c6d676d8c-mjf7m&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;You can refer to the document below for Cilium's CNI plugin migration guide.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://docs.cilium.io/en/stable/installation/k8s-install-migration/&quot;&gt;https://docs.cilium.io/en/stable/installation/k8s-install-migration/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As this post has grown long, I will wrap up here; the second post will continue with inspecting the Cilium environment.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;</description>
      <category>Cilium</category>
      <category>cilium</category>
      <category>eBPF</category>
      <category>FLANNEL</category>
      <category>kubernetes</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/48</guid>
      <comments>https://a-person.tistory.com/48#entry48comment</comments>
      <pubDate>Sat, 19 Jul 2025 18:35:48 +0900</pubDate>
    </item>
    <item>
      <title>[10] EKS Gateway API와 Amazon VPC Lattice</title>
      <link>https://a-person.tistory.com/47</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;이번 포스트에서는 Amazon VPC Lattice for Amazon EKS라는 주제로 학습한 내용을 작성해 보겠습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;먼저 쿠버네티스 환경의 네트워크 변화 과정을 살펴보고 기존 기술의 한계점을 바탕으로 Gateway API의 등장 배경을 알아보겠습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Amazon EKS에서는 Gateway API의 구현체로 Amazon VPC Lattice를 사용합니다. EKS에서 Amazon VPC Lattice를 활용해 앞서 설명한 복잡한 쿠버네티스 네트워크 환경의 한계점을 어떻게 극복하는지 살펴보겠습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Table of Contents&lt;/h2&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;The Evolution of Kubernetes Networking&lt;/li&gt;
&lt;li&gt;Gateway API&lt;/li&gt;
&lt;li&gt;Amazon VPC Lattice&lt;/li&gt;
&lt;li&gt;Amazon VPC Lattice - Simple Client to Server Communication Hands-on&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;1. The Evolution of Kubernetes Networking&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Networking in Kubernetes environments has evolved alongside the growth of microservice architectures.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;1.1. Networking in a Single Kubernetes Cluster&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In a single Kubernetes cluster, internal service-to-service communication was implemented through the Kubernetes Service resource and the in-cluster DNS service CoreDNS, while external traffic was handled natively through the Ingress resource.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The following approaches were used to achieve this.&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Exposing workloads with Service resources
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Services of type ClusterIP, NodePort, and LoadBalancer&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Using Ingress resources
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Structuring services with path-based routing&lt;/li&gt;
&lt;li&gt;Strengthening security with SSL/TLS termination&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the early stages, the focus was mostly on enabling in-cluster communication through core resources and on exposing services externally.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;1.2. Service Mesh&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As applications evolved from containerizing a single application into an ever-growing number of microservices, the connections between services became increasingly complex. Network complexity grew accordingly, and demand emerged for common cross-cutting service features (circuit breaking, retries, timeouts, security, observability, and so on).&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A service mesh is a technology that implements these requirements; major implementations include Istio, Linkerd, and Consul.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A service mesh provides the following key features to &lt;b&gt;abstract and control service-to-service communication&lt;/b&gt; at the network level.&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;b&gt;Traffic management&lt;/b&gt;: controls communication flow between services through request routing, traffic splitting, retries/timeouts, circuit breaking, and more&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Security&lt;/b&gt;: guarantees trust between services through mTLS-based encryption and authentication and fine-grained access control&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Observability&lt;/b&gt;: monitors service-to-service communication in real time through metrics, logs, and tracing&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Policy and control&lt;/b&gt;: governs and protects network usage with rate limiting (capping requests per service), access policies, and more&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A service mesh works by deploying a proxy container such as Envoy as a sidecar; based on configuration defined in the control plane, the proxy container in the data plane acts as the entry point for the application container and controls its traffic.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The illustration below, from Istio&#39;s service mesh documentation, shows the proxy sidecar handling the communication.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;229&quot; data-origin-height=&quot;150&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/blHe68/btsNBtUekgq/I6eK41IcBqvUbVZpb4SpF0/tfile.svg&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/blHe68/btsNBtUekgq/I6eK41IcBqvUbVZpb4SpF0/tfile.svg&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/blHe68/btsNBtUekgq/I6eK41IcBqvUbVZpb4SpF0/tfile.svg&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FblHe68%2FbtsNBtUekgq%2FI6eK41IcBqvUbVZpb4SpF0%2Ftfile.svg&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;585&quot; height=&quot;383&quot; data-origin-width=&quot;229&quot; data-origin-height=&quot;150&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://istio.io/latest/about/service-mesh/&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://istio.io/latest/about/service-mesh/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A service mesh consists of a &lt;b&gt;control plane&lt;/b&gt; and a &lt;b&gt;data plane&lt;/b&gt;, each with the following responsibilities.&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Data plane
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;The sidecar intercepts requests &amp;rarr; encapsulates them in a separate network connection &amp;rarr; establishes a secure, encrypted channel between the source and destination proxies&lt;/li&gt;
&lt;li&gt;Implements features such as circuit breaking and request retries to improve resiliency and prevent service degradation&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Control plane
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Acts as a service registry that tracks every service in the mesh&lt;/li&gt;
&lt;li&gt;Automatically discovers new services and removes inactive ones&lt;/li&gt;
&lt;li&gt;Collects and aggregates telemetry data such as metrics, logs, and distributed traces&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;1.3. Limitations of Traditional Kubernetes Networking&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As applications continue to evolve, a single cluster may grow into multiple Kubernetes clusters.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In addition, as different teams take ownership of different services and begin operating separate VPCs and Kubernetes clusters, communication between applications across different VPCs or multiple clusters becomes a challenge.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Moreover, as microservices expanded in cloud environments and infrastructure grew more complex, requirements such as the following emerged.&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Separation of responsibilities across teams (infrastructure operators, application developers, DevOps engineers, etc.) became necessary.&lt;/li&gt;
&lt;li&gt;Resources spanning multiple VPCs or multiple clusters had to be managed consistently.&lt;/li&gt;
&lt;li&gt;A networking layer capable of uniformly managing diverse compute types (instances, containers, serverless, etc.) was needed.&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Alongside these environmental changes, Ingress and service meshes have limitations of their own.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Limitations of Ingress&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The Ingress resource is optimized for L7 traffic such as HTTP and HTTPS, and it struggles to provide routing for protocols outside that scope, such as gRPC, TCP, and UDP.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In addition, the features offered differ between Ingress implementations, making it difficult to provide standardized functionality across products. As a result, advanced Ingress features (authentication, rate-limiting policies, advanced traffic management, etc.) were implemented through custom annotations specific to each product (Nginx, HAProxy, etc.).&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This made annotations increasingly complex and hard to standardize, and it limited portability from one implementation to another.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Furthermore, the developer creating the Ingress had to write these implementation-specific features directly, which forced application developers to deal with parts of the infrastructure configuration, another source of friction.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Limitations of Service Mesh&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Service meshes were designed primarily around east-west traffic (internal service-to-service communication), so their capabilities for north-south traffic (external-to-internal communication) were limited.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Also, when EKS clusters are placed in separate VPCs, connecting them requires VPC Peering or a Transit Gateway (TGW), leading to increasingly complex configuration and a growing number of network resources.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Furthermore, when separate AWS accounts are used per team or service, cross-account access must be configured for each of them, which remains a burden for permission management.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A service mesh fundamentally requires deploying sidecar proxies, which increases network complexity in multi-VPC cloud environments. This in turn makes operating and managing the large number of proxies across multiple clusters and multiple meshes increasingly difficult.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;2. Gateway API&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Amid this evolution of the Kubernetes environment, the Gateway API addresses the following problems.&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;b&gt;Role-oriented design&lt;/b&gt;: resources can be managed by role, for infrastructure providers, cluster operators, and application developers.&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Generality&lt;/b&gt;: designed to support a variety of protocols such as HTTP, HTTPS, and gRPC, with extensibility in mind.&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Standardization&lt;/b&gt;: designed to be a portable, standard API, like Kubernetes Ingress.&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Scalability&lt;/b&gt;: designed to enable seamless network integration across multi-cluster environments and different VPCs. (Strictly speaking, this is less a general property of the Gateway API and more a characteristic of Amazon VPC Lattice, covered below.)&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The Kubernetes Gateway API consists of the resources shown below. Compared to Ingress, this makes it possible to separate the roles responsible for each resource.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;800&quot; data-origin-height=&quot;700&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/taIWd/btsNBs8TnJl/fn20UhY5ZnXYooB7HpTlo1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/taIWd/btsNBs8TnJl/fn20UhY5ZnXYooB7HpTlo1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/taIWd/btsNBs8TnJl/fn20UhY5ZnXYooB7HpTlo1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FtaIWd%2FbtsNBs8TnJl%2Ffn20UhY5ZnXYooB7HpTlo1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;800&quot; height=&quot;700&quot; data-origin-width=&quot;800&quot; data-origin-height=&quot;700&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://gateway-api.sigs.k8s.io/&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://gateway-api.sigs.k8s.io/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;With the Ingress resource, developers had to define even critical infrastructure settings such as networking, authentication, and certificates inside the Ingress itself, and they bore the burden of understanding the features of the underlying Ingress Controller in order to configure them through annotations.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;By contrast, the Gateway API uses a layered structure: infrastructure operators manage infrastructure-level resources through GatewayClass and Gateway, while application developers independently define fine-grained routing rules for their own applications through the various Route resources.&lt;/p&gt;
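&lt;p data-ke-size=&quot;size16&quot;&gt;As a rough sketch of this role separation (the resource names below are illustrative, not taken from the hands-on later in this post), an infrastructure operator might own the Gateway while an application developer owns the HTTPRoute:&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Owned by the infrastructure operator
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: example-gateway        # illustrative name
spec:
  gatewayClassName: example-class
  listeners:
  - name: http
    protocol: HTTP
    port: 80
---
# Owned by the application developer
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: example-route          # illustrative name
spec:
  parentRefs:
  - name: example-gateway
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /api
    backendRefs:
    - name: example-service    # a regular Kubernetes Service
      port: 8080&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The two objects can live in different namespaces and be managed by different teams; the HTTPRoute only needs to reference the Gateway by name.&lt;/p&gt;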
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Like other Kubernetes components, the Gateway API operates through a controller. Various vendors provide Gateway API controller implementations, and AWS provides the &lt;a href=&quot;https://www.gateway-api-controller.eks.aws.dev/latest/concepts/overview/#service-directory-networks-policies-and-gateways&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;AWS Gateway API Controller&lt;/a&gt;.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The AWS Gateway API Controller uses Amazon VPC Lattice as its implementation. The next section takes a closer look at Amazon VPC Lattice.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;3. Amazon VPC Lattice&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The problem Amazon VPC Lattice solves is simplifying network connectivity between different VPCs while providing security and monitoring.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The key features and benefits of Amazon VPC Lattice are as follows.&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;It simplifies complex network configuration and makes it easy to use.&lt;/li&gt;
&lt;li&gt;It centralizes the network configuration of applications spanning multiple VPCs, EC2 instances, containers, and serverless.&lt;/li&gt;
&lt;li&gt;No separate sidecar proxies need to be deployed.&lt;/li&gt;
&lt;li&gt;Security for each application can be applied easily through IAM and SigV4.&lt;/li&gt;
&lt;li&gt;Logging and traffic pattern analysis can be performed easily through CloudWatch, S3, and Kinesis Data Firehose.&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As shown in the figure below, services created in different VPCs are abstracted at the VPC Lattice Service Network layer, enabling connectivity across those VPCs. As described on the right side of the figure, VPCs in different AWS accounts can also be connected. The targets of these abstracted services extend to EC2, ECS, EKS, and Lambda.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1024&quot; data-origin-height=&quot;551&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bQII9v/btsNAUR5Iwn/VnKlLJzSYdKwfNEU22Atp0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bQII9v/btsNAUR5Iwn/VnKlLJzSYdKwfNEU22Atp0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bQII9v/btsNAUR5Iwn/VnKlLJzSYdKwfNEU22Atp0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbQII9v%2FbtsNAUR5Iwn%2FVnKlLJzSYdKwfNEU22Atp0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1024&quot; height=&quot;551&quot; data-origin-width=&quot;1024&quot; data-origin-height=&quot;551&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://aws.amazon.com/ko/blogs/korea/simplify-service-to-service-connectivity-security-and-monitoring-with-amazon-vpc-lattice-now-generally-available/&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://aws.amazon.com/ko/blogs/korea/simplify-service-to-service-connectivity-security-and-monitoring-with-amazon-vpc-lattice-now-generally-available/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Because Amazon VPC Lattice is not a service dedicated to EKS, its components can be a bit complex. That said, you can think of it this way: when you create Gateway API resources in EKS through the Gateway API Controller, the corresponding Amazon VPC Lattice components are created and configured.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The components of VPC Lattice can be summarized briefly as follows.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Service Network&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A service network is a logical boundary for services. It logically groups multiple services and manages communication among them. Associating one or more VPCs with a service network enables communication between the services within that network.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Within VPCs associated with the same service network, clients and services can communicate with each other when authorized.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;304&quot; data-origin-height=&quot;291&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/eeEyT2/btsNA83S2vK/UVqRoQekPKYXUuvkyfVovk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/eeEyT2/btsNA83S2vK/UVqRoQekPKYXUuvkyfVovk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/eeEyT2/btsNA83S2vK/UVqRoQekPKYXUuvkyfVovk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FeeEyT2%2FbtsNA83S2vK%2FUVqRoQekPKYXUuvkyfVovk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;304&quot; height=&quot;291&quot; data-origin-width=&quot;304&quot; data-origin-height=&quot;291&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://www.gateway-api-controller.eks.aws.dev/latest/concepts/overview/#service-directory-networks-policies-and-gateways&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://www.gateway-api-controller.eks.aws.dev/latest/concepts/overview/#service-directory-networks-policies-and-gateways&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Service&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A VPC Lattice service represents an application unit running on instances, containers, or serverless environments. Much like a Kubernetes Service, you can think of it as the application running on each target being exposed to the service network as a VPC Lattice service.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;484&quot; data-origin-height=&quot;154&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/HBKtw/btsNCoq6cXu/3rFV7Mk4Ndko0iLfqOAQ01/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/HBKtw/btsNCoq6cXu/3rFV7Mk4Ndko0iLfqOAQ01/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/HBKtw/btsNCoq6cXu/3rFV7Mk4Ndko0iLfqOAQ01/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FHBKtw%2FbtsNCoq6cXu%2F3rFV7Mk4Ndko0iLfqOAQ01%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;484&quot; height=&quot;154&quot; data-origin-width=&quot;484&quot; data-origin-height=&quot;154&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://www.gateway-api-controller.eks.aws.dev/latest/concepts/overview/#service-directory-networks-policies-and-gateways&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://www.gateway-api-controller.eks.aws.dev/latest/concepts/overview/#service-directory-networks-policies-and-gateways&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Service Directory&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The service directory is a catalog for centrally discovering and managing all VPC Lattice services. It helps developers and operations teams easily find and access the available services.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Auth Policy&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;An auth policy is an IAM-based policy that controls access to a VPC Lattice service. It allows fine-grained control over whether specific IAM principals or roles can access the service.&lt;/p&gt;
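&lt;p data-ke-size=&quot;size16&quot;&gt;For illustration, an auth policy uses the IAM policy document format; a minimal sketch that allows a single role to invoke the service might look like the following (the account ID and role name are placeholders):&lt;/p&gt;
&lt;pre class=&quot;json&quot;&gt;&lt;code&gt;{
  &quot;Statement&quot;: [
    {
      &quot;Effect&quot;: &quot;Allow&quot;,
      &quot;Principal&quot;: {
        &quot;AWS&quot;: &quot;arn:aws:iam::123456789012:role/example-client-role&quot;
      },
      &quot;Action&quot;: &quot;vpc-lattice-svcs:Invoke&quot;,
      &quot;Resource&quot;: &quot;*&quot;
    }
  ]
}&lt;/code&gt;&lt;/pre&gt;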
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The AWS Gateway API Controller extends the custom resources defined by the Gateway API so that VPC Lattice resources can be created using the Kubernetes API. Once the controller is installed in a cluster, it watches for the creation of Gateway API resources (Gateway and Route) and provisions the corresponding Amazon VPC Lattice objects.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This allows users to configure VPC Lattice Services, VPC Lattice Service Networks, and Target Groups using the Kubernetes API, without writing custom code or managing sidecar proxies.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Creating Gateway API resources as shown below configures each of the Amazon VPC Lattice components.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;879&quot; data-origin-height=&quot;556&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/OL2aC/btsNB5k1uW8/CQfYDYWbLeLG0FTbhMWwOK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/OL2aC/btsNB5k1uW8/CQfYDYWbLeLG0FTbhMWwOK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/OL2aC/btsNB5k1uW8/CQfYDYWbLeLG0FTbhMWwOK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FOL2aC%2FbtsNB5k1uW8%2FCQfYDYWbLeLG0FTbhMWwOK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;879&quot; height=&quot;556&quot; data-origin-width=&quot;879&quot; data-origin-height=&quot;556&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://aws.amazon.com/ko/blogs/containers/introducing-aws-gateway-api-controller-for-amazon-vpc-lattice-an-implementation-of-kubernetes-gateway-api/&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://aws.amazon.com/ko/blogs/containers/introducing-aws-gateway-api-controller-for-amazon-vpc-lattice-an-implementation-of-kubernetes-gateway-api/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the next section, we will try Amazon VPC Lattice hands-on.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;4. Simple Client to Server Communication Hands-on&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This hands-on exercise follows the document below.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://aws-ia.github.io/terraform-aws-eks-blueprints/patterns/network/client-server-communication/&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://aws-ia.github.io/terraform-aws-eks-blueprints/patterns/network/client-server-communication/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This exercise deploys a Client VPC for an EC2 instance that acts as the client, and an EKS VPC that serves the application. The application deployed on EKS is exposed to the client via Amazon VPC Lattice through the Gateway API. In addition, the External DNS add-on deployed on EKS configures a domain name in Amazon Route 53 for the exposed service.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1511&quot; data-origin-height=&quot;770&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bUdh18/btsNAaIbbJy/vpEJooT5CkLP1LtNAM3Vc1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bUdh18/btsNAaIbbJy/vpEJooT5CkLP1LtNAM3Vc1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bUdh18/btsNAaIbbJy/vpEJooT5CkLP1LtNAM3Vc1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbUdh18%2FbtsNAaIbbJy%2FvpEJooT5CkLP1LtNAM3Vc1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1511&quot; height=&quot;770&quot; data-origin-width=&quot;1511&quot; data-origin-height=&quot;770&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://aws-ia.github.io/terraform-aws-eks-blueprints/patterns/network/client-server-communication/&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://aws-ia.github.io/terraform-aws-eks-blueprints/patterns/network/client-server-communication/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The lab environment is provided by Amazon EKS Blueprints for Terraform, so the exercise is performed in an environment where Terraform can run.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Prepare the Terraform code as follows to get ready for provisioning.&lt;/p&gt;
&lt;pre class=&quot;vim&quot;&gt;&lt;code&gt;git clone https://github.com/aws-ia/terraform-aws-eks-blueprints.git
cd terraform-aws-eks-blueprints/patterns/vpc-lattice/client-server-communication&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The code is written for us-west-2 by default, so change the region value in &lt;code&gt;main.tf&lt;/code&gt; if needed.&lt;/p&gt;
&lt;pre class=&quot;lsl&quot;&gt;&lt;code&gt;locals {
  name   = basename(path.cwd)
  region = &quot;ap-northeast-2&quot; # modified

  cluster_vpc_cidr = &quot;10.0.0.0/16&quot;
  client_vpc_cidr  = &quot;10.1.0.0/16&quot;
  azs              = slice(data.aws_availability_zones.available.names, 0, 3)&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Initialize Terraform and deploy.&lt;/p&gt;
&lt;pre class=&quot;coq&quot;&gt;&lt;code&gt;terraform init

terraform apply -target=&quot;module.client_vpc&quot; -auto-approve
terraform apply -target=&quot;module.cluster_vpc&quot; -auto-approve
terraform apply -target=aws_route53_zone.primary -auto-approve

terraform apply -target=&quot;module.client_sg&quot; -auto-approve
terraform apply -target=&quot;module.endpoint_sg&quot; -auto-approve

terraform apply -target=&quot;module.client&quot; -auto-approve
terraform apply -target=&quot;module.vpc_endpoints&quot; -auto-approve

terraform apply -target=&quot;module.eks&quot; -auto-approve
terraform apply -target=&quot;module.addons&quot; -auto-approve

terraform apply -auto-approve&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The environment deployed by Terraform pre-provisions the Client VPC with the client EC2 instance, the EKS VPC with the EKS cluster running the server application, and the VPC Lattice service network.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The required applications are also deployed to EKS via the Helm provider.&lt;/p&gt;
&lt;pre class=&quot;vim&quot;&gt;&lt;code&gt;~/terraform-aws-eks-blueprints/patterns/vpc-lattice/client-server-communication# tree
.
├── README.md
├── assets
│   └── diagram.png
├── charts
│   └── demo-application
│       ├── Chart.yaml
│       └── templates
│           ├── deployment.yaml
│           ├── gateway-class.yaml
│           ├── gateway.yaml
│           ├── httproute.yaml
│           └── service.yaml
├── client.tf
├── eks.tf
├── lattice.tf
├── main.tf
├── outputs.tf
├── variables.tf
└── versions.tf&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For that reason, rather than walking through the deployment process itself, I will continue the explanation based on the created resources.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Once the deployment completes, configure kubeconfig as follows.&lt;/p&gt;
&lt;pre class=&quot;axapta&quot;&gt;&lt;code&gt;aws eks update-kubeconfig --name client-server-communication --alias client-server-communication --region ap-northeast-2&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Next, check the nodes and pods created in the cluster.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;kubectl get no -owide
NAME                                             STATUS   ROLES    AGE   VERSION               INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                       KERNEL-VERSION                    CONTAINER-RUNTIME
ip-10-0-13-51.ap-northeast-2.compute.internal    Ready    &amp;lt;none&amp;gt;   22m   v1.30.9-eks-5d632ec   10.0.13.51    &amp;lt;none&amp;gt;        Amazon Linux 2023.7.20250414   6.1.132-147.221.amzn2023.x86_64   containerd://1.7.27
ip-10-0-29-200.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   22m   v1.30.9-eks-5d632ec   10.0.29.200   &amp;lt;none&amp;gt;        Amazon Linux 2023.7.20250414   6.1.132-147.221.amzn2023.x86_64   containerd://1.7.27
ip-10-0-32-36.ap-northeast-2.compute.internal    Ready    &amp;lt;none&amp;gt;   22m   v1.30.9-eks-5d632ec   10.0.32.36    &amp;lt;none&amp;gt;        Amazon Linux 2023.7.20250414   6.1.132-147.221.amzn2023.x86_64   containerd://1.7.27

kubectl get po -A
NAMESPACE                           NAME                                                              READY   STATUS    RESTARTS   AGE
apps                                server-6d44dd47-pbp9w                                             1/1     Running   0          117s
apps                                server-6d44dd47-qf2t5                                             1/1     Running   0          117s
aws-application-networking-system   aws-gateway-api-controller-aws-gateway-controller-chart-796rfqv   1/1     Running   0          7m14s
aws-application-networking-system   aws-gateway-api-controller-aws-gateway-controller-chart-79hqg6c   1/1     Running   0          7m14s
external-dns                        external-dns-555c676b8-hx7wk                                      1/1     Running   0          7m12s
kube-system                         aws-node-47vct                                                    2/2     Running   0          17m
kube-system                         aws-node-5tqg5                                                    2/2     Running   0          17m
kube-system                         aws-node-lthxz                                                    2/2     Running   0          17m
kube-system                         coredns-5b9dfbf96-v94zg                                           1/1     Running   0          21m
kube-system                         coredns-5b9dfbf96-w2jkd                                           1/1     Running   0          21m
kube-system                         kube-proxy-jnnjs                                                  1/1     Running   0          17m
kube-system                         kube-proxy-pfmfx                                                  1/1     Running   0          17m
kube-system                         kube-proxy-vk4ff                                                  1/1     Running   0          17m&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Looking at the running pods, you can see that External DNS and the AWS Gateway API Controller were deployed as add-ons. The sample application, a server deployment, is also running in the apps namespace.&lt;/p&gt;
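&lt;p data-ke-size=&quot;size16&quot;&gt;To inspect the Gateway API objects behind this deployment, you could also list them with kubectl (the actual resource names come from the demo chart, so they may differ in your environment):&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;kubectl get gatewayclass,gateway -A
kubectl get httproute -n apps&lt;/code&gt;&lt;/pre&gt;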
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이번 주제의 중점인 VPC를 살펴보면 아래와 같이 2개의 VPC가 생성되었습니다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1929&quot; data-origin-height=&quot;226&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/Pcla6/btsNAULmfcZ/3GRt963bx7lkt6PvHlk4Ak/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/Pcla6/btsNAULmfcZ/3GRt963bx7lkt6PvHlk4Ak/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/Pcla6/btsNAULmfcZ/3GRt963bx7lkt6PvHlk4Ak/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FPcla6%2FbtsNAULmfcZ%2F3GRt963bx7lkt6PvHlk4Ak%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1929&quot; height=&quot;226&quot; data-origin-width=&quot;1929&quot; data-origin-height=&quot;226&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;terraform의 &lt;code&gt;main.tf&lt;/code&gt;에 이들의 IP 대역이 정의되어 있습니다.&lt;/p&gt;
&lt;pre class=&quot;nix&quot;&gt;&lt;code&gt;locals {
  name   = basename(path.cwd)
  region = &quot;ap-northeast-2&quot;

  cluster_vpc_cidr = &quot;10.0.0.0/16&quot;
  client_vpc_cidr  = &quot;10.1.0.0/16&quot;
  azs              = slice(data.aws_availability_zones.available.names, 0, 3)

  tags = {
    Blueprint  = local.name
    GithubRepo = &quot;github.com/aws-ia/terraform-aws-eks-blueprints&quot;
  }
}&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;VPC의 PrivateLink and Lattice에서 Service Network를 살펴보면 my-services라는 서비스 네트워크가 생성되어 있습니다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1878&quot; data-origin-height=&quot;236&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/Xx6hy/btsNAN6Ohod/VLOd7Rcs48KcURqDxPuptk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/Xx6hy/btsNAN6Ohod/VLOd7Rcs48KcURqDxPuptk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/Xx6hy/btsNAN6Ohod/VLOd7Rcs48KcURqDxPuptk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FXx6hy%2FbtsNAN6Ohod%2FVLOd7Rcs48KcURqDxPuptk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1878&quot; height=&quot;236&quot; data-origin-width=&quot;1878&quot; data-origin-height=&quot;236&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;my-services에서 VPC associations를 살펴보면 생성된 Client VPC와 Cluster VPC가 연계된 것을 확인할 수 있습니다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1888&quot; data-origin-height=&quot;400&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/8bedv/btsNBxPEexd/2d2kofq5ram76z3TOpjHl0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/8bedv/btsNBxPEexd/2d2kofq5ram76z3TOpjHl0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/8bedv/btsNBxPEexd/2d2kofq5ram76z3TOpjHl0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2F8bedv%2FbtsNBxPEexd%2F2d2kofq5ram76z3TOpjHl0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1888&quot; height=&quot;400&quot; data-origin-width=&quot;1888&quot; data-origin-height=&quot;400&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이는 terraform 코드 중 &lt;code&gt;lattices.tf&lt;/code&gt;에 정의된 내용으로, VPC에서 Amazon VPC Lattice를 사용하려면 사전에 VPC Lattice의 서비스 네트워크와 VPC의 연결이 필요한 것을 알 수 있습니다.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;resource &quot;aws_vpclattice_service_network&quot; &quot;this&quot; {
  name      = &quot;my-services&quot;
  auth_type = &quot;NONE&quot;

  tags = local.tags
}

resource &quot;aws_vpclattice_service_network_vpc_association&quot; &quot;cluster_vpc&quot; {
  vpc_identifier             = module.cluster_vpc.vpc_id
  service_network_identifier = aws_vpclattice_service_network.this.id
}

resource &quot;aws_vpclattice_service_network_vpc_association&quot; &quot;client_vpc&quot; {
  vpc_identifier             = module.client_vpc.vpc_id
  service_network_identifier = aws_vpclattice_service_network.this.id
}

resource &quot;time_sleep&quot; &quot;wait_for_lattice_resources&quot; {
  depends_on = [helm_release.demo_application]

  create_duration = &quot;120s&quot;
}&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이 Lattice의 서비스 네트워크는 Gateway API의 Gateway 리소스에 해당합니다.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: my-services
  namespace: apps
spec:
  gatewayClassName: amazon-vpc-lattice
  listeners:
    - name: http
      protocol: HTTP
      port: 80&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;그리고 VPC의 PrivateLink and Lattice에서 Lattice services를 살펴보면 생성된 Lattice 서비스를 확인할 수 있습니다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1878&quot; data-origin-height=&quot;251&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/brISdq/btsNA39j46E/x9fthHV8PkSUghaXKnjHa0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/brISdq/btsNA39j46E/x9fthHV8PkSUghaXKnjHa0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/brISdq/btsNA39j46E/x9fthHV8PkSUghaXKnjHa0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbrISdq%2FbtsNA39j46E%2Fx9fthHV8PkSUghaXKnjHa0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1878&quot; height=&quot;251&quot; data-origin-width=&quot;1878&quot; data-origin-height=&quot;251&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;앞서 살펴본 Gateway API와 VPC Lattice 컴포넌트의 대응 관계에 비추어 보면, Lattice 서비스는 Gateway API의 HTTPRoute에 해당하는 것을 알 수 있습니다.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: server
  namespace: apps
spec:
  hostnames:
    - server.example.com
  parentRefs:
    - name: my-services
      sectionName: http
  rules:
    - backendRefs:
        - name: server
          kind: Service
          port: 8090
      matches:
        - path:
            type: PathPrefix
            value: /&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;실제 VPC Lattice 서비스에 진입하여 Routing을 살펴보면 아래의 정보를 확인할 수 있습니다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1863&quot; data-origin-height=&quot;723&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/XOhCM/btsNz9JiYas/kbq5k358BPVbLz0ZDb7w4K/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/XOhCM/btsNz9JiYas/kbq5k358BPVbLz0ZDb7w4K/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/XOhCM/btsNz9JiYas/kbq5k358BPVbLz0ZDb7w4K/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FXOhCM%2FbtsNz9JiYas%2Fkbq5k358BPVbLz0ZDb7w4K%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1863&quot; height=&quot;723&quot; data-origin-width=&quot;1863&quot; data-origin-height=&quot;723&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;트래픽이 Forward 되는 Target Group을 살펴보면, 아래와 같이 대상을 확인할 수 있습니다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1874&quot; data-origin-height=&quot;843&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bD9Zpu/btsNBFz6Nq2/d7yeGvXr7vHI3LfppStPkk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bD9Zpu/btsNBFz6Nq2/d7yeGvXr7vHI3LfppStPkk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bD9Zpu/btsNBFz6Nq2/d7yeGvXr7vHI3LfppStPkk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbD9Zpu%2FbtsNBFz6Nq2%2Fd7yeGvXr7vHI3LfppStPkk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1874&quot; height=&quot;843&quot; data-origin-width=&quot;1874&quot; data-origin-height=&quot;843&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;쿠버네티스에 생성된 서비스의 엔드포인트가 Target으로 등록된 것을 알 수 있습니다.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;kubectl describe svc -n apps
Name:              server
Namespace:         apps
Labels:            app.kubernetes.io/managed-by=Helm
Annotations:       meta.helm.sh/release-name: demo-application
                   meta.helm.sh/release-namespace: apps
Selector:          app=server
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                172.20.245.38
IPs:               172.20.245.38
Port:              &amp;lt;unset&amp;gt;  8090/TCP
TargetPort:        8090/TCP
Endpoints:         10.0.12.82:8090,10.0.27.112:8090
Session Affinity:  None
Events:            &amp;lt;none&amp;gt;&lt;/code&gt;&lt;/pre&gt;
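&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;참고로 AWS Gateway API Controller는 TargetGroupPolicy CRD를 제공하여 Target Group의 프로토콜이나 헬스 체크와 같은 세부 설정을 지정할 수 있습니다. 아래는 server 서비스에 헬스 체크를 지정하는 예시 스케치로, 필드 구성은 컨트롤러 버전에 따라 다를 수 있으며 &lt;code&gt;/health&lt;/code&gt; 경로는 설명을 위한 가정입니다.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;apiVersion: application-networking.k8s.aws/v1alpha1
kind: TargetGroupPolicy
metadata:
  name: server
  namespace: apps
spec:
  targetRef:            # 정책을 적용할 대상 서비스
    group: &quot;&quot;
    kind: Service
    name: server
  protocol: HTTP
  protocolVersion: HTTP1
  healthCheck:          # Target Group 헬스 체크 설정
    enabled: true
    path: /health
    protocol: HTTP&lt;/code&gt;&lt;/pre&gt;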
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;생성된 Gateway API의 리소스를 살펴보겠습니다.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;kubectl get gatewayclass
NAME                 CONTROLLER                                              ACCEPTED   AGE
amazon-vpc-lattice   application-networking.k8s.aws/gateway-api-controller   True       32m

kubectl get gateway -n apps
NAME          CLASS                ADDRESS   PROGRAMMED   AGE
my-services   amazon-vpc-lattice             True         31m

kubectl get httproute -n apps
NAME     HOSTNAMES                AGE
server   [&quot;server.example.com&quot;]   32m&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이제 생성된 EC2에서 직접 EKS에 생성한 Gateway 서비스를 호출해 보겠습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;EC2에는 Session Manager를 통해 접근합니다. Instance에서 Connect &amp;gt; Session Manager로 접속하며, EC2는 Client VPC에 생성된 것을 알 수 있습니다.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;sh-4.2$ hostname
ip-10-1-8-29.ap-northeast-2.compute.internal
sh-4.2$ ip -br -c a
lo               UNKNOWN        127.0.0.1/8 ::1/128
eth0             UP             10.1.8.29/20 fe80::1a:91ff:fee8:e4b1/64&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;EC2가 위치한 VPC 또한 VPC Lattice의 서비스 네트워크에 연결되어 있으므로, 아래와 같이 연결이 가능합니다.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;sh-4.2$ nslookup server.example.com
Server:         10.1.0.2
Address:        10.1.0.2#53

Non-authoritative answer:
server.example.com      canonical name = server-apps-0af7ca50d301362d4.7d67968.vpc-lattice-svcs.ap-northeast-2.on.aws.
Name:   server-apps-0af7ca50d301362d4.7d67968.vpc-lattice-svcs.ap-northeast-2.on.aws
Address: 169.254.171.0
Name:   server-apps-0af7ca50d301362d4.7d67968.vpc-lattice-svcs.ap-northeast-2.on.aws
Address: fd00:ec2:80::a9fe:ab00

sh-4.2$ curl -i http://server.example.com
HTTP/1.1 200 OK
date: Fri, 25 Apr 2025 17:59:25 GMT
content-length: 52
content-type: text/plain; charset=utf-8

Requsting to Pod(server-6d44dd47-qf2t5): server pod
sh-4.2$ curl -i http://server.example.com
HTTP/1.1 200 OK
date: Fri, 25 Apr 2025 17:59:42 GMT
content-length: 52
content-type: text/plain; charset=utf-8&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;접근 테스트를 하면서 server 측 로그를 살펴보면 Lattice를 통해 연결된 로그가 남는 것을 알 수 있습니다. 아래를 보면 X-Forwarded-For(XFF) 헤더의 Client IP가 EC2의 IP인 것을 확인할 수 있습니다.&lt;/p&gt;
&lt;pre class=&quot;elixir&quot;&gt;&lt;code&gt;kubectl logs -f deployment/server -n apps --all-containers=true --since=1m
...
2025/04/25 17:59:44 Receiving %!(EXTRA *http.Request=&amp;amp;{GET / HTTP/1.1 1 1 map[Accept:[*/*] User-Agent:[curl/8.3.0] X-Amzn-Lattice-Network:[SourceVpcArn=arn:aws:ec2:ap-northeast-2:430118812536:vpc/vpc-0462a13d1bd5329ce] X-Amzn-Lattice-Target:[ServiceArn=arn:aws:vpc-lattice:ap-northeast-2:430118812536:service/svc-0af7ca50d301362d4; ServiceNetworkArn=arn:aws:vpc-lattice:ap-northeast-2:430118812536:servicenetwork/sn-0939f55c8c47c81f1; TargetGroupArn=arn:aws:vpc-lattice:ap-northeast-2:430118812536:targetgroup/tg-029eaf68f9bcc9c9f] X-Amzn-Source-Vpc:[vpc-0462a13d1bd5329ce] X-Forwarded-For:[10.1.8.29]] {} &amp;lt;nil&amp;gt; 0 [] false server.example.com map[] map[] &amp;lt;nil&amp;gt; map[] 169.254.171.193:1684 / &amp;lt;nil&amp;gt; &amp;lt;nil&amp;gt; &amp;lt;nil&amp;gt; 0xc000448040})&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;실습을 마무리하고 리소스를 정리하도록 하겠습니다. 참고로 Route53에 생성된 Private zone은 레코드가 있는 경우 삭제가 이뤄지지 않습니다. EKS에 생성된 오브젝트들을 먼저 삭제하고, 이후에 terraform으로 AWS 리소스를 삭제하시기 바랍니다.&lt;/p&gt;
&lt;pre class=&quot;bash&quot; data-ke-language=&quot;bash&quot;&gt;&lt;code&gt;terraform destroy -target=&quot;module.addons&quot; -auto-approve
terraform destroy -target=&quot;module.eks&quot; -auto-approve

terraform destroy -target=&quot;module.vpc_endpoints&quot; -auto-approve
terraform destroy -target=&quot;module.client&quot; -auto-approve

terraform destroy -target=&quot;module.endpoint_sg&quot; -auto-approve
terraform destroy -target=&quot;module.client_sg&quot; -auto-approve

terraform destroy -target=aws_route53_zone.primary -auto-approve
terraform destroy -target=&quot;module.cluster_vpc&quot; -auto-approve
terraform destroy -target=&quot;module.client_vpc&quot; -auto-approve

terraform destroy -auto-approve&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;마무리&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이번 포스트를 통해서 Amazon VPC Lattice라는 비교적 신규 기능을 체험해 볼 수 있었습니다. 복잡해지는 VPC 구성에서 Peering이나 Transit Gateway가 없이도 Amazon VPC Lattice를 통해서 간결한 통신이 가능해질 것으로 보입니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;또한 EKS의 Gateway API Controller는 Amazon VPC Lattice를 구현체로 사용하므로, Amazon VPC Lattice가 제공하는 장점을 EKS에서도 그대로 사용할 수 있습니다. 이로써 멀티 클러스터 간의 서비스 연결에 도움을 받을 수 있을 것입니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;본 포스팅은 12주간 진행되었던 AEWS(Amazon EKS Workshop Study) 3기를 진행하면서 작성하였습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Amazon EKS의 각 요소들을 살펴보고 또한 다양한 신규 기능과 쿠버네티스 에코시스템을 두루 살펴볼 수 있는 좋은 기회였습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;그럼 긴 여정을 마무리하고 다음 포스트로 다시 찾아뵙겠습니다.&lt;/p&gt;</description>
      <category>EKS</category>
      <category>amazon vpc lattice</category>
      <category>gateway api</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/47</guid>
      <comments>https://a-person.tistory.com/47#entry47comment</comments>
      <pubDate>Sat, 26 Apr 2025 03:31:20 +0900</pubDate>
    </item>
    <item>
      <title>[9] EKS GPU 리소스 활용</title>
      <link>https://a-person.tistory.com/46</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;이번 포스트에서는 AI/ML 워크로드를 EKS에서 사용하기 위해 GPU 리소스를 활용하는 방안을 살펴보겠습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;또한 AWS에서 제공하는 &lt;code&gt;Build GenAI &amp;amp; ML for Performance and Scale, using Amazon EKS, Amazon FSx and AWS Inferentia&lt;/code&gt; 워크샵을 따라 실습을 진행하였습니다.&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;워크샵 주소: &lt;a href=&quot;https://catalog.workshops.aws/genaifsxeks/en-US&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://catalog.workshops.aws/genaifsxeks/en-US&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;목차&lt;/h2&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;AI 워크로드의 컨테이너 사용 여정&lt;/li&gt;
&lt;li&gt;EKS 워크샵 실습&lt;br /&gt;2.1 실습 환경 구성&lt;br /&gt;2.2 스토리지 구성&lt;br /&gt;2.3&amp;nbsp;생성형&amp;nbsp;AI&amp;nbsp;Chat&amp;nbsp;애플리케이션&amp;nbsp;배포&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;1. AI 워크로드의 컨테이너 사용 여정&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;전통적으로 ML 엔지니어들은 베어메탈 서버에 직접 GPU 드라이버와 라이브러리를 설치하여 작업하였습니다. 다만 이러한 접근 방식은 아래와 같은 문제점을 가지고 있습니다.&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;b&gt;환경 구성의 복잡성&lt;/b&gt;: CUDA, cuDNN 등 복잡한 드라이버 스택 설치 및 관리가 필요. 특히 버전 호환성 문제로 인해 특정 프레임워크(TensorFlow, PyTorch)가 특정 CUDA/cuDNN 버전만 지원하는 경우가 많았음.&lt;/li&gt;
&lt;li&gt;&lt;b&gt;재현성 부족&lt;/b&gt;: 동일한 실험 환경을 다른 시스템에서 재현하기 어려움. 환경의 차이로 인해 동일한 워크로드가 다른 환경에서 실패하는 문제 발생.&lt;/li&gt;
&lt;li&gt;&lt;b&gt;리소스 비효율성&lt;/b&gt;: 고가의 GPU 리소스가 특정 사용자나 프로젝트에 고정되어 활용도가 저하. (베어메탈 GPU 서버의 활용률은 30% 미만인 경우가 많았음)&lt;/li&gt;
&lt;li&gt;&lt;b&gt;확장성 제한&lt;/b&gt;: 대규모 분산 학습을 위한 인프라 확장이 어려웠음. 새로운 GPU 서버를 추가할 때마다 동일한 환경 구성 과정을 반복 필요.&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이러한 문제점 중 일부는 일반적인 애플리케이션을 컨테이너화하는 이유와 유사하므로 GPU의 컨테이너 사용을 고려할 수 있지만, 이 또한 여러 가지 제약사항을 가지고 있었습니다.&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;b&gt;물리적으로 분할하기 어려운 GPU&lt;/b&gt;: CPU 코어나 메모리와 달리, 초기 GPU는 물리적으로 분할하여 여러 컨테이너에 할당하기 어려웠음 (최근에는 MIG 기술로 개선됨 - Nvidia GPU A100에서 새롭게 생긴 기능, 2020년 등장)&lt;/li&gt;
&lt;li&gt;&lt;b&gt;GPU 드라이버 복잡성&lt;/b&gt;: GPU 접근은 복잡한 사용자 공간 라이브러리와 커널 드라이버를 통해 이루어짐&lt;/li&gt;
&lt;li&gt;&lt;b&gt;장치 파일 접근 제어&lt;/b&gt;: /dev/nvidia* 과 같은 장치 파일에 대한 접근을 안전하게 관리 필요&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이러한 제약사항을 극복하고자 컨테이너 환경의 GPU 리소스 사용은 아래와 같은 방향으로 진화하였습니다. 또한 이러한 흐름 속에서 GPU 활용 역시 단일 GPU에서 멀티 GPU 환경으로 발전해 왔습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;1) 초기 단계(2016-2018)&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;초기에는 GPU 장치 파일을 컨테이너에 직접 마운트하고 필요한 라이브러리를 볼륨으로 공유해야 했습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Docker 명령어를 수동으로 실행하는 예시를 살펴보면, Tensorflow에서 CUDA를 통해 NVIDIA GPU 장치에 액세스하도록 직접 맵핑을 해주는 방식입니다.&lt;/p&gt;
&lt;pre class=&quot;elixir&quot;&gt;&lt;code&gt;docker run --device=/dev/nvidia0:/dev/nvidia0 \
           --device=/dev/nvidiactl:/dev/nvidiactl \
           -v /usr/local/cuda:/usr/local/cuda \
           tensorflow/tensorflow:latest-gpu&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이러한 접근 방식은 아래와 같은 문제점을 가지고 있습니다.&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;모든 장치 파일을 수동으로 지정해야 함&lt;/li&gt;
&lt;li&gt;호스트와 컨테이너 간 라이브러리 버전 충돌 가능성&lt;/li&gt;
&lt;li&gt;여러 컨테이너 간 GPU 공유 메커니즘 부재&lt;/li&gt;
&lt;li&gt;오케스트레이션 환경에서 자동화하기 어려움&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;2) NVIDIA Container Runtime 활용 (2018-2020)&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;NVIDIA는 이러한 문제를 해결하기 위해 NVIDIA Container Runtime을 개발하였습니다. NVIDIA Container Runtime은 OCI(Open Container Initiative) 스펙과 호환되는 GPU 인식 컨테이너 런타임입니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이를 통해 다음과 같은 기능을 자동화하게 됩니다.&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;GPU 장치 파일 마운트&lt;/li&gt;
&lt;li&gt;NVIDIA 드라이버 라이브러리 주입&lt;/li&gt;
&lt;li&gt;CUDA 호환성 검사&lt;/li&gt;
&lt;li&gt;GPU 기능 감지 및 노출&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;아래와 같은 방식으로 실행할 수 있습니다.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# Docker 19.03 이전 버전 사용
docker run --runtime=nvidia nvidia/cuda:11.0-base nvidia-smi

# Docker 19.03 이후부터는 더 간단하게 --gpus 플래그를 사용
docker run --gpus '&quot;device=0,1&quot;' nvidia/cuda:11.0-base nvidia-smi&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;NVIDIA container runtime을 통해서 GPU를 검출하고 설정하며, 호스트와 컨테이너간 드라이버 호환성을 자동 관리할 수 있게 되었습니다. 또한 컨테이너 이미지를 활용해 이식성이 향상되는 개선점이 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;3) Device Plugin 등장 (2020-Now)&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Kubernetes 오픈 소스에 Device Plugin에 대한 제안이 2017년 9월 처음으로 이루어졌습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://github.com/kubernetes/design-proposals-archive/blob/main/resource-management/device-plugin.md&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://github.com/kubernetes/design-proposals-archive/blob/main/resource-management/device-plugin.md&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Device Plugin은 쿠버네티스 환경에서 GPU 리소스를 발견하고 이를 리소스로 노출해주는 플러그인입니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;아래 그림과 같이 Device Plugin은 GPU 디바이스를 발견하면 Kubelet의 Registry gRPC 서버에 등록을 요청합니다. 이를 통해 GPU가 해당 노드의 리소스로 노출됩니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://github.com/kubernetes/design-proposals-archive/blob/main/resource-management/device-plugin-overview.png&quot;&gt;&lt;img src=&quot;https://github.com/kubernetes/design-proposals-archive/raw/main/resource-management/device-plugin-overview.png&quot; alt=&quot;Process&quot; /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;출처: &lt;a href=&quot;https://github.com/kubernetes/design-proposals-archive/blob/main/resource-management/device-plugin.md&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://github.com/kubernetes/design-proposals-archive/blob/main/resource-management/device-plugin.md&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이후 NVIDIA도 device plugin을 제공하여 쿠버네티스 환경에서 손쉽게 NVIDIA GPU를 활용할 수 있게 되었습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://github.com/NVIDIA/k8s-device-plugin&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://github.com/NVIDIA/k8s-device-plugin&lt;/a&gt;&lt;/p&gt;
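&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Device Plugin이 등록한 GPU는 파드에서 일반 리소스처럼 요청할 수 있습니다. 아래는 &lt;code&gt;nvidia.com/gpu&lt;/code&gt; 1개를 요청하는 파드의 예시 스케치입니다. (파드 이름은 설명을 위한 가정입니다.)&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: gpu-test
spec:
  restartPolicy: Never
  containers:
    - name: cuda
      image: nvidia/cuda:11.0-base
      command: [&quot;nvidia-smi&quot;]
      resources:
        limits:
          nvidia.com/gpu: 1   # device plugin이 노출한 확장 리소스 요청&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;GPU와 같은 확장 리소스는 limits에 지정하며, 노드에 가용 GPU가 없으면 파드는 스케줄링되지 않습니다.&lt;/p&gt;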
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;다만 여러 가지 드라이버나 Container Runtime, 각종 라이브러리를 설치하는 것은 여전히 쉬운 일이 아닙니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이에 최근 NVIDIA는 GPU를 보다 효율적으로 활용할 수 있도록 NVIDIA GPU Operator를 제공하고 있으며, 드라이버와 Device Plugin, CUDA, NVIDIA Container Toolkit부터 GFD, DCGM과 같은 모니터링 컴포넌트까지 오퍼레이터를 통해 설치할 수 있도록 지원하고 있습니다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1260&quot; data-origin-height=&quot;709&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bGLHNU/btsNteI5gkj/2hkNm9EjhMUq0jM4HPQV60/img.jpg&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bGLHNU/btsNteI5gkj/2hkNm9EjhMUq0jM4HPQV60/img.jpg&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bGLHNU/btsNteI5gkj/2hkNm9EjhMUq0jM4HPQV60/img.jpg&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbGLHNU%2FbtsNteI5gkj%2F2hkNm9EjhMUq0jM4HPQV60%2Fimg.jpg&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1260&quot; height=&quot;709&quot; data-origin-width=&quot;1260&quot; data-origin-height=&quot;709&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;출처: &lt;a href=&quot;https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/getting-started.html&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/getting-started.html&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;한편 과거의 단일 GPU에서의 동시성을 제공하려던 시도에서 지금은 멀티 GPU로 발전하고 있습니다. 먼저 단일 GPU 동시성의 고민을 아래 그림에서 살펴볼 수 있습니다.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://d2908q01vomqb2.cloudfront.net/fe2ef495a1152561572949784c16bf23abb28057/2023/09/12/GPU-Concurrency.png&quot; alt=&quot;The image talks about GPU concurrency choices&quot; /&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;출처: &lt;a href=&quot;https://aws.amazon.com/ko/blogs/containers/gpu-sharing-on-amazon-eks-with-nvidia-time-slicing-and-accelerated-ec2-instances/&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://aws.amazon.com/ko/blogs/containers/gpu-sharing-on-amazon-eks-with-nvidia-time-slicing-and-accelerated-ec2-instances/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이를 조금 더 살펴보겠습니다.&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;CUDA의 단일 프로세스&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이는 GPU 활용의 가장 기본적인 형태로, 단일 프로세스가 계산적 필요 사항을 충족하기 위해 CUDA(&lt;a href=&quot;https://en.wikipedia.org/wiki/CUDA&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;Compute Unified Device Architecture&lt;/a&gt;)를 사용하여 GPU에 액세스합니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;독립형 애플리케이션이나 GPU의 전체 성능이 필요한 작업에 이상적이며, GPU가 특정 고성능 컴퓨팅 작업에만 독점적으로 할당되어 공유할 필요가 없는 경우입니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;CUDA MPS(Multi-Process Service)를 사용한 다중 프로세스&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;CUDA MPS는 여러 프로세스가 단일 GPU 컨텍스트를 공유할 수 있도록 하는 CUDA의 기능입니다. 즉, 여러 작업이 상당한 컨텍스트 전환 오버헤드 없이 동시에 GPU에 액세스할 수 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;여러 애플리케이션이나 작업이 GPU에 동시에 액세스해야 하는 경우나 작업의 GPU 요구 사항이 다양하고 큰 오버헤드 없이 GPU 활용도를 극대화하려는 시나리오에 이상적입니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;참고: &lt;a href=&quot;https://docs.nvidia.com/deploy/mps/index.html&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://docs.nvidia.com/deploy/mps/index.html&lt;/a&gt;&lt;/p&gt;
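&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;쿠버네티스에서는 NVIDIA device plugin의 설정으로도 MPS 공유를 구성할 수 있는 것으로 알려져 있습니다(v0.15.0 이상 기준으로 가정하며, 스키마는 플러그인 버전에 따라 다를 수 있습니다). 아래는 뒤에서 살펴볼 Time Slicing 설정과 같은 구조로 MPS 공유를 지정하는 ConfigMap 데이터의 예시 스케치입니다.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;version: v1
flags:
  migStrategy: none
sharing:
  mps:                      # timeSlicing 대신 mps 공유를 지정
    resources:
    - name: nvidia.com/gpu
      replicas: 10          # 단일 GPU를 10개의 리소스로 노출&lt;/code&gt;&lt;/pre&gt;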
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Time Slicing&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;타임 슬라이싱은 GPU 접근을 작은 시간 간격으로 나누어 여러 작업이 미리 정의된 시간 간격으로 GPU를 사용할 수 있도록 하는 것입니다. 이는 CPU가 여러 프로세스 간에 타임 슬라이싱을 하는 방식과 유사합니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;간헐적으로 GPU에 접근해야 하는 여러 작업이 있는 환경에 적합한 방식입니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;EKS에서 Time Slicing을 적용하려면 NVIDIA device plugin에 아래와 같이 ConfigMap을 추가로 지정하면 됩니다.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# time-slicing 적용 전: GPU 1개
$ kubectl get nodes -o json | jq -r '.items[] | select(.status.capacity.&quot;nvidia.com/gpu&quot; != null) | {name: .metadata.name, capacity: .status.capacity}'
{
  &quot;name&quot;: &quot;i-0af783eca345807e8.us-west-2.compute.internal&quot;,
  &quot;capacity&quot;: {
    &quot;cpu&quot;: &quot;32&quot;,
    &quot;ephemeral-storage&quot;: &quot;83873772Ki&quot;,
    &quot;hugepages-1Gi&quot;: &quot;0&quot;,
    &quot;hugepages-2Mi&quot;: &quot;0&quot;,
    &quot;memory&quot;: &quot;130502176Ki&quot;,
    &quot;nvidia.com/gpu&quot;: &quot;1&quot;,
    &quot;pods&quot;: &quot;234&quot;
  }
}

# Time-slicing 적용하기
$ cat &amp;lt;&amp;lt; EOF &amp;gt; nvidia-device-plugin.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nvidia-device-plugin
  namespace: kube-system
data:
  any: |-
    version: v1
    flags:
      migStrategy: none
    sharing:
      timeSlicing:
        resources:
        - name: nvidia.com/gpu
          replicas: 10
EOF
$ kubectl apply -f nvidia-device-plugin.yaml

# 새로운 ConfigMap 기반으로 반영하기
$ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
  --namespace kube-system \
  -f nvdp-values.yaml \
  --version 0.14.0 \
  --set config.name=nvidia-device-plugin \
  --force

# TimeSlicing 적용 후: GPU 10개로 증가함
$ kubectl get nodes -o json | jq -r '.items[] | select(.status.capacity.&quot;nvidia.com/gpu&quot; != null) | {name: .metadata.name, capacity: .status.capacity}'
{
  &quot;name&quot;: &quot;i-0af783eca345807e8.us-west-2.compute.internal&quot;,
  &quot;capacity&quot;: {
    &quot;cpu&quot;: &quot;32&quot;,
    &quot;ephemeral-storage&quot;: &quot;83873772Ki&quot;,
    &quot;hugepages-1Gi&quot;: &quot;0&quot;,
    &quot;hugepages-2Mi&quot;: &quot;0&quot;,
    &quot;memory&quot;: &quot;130502176Ki&quot;,
    &quot;nvidia.com/gpu&quot;: &quot;10&quot;,
    &quot;pods&quot;: &quot;234&quot;
  }
}&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;해당 실습은 아래 문서를 참고하실 수 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://aws.amazon.com/ko/blogs/containers/gpu-sharing-on-amazon-eks-with-nvidia-time-slicing-and-accelerated-ec2-instances/&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://aws.amazon.com/ko/blogs/containers/gpu-sharing-on-amazon-eks-with-nvidia-time-slicing-and-accelerated-ec2-instances/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;멀티 인스턴스 GPU(MIG, Multi-Instance GPU)&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;NVIDIA A100 GPU와 같이 MIG를 지원하는 HW를 사용해야 하며, 단일 GPU를 각각 별도의 메모리, 캐시 및 컴퓨팅 코어를 가진 여러 인스턴스로 분할할 수 있도록 합니다. 이를 통해 각 인스턴스의 성능이 보장됩니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;엄격한 격리를 보장하는 것이 목표인 다중 테넌트 환경에서 GPU 활용도를 극대화할 수 있는 솔루션입니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;참고: &lt;a href=&quot;https://www.nvidia.com/en-us/technologies/multi-instance-gpu/&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://www.nvidia.com/en-us/technologies/multi-instance-gpu/&lt;/a&gt;&lt;/p&gt;
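&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;쿠버네티스에서 MIG를 사용할 때는 NVIDIA device plugin의 &lt;code&gt;migStrategy&lt;/code&gt; 설정으로 MIG 인스턴스를 리소스로 노출하는 방식을 지정합니다. 아래는 mixed 전략을 지정하는 ConfigMap 데이터의 예시 스케치로, 이 경우 &lt;code&gt;nvidia.com/mig-1g.5gb&lt;/code&gt;와 같은 프로파일별 리소스가 노드에 노출되는 것으로 알려져 있습니다.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;version: v1
flags:
  migStrategy: mixed   # none | single | mixed 중 선택&lt;/code&gt;&lt;/pre&gt;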
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;GPU virtualization with virtual GPUs (vGPU)&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;NVIDIA vGPU technology lets multiple virtual machines (VMs) share the performance of a single physical GPU. It virtualizes GPU resources so that each VM gets its own dedicated GPU slice.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In virtualized environments, the goal is to extend GPU capability across multiple virtual machines, letting cloud service providers and enterprises offer GPU capability to their clients as a service.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://www.nvidia.com/en-us/data-center/virtual-solutions/&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://www.nvidia.com/en-us/data-center/virtual-solutions/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Recently, however, with advances such as LLMs, a single GPU can no longer handle large-scale training and inference, and AI/ML infrastructure has evolved to support multiple GPUs.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Making effective use of multiple GPUs requires handling the network bottlenecks that arise in distributed training, and libraries such as NVIDIA's NCCL (NVIDIA Collective Communications Library) were developed for this purpose.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;537&quot; data-origin-height=&quot;276&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cdnxRR/btsNrxDAL55/LAYE3oMjgbvoaMAOIcpmx0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cdnxRR/btsNrxDAL55/LAYE3oMjgbvoaMAOIcpmx0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cdnxRR/btsNrxDAL55/LAYE3oMjgbvoaMAOIcpmx0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcdnxRR%2FbtsNrxDAL55%2FLAYE3oMjgbvoaMAOIcpmx0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;537&quot; height=&quot;276&quot; data-origin-width=&quot;537&quot; data-origin-height=&quot;276&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://developer.nvidia.com/nccl&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://developer.nvidia.com/nccl&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;AWS also provides EFA (Elastic Fabric Adapter), a dedicated network interface built to address this GPU communication bottleneck.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;948&quot; data-origin-height=&quot;382&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/mRQ2d/btsNsm161Tt/vipqJAZ96Lz2HQR6pCw3w0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/mRQ2d/btsNsm161Tt/vipqJAZ96Lz2HQR6pCw3w0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/mRQ2d/btsNsm161Tt/vipqJAZ96Lz2HQR6pCw3w0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FmRQ2d%2FbtsNsm161Tt%2FvipqJAZ96Lz2HQR6pCw3w0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;948&quot; height=&quot;382&quot; data-origin-width=&quot;948&quot; data-origin-height=&quot;382&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://docs.aws.amazon.com/ko_kr/AWSEC2/latest/UserGuide/efa.html&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://docs.aws.amazon.com/ko_kr/AWSEC2/latest/UserGuide/efa.html&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;We have briefly reviewed how containers and infrastructure have evolved to support AI workloads.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now let's run an AI workload on EKS by following a workshop.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;2. EKS Workshop Hands-on&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;First, let's look at the GPU instance types AWS supports.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;AWS offers a variety of GPU instances in its Accelerated Computing category. See the documentation below.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://aws.amazon.com/ko/ec2/instance-types/#Accelerated_Computing&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://aws.amazon.com/ko/ec2/instance-types/#Accelerated_Computing&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Among these, Trn instances carry AWS Trainium chips and Inf instances carry AWS Inferentia chips, both AWS-designed instance families. As the names suggest, Trainium is purpose-built for deep learning training and Inferentia for deep learning inference.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the &lt;code&gt;Build GenAI &amp;amp; ML for Performance and Scale, using Amazon EKS, Amazon FSx and AWS Inferentia&lt;/code&gt; workshop, we will build a node group on AWS Inferentia and run vLLM and WebUI pods on EKS.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Below is a brief overview of the lab setup.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;986&quot; data-origin-height=&quot;784&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bxxhR0/btsNrnab2gd/9m7ABNX6opK4Hy7UHqOmZk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bxxhR0/btsNrnab2gd/9m7ABNX6opK4Hy7UHqOmZk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bxxhR0/btsNrnab2gd/9m7ABNX6opK4Hy7UHqOmZk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbxxhR0%2FbtsNrnab2gd%2F9m7ABNX6opK4Hy7UHqOmZk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;986&quot; height=&quot;784&quot; data-origin-width=&quot;986&quot; data-origin-height=&quot;784&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Deploy vLLM and WebUI pods to the EKS cluster, running a Gen AI chatbot application on Kubernetes&lt;/li&gt;
&lt;li&gt;Store and access the Mistral-7B model using Amazon FSx for Lustre and Amazon S3&lt;/li&gt;
&lt;li&gt;Run vLLM on an AWS Inferentia node group to bring accelerated compute to the AI workload&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's first review a few terms and components.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;LLM (Large Language Models)&lt;/b&gt;: An LLM is a type of machine learning model trained on vast amounts of text data to learn the patterns and structures of natural language. These models can be used for a wide range of natural language processing tasks such as text generation, question answering, and language translation. This workshop uses Mistral-7B-Instruct, an open-source LLM with 7 billion parameters.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;vLLM (Virtual Large Language Model)&lt;/b&gt;: vLLM is an easy-to-use open-source library for LLM inference and serving. It provides a framework for deploying LLMs such as Mistral-7B-Instruct to serve text-generation inference, and it exposes an OpenAI-compatible API so existing LLM applications can integrate with it easily.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;Amazon FSx for Lustre&lt;/b&gt;: A storage service that provides scalable, high-performance file systems for speed-sensitive workloads, with sub-millisecond latencies and the ability to scale to TB/s of throughput and millions of IOPS. In this workshop, the Mistral-7B-Instruct model is stored in an Amazon S3 bucket linked to an Amazon FSx for Lustre file system, and the vLLM container consumes the model data mounted through the FSx for Lustre instance.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;AWS Inferentia&lt;/b&gt;: An accelerator designed by AWS to deliver high performance at the lowest cost on Amazon EC2 for deep learning (DL) and generative AI inference applications. Inferentia2-based Amazon EC2 Inf2 instances are optimized for running complex models such as LLMs.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;2.1 Checking the Lab Environment&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The workshop environment is accessed through AWS Cloud9.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;First, from the Cloud9 terminal, fetch the kubeconfig for the EKS cluster and list the nodes.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;TOKEN=`curl -s -X PUT &quot;http://169.254.169.254/latest/api/token&quot; -H &quot;X-aws-ec2-metadata-token-ttl-seconds: 21600&quot;`
export AWS_REGION=$(curl -s -H &quot;X-aws-ec2-metadata-token: $TOKEN&quot; http://169.254.169.254/latest/meta-data/placement/region)
export CLUSTER_NAME=eksworkshop

aws eks update-kubeconfig --name $CLUSTER_NAME --region $AWS_REGION
Added new context arn:aws:eks:us-west-2:771943767300:cluster/eksworkshop to /home/ec2-user/.kube/config

kubectl get nodes
NAME                                         STATUS   ROLES    AGE     VERSION
ip-10-0-104-229.us-west-2.compute.internal   Ready    &amp;lt;none&amp;gt;   2d10h   v1.30.9-eks-5d632ec
ip-10-0-40-187.us-west-2.compute.internal    Ready    &amp;lt;none&amp;gt;   2d10h   v1.30.9-eks-5d632ec

kubectl get po -A
NAMESPACE               NAME                                                              READY   STATUS    RESTARTS        AGE
default                 kube-ops-view-5d9d967b77-499j6                                    1/1     Running   0               2d11h
karpenter               karpenter-7db6458b6b-9kz4w                                        1/1     Running   1 (2d11h ago)   2d11h
karpenter               karpenter-7db6458b6b-f48n8                                        1/1     Running   1 (2d11h ago)   2d11h
kube-prometheus-stack   kube-prometheus-stack-grafana-74cdb96ddf-kgtbd                    3/3     Running   0               2d11h
kube-prometheus-stack   kube-prometheus-stack-kube-state-metrics-6b7844d7f7-m8swx         1/1     Running   0               2d11h
kube-prometheus-stack   kube-prometheus-stack-operator-86f8df994b-x495m                   1/1     Running   0               2d11h
kube-prometheus-stack   kube-prometheus-stack-prometheus-node-exporter-dqdld              1/1     Running   0               2d11h
kube-prometheus-stack   kube-prometheus-stack-prometheus-node-exporter-ksnt4              1/1     Running   0               2d11h
kube-prometheus-stack   prometheus-kube-prometheus-stack-prometheus-0                     2/2     Running   0               2d11h
kube-system             aws-load-balancer-controller-594fbfbbc6-m486q                     1/1     Running   0               2d11h
kube-system             aws-load-balancer-controller-594fbfbbc6-nz6wc                     1/1     Running   0               2d11h
kube-system             aws-node-crfs7                                                    2/2     Running   0               2d11h
kube-system             aws-node-kmqtx                                                    2/2     Running   0               2d11h
kube-system             coredns-86f597cb5-6q9b8                                           1/1     Running   0               2d11h
kube-system             coredns-86f597cb5-h55sz                                           1/1     Running   0               2d11h
kube-system             ebs-csi-controller-7b5dc8b6c7-fxzkm                               6/6     Running   0               2d11h
kube-system             ebs-csi-controller-7b5dc8b6c7-v645l                               6/6     Running   0               2d11h
kube-system             ebs-csi-node-5xl2v                                                3/3     Running   0               2d11h
kube-system             ebs-csi-node-wd5vs                                                3/3     Running   0               2d11h
kube-system             eks-pod-identity-agent-p27dw                                      1/1     Running   0               2d11h
kube-system             eks-pod-identity-agent-w8gv2                                      1/1     Running   0               2d11h
kube-system             kube-proxy-5wj24                                                  1/1     Running   0               2d11h
kube-system             kube-proxy-9gf2j                                                  1/1     Running   0               2d11h
kube-system             metrics-server-7577444cf8-k626n                                   1/1     Running   0               2d11h
nvidia-device-plugin    nvidia-device-plugin-node-feature-discovery-master-695f7b9bk5h8   1/1     Running   0               2d11h
nvidia-device-plugin    nvidia-device-plugin-node-feature-discovery-worker-62zgr          1/1     Running   1 (2d11h ago)   2d11h
nvidia-device-plugin    nvidia-device-plugin-node-feature-discovery-worker-tfvlk          1/1     Running   0               2d11h&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This workshop environment was provisioned with Terraform, using code from EKS Blueprints for Terraform.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://github.com/aws-ia/terraform-aws-eks-blueprints&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://github.com/aws-ia/terraform-aws-eks-blueprints&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Karpenter is already configured on the installed EKS cluster.&lt;/p&gt;
&lt;pre class=&quot;lsl&quot;&gt;&lt;code&gt;kubectl get pods --namespace karpenter
NAME                         READY   STATUS    RESTARTS        AGE
karpenter-7db6458b6b-9kz4w   1/1     Running   1 (2d10h ago)   2d10h
karpenter-7db6458b6b-f48n8   1/1     Running   1 (2d10h ago)   2d10h&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;2.2 Configuring Storage (hosting model data on Amazon FSx for Lustre)&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The Mistral-7B-Instruct model used in this workshop is stored in an Amazon S3 bucket and exposed to the pods through an Amazon FSx for Lustre file system.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;733&quot; data-origin-height=&quot;704&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/ss7wS/btsNrYHLjEs/6DKEXkULjEK6KlChbRRwtK/img.jpg&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/ss7wS/btsNrYHLjEs/6DKEXkULjEK6KlChbRRwtK/img.jpg&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/ss7wS/btsNrYHLjEs/6DKEXkULjEK6KlChbRRwtK/img.jpg&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fss7wS%2FbtsNrYHLjEs%2F6DKEXkULjEK6KlChbRRwtK%2Fimg.jpg&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;733&quot; height=&quot;704&quot; data-origin-width=&quot;733&quot; data-origin-height=&quot;704&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In this chapter we deploy an FSx for Lustre instance into the EKS cluster and use it to review storage concepts such as the CSI driver, PersistentVolumes, and StorageClasses.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The FSx for Lustre CSI driver exposes a CSI interface that lets Amazon EKS clusters manage the lifecycle of persistent volumes backed by FSx for Lustre file systems. With the driver, you can quickly and easily attach low-latency, high-performance persistent storage to container workloads.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;blockquote data-ke-style=&quot;style3&quot;&gt;[Note] FSx for Lustre provides the file system via the CSI driver, while the actual data lives in Amazon S3.&lt;/blockquote&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;First, let's walk through deploying the CSI driver.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Declare an environment variable for use in the exercise.&lt;/p&gt;
&lt;pre class=&quot;dsconfig&quot;&gt;&lt;code&gt;ACCOUNT_ID=$(aws sts get-caller-identity --query &quot;Account&quot; --output text)&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Then create an IAM policy and service account that allow the CSI driver to call AWS APIs on your behalf.&lt;/p&gt;
&lt;pre class=&quot;dsconfig&quot;&gt;&lt;code&gt;cat &amp;lt;&amp;lt; EOF &amp;gt;  fsx-csi-driver.json
{
    &quot;Version&quot;:&quot;2012-10-17&quot;,
    &quot;Statement&quot;:[
        {
            &quot;Effect&quot;:&quot;Allow&quot;,
            &quot;Action&quot;:[
                &quot;iam:CreateServiceLinkedRole&quot;,
                &quot;iam:AttachRolePolicy&quot;,
                &quot;iam:PutRolePolicy&quot;
            ],
            &quot;Resource&quot;:&quot;arn:aws:iam::*:role/aws-service-role/s3.data-source.lustre.fsx.amazonaws.com/*&quot;
        },
        {
            &quot;Action&quot;:&quot;iam:CreateServiceLinkedRole&quot;,
            &quot;Effect&quot;:&quot;Allow&quot;,
            &quot;Resource&quot;:&quot;*&quot;,
            &quot;Condition&quot;:{
                &quot;StringLike&quot;:{
                    &quot;iam:AWSServiceName&quot;:[
                        &quot;fsx.amazonaws.com&quot;
                    ]
                }
            }
        },
        {
            &quot;Effect&quot;:&quot;Allow&quot;,
            &quot;Action&quot;:[
                &quot;s3:ListBucket&quot;,
                &quot;fsx:CreateFileSystem&quot;,
                &quot;fsx:DeleteFileSystem&quot;,
                &quot;fsx:DescribeFileSystems&quot;,
                &quot;fsx:TagResource&quot;
            ],
            &quot;Resource&quot;:[
                &quot;*&quot;
            ]
        }
    ]
}
EOF

aws iam create-policy \
        --policy-name Amazon_FSx_Lustre_CSI_Driver \
        --policy-document file://fsx-csi-driver.json
{
    &quot;Policy&quot;: {
        &quot;PolicyName&quot;: &quot;Amazon_FSx_Lustre_CSI_Driver&quot;,
        &quot;PolicyId&quot;: &quot;ANPA3HO3PDUCJUZ5O6GGM&quot;,
        &quot;Arn&quot;: &quot;arn:aws:iam::771943767300:policy/Amazon_FSx_Lustre_CSI_Driver&quot;,
        &quot;Path&quot;: &quot;/&quot;,
        &quot;DefaultVersionId&quot;: &quot;v1&quot;,
        &quot;AttachmentCount&quot;: 0,
        &quot;PermissionsBoundaryUsageCount&quot;: 0,
        &quot;IsAttachable&quot;: true,
        &quot;CreateDate&quot;: &quot;2025-04-19T13:16:10+00:00&quot;,
        &quot;UpdateDate&quot;: &quot;2025-04-19T13:16:10+00:00&quot;
    }
}&lt;/code&gt;&lt;/pre&gt;
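&lt;p data-ke-size=&quot;size16&quot;&gt;Optionally, before calling &lt;code&gt;create-policy&lt;/code&gt;, you can sanity-check the policy document locally; a sketch, assuming python3 is available in the Cloud9 environment:&lt;/p&gt;
&lt;pre class=&quot;dsconfig&quot;&gt;&lt;code&gt;# exits non-zero and prints an error if the JSON is malformed
python3 -m json.tool fsx-csi-driver.json &amp;gt; /dev/null &amp;amp;&amp;amp; echo &quot;valid JSON&quot;&lt;/code&gt;&lt;/pre&gt;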
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now create a Kubernetes service account for the CSI driver and attach the IAM policy to the service account.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;eksctl create iamserviceaccount \
    --region $AWS_REGION \
    --name fsx-csi-controller-sa \
    --namespace kube-system \
    --cluster $CLUSTER_NAME \
    --attach-policy-arn arn:aws:iam::$ACCOUNT_ID:policy/Amazon_FSx_Lustre_CSI_Driver \
    --approve
2025-04-19 13:17:50 [ℹ]  1 iamserviceaccount (kube-system/fsx-csi-controller-sa) was included (based on the include/exclude rules)
2025-04-19 13:17:50 [!]  serviceaccounts that exist in Kubernetes will be excluded, use --override-existing-serviceaccounts to override
2025-04-19 13:17:50 [ℹ]  1 task: { 
    2 sequential sub-tasks: { 
        create IAM role for serviceaccount &quot;kube-system/fsx-csi-controller-sa&quot;,
        create serviceaccount &quot;kube-system/fsx-csi-controller-sa&quot;,
    } }2025-04-19 13:17:50 [ℹ]  building iamserviceaccount stack &quot;eksctl-eksworkshop-addon-iamserviceaccount-kube-system-fsx-csi-controller-sa&quot;
2025-04-19 13:17:50 [ℹ]  deploying stack &quot;eksctl-eksworkshop-addon-iamserviceaccount-kube-system-fsx-csi-controller-sa&quot;
2025-04-19 13:17:50 [ℹ]  waiting for CloudFormation stack &quot;eksctl-eksworkshop-addon-iamserviceaccount-kube-system-fsx-csi-controller-sa&quot;
2025-04-19 13:18:20 [ℹ]  waiting for CloudFormation stack &quot;eksctl-eksworkshop-addon-iamserviceaccount-kube-system-fsx-csi-controller-sa&quot;
2025-04-19 13:18:20 [ℹ]  created serviceaccount &quot;kube-system/fsx-csi-controller-sa&quot;&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Deploy the FSx for Lustre CSI driver.&lt;/p&gt;
&lt;pre class=&quot;pgsql&quot;&gt;&lt;code&gt;kubectl apply -k &quot;github.com/kubernetes-sigs/aws-fsx-csi-driver/deploy/kubernetes/overlays/stable/?ref=release-1.2&quot;
# Warning: 'bases' is deprecated. Please use 'resources' instead. Run 'kustomize edit fix' to update your Kustomization automatically.
Warning: resource serviceaccounts/fsx-csi-controller-sa is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
serviceaccount/fsx-csi-controller-sa configured
serviceaccount/fsx-csi-node-sa created
clusterrole.rbac.authorization.k8s.io/fsx-csi-external-provisioner-role created
clusterrole.rbac.authorization.k8s.io/fsx-csi-node-role created
clusterrole.rbac.authorization.k8s.io/fsx-external-resizer-role created
clusterrolebinding.rbac.authorization.k8s.io/fsx-csi-external-provisioner-binding created
clusterrolebinding.rbac.authorization.k8s.io/fsx-csi-node-getter-binding created
clusterrolebinding.rbac.authorization.k8s.io/fsx-csi-resizer-binding created
deployment.apps/fsx-csi-controller created
daemonset.apps/fsx-csi-node created
csidriver.storage.k8s.io/fsx.csi.aws.com created&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Looking at the pods that were created, the controller runs as a Deployment and fsx-csi-node runs as a DaemonSet.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;kube-system             fsx-csi-controller-6f4c577bd4-hfrgq                               4/4     Running   0               11s
kube-system             fsx-csi-controller-6f4c577bd4-ptvrb                               4/4     Running   0               11s
kube-system             fsx-csi-node-77jff                                                3/3     Running   0               11s
kube-system             fsx-csi-node-xbqfb                                                3/3     Running   0               11s&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Store the ARN of the role created earlier in a variable, and add that ARN as an annotation on the service account created while installing the FSx for Lustre CSI driver.&lt;/p&gt;
&lt;pre class=&quot;elixir&quot;&gt;&lt;code&gt;export ROLE_ARN=$(aws cloudformation describe-stacks --stack-name &quot;eksctl-${CLUSTER_NAME}-addon-iamserviceaccount-kube-system-fsx-csi-controller-sa&quot; --query &quot;Stacks[0].Outputs[0].OutputValue&quot;  --region $AWS_REGION --output text)

kubectl annotate serviceaccount -n kube-system fsx-csi-controller-sa \
 eks.amazonaws.com/role-arn=$ROLE_ARN --overwrite=true

# Verify
kubectl get sa/fsx-csi-controller-sa -n kube-system -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::771943767300:role/eksctl-eksworkshop-addon-iamserviceaccount-ku-Role1-XnHXsBDGvgbs
    kubectl.kubernetes.io/last-applied-configuration: |
      {&quot;apiVersion&quot;:&quot;v1&quot;,&quot;kind&quot;:&quot;ServiceAccount&quot;,&quot;metadata&quot;:{&quot;annotations&quot;:{},&quot;labels&quot;:{&quot;app.kubernetes.io/name&quot;:&quot;aws-fsx-csi-driver&quot;},&quot;name&quot;:&quot;fsx-csi-controller-sa&quot;,&quot;namespace&quot;:&quot;kube-system&quot;}}
  creationTimestamp: &quot;2025-04-19T13:18:20Z&quot;
  labels:
    app.kubernetes.io/managed-by: eksctl
    app.kubernetes.io/name: aws-fsx-csi-driver
  name: fsx-csi-controller-sa
  namespace: kube-system
  resourceVersion: &quot;1085000&quot;
  uid: dcfe09bc-921d-431f-9e3b-87c15e57aa64&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now let's do the work needed to use a PersistentVolume.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This exercise uses static provisioning with an FSx for Lustre instance linked to an Amazon S3 bucket.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Looking in the console, the Amazon S3 bucket and FSx file system have already been created.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Amazon S3 Bucket:&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;2193&quot; data-origin-height=&quot;702&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/2VhKY/btsNrxXXSmh/Rfwrx4T5SH1CqNslKHlhgK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/2VhKY/btsNrxXXSmh/Rfwrx4T5SH1CqNslKHlhgK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/2VhKY/btsNrxXXSmh/Rfwrx4T5SH1CqNslKHlhgK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2F2VhKY%2FbtsNrxXXSmh%2FRfwrx4T5SH1CqNslKHlhgK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2193&quot; height=&quot;702&quot; data-origin-width=&quot;2193&quot; data-origin-height=&quot;702&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;FSx File system:&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;2205&quot; data-origin-height=&quot;433&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/li3bn/btsNrwSfcnE/RbBug7DV1Sq1Mvp0HzVs50/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/li3bn/btsNrwSfcnE/RbBug7DV1Sq1Mvp0HzVs50/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/li3bn/btsNrwSfcnE/RbBug7DV1Sq1Mvp0HzVs50/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fli3bn%2FbtsNrwSfcnE%2FRbBug7DV1Sq1Mvp0HzVs50%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2205&quot; height=&quot;433&quot; data-origin-width=&quot;2205&quot; data-origin-height=&quot;433&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Inspecting the file system shows 1.2 TiB of capacity and 250 MB/s of throughput.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1825&quot; data-origin-height=&quot;596&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/dapYsl/btsNrYAZUC9/A6CQXDStSPOHOTm7S9nao0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/dapYsl/btsNrYAZUC9/A6CQXDStSPOHOTm7S9nao0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/dapYsl/btsNrYAZUC9/A6CQXDStSPOHOTm7S9nao0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FdapYsl%2FbtsNrYAZUC9%2FA6CQXDStSPOHOTm7S9nao0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1825&quot; height=&quot;596&quot; data-origin-width=&quot;1825&quot; data-origin-height=&quot;596&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Moving to the lab folder in Cloud9, the workshop already provides each YAML file.&lt;/p&gt;
&lt;pre class=&quot;stylus&quot;&gt;&lt;code&gt;cd /home/ec2-user/environment/eks/FSxL
ls
fsxL-claim.yaml  fsxL-dynamic-claim.yaml  fsxL-persistent-volume.yaml  fsxL-storage-class.yaml  pod.yaml  pod_performance.yaml&lt;/code&gt;&lt;/pre&gt;
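&lt;p data-ke-size=&quot;size16&quot;&gt;Of these, &lt;code&gt;fsxL-storage-class.yaml&lt;/code&gt; supports the dynamic provisioning path. A StorageClass for the FSx for Lustre CSI driver generally looks like the sketch below; the subnet and security group IDs are placeholders, not values from this lab.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fsx-lustre-sc
provisioner: fsx.csi.aws.com
parameters:
  subnetId: subnet-XXXXXXXX         # placeholder: subnet for the file system ENIs
  securityGroupIds: sg-XXXXXXXX     # placeholder: must allow Lustre traffic (port 988)
  deploymentType: SCRATCH_2
mountOptions:
  - flock&lt;/code&gt;&lt;/pre&gt;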
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Define environment variables for the FSx file system up front, then continue with the exercise.&lt;/p&gt;
&lt;pre class=&quot;reasonml&quot;&gt;&lt;code&gt;FSXL_VOLUME_ID=$(aws fsx describe-file-systems --query 'FileSystems[].FileSystemId' --output text)
DNS_NAME=$(aws fsx describe-file-systems --query 'FileSystems[].DNSName' --output text)
MOUNT_NAME=$(aws fsx describe-file-systems --query 'FileSystems[].LustreConfiguration.MountName' --output text)&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Substitute the variables declared above into the file below.&lt;/p&gt;
&lt;pre class=&quot;properties&quot;&gt;&lt;code&gt;# fsxL-persistent-volume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fsx-pv
spec:
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 1200Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  mountOptions:
    - flock
  csi:
    driver: fsx.csi.aws.com
    volumeHandle: FSXL_VOLUME_ID
    volumeAttributes:
      dnsname: DNS_NAME
      mountname: MOUNT_NAME

sed -i'' -e &quot;s/FSXL_VOLUME_ID/$FSXL_VOLUME_ID/g&quot; fsxL-persistent-volume.yaml
sed -i'' -e &quot;s/DNS_NAME/$DNS_NAME/g&quot; fsxL-persistent-volume.yaml
sed -i'' -e &quot;s/MOUNT_NAME/$MOUNT_NAME/g&quot; fsxL-persistent-volume.yaml

cat fsxL-persistent-volume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fsx-pv
spec:
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 1200Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  mountOptions:
    - flock
  csi:
    driver: fsx.csi.aws.com
    volumeHandle: fs-0fa5aa91d85fd7c00
    volumeAttributes:
      dnsname: fs-0fa5aa91d85fd7c00.fsx.us-west-2.amazonaws.com
      mountname: h4zdbb4v&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Deploy the PV.&lt;/p&gt;
&lt;pre class=&quot;maxima&quot;&gt;&lt;code&gt;kubectl apply -f fsxL-persistent-volume.yaml
persistentvolume/fsx-pv created

kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                                                      STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
fsx-pv                                     1200Gi     RWX            Retain           Available                                                                                             &amp;lt;unset&amp;gt;                          6s
pvc-b15cf809-3b71-449c-aaa1-c0c7a33996ba   50Gi       RWO            Delete           Bound       kube-prometheus-stack/data-prometheus-kube-prometheus-stack-prometheus-0   gp3            &amp;lt;unset&amp;gt;                          2d12h&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Deploy a PVC that references the PV we just created.&lt;/p&gt;
&lt;pre class=&quot;properties&quot;&gt;&lt;code&gt;cat fsxL-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fsx-lustre-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: &quot;&quot;
  resources:
    requests:
      storage: 1200Gi
  volumeName: fsx-pv

kubectl apply -f fsxL-claim.yaml
persistentvolumeclaim/fsx-lustre-claim created

kubectl get pvc
NAME               STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
fsx-lustre-claim   Bound    fsx-pv   1200Gi     RWX                           &amp;lt;unset&amp;gt;                 3s&lt;/code&gt;&lt;/pre&gt;
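&lt;p data-ke-size=&quot;size16&quot;&gt;The claim binds immediately because this is static binding: the empty storageClassName disables dynamic provisioning, and volumeName pins the claim to the pre-created PV. A simplified sketch of the matching rule (the real binder also checks capacity and volumeMode):&lt;/p&gt;

```python
# Simplified sketch of static PV/PVC binding. An empty storageClassName turns
# off dynamic provisioning, and volumeName names the target PV directly.
# The real binder additionally checks capacity and volumeMode.
def can_bind(pv, pvc):
    return (
        pvc["volumeName"] == pv["name"]
        and pvc["storageClassName"] == ""
        and set(pvc["accessModes"]).issubset(set(pv["accessModes"]))
    )

pv = {"name": "fsx-pv", "accessModes": ["ReadWriteMany"]}
pvc = {"volumeName": "fsx-pv", "storageClassName": "",
       "accessModes": ["ReadWriteMany"]}
print(can_bind(pv, pvc))
```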
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;2.3 Deploying the Generative AI Chat Application&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As we saw earlier, Karpenter is already installed in the EKS cluster. Before deploying the vLLM pod onto an AWS Inferentia node, we first create a Karpenter NodePool and EC2NodeClass for AWS Inferentia.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Moving into the lab folder containing the application YAML, the NodePool and EC2NodeClass definitions are there as well.&lt;/p&gt;
&lt;pre class=&quot;jboss-cli&quot;&gt;&lt;code&gt;cd /home/ec2-user/environment/eks/genai

ls
inferentia_nodepool.yaml  mistral-fsxl.yaml  open-webui.yaml&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Review the definitions and deploy them with kubectl.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;cat inferentia_nodepool.yaml 
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: inferentia
  labels:
    intent: genai-apps
    NodeGroupType: inf2-neuron-karpenter
spec:
  template:
    spec:
      taints:
        - key: aws.amazon.com/neuron
          value: &quot;true&quot;
          effect: &quot;NoSchedule&quot;
      requirements:
        - key: &quot;karpenter.k8s.aws/instance-family&quot;
          operator: In
          values: [&quot;inf2&quot;]
        - key: &quot;karpenter.k8s.aws/instance-size&quot;
          operator: In
          values: [ &quot;xlarge&quot;, &quot;2xlarge&quot;, &quot;8xlarge&quot;, &quot;24xlarge&quot;, &quot;48xlarge&quot;]
        - key: &quot;kubernetes.io/arch&quot;
          operator: In
          values: [&quot;amd64&quot;]
        - key: &quot;karpenter.sh/capacity-type&quot;
          operator: In
          values: [&quot;spot&quot;, &quot;on-demand&quot;]
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: inferentia
  limits:
    cpu: 1000
    memory: 1000Gi
  disruption:
    consolidationPolicy: WhenEmpty
    # expireAfter: 720h # 30 * 24h = 720h
    consolidateAfter: 180s
  weight: 100
---
apiVersion: karpenter.k8s.aws/v1
kind: EC2NodeClass
metadata:
  name: inferentia
spec:
  amiFamily: AL2
  amiSelectorTerms:
  - alias: al2@v20240917
  blockDeviceMappings:
    - deviceName: /dev/xvda
      ebs:
        deleteOnTermination: true
        volumeSize: 100Gi
        volumeType: gp3
  role: &quot;Karpenter-eksworkshop&quot; 
  subnetSelectorTerms:          
    - tags:
        karpenter.sh/discovery: &quot;eksworkshop&quot;
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: &quot;eksworkshop&quot;
  tags:
    intent: apps
    managed-by: karpenter

kubectl apply -f inferentia_nodepool.yaml
nodepool.karpenter.sh/inferentia created
ec2nodeclass.karpenter.k8s.aws/inferentia created

kubectl get nodepool,ec2nodeclass inferentia
NAME                               NODECLASS    NODES   READY   AGE
nodepool.karpenter.sh/inferentia   inferentia   0       True    6s

NAME                                        READY   AGE
ec2nodeclass.karpenter.k8s.aws/inferentia   True    6s&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Looking at the NodePool definition, you can see it is pinned to AWS Inferentia Inf2 instances.&lt;/p&gt;
&lt;pre class=&quot;less&quot;&gt;&lt;code&gt;      requirements:
        - key: &quot;karpenter.k8s.aws/instance-family&quot;
          operator: In
          values: [&quot;inf2&quot;]&lt;/code&gt;&lt;/pre&gt;
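&lt;p data-ke-size=&quot;size16&quot;&gt;How such requirements narrow the candidate instance types can be sketched as plain label filtering (a toy model; actual Karpenter provisioning also weighs price, capacity type, and availability):&lt;/p&gt;

```python
# Toy model of Karpenter requirement matching with the "In" operator.
# Real provisioning also considers price, capacity type, and availability.
def matches(requirements, labels):
    for req in requirements:
        if req["operator"] == "In" and labels.get(req["key"]) not in req["values"]:
            return False
    return True

requirements = [
    {"key": "karpenter.k8s.aws/instance-family", "operator": "In",
     "values": ["inf2"]},
    {"key": "karpenter.sh/capacity-type", "operator": "In",
     "values": ["spot", "on-demand"]},
]
inf2 = {"karpenter.k8s.aws/instance-family": "inf2",
        "karpenter.sh/capacity-type": "spot"}
m5 = {"karpenter.k8s.aws/instance-family": "m5",
      "karpenter.sh/capacity-type": "spot"}
print(matches(requirements, inf2), matches(requirements, m5))
```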
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;With this in place, whenever there is a request for GPU resources, Karpenter is ready to scale out nodes from this NodePool.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As we saw earlier, a device plugin is needed to initialize GPU nodes. For this we will install the Neuron Device Plugin and the Neuron Scheduler.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The &lt;b&gt;Neuron Device Plugin&lt;/b&gt; exposes Neuron cores and devices to Kubernetes as resources.&lt;/p&gt;
&lt;pre class=&quot;awk&quot;&gt;&lt;code&gt;kubectl apply -f https://raw.githubusercontent.com/aws-neuron/aws-neuron-sdk/master/src/k8/k8s-neuron-device-plugin-rbac.yml
kubectl apply -f https://raw.githubusercontent.com/aws-neuron/aws-neuron-sdk/master/src/k8/k8s-neuron-device-plugin.yml&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The &lt;b&gt;Neuron Scheduler extension&lt;/b&gt; is required to schedule pods that need more than one Neuron core or device. It filters out nodes whose free core/device IDs are non-contiguous, and assigns contiguous core/device IDs to the pods that require them.&lt;/p&gt;
&lt;pre class=&quot;awk&quot;&gt;&lt;code&gt;kubectl apply -f https://raw.githubusercontent.com/aws-neuron/aws-neuron-sdk/master/src/k8/k8s-neuron-scheduler-eks.yml
kubectl apply -f https://raw.githubusercontent.com/aws-neuron/aws-neuron-sdk/master/src/k8/my-scheduler.yml&lt;/code&gt;&lt;/pre&gt;
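&lt;p data-ke-size=&quot;size16&quot;&gt;The contiguous-ID rule the scheduler extension enforces can be illustrated with a toy allocator (an illustration only, not the actual scheduler code): a pod asking for n cores only fits where n consecutive IDs are free.&lt;/p&gt;

```python
# Toy illustration of the contiguous-core constraint: return the first run of
# n consecutive free IDs, or None when no contiguous run exists.
def find_contiguous(free_ids, n):
    ids = sorted(free_ids)
    for i in range(len(ids) - n + 1):
        window = ids[i:i + n]
        if window[-1] - window[0] == n - 1:
            return window
    return None

# Cores 1 and 4 are busy: a 2-core pod fits on [2, 3]; a 3-core pod does not fit.
print(find_contiguous([0, 2, 3, 5], 2), find_contiguous([0, 2, 3, 5], 3))
```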
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Everything is now ready. Deploy the vLLM application.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;cat mistral-fsxl.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vllm-mistral-inf2-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vllm-mistral-inf2-server
  template:
    metadata:
      labels:
        app: vllm-mistral-inf2-server
    spec:
      tolerations:
      - key: &quot;aws.amazon.com/neuron&quot;
        operator: &quot;Exists&quot;
        effect: &quot;NoSchedule&quot;
      containers:
      - name: inference-server
        image: public.ecr.aws/u3r1l1j7/eks-genai:neuronrayvllm-100G-root
        resources:
          requests:
            aws.amazon.com/neuron: 1
          limits:
            aws.amazon.com/neuron: 1
        args:
        - --model=$(MODEL_ID)
        - --enforce-eager
        - --gpu-memory-utilization=0.96
        - --device=neuron
        - --max-num-seqs=4
        - --tensor-parallel-size=2
        - --max-model-len=10240
        - --served-model-name=mistralai/Mistral-7B-Instruct-v0.2-neuron
        env:
        - name: MODEL_ID
          value: /work-dir/Mistral-7B-Instruct-v0.2/
        - name: NEURON_COMPILE_CACHE_URL
          value: /work-dir/Mistral-7B-Instruct-v0.2/neuron-cache/
        - name: PORT
          value: &quot;8000&quot;
        volumeMounts:
        - name: persistent-storage
          mountPath: &quot;/work-dir&quot;
      volumes:
      - name: persistent-storage
        persistentVolumeClaim:
          claimName: fsx-lustre-claim
---
apiVersion: v1
kind: Service
metadata:
  name: vllm-mistral7b-service
spec:
  selector:
    app: vllm-mistral-inf2-server
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8000

kubectl apply -f mistral-fsxl.yaml
deployment.apps/vllm-mistral-inf2-deployment created
service/vllm-mistral7b-service created&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The container requests one Neuron device (&lt;code&gt;aws.amazon.com/neuron: 1&lt;/code&gt;).&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;      - name: inference-server
        image: public.ecr.aws/u3r1l1j7/eks-genai:neuronrayvllm-100G-root
        resources:
          requests:
            aws.amazon.com/neuron: 1
          limits:
            aws.amazon.com/neuron: 1&lt;/code&gt;&lt;/pre&gt;
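&lt;p data-ke-size=&quot;size16&quot;&gt;Note how this request lines up with the --tensor-parallel-size=2 flag: one Inferentia2 device exposes two NeuronCores (the node capacity shown later confirms neuron: 1 and neuroncore: 2), so a single requested device covers tensor parallelism of two. A quick sanity check:&lt;/p&gt;

```python
# Sanity check: one requested aws.amazon.com/neuron device exposes two
# NeuronCores on inf2 (per the node capacity shown later in this post),
# matching the deployment's --tensor-parallel-size=2 flag.
cores_per_device = 2
devices_requested = 1
tensor_parallel_size = 2
available_cores = cores_per_device * devices_requested
print(available_cores == tensor_parallel_size)
```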
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Since no existing node satisfies the request, the pod sits in Pending for a while, then proceeds to ContainerCreating once the new node comes up.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;kubectl get po 
NAME                                            READY   STATUS    RESTARTS   AGE
kube-ops-view-5d9d967b77-499j6                  1/1     Running   0          2d13h
vllm-mistral-inf2-deployment-7d886c8cc8-65ddn   0/1     Pending   0          29s

kubectl get no -owide
NAME                                         STATUS   ROLES    AGE     VERSION               INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                       KERNEL-VERSION                    CONTAINER-RUNTIME
ip-10-0-104-229.us-west-2.compute.internal   Ready    &amp;lt;none&amp;gt;   2d13h   v1.30.9-eks-5d632ec   10.0.104.229   &amp;lt;none&amp;gt;        Amazon Linux 2023.7.20250331   6.1.131-143.221.amzn2023.x86_64   containerd://1.7.27
ip-10-0-40-187.us-west-2.compute.internal    Ready    &amp;lt;none&amp;gt;   2d13h   v1.30.9-eks-5d632ec   10.0.40.187    &amp;lt;none&amp;gt;        Amazon Linux 2023.7.20250331   6.1.131-143.221.amzn2023.x86_64   containerd://1.7.27
ip-10-0-69-233.us-west-2.compute.internal    Ready    &amp;lt;none&amp;gt;   55s     v1.30.4-eks-a737599   10.0.69.233    &amp;lt;none&amp;gt;        Amazon Linux 2                 5.10.224-212.876.amzn2.x86_64     containerd://1.7.11

kubectl get po 
NAME                                            READY   STATUS              RESTARTS   AGE
kube-ops-view-5d9d967b77-499j6                  1/1     Running             0          2d13h
vllm-mistral-inf2-deployment-7d886c8cc8-65ddn   0/1     ContainerCreating   0          61s

kubectl get po
NAME                                            READY   STATUS              RESTARTS   AGE
kube-ops-view-5d9d967b77-499j6                  1/1     Running             0          2d13h
vllm-mistral-inf2-deployment-7d886c8cc8-65ddn   0/1     ContainerCreating   0          102s&lt;/code&gt;&lt;/pre&gt;
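&lt;p data-ke-size=&quot;size16&quot;&gt;The phase transitions observed above can be summarized as a tiny state function (a simplification; real pod phases depend on more inputs):&lt;/p&gt;

```python
# Simplified view of the observed lifecycle: Pending until Karpenter brings up
# a matching node, ContainerCreating while the image is pulled, then Running.
def pod_phase(node_ready, image_pulled):
    if not node_ready:
        return "Pending"
    if not image_pulled:
        return "ContainerCreating"
    return "Running"

print(pod_phase(False, False), pod_phase(True, False), pod_phase(True, True))
```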
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The Karpenter logs also show the node being provisioned.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;kubectl -n karpenter logs -l app.kubernetes.io/name=karpenter --all-containers=true -f --tail=20
...
{&quot;level&quot;:&quot;INFO&quot;,&quot;time&quot;:&quot;2025-04-19T14:51:45.919Z&quot;,&quot;logger&quot;:&quot;controller&quot;,&quot;message&quot;:&quot;found provisionable pod(s)&quot;,&quot;commit&quot;:&quot;62a726c&quot;,&quot;controller&quot;:&quot;provisioner&quot;,&quot;namespace&quot;:&quot;&quot;,&quot;name&quot;:&quot;&quot;,&quot;reconcileID&quot;:&quot;e18e71e7-dc58-4aab-8556-1bccaeca3ff9&quot;,&quot;Pods&quot;:&quot;default/vllm-mistral-inf2-deployment-7d886c8cc8-65ddn&quot;,&quot;duration&quot;:&quot;60.151148ms&quot;}
{&quot;level&quot;:&quot;INFO&quot;,&quot;time&quot;:&quot;2025-04-19T14:51:45.919Z&quot;,&quot;logger&quot;:&quot;controller&quot;,&quot;message&quot;:&quot;computed new nodeclaim(s) to fit pod(s)&quot;,&quot;commit&quot;:&quot;62a726c&quot;,&quot;controller&quot;:&quot;provisioner&quot;,&quot;namespace&quot;:&quot;&quot;,&quot;name&quot;:&quot;&quot;,&quot;reconcileID&quot;:&quot;e18e71e7-dc58-4aab-8556-1bccaeca3ff9&quot;,&quot;nodeclaims&quot;:1,&quot;pods&quot;:1}
{&quot;level&quot;:&quot;INFO&quot;,&quot;time&quot;:&quot;2025-04-19T14:51:45.929Z&quot;,&quot;logger&quot;:&quot;controller&quot;,&quot;message&quot;:&quot;created nodeclaim&quot;,&quot;commit&quot;:&quot;62a726c&quot;,&quot;controller&quot;:&quot;provisioner&quot;,&quot;namespace&quot;:&quot;&quot;,&quot;name&quot;:&quot;&quot;,&quot;reconcileID&quot;:&quot;e18e71e7-dc58-4aab-8556-1bccaeca3ff9&quot;,&quot;NodePool&quot;:{&quot;name&quot;:&quot;inferentia&quot;},&quot;NodeClaim&quot;:{&quot;name&quot;:&quot;inferentia-ht4ms&quot;},&quot;requests&quot;:{&quot;aws.amazon.com/neuron&quot;:&quot;1&quot;,&quot;cpu&quot;:&quot;210m&quot;,&quot;memory&quot;:&quot;240Mi&quot;,&quot;pods&quot;:&quot;11&quot;},&quot;instance-types&quot;:&quot;inf2.24xlarge, inf2.48xlarge, inf2.8xlarge, inf2.xlarge&quot;}
{&quot;level&quot;:&quot;INFO&quot;,&quot;time&quot;:&quot;2025-04-19T14:51:48.323Z&quot;,&quot;logger&quot;:&quot;controller&quot;,&quot;message&quot;:&quot;launched nodeclaim&quot;,&quot;commit&quot;:&quot;62a726c&quot;,&quot;controller&quot;:&quot;nodeclaim.lifecycle&quot;,&quot;controllerGroup&quot;:&quot;karpenter.sh&quot;,&quot;controllerKind&quot;:&quot;NodeClaim&quot;,&quot;NodeClaim&quot;:{&quot;name&quot;:&quot;inferentia-ht4ms&quot;},&quot;namespace&quot;:&quot;&quot;,&quot;name&quot;:&quot;inferentia-ht4ms&quot;,&quot;reconcileID&quot;:&quot;5f90ead5-d4f2-43fa-a9a7-2bc0ad466878&quot;,&quot;provider-id&quot;:&quot;aws:///us-west-2b/i-0c4f2fac2f6fd8d90&quot;,&quot;instance-type&quot;:&quot;inf2.xlarge&quot;,&quot;zone&quot;:&quot;us-west-2b&quot;,&quot;capacity-type&quot;:&quot;spot&quot;,&quot;allocatable&quot;:{&quot;aws.amazon.com/neuron&quot;:&quot;1&quot;,&quot;cpu&quot;:&quot;3920m&quot;,&quot;ephemeral-storage&quot;:&quot;89Gi&quot;,&quot;memory&quot;:&quot;14162Mi&quot;,&quot;pods&quot;:&quot;58&quot;,&quot;vpc.amazonaws.com/pod-eni&quot;:&quot;18&quot;}}
{&quot;level&quot;:&quot;INFO&quot;,&quot;time&quot;:&quot;2025-04-19T14:52:08.417Z&quot;,&quot;logger&quot;:&quot;controller&quot;,&quot;message&quot;:&quot;registered nodeclaim&quot;,&quot;commit&quot;:&quot;62a726c&quot;,&quot;controller&quot;:&quot;nodeclaim.lifecycle&quot;,&quot;controllerGroup&quot;:&quot;karpenter.sh&quot;,&quot;controllerKind&quot;:&quot;NodeClaim&quot;,&quot;NodeClaim&quot;:{&quot;name&quot;:&quot;inferentia-ht4ms&quot;},&quot;namespace&quot;:&quot;&quot;,&quot;name&quot;:&quot;inferentia-ht4ms&quot;,&quot;reconcileID&quot;:&quot;bce3cedc-2ccc-4daf-a85b-d07300c00468&quot;,&quot;provider-id&quot;:&quot;aws:///us-west-2b/i-0c4f2fac2f6fd8d90&quot;,&quot;Node&quot;:{&quot;name&quot;:&quot;ip-10-0-69-233.us-west-2.compute.internal&quot;}}
{&quot;level&quot;:&quot;INFO&quot;,&quot;time&quot;:&quot;2025-04-19T14:52:26.450Z&quot;,&quot;logger&quot;:&quot;controller&quot;,&quot;message&quot;:&quot;initialized nodeclaim&quot;,&quot;commit&quot;:&quot;62a726c&quot;,&quot;controller&quot;:&quot;nodeclaim.lifecycle&quot;,&quot;controllerGroup&quot;:&quot;karpenter.sh&quot;,&quot;controllerKind&quot;:&quot;NodeClaim&quot;,&quot;NodeClaim&quot;:{&quot;name&quot;:&quot;inferentia-ht4ms&quot;},&quot;namespace&quot;:&quot;&quot;,&quot;name&quot;:&quot;inferentia-ht4ms&quot;,&quot;reconcileID&quot;:&quot;7cf15a5c-0afb-414e-af31-e22ea03b2714&quot;,&quot;provider-id&quot;:&quot;aws:///us-west-2b/i-0c4f2fac2f6fd8d90&quot;,&quot;Node&quot;:{&quot;name&quot;:&quot;ip-10-0-69-233.us-west-2.compute.internal&quot;},&quot;allocatable&quot;:{&quot;aws.amazon.com/neuron&quot;:&quot;1&quot;,&quot;aws.amazon.com/neuroncore&quot;:&quot;2&quot;,&quot;aws.amazon.com/neurondevice&quot;:&quot;1&quot;,&quot;cpu&quot;:&quot;3920m&quot;,&quot;ephemeral-storage&quot;:&quot;95551679124&quot;,&quot;hugepages-1Gi&quot;:&quot;0&quot;,&quot;hugepages-2Mi&quot;:&quot;0&quot;,&quot;memory&quot;:&quot;14992800Ki&quot;,&quot;pods&quot;:&quot;58&quot;}}
{&quot;level&quot;:&quot;INFO&quot;,&quot;time&quot;:&quot;2025-04-17T01:41:44.291Z&quot;,&quot;logger&quot;:&quot;controller.controller-runtime.metrics&quot;,&quot;message&quot;:&quot;Starting metrics server&quot;,&quot;commit&quot;:&quot;62a726c&quot;}
{&quot;level&quot;:&quot;INFO&quot;,&quot;time&quot;:&quot;2025-04-17T01:41:44.291Z&quot;,&quot;logger&quot;:&quot;controller&quot;,&quot;message&quot;:&quot;starting server&quot;,&quot;commit&quot;:&quot;62a726c&quot;,&quot;name&quot;:&quot;health probe&quot;,&quot;addr&quot;:&quot;[::]:8081&quot;}
{&quot;level&quot;:&quot;INFO&quot;,&quot;time&quot;:&quot;2025-04-17T01:41:44.292Z&quot;,&quot;logger&quot;:&quot;controller.controller-runtime.metrics&quot;,&quot;message&quot;:&quot;Serving metrics server&quot;,&quot;commit&quot;:&quot;62a726c&quot;,&quot;bindAddress&quot;:&quot;:8080&quot;,&quot;secure&quot;:false}
{&quot;level&quot;:&quot;INFO&quot;,&quot;time&quot;:&quot;2025-04-17T01:41:44.393Z&quot;,&quot;logger&quot;:&quot;controller&quot;,&quot;message&quot;:&quot;attempting to acquire leader lease karpenter/karpenter-leader-election...&quot;,&quot;commit&quot;:&quot;62a726c&quot;}&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The neuron-device-plugin is running on the new node, and describing the node shows its Neuron resources.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt; kubectl get po -A -owide |grep ip-10-0-69-233.us-west-2.compute.internal
default                 vllm-mistral-inf2-deployment-7d886c8cc8-65ddn                     1/1     Running   0               4m39s   10.0.80.165    ip-10-0-69-233.us-west-2.compute.internal    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-prometheus-stack   kube-prometheus-stack-prometheus-node-exporter-b9vtq              1/1     Running   0               4m15s   10.0.69.233    ip-10-0-69-233.us-west-2.compute.internal    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system             aws-node-9np65                                                    2/2     Running   0               4m16s   10.0.69.233    ip-10-0-69-233.us-west-2.compute.internal    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system             ebs-csi-node-qbkkz                                                3/3     Running   0               4m16s   10.0.80.162    ip-10-0-69-233.us-west-2.compute.internal    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system             eks-pod-identity-agent-cmrp2                                      1/1     Running   0               4m16s   10.0.69.233    ip-10-0-69-233.us-west-2.compute.internal    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system             fsx-csi-node-h4tzg                                                3/3     Running   0               4m16s   10.0.80.163    ip-10-0-69-233.us-west-2.compute.internal    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system             kube-proxy-7gpfd                                                  1/1     Running   0               4m16s   10.0.69.233    ip-10-0-69-233.us-west-2.compute.internal    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system             neuron-device-plugin-4bmlz                                        1/1     Running   0               4m2s    10.0.80.160    ip-10-0-69-233.us-west-2.compute.internal    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system             neuron-device-plugin-daemonset-5tgll                              1/1     Running   0               4m2s    10.0.80.161    ip-10-0-69-233.us-west-2.compute.internal    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
nvidia-device-plugin    nvidia-device-plugin-node-feature-discovery-worker-zkqv4          1/1     Running   0               4m16s   10.0.80.164    ip-10-0-69-233.us-west-2.compute.internal    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;

kubectl describe node ip-10-0-69-233.us-west-2.compute.internal |grep -A 10 Capacity:
Capacity:
  aws.amazon.com/neuron:        1
  aws.amazon.com/neuroncore:    2
  aws.amazon.com/neurondevice:  1
  cpu:                          4
  ephemeral-storage:            104845292Ki
  hugepages-1Gi:                0
  hugepages-2Mi:                0
  memory:                       16009632Ki
  pods:                         58&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To run inference against the vLLM pod, we will now create a WebUI.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt; cat open-webui.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: open-webui-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: open-webui-server
  template:
    metadata:
      labels:
        app: open-webui-server
    spec:
      containers:
      - name: open-webui
        image: kopi/openwebui
        env:
        - name: WEBUI_AUTH
          value: &quot;False&quot;
        - name: OPENAI_API_KEY
          value: &quot;xxx&quot;
        - name: OPENAI_API_BASE_URL
          value: &quot;http://vllm-mistral7b-service/v1&quot;
---
apiVersion: v1
kind: Service
metadata:
  name: open-webui-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
spec:
  selector:
    app: open-webui-server
  # type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: open-webui-ingress
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/healthcheck-path: /
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: '10'
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '9'
    alb.ingress.kubernetes.io/healthy-threshold-count: '2'
    alb.ingress.kubernetes.io/unhealthy-threshold-count: '10'
    alb.ingress.kubernetes.io/success-codes: '200-302'
    alb.ingress.kubernetes.io/load-balancer-name: open-webui-ingress
  labels:
    app: open-webui-ingress
spec:
  ingressClassName: alb
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: open-webui-service
            port: 
              number: 80

kubectl apply -f open-webui.yaml
deployment.apps/open-webui-deployment created
service/open-webui-service created
ingress.networking.k8s.io/open-webui-ingress created&lt;/code&gt;&lt;/pre&gt;
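&lt;p data-ke-size=&quot;size16&quot;&gt;Open WebUI reaches the model through vLLM's OpenAI-compatible API (the OPENAI_API_BASE_URL above). A sketch of the chat completion request body such a client sends; the URL is the in-cluster Service address and is only reachable from inside the cluster:&lt;/p&gt;

```python
import json

# Sketch of an OpenAI-style chat completion request aimed at the in-cluster
# vLLM Service. Illustrative only; the endpoint is cluster-internal.
url = "http://vllm-mistral7b-service/v1/chat/completions"
body = {
    "model": "mistralai/Mistral-7B-Instruct-v0.2-neuron",
    "messages": [{"role": "user", "content": "Hello!"}],
    "max_tokens": 64,
}
payload = json.dumps(body)
print(url)
print(payload)
```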
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Check the created Ingress and access its address in a web browser.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;kubectl get ing
NAME                 CLASS   HOSTS   ADDRESS                                                     PORTS   AGE
open-webui-ingress   alb     *       open-webui-ingress-1559589477.us-west-2.elb.amazonaws.com   80      57s&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Opening it shows the Open WebUI screen.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;2215&quot; data-origin-height=&quot;1293&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bRqj7t/btsNspq0jm4/GgUYFwRW4koR9Kus4Il8X1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bRqj7t/btsNspq0jm4/GgUYFwRW4koR9Kus4Il8X1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bRqj7t/btsNspq0jm4/GgUYFwRW4koR9Kus4Il8X1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbRqj7t%2FbtsNspq0jm4%2FGgUYFwRW4koR9Kus4Il8X1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2215&quot; height=&quot;1293&quot; data-origin-width=&quot;2215&quot; data-origin-height=&quot;1293&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;From the model selector you can choose the Mistral-7B model being served by the vLLM pod.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;986&quot; data-origin-height=&quot;257&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/br3SSZ/btsNryWOuMt/8faNF8EEFlGoWLFtzAqoVk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/br3SSZ/btsNryWOuMt/8faNF8EEFlGoWLFtzAqoVk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/br3SSZ/btsNryWOuMt/8faNF8EEFlGoWLFtzAqoVk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fbr3SSZ%2FbtsNryWOuMt%2F8faNF8EEFlGoWLFtzAqoVk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;986&quot; height=&quot;257&quot; data-origin-width=&quot;986&quot; data-origin-height=&quot;257&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Trying the chat, responses come back normally.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;2213&quot; data-origin-height=&quot;606&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bito6Q/btsNsNSIy98/V2cjQrQDkF26SKcEP2TAM0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bito6Q/btsNsNSIy98/V2cjQrQDkF26SKcEP2TAM0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bito6Q/btsNsNSIy98/V2cjQrQDkF26SKcEP2TAM0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fbito6Q%2FbtsNsNSIy98%2FV2cjQrQDkF26SKcEP2TAM0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2213&quot; height=&quot;606&quot; data-origin-width=&quot;2213&quot; data-origin-height=&quot;606&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This concludes the hands-on based on the &lt;b&gt;Build GenAI &amp;amp; ML for Performance and Scale, using Amazon EKS, Amazon FSx and AWS Inferentia&lt;/b&gt; workshop.&lt;/p&gt;</description>
      <category>EKS</category>
      <category>device plugin</category>
      <category>fsx for lustre</category>
      <category>GPU</category>
      <category>inferentia</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/46</guid>
      <comments>https://a-person.tistory.com/46#entry46comment</comments>
      <pubDate>Sun, 20 Apr 2025 00:31:12 +0900</pubDate>
    </item>
    <item>
      <title>Vault를 활용한 쿠버네티스 Secret 관리</title>
      <link>https://a-person.tistory.com/45</link>
<description>&lt;p data-ke-size=&quot;size16&quot;&gt;In this post, we look at how to use Vault for Secret management in a Kubernetes environment.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the hands-on we store Secrets in Vault and reference them from an application. There are three ways to consume Vault Secrets:&lt;/p&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;The Vault Sidecar Agent Injector&lt;/li&gt;
&lt;li&gt;The Vault Container Storage Interface provider&lt;/li&gt;
&lt;li&gt;The Vault Secrets Operator&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;We will walk through each of these hands-on.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Table of Contents&lt;/h2&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;Lab environment setup&lt;/li&gt;
&lt;li&gt;Vault Sidecar Agent Injector hands-on&lt;/li&gt;
&lt;li&gt;Vault CSI Driver hands-on&lt;/li&gt;
&lt;li&gt;Vault Secrets Operator hands-on&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;1. Lab Environment Setup&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The lab runs on a Kubernetes environment created with kind.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Check the current state before deploying the cluster
docker ps
mkdir cicd-labs
cd cicd-labs

# Find and set your WSL2 Ubuntu eth0 IP
ip -br -c a

MyIP=&amp;lt;your WSL2 Ubuntu eth0 IP&amp;gt;
MyIP=172.28.157.42

# Create the file below in the cicd-labs directory
cat &amp;gt; kind-3node.yaml &amp;lt;&amp;lt;EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  apiServerAddress: &quot;$MyIP&quot;
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30000
    hostPort: 30000
  - containerPort: 30001
    hostPort: 30001
  - containerPort: 30002
    hostPort: 30002
  - containerPort: 30003
    hostPort: 30003
  - containerPort: 30004
    hostPort: 30004
  - containerPort: 30005
    hostPort: 30005
- role: worker
- role: worker
EOF
kind create cluster --config kind-3node.yaml --name myk8s --image kindest/node:v1.32.2

# Verify
kind get nodes --name myk8s
myk8s-worker2
myk8s-control-plane
myk8s-worker

# kind creates and uses its own Docker network (default: 172.18.0.0/16)
docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
9b31821d4e20   bridge    bridge    local
31ba753a11a7   host      host      local
d91da96d2114   kind      bridge    local
f2e13485b121   none      null      local
docker inspect kind |jq
[
  {
    &quot;Name&quot;: &quot;kind&quot;,
    &quot;Id&quot;: &quot;d91da96d2114ac123cfadc41d90d4ef2be33b049c1e369b4acaa38249d367a4d&quot;,
    &quot;Created&quot;: &quot;2025-03-27T22:07:47.108546778+09:00&quot;,
    &quot;Scope&quot;: &quot;local&quot;,
    &quot;Driver&quot;: &quot;bridge&quot;,
    &quot;EnableIPv6&quot;: true,
    &quot;IPAM&quot;: {
      &quot;Driver&quot;: &quot;default&quot;,
      &quot;Options&quot;: {},
      &quot;Config&quot;: [
        {
          &quot;Subnet&quot;: &quot;172.18.0.0/16&quot;,
          &quot;Gateway&quot;: &quot;172.18.0.1&quot;
        },
        {
          &quot;Subnet&quot;: &quot;fc00:f853:ccd:e793::/64&quot;,
          &quot;Gateway&quot;: &quot;fc00:f853:ccd:e793::1&quot;
        }
      ]
    },
    &quot;Internal&quot;: false,
    &quot;Attachable&quot;: false,
    &quot;Ingress&quot;: false,
    &quot;ConfigFrom&quot;: {
      &quot;Network&quot;: &quot;&quot;
    },
    &quot;ConfigOnly&quot;: false,
    &quot;Containers&quot;: {
      &quot;10a9d53e4bd22ce9fc210c6693f8087185686eb51aa9d58696e0d95dafa24b6c&quot;: {
        &quot;Name&quot;: &quot;myk8s-worker2&quot;,
        &quot;EndpointID&quot;: &quot;b3cb0b47353edc3e6aaa258bd6729d94ea648bc7fcba661fa6c1287e7f349923&quot;,
        &quot;MacAddress&quot;: &quot;02:42:ac:12:00:03&quot;,
        &quot;IPv4Address&quot;: &quot;172.18.0.3/16&quot;,
        &quot;IPv6Address&quot;: &quot;fc00:f853:ccd:e793::3/64&quot;
      },
      &quot;595658837c5ff4934afbd6a2bcf1c23047490e22825e78697ff311a92cef88d9&quot;: {
        &quot;Name&quot;: &quot;myk8s-worker&quot;,
        &quot;EndpointID&quot;: &quot;6d011d2675ca64750be2c1c35a9cdff22f714de2ca72bebee05bc62e11686205&quot;,
        &quot;MacAddress&quot;: &quot;02:42:ac:12:00:04&quot;,
        &quot;IPv4Address&quot;: &quot;172.18.0.4/16&quot;,
        &quot;IPv6Address&quot;: &quot;fc00:f853:ccd:e793::4/64&quot;
      },
      &quot;ae13bcf2982f31e171f89953a89471524480e994c924349162d31e7795e0c369&quot;: {
        &quot;Name&quot;: &quot;myk8s-control-plane&quot;,
        &quot;EndpointID&quot;: &quot;e35cfd5f5280307273fc4a6c26fd8311b3dcb32d1fb3d9dd0f55f2ad32f0b515&quot;,
        &quot;MacAddress&quot;: &quot;02:42:ac:12:00:02&quot;,
        &quot;IPv4Address&quot;: &quot;172.18.0.2/16&quot;,
        &quot;IPv6Address&quot;: &quot;fc00:f853:ccd:e793::2/64&quot;
      }
    },
    &quot;Options&quot;: {
      &quot;com.docker.network.bridge.enable_ip_masquerade&quot;: &quot;true&quot;,
      &quot;com.docker.network.driver.mtu&quot;: &quot;1500&quot;
    },
    &quot;Labels&quot;: {}
  }
]

# Check the Kubernetes API server address
kubectl cluster-info
Kubernetes control plane is running at https://172.28.157.42:41243
CoreDNS is running at https://172.28.157.42:41243/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

# Check node info: the CRI is containerd
kubectl get node -o wide
NAME                  STATUS   ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION                       CONTAINER-RUNTIME
myk8s-control-plane   Ready    control-plane   12m   v1.32.2   172.18.0.2    &amp;lt;none&amp;gt;        Debian GNU/Linux 12 (bookworm)   5.15.167.4-microsoft-standard-WSL2   containerd://2.0.3
myk8s-worker          Ready    &amp;lt;none&amp;gt;          11m   v1.32.2   172.18.0.4    &amp;lt;none&amp;gt;        Debian GNU/Linux 12 (bookworm)   5.15.167.4-microsoft-standard-WSL2   containerd://2.0.3
myk8s-worker2         Ready    &amp;lt;none&amp;gt;          11m   v1.32.2   172.18.0.3    &amp;lt;none&amp;gt;        Debian GNU/Linux 12 (bookworm)   5.15.167.4-microsoft-standard-WSL2   containerd://2.0.3

# Check pod info: the CNI is kindnet
kubectl get pod -A -o wide
NAMESPACE            NAME                                          READY   STATUS    RESTARTS   AGE   IP           NODE                  NOMINATED NODE   READINESS GATES
kube-system          coredns-668d6bf9bc-cxqxb                      1/1     Running   0          12m   10.244.0.4   myk8s-control-plane   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system          coredns-668d6bf9bc-g45j9                      1/1     Running   0          12m   10.244.0.3   myk8s-control-plane   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system          etcd-myk8s-control-plane                      1/1     Running   0          12m   172.18.0.2   myk8s-control-plane   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system          kindnet-fclmw                                 1/1     Running   0          12m   172.18.0.3   myk8s-worker2         &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system          kindnet-gfxg4                                 1/1     Running   0          12m   172.18.0.2   myk8s-control-plane   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system          kindnet-vvqvp                                 1/1     Running   0          12m   172.18.0.4   myk8s-worker          &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system          kube-apiserver-myk8s-control-plane            1/1     Running   0          12m   172.18.0.2   myk8s-control-plane   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system          kube-controller-manager-myk8s-control-plane   1/1     Running   0          12m   172.18.0.2   myk8s-control-plane   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system          kube-proxy-7sgcd                              1/1     Running   0          12m   172.18.0.4   myk8s-worker          &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system          kube-proxy-8xck8                              1/1     Running   0          12m   172.18.0.3   myk8s-worker2         &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system          kube-proxy-tpvwq                              1/1     Running   0          12m   172.18.0.2   myk8s-control-plane   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system          kube-scheduler-myk8s-control-plane            1/1     Running   0          12m   172.18.0.2   myk8s-control-plane   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
local-path-storage   local-path-provisioner-7dc846544d-p9l6q       1/1     Running   0          12m   10.244.0.2   myk8s-control-plane   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;


# Check the control-plane/worker node containers: the Docker container names are myk8s-control-plane and myk8s-worker/worker2
docker ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED          STATUS          PORTS                                                                                           NAMES
10a9d53e4bd2   kindest/node:v1.32.2   &quot;/usr/local/bin/entr&amp;hellip;&quot;   13 minutes ago   Up 13 minutes                                                                                                   myk8s-worker2
ae13bcf2982f   kindest/node:v1.32.2   &quot;/usr/local/bin/entr&amp;hellip;&quot;   13 minutes ago   Up 13 minutes   0.0.0.0:30000-30004-&amp;gt;30000-30004/tcp, 172.28.157.42:41243-&amp;gt;6443/tcp, 0.0.0.0:30006-&amp;gt;30005/tcp   myk8s-control-plane
595658837c5f   kindest/node:v1.32.2   &quot;/usr/local/bin/entr&amp;hellip;&quot;   13 minutes ago   Up 13 minutes                                                                                                   myk8s-worker&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now let's install Vault into the Kubernetes cluster we created.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Create a Kubernetes namespace.
kubectl create namespace vault

# Setup Helm repo
helm repo add hashicorp https://helm.releases.hashicorp.com

# Check that you have access to the chart.
helm search repo hashicorp/vault
NAME                                    CHART VERSION   APP VERSION     DESCRIPTION
hashicorp/vault                         0.30.0          1.19.0          Official HashiCorp Vault Chart
hashicorp/vault-secrets-gateway         0.0.2           0.1.0           A Helm chart for Kubernetes
hashicorp/vault-secrets-operator        0.10.0          0.10.0          Official Vault Secrets Operator Chart&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Create the values file to use with the Helm chart.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF &amp;gt; vault-values.yaml
global:
  enabled: true
  tlsDisable: true  # Disable TLS for demo purposes

server:
  image:
    repository: &quot;hashicorp/vault&quot;
    tag: &quot;1.19.0&quot;
  standalone:
    enabled: true
    replicas: 1
    config: |
      ui = true

      listener &quot;tcp&quot; {
        address = &quot;[::]:8200&quot;
        cluster_address = &quot;[::]:8201&quot;
        tls_disable = 1
      }

      storage &quot;file&quot; {
        path = &quot;/vault/data&quot;
      }

  service:
    enabled: true
    type: NodePort
    port: 8200
    targetPort: 8200
    nodePort: 30000   # Use one of the ports exposed by Kind

injector:
  enabled: true

csi:
  enabled: true
EOF&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Since this is a test environment, Vault is configured in standalone mode.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The injector is enabled so that the sidecar injector can be used, and csi is enabled as well to test the CSI driver.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now deploy Vault with Helm.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# Run Helm install
helm upgrade vault hashicorp/vault -n vault -f vault-values.yaml --install

# Verify the deployment
kubectl get pods,svc,pvc -n vault
NAME                                        READY   STATUS    RESTARTS   AGE
pod/vault-0                                 0/1     Running   0          4m45s
pod/vault-agent-injector-56459c7545-fnv9t   1/1     Running   0          4m45s
pod/vault-csi-provider-79rg5                2/2     Running   0          4m45s
pod/vault-csi-provider-x58j8                2/2     Running   0          4m45s

NAME                               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                         AGE
service/vault                      NodePort    10.96.116.44   &amp;lt;none&amp;gt;        8200:30000/TCP,8201:31459/TCP   4m45s
service/vault-agent-injector-svc   ClusterIP   10.96.166.73   &amp;lt;none&amp;gt;        443/TCP                         4m45s
service/vault-internal             ClusterIP   None           &amp;lt;none&amp;gt;        8200/TCP,8201/TCP               4m45s

NAME                                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
persistentvolumeclaim/data-vault-0   Bound    pvc-e5065e5d-dc6e-43ef-a647-da4b0685a79e   10Gi       RWO            standard       &amp;lt;unset&amp;gt;                 4m45s&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Looking at the deployed resources, vault-0 is running as a StatefulSet, and the vault-agent-injector-56459c7545-fnv9t pod (enabled via the chart options) will perform the sidecar injection later. The vault-csi-provider pods are also running.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Note that vault-0 is not READY yet (0/1). Deploying Vault does not make it immediately usable: until it is initialized, it remains Sealed by default.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Check the status and proceed with initialization as follows.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# Check the Sealed state with the vault status command
kubectl exec -it vault-0 -n vault -- vault status
Key                Value
---                -----
Seal Type          shamir
Initialized        false
Sealed             true
Total Shares       0
Threshold          0
Unseal Progress    0/0
Unseal Nonce       n/a
Version            1.19.0
Build Date         2025-03-04T12:36:40Z
Storage Type       file
HA Enabled         false
command terminated with exit code 2&lt;/code&gt;&lt;/pre&gt;
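&lt;p data-ke-size=&quot;size16&quot;&gt;The trailing &lt;code&gt;command terminated with exit code 2&lt;/code&gt; is expected: &lt;code&gt;vault status&lt;/code&gt; encodes the seal state in its exit code (0 = unsealed, 1 = error, 2 = sealed), so scripts can branch on the state without parsing the table. A minimal sketch:&lt;/p&gt;

```shell
# `vault status` exits 0 when unsealed, 2 when sealed, and 1 on error,
# so the seal state can be checked without parsing the table output.
seal_state() {
  "$@" > /dev/null 2>&1
  case $? in
    0) echo "unsealed" ;;
    2) echo "sealed" ;;
    *) echo "error" ;;
  esac
}
# Usage against the pod deployed above:
#   seal_state kubectl exec vault-0 -n vault -- vault status
```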
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In this exercise, the script below obtains the unseal key and then submits it to unseal the newly initialized Vault.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF &amp;gt; init-unseal.sh
#!/bin/bash

# Vault pod name
VAULT_POD=&quot;vault-0&quot;

# Vault command wrapper
VAULT_CMD=&quot;kubectl exec -it \$VAULT_POD -n vault -- vault&quot;

# Output files
VAULT_KEYS_FILE=&quot;./vault-keys.txt&quot;
UNSEAL_KEY_FILE=&quot;./vault-unseal-key.txt&quot;
ROOT_TOKEN_FILE=&quot;./vault-root-token.txt&quot;

# Initialize Vault (configured to generate only one unseal key)
\$VAULT_CMD operator init -key-shares=1 -key-threshold=1 | sed \$'s/\\x1b\\[[0-9;]*m//g' | tr -d '\r' &amp;gt; &quot;\$VAULT_KEYS_FILE&quot;

# Extract the unseal key / root token
grep 'Unseal Key 1:' &quot;\$VAULT_KEYS_FILE&quot; | awk -F': ' '{print \$2}' &amp;gt; &quot;\$UNSEAL_KEY_FILE&quot;
grep 'Initial Root Token:' &quot;\$VAULT_KEYS_FILE&quot; | awk -F': ' '{print \$2}' &amp;gt; &quot;\$ROOT_TOKEN_FILE&quot;

# Perform the unseal
UNSEAL_KEY=\$(cat &quot;\$UNSEAL_KEY_FILE&quot;)
\$VAULT_CMD operator unseal &quot;\$UNSEAL_KEY&quot;

# Print the results
echo &quot;[ ] Vault Unsealed!&quot;
echo &quot;[ ] Root Token: \$(cat \$ROOT_TOKEN_FILE)&quot;
EOF

# Make the script executable
chmod +x init-unseal.sh

# Run it
./init-unseal.sh
Key             Value
---             -----
Seal Type       shamir
Initialized     true
Sealed          false
Total Shares    1
Threshold       1
Version         1.19.0
Build Date      2025-03-04T12:36:40Z
Storage Type    file
Cluster Name    vault-cluster-74a36fbf
Cluster ID      46a5b7ff-288c-eb37-408b-bf8eebd7960c
HA Enabled      false
[ ] Vault Unsealed!
[ ] Root Token: hvs.aRGCNIlhHVcy2VuDf2afTKCm&lt;/code&gt;&lt;/pre&gt;
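&lt;p data-ke-size=&quot;size16&quot;&gt;As an alternative to scraping the table output (the &lt;code&gt;sed&lt;/code&gt;/&lt;code&gt;tr&lt;/code&gt; ANSI cleanup above), &lt;code&gt;vault operator init&lt;/code&gt; also accepts &lt;code&gt;-format=json&lt;/code&gt;. A sketch of extracting the same values, assuming &lt;code&gt;jq&lt;/code&gt; is installed (the helper names are illustrative):&lt;/p&gt;

```shell
# Sketch, assuming jq is installed: `vault operator init -format=json` emits
# the unseal keys and root token as JSON, avoiding the ANSI/grep cleanup above.
# extract_unseal_key/extract_root_token are illustrative helper names.
extract_unseal_key() { jq -r '.unseal_keys_b64[0]' "$1"; }
extract_root_token() { jq -r '.root_token' "$1"; }
# Usage:
#   kubectl exec vault-0 -n vault -- vault operator init \
#     -key-shares=1 -key-threshold=1 -format=json > vault-keys.json
#   extract_unseal_key vault-keys.json
```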
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now verify the Unsealed state with the &lt;code&gt;vault status&lt;/code&gt; command.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;kubectl exec -it vault-0 -n vault -- vault status
Key             Value
---             -----
Seal Type       shamir
Initialized     true
Sealed          false
Total Shares    1
Threshold       1
Version         1.19.0
Build Date      2025-03-04T12:36:40Z
Storage Type    file
Cluster Name    vault-cluster-74a36fbf
Cluster ID      46a5b7ff-288c-eb37-408b-bf8eebd7960c
HA Enabled      false&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;After the unseal completes, listing the pods shows that vault-0 is now READY.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;kubectl get po -n vault
NAME                                    READY   STATUS    RESTARTS   AGE
vault-0                                 1/1     Running   0          10m
vault-agent-injector-56459c7545-fnv9t   1/1     Running   0          10m
vault-csi-provider-79rg5                2/2     Running   0          10m
vault-csi-provider-x58j8                2/2     Running   0          10m&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Once initialization is complete, the UI is accessible as well. Use the root token obtained above.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;921&quot; data-origin-height=&quot;722&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/FCSH8/btsNg6L9fX0/dAOYpm8BtsDzFRdQjygwG0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/FCSH8/btsNg6L9fX0/dAOYpm8BtsDzFRdQjygwG0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/FCSH8/btsNg6L9fX0/dAOYpm8BtsDzFRdQjygwG0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FFCSH8%2FbtsNg6L9fX0%2FdAOYpm8BtsDzFRdQjygwG0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;921&quot; height=&quot;722&quot; data-origin-width=&quot;921&quot; data-origin-height=&quot;722&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;After logging in, the Vault UI looks like this.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1695&quot; data-origin-height=&quot;1286&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/mmTAQ/btsNiIlcVHq/072AJSrk2d456xwKUeAk7k/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/mmTAQ/btsNiIlcVHq/072AJSrk2d456xwKUeAk7k/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/mmTAQ/btsNiIlcVHq/072AJSrk2d456xwKUeAk7k/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FmmTAQ%2FbtsNiIlcVHq%2F072AJSrk2d456xwKUeAk7k%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1695&quot; height=&quot;1286&quot; data-origin-width=&quot;1695&quot; data-origin-height=&quot;1286&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Next, install the Vault CLI in the WSL environment as shown below.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://developer.hashicorp.com/vault/install#linux&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://developer.hashicorp.com/vault/install#linux&lt;/a&gt;&lt;/p&gt;
&lt;pre class=&quot;sql&quot;&gt;&lt;code&gt;wget -O - https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo &quot;deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main&quot; | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update &amp;amp;&amp;amp; sudo apt install vault

# Verify
export VAULT_ADDR='http://localhost:30000'
vault status
Key             Value
---             -----
Seal Type       shamir
Initialized     true
Sealed          false
Total Shares    1
Threshold       1
Version         1.19.0
Build Date      2025-03-04T12:36:40Z
Storage Type    file
Cluster Name    vault-cluster-74a36fbf
Cluster ID      46a5b7ff-288c-eb37-408b-bf8eebd7960c
HA Enabled      false

vault login
Token (will be hidden):
Success! You are now authenticated. The token information displayed below
is already stored in the token helper. You do NOT need to run &quot;vault login&quot;
again. Future Vault requests will automatically use this token.

Key                  Value
---                  -----
token                hvs.aRGCNIlhHVcy2VuDf2afTKCm
token_accessor       LckN5gmpyUcQPxKYFGWIS3KU
token_duration       &amp;infin;
token_renewable      false
token_policies       [&quot;root&quot;]
identity_policies    []
policies             [&quot;root&quot;]&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Vault can serve a variety of purposes, such as static secrets, dynamic secrets, data encryption, leasing and renewal, and revocation.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For the upcoming static-secret exercise, let's enable Vault's key-value engine.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This exercise enables the KV version 2 engine and stores sample data. Version 1 does not support versioning of key-value pairs, while version 2 does.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# Enable the engine as KV v2
vault secrets enable -path=secret kv-v2
Success! Enabled the kv-v2 secrets engine at: secret/

# Store a sample secret
vault kv put secret/sampleapp/config \
  username=&quot;demo&quot; \
  password=&quot;p@ssw0rd&quot;

======== Secret Path ========
secret/data/sampleapp/config

======= Metadata =======
Key                Value
---                -----
created_time       2025-04-12T13:18:05.548287122Z
custom_metadata    &amp;lt;nil&amp;gt;
deletion_time      n/a
destroyed          false
version            1


# Check the stored data
vault kv get secret/sampleapp/config

======== Secret Path ========
secret/data/sampleapp/config

======= Metadata =======
Key                Value
---                -----
created_time       2025-04-12T13:18:05.548287122Z
custom_metadata    &amp;lt;nil&amp;gt;
deletion_time      n/a
destroyed          false
version            1

====== Data ======
Key         Value
---         -----
password    p@ssw0rd
username    demo&lt;/code&gt;&lt;/pre&gt;
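&lt;p data-ke-size=&quot;size16&quot;&gt;For scripting, individual values can be read without parsing the table: the real CLI flags &lt;code&gt;-field&lt;/code&gt; (print one raw value) and &lt;code&gt;-format=json&lt;/code&gt; (print the full document) skip the table rendering. A sketch of post-processing the JSON form; the &lt;code&gt;kv_field&lt;/code&gt; helper is illustrative:&lt;/p&gt;

```shell
# Sketch: `vault kv get -field=password secret/sampleapp/config` prints just the
# raw value; with -format=json the whole document can be post-processed with jq.
# kv_field is an illustrative helper operating on `vault kv get -format=json` output.
kv_field() { jq -r ".data.data.${2}" "$1"; }
# Usage:
#   vault kv get -format=json secret/sampleapp/config > config.json
#   kv_field config.json password
```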
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the Vault UI, open the [Secrets Engines] tab, go to secret, then open sampleapp &amp;gt; config to view the stored information.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1655&quot; data-origin-height=&quot;789&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bE4Lbi/btsNjHZgaEA/5VIk9w6QlWBAJkh4jzq7Bk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bE4Lbi/btsNjHZgaEA/5VIk9w6QlWBAJkh4jzq7Bk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bE4Lbi/btsNjHZgaEA/5VIk9w6QlWBAJkh4jzq7Bk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbE4Lbi%2FbtsNjHZgaEA%2F5VIk9w6QlWBAJkh4jzq7Bk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1655&quot; height=&quot;789&quot; data-origin-width=&quot;1655&quot; data-origin-height=&quot;789&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Opening the Secret tab shows the actual stored key/value pairs.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1671&quot; data-origin-height=&quot;552&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/beNWjS/btsNht1XnJU/zAFF1GPkCTWl1JIBcDtm2K/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/beNWjS/btsNht1XnJU/zAFF1GPkCTWl1JIBcDtm2K/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/beNWjS/btsNht1XnJU/zAFF1GPkCTWl1JIBcDtm2K/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbeNWjS%2FbtsNht1XnJU%2FzAFF1GPkCTWl1JIBcDtm2K%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1671&quot; height=&quot;552&quot; data-origin-width=&quot;1671&quot; data-origin-height=&quot;552&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's walk through the ways to consume secrets stored in Vault.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;1. Vault Sidecar Agent Injector Hands-on&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's look at how the sidecar pattern is used to reference the secrets created in Vault.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;You may recall that the &lt;code&gt;injector&lt;/code&gt; was enabled when Vault was deployed with Helm earlier.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The Vault Agent Injector automatically injects a Vault Agent into Kubernetes pods, which lets applications receive secrets from Vault automatically.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;3242&quot; data-origin-height=&quot;1070&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/dhkXXy/btsNhsBZ6o4/z9nd9HEnzFbKQPKcoyXxck/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/dhkXXy/btsNhsBZ6o4/z9nd9HEnzFbKQPKcoyXxck/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/dhkXXy/btsNhsBZ6o4/z9nd9HEnzFbKQPKcoyXxck/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FdhkXXy%2FbtsNhsBZ6o4%2Fz9nd9HEnzFbKQPKcoyXxck%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;3242&quot; height=&quot;1070&quot; data-origin-width=&quot;3242&quot; data-origin-height=&quot;1070&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://www.hashicorp.com/en/blog/kubernetes-vault-integration-via-sidecar-agent-injector-vs-csi-provider&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://www.hashicorp.com/en/blog/kubernetes-vault-integration-via-sidecar-agent-injector-vs-csi-provider&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;On the left side of the diagram, when a pod spec containing the sidecar annotations is submitted to the API server, the Mutating Webhook invokes the Sidecar Injector Service, which updates the pod spec.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the application pod on the right, the Vault Agent init container obtains a Vault token via Kubernetes auth; the Vault Agent sidecar container then uses that token to request the secret from Vault and inject it into the application.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In practice, the following annotations must be added to the pod.&lt;/p&gt;
&lt;pre class=&quot;vim&quot;&gt;&lt;code&gt;vault.hashicorp.com/agent-inject: &quot;true&quot;
vault.hashicorp.com/role: &quot;example-role&quot;&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Enable Vault's Kubernetes auth method. This allows clients to authenticate with Kubernetes service account tokens.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# Enable the Kubernetes auth method
vault auth enable kubernetes
Success! Enabled kubernetes auth method at: kubernetes/

# Verify
vault auth list
Path           Type          Accessor                    Description                Version
----           ----          --------                    -----------                -------
approle/       approle       auth_approle_54604e8f       n/a                        n/a
kubernetes/    kubernetes    auth_kubernetes_986afb3c    n/a                        n/a
token/         token         auth_token_ecdae7a6         token based credentials    n/a

# Configure the Kubernetes auth method
vault write auth/kubernetes/config \
  token_reviewer_jwt=&quot;$(kubectl get secret $(kubectl get serviceaccount vault -n vault -o jsonpath='{.secrets[0].name}') -n vault -o jsonpath=&quot;{.data.token}&quot; | base64 --decode)&quot; \
  kubernetes_host=&quot;$(kubectl config view --raw --minify --flatten -o jsonpath='{.clusters[0].cluster.server}')&quot; \
  kubernetes_ca_cert=&quot;$(kubectl get secret $(kubectl get serviceaccount vault -n vault -o jsonpath='{.secrets[0].name}') -n vault -o jsonpath='{.data.ca\.crt}' | base64 --decode)&quot;&lt;/code&gt;&lt;/pre&gt;
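&lt;p data-ke-size=&quot;size16&quot;&gt;Once configured, a login can be exercised directly against the HTTP API: the &lt;code&gt;auth/kubernetes/login&lt;/code&gt; endpoint takes a JSON body with &lt;code&gt;role&lt;/code&gt; and &lt;code&gt;jwt&lt;/code&gt;. A sketch; the &lt;code&gt;k8s_login_payload&lt;/code&gt; helper is illustrative:&lt;/p&gt;

```shell
# Sketch: a Kubernetes auth login is a POST to v1/auth/kubernetes/login with a
# JSON body of {role, jwt}. k8s_login_payload is an illustrative helper.
k8s_login_payload() { printf '{"role":"%s","jwt":"%s"}' "$1" "$2"; }
# Usage (from inside a pod, using the mounted service account token):
#   curl -s -X POST "$VAULT_ADDR/v1/auth/kubernetes/login" \
#     -d "$(k8s_login_payload sampleapp-role "$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)")"
```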
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Next, create a policy that grants access to the stored secret, and a role that binds a Kubernetes service account to that policy.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# Write the required policy (can be skipped if already created earlier)
vault policy write sampleapp-policy - &amp;lt;&amp;lt;EOF
path &quot;secret/data/sampleapp/*&quot; {
  capabilities = [&quot;read&quot;]
}
EOF

# Create the role (so the injector can log in)
vault write auth/kubernetes/role/sampleapp-role \
    bound_service_account_names=&quot;vault-ui-sa&quot; \
    bound_service_account_namespaces=&quot;vault&quot; \
    policies=&quot;sampleapp-policy&quot; \
    ttl=&quot;24h&quot;&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Vault's authorization model is similar to AWS's. In the example above, a policy named sampleapp-policy grants [&quot;read&quot;] on &quot;secret/data/sampleapp/*&quot;, and a role named sampleapp-role then binds the vault-ui-sa service account to sampleapp-policy.&lt;/p&gt;
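&lt;p data-ke-size=&quot;size16&quot;&gt;Note the &lt;code&gt;data/&lt;/code&gt; segment in the policy path: KV v2 inserts it between the mount point and the secret path, which is why the CLI path &lt;code&gt;secret/sampleapp/config&lt;/code&gt; appears as &lt;code&gt;secret/data/sampleapp/config&lt;/code&gt; in policies and API calls. A tiny illustrative helper:&lt;/p&gt;

```shell
# Sketch: KV v2 adds a data/ segment after the mount point, so the CLI path
# secret/sampleapp/config maps to secret/data/sampleapp/config in policies.
# kv2_api_path is an illustrative helper, not a Vault CLI command.
kv2_api_path() { echo "${1%%/*}/data/${1#*/}"; }
# kv2_api_path secret/sampleapp/config
```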
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Deploy the test application. The vault-ui-sa service account referenced above is created along with it.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vault-ui-sa
  namespace: vault
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vault-injected-ui
  namespace: vault
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vault-injected-ui
  template:
    metadata:
      labels:
        app: vault-injected-ui
      annotations:
        vault.hashicorp.com/agent-inject: &quot;true&quot;
        vault.hashicorp.com/role: &quot;sampleapp-role&quot;
        vault.hashicorp.com/agent-inject-secret-config.json: &quot;secret/data/sampleapp/config&quot;
        vault.hashicorp.com/agent-inject-template-config.json: |
          {{- with secret &quot;secret/data/sampleapp/config&quot; -}}
          {
            &quot;username&quot;: &quot;{{ .Data.data.username }}&quot;,
            &quot;password&quot;: &quot;{{ .Data.data.password }}&quot;
          }
          {{- end }}
        vault.hashicorp.com/agent-inject-output-path: &quot;/vault/secrets&quot;
    spec:
      serviceAccountName: vault-ui-sa
      containers:
      - name: app
        image: python:3.10
        ports:
        - containerPort: 5000
        command: [&quot;sh&quot;, &quot;-c&quot;]
        args:
          - |
            pip install flask &amp;amp;&amp;amp; cat &amp;lt;&amp;lt;PYEOF &amp;gt; /app.py
            import json, time
            from flask import Flask, render_template_string
            app = Flask(__name__)
            while True:
                try:
                    with open(&quot;/vault/secrets/config.json&quot;) as f:
                        secret = json.load(f)
                    break
                except:
                    time.sleep(1)
            @app.route(&quot;/&quot;)
            def index():
                return render_template_string(&quot;&amp;lt;h2&amp;gt;  Vault Injected UI&amp;lt;/h2&amp;gt;&amp;lt;p&amp;gt;  User: {{username}}&amp;lt;/p&amp;gt;&amp;lt;p&amp;gt;  Password: {{password}}&amp;lt;/p&amp;gt;&quot;, **secret)
            app.run(host=&quot;0.0.0.0&quot;, port=5000)
            PYEOF
            python /app.py
---
apiVersion: v1
kind: Service
metadata:
  name: vault-injected-ui
  namespace: vault
spec:
  type: NodePort
  ports:
    - port: 5000
      targetPort: 5000
      nodePort: 30002
  selector:
    app: vault-injected-ui
EOF&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Looking closely at the Deployment spec, it declares the following annotations.&lt;/p&gt;
&lt;pre class=&quot;vim&quot;&gt;&lt;code&gt;        vault.hashicorp.com/agent-inject: &quot;true&quot;
        vault.hashicorp.com/role: &quot;sampleapp-role&quot;&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
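&lt;p data-ke-size=&quot;size16&quot;&gt;Before inspecting the pod, the rollout can be gated on readiness with &lt;code&gt;kubectl wait&lt;/code&gt;, and the READY column can be checked in scripts. A sketch; the &lt;code&gt;all_ready&lt;/code&gt; helper is illustrative:&lt;/p&gt;

```shell
# Sketch: wait for the injected pod, then confirm every container is ready.
#   kubectl wait --for=condition=Ready pod -l app=vault-injected-ui -n vault --timeout=120s
# all_ready is an illustrative helper for a READY field such as "2/2".
all_ready() {
  [ "${1%/*}" = "${1#*/}" ]
}
# Usage:
#   all_ready "$(kubectl get pod -l app=vault-injected-ui -n vault --no-headers | awk '{print $2}')"
```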
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's inspect the created containers.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# The sidecar container was added, so the pod shows READY 2/2
kubectl get pod -l app=vault-injected-ui -n vault
NAME                                 READY   STATUS    RESTARTS   AGE
vault-injected-ui-77fb865789-f6sn8   2/2     Running   0          2m

kubectl describe pod -l app=vault-injected-ui -n vault
...
Init Containers:
  vault-agent-init:
    Container ID:  containerd://aea5f02cc79d2b89d6f80a5111ae8b64640d0e13d0900fbe06f3eb3750389de1
    Image:         hashicorp/vault:1.19.0
    Image ID:      docker.io/hashicorp/vault@sha256:bbb7f98dc67d9ebdda1256de288df1cb9a5450990e48338043690bee3b332c90
    Port:          &amp;lt;none&amp;gt;
    Host Port:     &amp;lt;none&amp;gt;
    Command:
      /bin/sh
      -ec
    Args:
      echo ${VAULT_CONFIG?} | base64 -d &amp;gt; /home/vault/config.json &amp;amp;&amp;amp; vault agent -config=/home/vault/config.json
...

Containers:
  app:
    Container ID:  containerd://e2caeef00e2e408a4db48e57cba1152b4496bd49d6b66d9571371c2efea64178
    Image:         python:3.10
    Image ID:      docker.io/library/python@sha256:e2c7fb05741c735679b26eda7dd34575151079f8c615875fbefe401972b14d85
    Port:          5000/TCP
    Host Port:     0/TCP
...

  vault-agent:
    Container ID:  containerd://a2acf0a3fda4ba1f5637d8e843718ec19382c3011d8db97e701bae0d457fb92c
    Image:         hashicorp/vault:1.19.0
    Image ID:      docker.io/hashicorp/vault@sha256:bbb7f98dc67d9ebdda1256de288df1cb9a5450990e48338043690bee3b332c90
    Port:          &amp;lt;none&amp;gt;
    Host Port:     &amp;lt;none&amp;gt;
    Command:
      /bin/sh
      -ec
...&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As described earlier, there is a vault-agent-init init container, and below the app container the vault-agent sidecar is running.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Check the volume mount
kubectl exec -it  deploy/vault-injected-ui -c app -n vault -- ls /vault/secrets
config.json

kubectl exec -it  deploy/vault-injected-ui -c app -n vault -- cat /vault/secrets/config.json
{
  &quot;username&quot;: &quot;demo&quot;,
  &quot;password&quot;: &quot;p@ssw0rd&quot;
}

kubectl exec -it deploy/nginx-vault-demo -c vault-agent-sidecar -- ls -l /etc/secrets       
total 4
-rw-r--r--    1 vault    vault           94 Apr 10 02:09 index.html

kubectl exec -it deploy/nginx-vault-demo -c vault-agent-sidecar -- cat /etc/secrets/index.html
  &amp;lt;html&amp;gt;
  &amp;lt;body&amp;gt;
    &amp;lt;p&amp;gt;username: demo&amp;lt;/p&amp;gt;
    &amp;lt;p&amp;gt;password: p@ssw0rd&amp;lt;/p&amp;gt;
  &amp;lt;/body&amp;gt;
  &amp;lt;/html&amp;gt;

kubectl exec -it deploy/nginx-vault-demo -c nginx -- cat /usr/share/nginx/html/index.html
  &amp;lt;html&amp;gt;
  &amp;lt;body&amp;gt;
    &amp;lt;p&amp;gt;username: demo&amp;lt;/p&amp;gt;
    &amp;lt;p&amp;gt;password: p@ssw0rd&amp;lt;/p&amp;gt;
  &amp;lt;/body&amp;gt;
  &amp;lt;/html&amp;gt;


# Check the logs
kubectl logs -l app=vault-injected-ui -c vault-agent -n vault
2025-04-12T13:37:10.607Z [INFO]  agent: (runner) creating watcher
2025-04-12T13:37:10.612Z [INFO]  agent.auth.handler: authentication successful, sending token to sinks
2025-04-12T13:37:10.612Z [INFO]  agent.auth.handler: starting renewal process
2025-04-12T13:37:10.613Z [INFO]  agent.sink.file: token written: path=/home/vault/.vault-token
2025-04-12T13:37:10.613Z [INFO]  agent.template.server: template server received new token
2025-04-12T13:37:10.614Z [INFO]  agent: (runner) stopping
2025-04-12T13:37:10.614Z [INFO]  agent: (runner) creating new runner (dry: false, once: false)
2025-04-12T13:37:10.614Z [INFO]  agent: (runner) creating watcher
2025-04-12T13:37:10.617Z [INFO]  agent: (runner) starting
2025-04-12T13:37:10.618Z [INFO]  agent.auth.handler: renewed auth token

# mutating admission
kubectl get mutatingwebhookconfigurations.admissionregistration.k8s.io
NAME                       WEBHOOKS   AGE
vault-agent-injector-cfg   1          82m


kubectl describe mutatingwebhookconfigurations.admissionregistration.k8s.io vault-agent-injector-cfg
Name:         vault-agent-injector-cfg
Namespace:
Labels:       app.kubernetes.io/instance=vault
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=vault-agent-injector
Annotations:  meta.helm.sh/release-name: vault
              meta.helm.sh/release-namespace: vault
API Version:  admissionregistration.k8s.io/v1
Kind:         MutatingWebhookConfiguration
Metadata:
  Creation Timestamp:  2025-04-12T12:29:06Z
  Generation:          2
  Resource Version:    2170
  UID:                 1260c09d-a982-47f6-b15a-b3de37628915
Webhooks:
  Admission Review Versions:
    v1
    v1beta1
  Client Config:
    Ca Bundle:  LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNTVENDQWU2Z0F3SUJBZ0lVZjZGQ0xaam5iTjg1TURSWm5Ka0VWZ1pMcnhrd0NnWUlLb1pJemowRUF3SXcKR2pFWU1CWUdBMVVFQXhNUFFXZGxiblFnU1c1cVpXTjBJRU5CTUI0WERUSTFNRFF4TWpFeU1qZ3pNRm9YRFRNMQpNRFF4TURFeU1qa3pNRm93R2pFWU1CWUdBMVVFQXhNUFFXZGxiblFnU1c1cVpXTjBJRU5CTUZrd0V3WUhLb1pJCnpqMENBUVlJS29aSXpqMERBUWNEUWdBRVhSbjR3Z0lYSUhKTFN6cnV5MGhDZkV0RWtnN0NHYUp6aFo1TkdKMTMKTXlnVkgzL2NISS9pQ0ZrbG5HWFRsQ3ZjZzUySjlXL0tnUVljSjg2NEVVaUxLS09DQVJBd2dnRU1NQTRHQTFVZApEd0VCL3dRRUF3SUNoREFUQmdOVkhTVUVEREFLQmdnckJnRUZCUWNEQVRBUEJnTlZIUk1CQWY4RUJUQURBUUgvCk1HZ0dBMVVkRGdSaEJGOHdOVG8zWlRwak56b3haam93TXpwbE1UcGtaanBpTnpwaU1EcGxOam80WlRwaVpUcGgKWWpvMk9UbzRaam81TnpvMllUbzFOam96T0RveE16bzFOem94WlRvMllUb3lZenBsTkRvNU1qb3pOem93WlRvdwpORG94TXpvNU9EbzJNekJxQmdOVkhTTUVZekJoZ0Y4d05UbzNaVHBqTnpveFpqb3dNenBsTVRwa1pqcGlOenBpCk1EcGxOam80WlRwaVpUcGhZam8yT1RvNFpqbzVOem8yWVRvMU5qb3pPRG94TXpvMU56b3haVG8yWVRveVl6cGwKTkRvNU1qb3pOem93WlRvd05Eb3hNem81T0RvMk16QUtCZ2dxaGtqT1BRUURBZ05KQURCR0FpRUExKzdRaE5HKwpkM1NRajdZRVdHRHptcXpOUVozU2dtR1ExemQwWHI5L2ZRSUNJUURBQTcwODRDOTY4WVdnUXhPSmQvKysxZm9XCndoUWJ0Rm9taUlPNnMvSkZVZz09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    Service:
      Name:        vault-agent-injector-svc
      Namespace:   vault
      Path:        /mutate
      Port:        443
  Failure Policy:  Ignore
  Match Policy:    Exact
  Name:            vault.hashicorp.com
  Namespace Selector:
  Object Selector:
    Match Expressions:
      Key:       app.kubernetes.io/name
      Operator:  NotIn
      Values:
        vault-agent-injector
  Reinvocation Policy:  Never
  Rules:
    API Groups:

    API Versions:
      v1
    Operations:
      CREATE
    Resources:
      pods
    Scope:          Namespaced
  Side Effects:     None
  Timeout Seconds:  30
Events:             &amp;lt;none&amp;gt;

kubectl get svc -n vault
NAME                       TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                         AGE
vault                      NodePort    10.96.116.44   &amp;lt;none&amp;gt;        8200:30000/TCP,8201:31459/TCP   83m
vault-agent-injector-svc   ClusterIP   10.96.166.73   &amp;lt;none&amp;gt;        443/TCP                         83m
vault-injected-ui          NodePort    10.96.2.33     &amp;lt;none&amp;gt;        5000:30002/TCP                  16m
vault-internal             ClusterIP   None           &amp;lt;none&amp;gt;        8200/TCP,8201/TCP               83m&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Looking at the MutatingWebhookConfiguration definition, you can see that it routes admission requests to the vault-agent-injector-svc service.&lt;/p&gt;
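Conceptually, the injector answers the API server's AdmissionReview with a base64-encoded JSON Patch that adds the agent containers to the pod. A minimal illustrative sketch (the patch content here is simplified and assumed, not the injector's actual output, which also adds the init container, volumes, and annotations):

```python
import base64
import json

def build_admission_response(uid):
    # Simplified JSON Patch: append a sidecar container to the pod spec.
    # Illustration only; the real vault-agent-injector patch is richer.
    patch = [
        {
            'op': 'add',
            'path': '/spec/containers/-',
            'value': {'name': 'vault-agent', 'image': 'hashicorp/vault:1.19.0'},
        }
    ]
    patch_b64 = base64.b64encode(json.dumps(patch).encode()).decode()
    return {
        'apiVersion': 'admission.k8s.io/v1',
        'kind': 'AdmissionReview',
        'response': {
            'uid': uid,
            'allowed': True,
            'patchType': 'JSONPatch',
            'patch': patch_b64,
        },
    }

resp = build_admission_response('example-uid')
decoded = json.loads(base64.b64decode(resp['response']['patch']))
print(decoded[0]['value']['name'])  # vault-agent
```

The `patchType: JSONPatch` and base64-encoded `patch` fields are the standard mutating-webhook response shape, which is why the describe output above shows only `Path: /mutate` on the service side.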
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Accessing the created service shows the secret values stored in Vault.&lt;/p&gt;
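From the app container's point of view, consuming the injected secret is just a file read: the vault-agent sidecar renders it to /vault/secrets/config.json, as shown above. A minimal sketch, demonstrated with a stand-in temporary file since the real path only exists inside the pod:

```python
import json
import tempfile

def load_vault_secret(path):
    # The vault-agent sidecar renders the secret to a plain file,
    # so the app simply parses it; no Vault client library is needed.
    with open(path) as f:
        return json.load(f)

# Stand-in file mimicking /vault/secrets/config.json from the pod
with tempfile.NamedTemporaryFile('w', suffix='.json', delete=False) as f:
    json.dump({'username': 'demo', 'password': 'p@ssw0rd'}, f)
    path = f.name

creds = load_vault_secret(path)
print(creds['username'])  # demo
```

This is the main appeal of the injector pattern: the application stays Vault-agnostic and only deals with files.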
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;685&quot; data-origin-height=&quot;313&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cejWEi/btsNjkDR7bA/BEmyA00CBsq4AHzhDuXnPK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cejWEi/btsNjkDR7bA/BEmyA00CBsq4AHzhDuXnPK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cejWEi/btsNjkDR7bA/BEmyA00CBsq4AHzhDuXnPK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcejWEi%2FbtsNjkDR7bA%2FBEmyA00CBsq4AHzhDuXnPK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;685&quot; height=&quot;313&quot; data-origin-width=&quot;685&quot; data-origin-height=&quot;313&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;3. Vault CSI Driver Hands-on&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The second way to consume secrets from Vault is via the CSI driver.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For this, the Vault CSI provider was already created through a chart option (csi.enabled=true) when Vault was installed. Installing the Secrets Store CSI Driver then creates a DaemonSet on each node. The user creates a SecretProviderClass, which declares which secret provider to use to fetch secrets.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;2428&quot; data-origin-height=&quot;932&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/brmUVV/btsNgAHiDFF/aBIjxkPWhzq0SVyKOfadF1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/brmUVV/btsNgAHiDFF/aBIjxkPWhzq0SVyKOfadF1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/brmUVV/btsNgAHiDFF/aBIjxkPWhzq0SVyKOfadF1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbrmUVV%2FbtsNgAHiDFF%2FaBIjxkPWhzq0SVyKOfadF1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2428&quot; height=&quot;932&quot; data-origin-width=&quot;2428&quot; data-origin-height=&quot;932&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://www.hashicorp.com/en/blog/kubernetes-vault-integration-via-sidecar-agent-injector-vs-csi-provider&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://www.hashicorp.com/en/blog/kubernetes-vault-integration-via-sidecar-agent-injector-vs-csi-provider&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As the diagram shows, when a pod requesting a CSI volume is created, the Secrets Store CSI Driver sends a request to the Vault CSI provider, which uses the SecretProviderClass and the pod's ServiceAccount to fetch the secret from Vault and mount it into the pod's CSI volume.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This hands-on follows the document below.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://developer.hashicorp.com/vault/tutorials/kubernetes/kubernetes-secret-store-driver?in=vault%2Fkubernetes&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://developer.hashicorp.com/vault/tutorials/kubernetes/kubernetes-secret-store-driver?in=vault%2Fkubernetes&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Since Vault is already installed, we will skip the steps below.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# Add the repo
helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update

# Install Vault
helm install vault hashicorp/vault \
    --set &quot;server.dev.enabled=true&quot; \
    --set &quot;injector.enabled=false&quot; \
    --set &quot;csi.enabled=true&quot;&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;We also create the secret and configure Kubernetes authentication.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# Create the secret
# kubectl exec -it vault-0 -- /bin/sh
vault kv put secret/db-pass password=&quot;db-secret-password&quot;
=== Secret Path ===
secret/data/db-pass

======= Metadata =======
Key                Value
---                -----
created_time       2025-04-12T14:45:47.117669981Z
custom_metadata    &amp;lt;nil&amp;gt;
deletion_time      n/a
destroyed          false
version            1

vault kv get secret/db-pass
=== Secret Path ===
secret/data/db-pass

======= Metadata =======
Key                Value
---                -----
created_time       2025-04-12T14:45:47.117669981Z
custom_metadata    &amp;lt;nil&amp;gt;
deletion_time      n/a
destroyed          false
version            1

====== Data ======
Key         Value
---         -----
password    db-secret-password

# Configure Kubernetes auth (skip; already configured earlier)
vault auth enable kubernetes 
vault write auth/kubernetes/config \
    kubernetes_host=&quot;https://$KUBERNETES_PORT_443_TCP_ADDR:443&quot;

# Create the policy
vault policy write internal-app - &amp;lt;&amp;lt;EOF
path &quot;secret/data/db-pass&quot; {
  capabilities = [&quot;read&quot;]
}
EOF

# Create the role
vault write auth/kubernetes/role/database \
    bound_service_account_names=webapp-sa \
    bound_service_account_namespaces=default \
    policies=internal-app \
    ttl=20m&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now install the Secrets Store CSI Driver into the cluster.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;helm repo add secrets-store-csi-driver https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts

helm install csi secrets-store-csi-driver/secrets-store-csi-driver \
    --set syncSecret.enabled=true -n vault

# Verify
kubectl get ds -n vault
NAME                           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
csi-secrets-store-csi-driver   3         3         3       3            3           kubernetes.io/os=linux   77s
vault-csi-provider             2         2         2       2            2           &amp;lt;none&amp;gt;                   133m

kubectl get po -n vault
NAME                                    READY   STATUS    RESTARTS   AGE
csi-secrets-store-csi-driver-kmkqf      3/3     Running   0          71s
csi-secrets-store-csi-driver-tct7q      3/3     Running   0          71s
csi-secrets-store-csi-driver-xvfjb      3/3     Running   0          71s
vault-0                                 1/1     Running   0          133m
vault-agent-injector-56459c7545-fnv9t   1/1     Running   0          133m
vault-csi-provider-79rg5                2/2     Running   0          133m
vault-csi-provider-x58j8                2/2     Running   0          133m
vault-injected-ui-77fb865789-f6sn8      2/2     Running   0          66m&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Create a SecretProviderClass that specifies Vault as the provider and uses the role created earlier.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;cat &amp;gt; spc-vault-database.yaml &amp;lt;&amp;lt;EOF
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: vault-database
spec:
  provider: vault
  parameters:
    vaultAddress: &quot;http://vault:8200&quot;
    roleName: &quot;database&quot;
    objects: |
      - objectName: &quot;db-password&quot;
        secretPath: &quot;secret/data/db-pass&quot;
        secretKey: &quot;password&quot;
EOF

kubectl apply --filename spc-vault-database.yaml&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Create the ServiceAccount specified in the role.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;kubectl create serviceaccount webapp-sa&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Create a pod as shown below, with a volume in the volumes section that references the SecretProviderClass created earlier.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;cat &amp;gt; webapp-pod.yaml &amp;lt;&amp;lt;EOF
kind: Pod
apiVersion: v1
metadata:
  name: webapp
spec:
  serviceAccountName: webapp-sa
  containers:
  - image: jweissig/app:0.0.1
    name: webapp
    volumeMounts:
    - name: secrets-store-inline
      mountPath: &quot;/mnt/secrets-store&quot;
      readOnly: true
  volumes:
    - name: secrets-store-inline
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: &quot;vault-database&quot;
EOF
kubectl apply --filename webapp-pod.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Checking the pod confirms that the Vault secret was mounted via the SecretProviderClass.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;kubectl get pods
NAME     READY   STATUS    RESTARTS   AGE
webapp   1/1     Running   0          19s

kubectl exec webapp -- cat /mnt/secrets-store/db-password
db-secret-password&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;4. Vault Secrets Operator (VSO) Hands-on&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The approaches above require modifying the workload, either by configuring a sidecar or by adding a CSI volume. Their drawback is that existing resources must be changed.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;VSO, in contrast, works with ordinary Kubernetes Secrets: you designate the secrets you want VSO to manage, and VSO fetches them from Vault and updates the corresponding Kubernetes Secrets.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To do this, the Vault Secrets Operator creates CRDs such as VaultAuth, corresponding to a Vault role, and secret CRDs such as VaultStaticSecret, corresponding to a Vault secret, and operates by watching these CRDs for changes. Each CRD provides the specification needed to fetch from a secret source and sync it to a Kubernetes Secret.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The operator writes the source secret data directly to the destination Kubernetes Secret, and when the source changes it keeps propagating those changes to the destination Secret throughout its lifecycle. As a result, applications only need to access the Kubernetes Secret to use the secret data.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;3840&quot; data-origin-height=&quot;1631&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/buFnPl/btsNkv5oYif/B59YiJgKK42VkkufFskAY1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/buFnPl/btsNkv5oYif/B59YiJgKK42VkkufFskAY1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/buFnPl/btsNkv5oYif/B59YiJgKK42VkkufFskAY1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbuFnPl%2FbtsNkv5oYif%2FB59YiJgKK42VkkufFskAY1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;3840&quot; height=&quot;1631&quot; data-origin-width=&quot;3840&quot; data-origin-height=&quot;1631&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://www.hashicorp.com/en/blog/kubernetes-vault-integration-via-sidecar-agent-injector-vs-csi-provider&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://www.hashicorp.com/en/blog/kubernetes-vault-integration-via-sidecar-agent-injector-vs-csi-provider&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In other words, as the diagram above shows, there are CRDs for the Vault connection, such as VaultConnection and VaultAuth, plus the following CRDs for static and dynamic secrets.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Static Secrets: the configuration the operator needs to sync a single static secret in Vault to a single Kubernetes Secret.&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;CRD: VaultStaticSecret&lt;/li&gt;
&lt;/ul&gt;
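For example, syncing the secret/db-pass KV secret created earlier into a Kubernetes Secret would look roughly like this (a minimal sketch; the resource names and the vaultAuthRef are assumptions, not taken from the tutorial):

```yaml
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
  name: db-pass-sync
spec:
  vaultAuthRef: static-auth   # a VaultAuth resource (assumed to exist)
  mount: secret               # KV v2 mount
  type: kv-v2
  path: db-pass               # i.e. secret/data/db-pass
  refreshAfter: 30s
  destination:
    name: db-pass             # the Kubernetes Secret to create/update
    create: true
```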
&lt;p data-ke-size=&quot;size16&quot;&gt;A detailed static-secret walkthrough is available in the document below.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://developer.hashicorp.com/vault/tutorials/kubernetes/vault-secrets-operator&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://developer.hashicorp.com/vault/tutorials/kubernetes/vault-secrets-operator&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Dynamic Secrets use Vault's dynamic secret engines to sync dynamically changing secrets into Kubernetes Secrets. Supported secret engines include DB credentials and cloud credentials (AWS, Azure, GCP, etc.).&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;CRD: VaultDynamicSecret&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In this hands-on, we will continue with dynamic secrets.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;First, write the chart values file for deploying VSO.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;cat &amp;gt; vault-operator-values.yaml  &amp;lt;&amp;lt;EOF
defaultVaultConnection:
  enabled: true
  address: &quot;http://vault.vault.svc.cluster.local:8200&quot;
  skipTLSVerify: false
controller:
  manager:
    clientCache:
      persistenceModel: direct-encrypted
      storageEncryption:
        enabled: true
        mount: k8s-auth-mount
        keyName: vso-client-cache
        transitMount: demo-transit
        kubernetes:
          role: auth-role-operator
          serviceAccount: vault-secrets-operator-controller-manager
          tokenAudiences: [&quot;vault&quot;]
EOF&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Create the Vault Secrets Operator based on the values.yaml.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;helm install vault-secrets-operator hashicorp/vault-secrets-operator \
  -n vault-secrets-operator-system \
  --create-namespace \
  --values vault-operator-values.yaml

# Verify
kubectl get po,svc -n vault-secrets-operator-system
NAME                                                             READY   STATUS    RESTARTS   AGE
pod/vault-secrets-operator-controller-manager-7f67cd89fd-qmsb2   2/2     Running   0          39s

NAME                                             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/vault-secrets-operator-metrics-service   ClusterIP   10.96.156.114   &amp;lt;none&amp;gt;        8443/TCP   39s

kubectl describe pod -n vault-secrets-operator-system                                       
...
Service Account:  vault-secrets-operator-controller-manager
...
Containers:
  kube-rbac-proxy:
    Container ID:  containerd://6500b98aa55510c0b1a5950c8c91e3b6c2b4ea312f5c7b0b5ce843d615a543ac
    Image:         quay.io/brancz/kube-rbac-proxy:v0.18.1
    Image ID:      quay.io/brancz/kube-rbac-proxy@sha256:e6a323504999b2a4d2a6bf94f8580a050378eba0900fd31335cf9df5787d9a9b
    Port:          8443/TCP
    Host Port:     0/TCP
    Args:
      --secure-listen-address=0.0.0.0:8443
      --upstream=http://127.0.0.1:8080/
      --logtostderr=true
      --v=0
  ...
  manager:
    Container ID:  containerd://c65fc319d184fe9145281db7bdfa88e9169ee4ddc86db42ded2a61b5194ca7c0
    Image:         hashicorp/vault-secrets-operator:0.10.0
    Image ID:      docker.io/hashicorp/vault-secrets-operator@sha256:3ee9b27677077cb3324ad02feb68fb7c25cfe381cb8ab5f940eee23c16f8c9a8
    Port:          &amp;lt;none&amp;gt;
    Host Port:     &amp;lt;none&amp;gt;
    Command:
      /vault-secrets-operator
    Args:
      --health-probe-bind-address=:8081
      --metrics-bind-address=127.0.0.1:8080
      --leader-elect
      --client-cache-persistence-model=direct-encrypted
      --global-vault-auth-options=allow-default-globals
      --backoff-initial-interval=5s
      --backoff-max-interval=60s
      --backoff-max-elapsed-time=0s
      --backoff-multiplier=1.50
      --backoff-randomization-factor=0.50
      --zap-log-level=info
      --zap-time-encoding=rfc3339
      --zap-stacktrace-level=panic
...

# Check the CRDs
kubectl get crd |grep vault
hcpvaultsecretsapps.secrets.hashicorp.com                   2025-04-12T15:48:09Z
vaultauthglobals.secrets.hashicorp.com                      2025-04-12T15:48:09Z
vaultauths.secrets.hashicorp.com                            2025-04-12T15:48:10Z
vaultconnections.secrets.hashicorp.com                      2025-04-12T15:48:10Z
vaultdynamicsecrets.secrets.hashicorp.com                   2025-04-12T15:48:10Z
vaultpkisecrets.secrets.hashicorp.com                       2025-04-12T15:48:10Z
vaultstaticsecrets.secrets.hashicorp.com                    2025-04-12T15:48:10Z

# Check the VaultAuth
kubectl get vaultauth -n vault-secrets-operator-system vault-secrets-operator-default-transit-auth -o jsonpath='{.spec}' | jq
{
  &quot;kubernetes&quot;: {
    &quot;audiences&quot;: [
      &quot;vault&quot;
    ],
    &quot;role&quot;: &quot;auth-role-operator&quot;,
    &quot;serviceAccount&quot;: &quot;vault-secrets-operator-controller-manager&quot;,
    &quot;tokenExpirationSeconds&quot;: 600
  },
  &quot;method&quot;: &quot;kubernetes&quot;,
  &quot;mount&quot;: &quot;k8s-auth-mount&quot;,
  &quot;storageEncryption&quot;: {
    &quot;keyName&quot;: &quot;vso-client-cache&quot;,
    &quot;mount&quot;: &quot;demo-transit&quot;
  },
  &quot;vaultConnectionRef&quot;: &quot;default&quot;
}

# Check the VaultConnection
kubectl get vaultconnection -n vault-secrets-operator-system default -o jsonpath='{.spec}' | jq
{
  &quot;address&quot;: &quot;http://vault.vault.svc.cluster.local:8200&quot;,
  &quot;skipTLSVerify&quot;: false
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To test dynamic secrets, this hands-on uses database credentials as the dynamic secret. Install PostgreSQL as the example database.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;helm repo add bitnami https://charts.bitnami.com/bitnami

helm upgrade --install postgres bitnami/postgresql \
  --namespace postgres \
  --create-namespace \
  --set auth.audit.logConnections=true \
  --set auth.postgresPassword=secret-pass

# Verify
kubectl get po -n postgres
NAME                    READY   STATUS    RESTARTS   AGE
postgres-postgresql-0   1/1     Running   0          108s

# Verify psql login
kubectl exec -it -n postgres postgres-postgresql-0 -- sh -c 'PGPASSWORD=secret-pass psql -U postgres -h localhost'
psql (17.4)
Type &quot;help&quot; for help.

postgres=#

kubectl exec -it -n postgres postgres-postgresql-0 -- sh -c &quot;PGPASSWORD=secret-pass psql -U postgres -h localhost -c '\l'&quot;
                                                     List of databases
   Name    |  Owner   | Encoding | Locale Provider |   Collate   |    Ctype    | Locale | ICU Rules |   Access privileges
-----------+----------+----------+-----------------+-------------+-------------+--------+-----------+-----------------------
 postgres  | postgres | UTF8     | libc            | en_US.UTF-8 | en_US.UTF-8 |        |           |
 template0 | postgres | UTF8     | libc            | en_US.UTF-8 | en_US.UTF-8 |        |           | =c/postgres          +
           |          |          |                 |             |             |        |           | postgres=CTc/postgres
 template1 | postgres | UTF8     | libc            | en_US.UTF-8 | en_US.UTF-8 |        |           | =c/postgres          +
           |          |          |                 |             |             |        |           | postgres=CTc/postgres
(3 rows)&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Enable the Vault database secret engine. As the steps below show, you register the PostgreSQL connection information with the database secret engine and specify the roles allowed to use it.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# Enable the database secret engine at the path demo-db
vault secrets enable -path=demo-db database

# Register the PostgreSQL connection information
# This step only works once postgres is up and running
# allowed_roles: the name of the role configured below
vault write demo-db/config/demo-db \
   plugin_name=postgresql-database-plugin \
   allowed_roles=&quot;dev-postgres&quot; \
   connection_url=&quot;postgresql://{{username}}:{{password}}@postgres-postgresql.postgres.svc.cluster.local:5432/postgres?sslmode=disable&quot; \
   username=&quot;postgres&quot; \
   password=&quot;secret-pass&quot;&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The step above registered the connection information through the postgresql-database-plugin; now create the role allowed to use it, as follows.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;When this role is used, Vault can dynamically generate user accounts and passwords. The actual PostgreSQL connection information is stored in Vault's database secret engine, and Vault generates the account/password needed for the DB connection on the fly. In effect, this delegates PostgreSQL account management to Vault.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# Register a role for dynamic DB user creation
vault write demo-db/roles/dev-postgres \
   db_name=demo-db \
   creation_statements=&quot;CREATE ROLE \&quot;{{name}}\&quot; WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}'; \
      GRANT ALL PRIVILEGES ON DATABASE postgres TO \&quot;{{name}}\&quot;;&quot; \
   revocation_statements=&quot;REVOKE ALL ON DATABASE postgres FROM  \&quot;{{name}}\&quot;;&quot; \
   backend=demo-db \
   name=dev-postgres \
   default_ttl=&quot;1m&quot; \
   max_ttl=&quot;1m&quot;&lt;/code&gt;&lt;/pre&gt;
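When the role is used, Vault substitutes {{name}}, {{password}}, and {{expiration}} in creation_statements with generated values before running the SQL against PostgreSQL. A rough illustration of that templating (this is a sketch of the idea, not Vault's implementation; the username format is an assumption):

```python
import secrets
from datetime import datetime, timedelta, timezone

# The creation_statements template registered with the role above
CREATION_STATEMENTS = (
    'CREATE ROLE "{{name}}" WITH LOGIN PASSWORD \'{{password}}\' '
    "VALID UNTIL '{{expiration}}'; "
    'GRANT ALL PRIVILEGES ON DATABASE postgres TO "{{name}}";'
)

def render_creation_sql(ttl_seconds=60):
    # Vault generates a unique username/password per lease (hypothetical
    # name format here) and fills in the template before executing it.
    name = f'v-dev-postgres-{secrets.token_hex(4)}'
    password = secrets.token_urlsafe(16)
    expiration = (datetime.now(timezone.utc)
                  + timedelta(seconds=ttl_seconds)).isoformat()
    sql = (CREATION_STATEMENTS
           .replace('{{name}}', name)
           .replace('{{password}}', password)
           .replace('{{expiration}}', expiration))
    return name, password, sql

name, password, sql = render_creation_sql()
print(name in sql and password in sql)  # True
```

With default_ttl/max_ttl of 1m as above, the VALID UNTIL clause is what makes each generated account expire shortly after it is issued.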
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Create the policy and role that the application will use.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# Policy: read access to DB credentials
# Grants read on the demo-db/creds/dev-postgres path
# Later, this policy is attached to the Kubernetes ServiceAccount (demo-dynamic-app) so it can request credentials
vault policy write demo-auth-policy-db - &amp;lt;&amp;lt;EOF
path &quot;demo-db/creds/dev-postgres&quot; {
   capabilities = [&quot;read&quot;]
}
EOF

# Create the role: binds permissions so a running application can obtain DB credentials (dynamic users) from Vault
vault write auth/kubernetes/role/auth-role \
    bound_service_account_names=&quot;demo-dynamic-app&quot; \
    bound_service_account_namespaces=&quot;demo-ns&quot; \
    policies=&quot;demo-auth-policy-db&quot; \
    token_ttl=0 \
    token_period=120 \
    audience=vault  &lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The steps below configure the Transit secret engine to support caching of the Kubernetes token retrieval process.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# kubectl exec --stdin=true --tty=true vault-0 -n vault -- /bin/sh
----------------------------------------------------------------
# Enable the Transit secret engine
# Enables the transit engine at the path demo-transit.
# A Vault feature that only provides encryption/decryption, without storing the data
# The client cache keeps tracking and renewing Vault tokens and dynamic secret leases even across leadership changes, enabling seamless upgrades
## The client cache can be stored and encrypted on the Vault server.
## When using Vault dynamic secrets, persisting and encrypting the client cache is recommended.
## This preserves dynamic secret leases across restarts and upgrades.
vault secrets enable -path=demo-transit transit
vault secrets list
Path             Type         Accessor              Description
----             ----         --------              -----------
cubbyhole/       cubbyhole    cubbyhole_baa07f5b    per-token private secret storage
demo-db/         database     database_3608860f     n/a
demo-transit/    transit      transit_e5ab4c20      n/a
identity/        identity     identity_d7905b56     identity store
secret/          kv           kv_d4b43c42           n/a
sys/             system       system_4a32a805       system endpoints used for control, policy and debugging

# Create a key named vso-client-cache
# This key serves as the encryption key VSO uses for encryption/decryption
vault write -force demo-transit/keys/vso-client-cache

# Create a policy allowing encrypt and decrypt operations with the vso-client-cache key
vault policy write demo-auth-policy-operator - &amp;lt;&amp;lt;EOF
path &quot;demo-transit/encrypt/vso-client-cache&quot; {
   capabilities = [&quot;create&quot;, &quot;update&quot;]
}
path &quot;demo-transit/decrypt/vso-client-cache&quot; {
   capabilities = [&quot;create&quot;, &quot;update&quot;]
}
EOF

# Bind the policy above to the ServiceAccount used by the Vault Secrets Operator
# A JWT-based role that VSO can use to log in to Vault
# Through this role, the operator can call the Transit engine's encryption/decryption APIs
vault write auth/kubernetes/role/auth-role-operator \
   bound_service_account_names=vault-secrets-operator-controller-manager \
   bound_service_account_namespaces=vault-secrets-operator-system \
   token_ttl=0 \
   token_period=120 \
   token_policies=demo-auth-policy-operator \
   audience=vault

vault read auth/kubernetes/role/auth-role-operator
Key                                         Value
---                                         -----
alias_name_source                           serviceaccount_uid
audience                                    vault
bound_service_account_names                 [vault-secrets-operator-controller-manager]
bound_service_account_namespace_selector    n/a
bound_service_account_namespaces            [vault-secrets-operator-system]
token_bound_cidrs                           []
token_explicit_max_ttl                      0s
token_max_ttl                               0s
token_no_default_policy                     false
token_num_uses                              0
token_period                                2m
token_policies                              [demo-auth-policy-operator]
token_ttl                                   0s
token_type                                  default&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now let's write and deploy the sample application YAML.&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;demo-ns 네임스페이스 생성하고, 폴더를 생성합니다.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;vault-auth-dynamic.yaml&lt;/code&gt; :
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;애플리케이션이 Vault에 인증하기 위한 ServiceAccount 및 VaultAuth 리소스&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class=&quot;bash&quot; data-ke-language=&quot;bash&quot;&gt;&lt;code&gt;kubectl create ns demo-ns
mkdir vso-dynamic
cd vso-dynamic

cat &amp;gt; vault-auth-dynamic.yaml &amp;lt;&amp;lt;EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: demo-ns
  name: demo-dynamic-app
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuth
metadata:
  name: dynamic-auth
  namespace: demo-ns
spec:
  method: kubernetes
  mount: kubernetes # kubernetes auth 이름
  kubernetes:
    role: auth-role
    serviceAccount: demo-dynamic-app
    audiences:
      - vault
EOF&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;&lt;code&gt;app-secret.yaml&lt;/code&gt; :
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Spring App이 PostgreSQL에 접속할 때 사용할 username/password를 동적으로 생성해 해당 시크릿에 저장&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;cat &amp;gt; app-secret.yaml &amp;lt;&amp;lt;EOF
apiVersion: v1
kind: Secret
metadata:
  name: vso-db-demo
  namespace: demo-ns
EOF&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;&lt;code&gt;vault-dynamic-secret.yaml&lt;/code&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Vault에서 동적으로 PostgreSQL 접속 정보를 생성하고 K8s Secret에 저장&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;cat &amp;gt; vault-dynamic-secret.yaml &amp;lt;&amp;lt;EOF
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultDynamicSecret
metadata:
  name: vso-db-demo
  namespace: demo-ns
spec:
  refreshAfter: 25s
  mount: demo-db
  path: creds/dev-postgres
  destination:
    name: vso-db-demo # 대상 secret을 지정
    create: true
    overwrite: true
  vaultAuthRef: dynamic-auth # VaultAuth를 참조함
  rolloutRestartTargets: # secret이 변경될 때 rollout되는 디플로이먼트를 지정
  - kind: Deployment
    name: vaultdemo
EOF&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;&lt;code&gt;app-spring-deploy.yaml&lt;/code&gt; :
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;DB 접속 테스트를 위한 Spring App&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;cat &amp;gt; app-spring-deploy.yaml &amp;lt;&amp;lt;EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vaultdemo
  namespace: demo-ns
  labels:
    app: vaultdemo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vaultdemo
  template:
    metadata:
      labels:
        app: vaultdemo
    spec:
      volumes:
        - name: secrets
          secret:
            secretName: &quot;vso-db-demo&quot;
      containers:
        - name: vaultdemo
          image: hyungwookhub/vso-spring-demo:v5
          imagePullPolicy: IfNotPresent
          env:
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: &quot;vso-db-demo&quot;
                  key: password
            - name: DB_USERNAME
              valueFrom:
                secretKeyRef:
                  name: &quot;vso-db-demo&quot;
                  key: username
            - name: DB_HOST
              value: &quot;postgres-postgresql.postgres.svc.cluster.local&quot;
            - name: DB_PORT
              value: &quot;5432&quot;
            - name: DB_NAME
              value: &quot;postgres&quot;
          ports:
            - containerPort: 8088
          volumeMounts:
            - name: secrets
              mountPath: /etc/secrets
              readOnly: true
---
apiVersion: v1
kind: Service
metadata:
  name: vaultdemo
  namespace: demo-ns
spec:
  ports:
    - name: vaultdemo
      port: 8088         
      targetPort: 8088 
      nodePort: 30003
  selector:
    app: vaultdemo
  type: NodePort
EOF&lt;/code&gt;&lt;/pre&gt;
애플리케이션 배포를 위해서 해당 폴더에서 &lt;code&gt;kubectl apply -f .&lt;/code&gt;를 수행합니다.&lt;/li&gt;
&lt;/ul&gt;
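위 VaultDynamicSecret의 rolloutRestartTargets는 Secret이 갱신될 때 대상 Deployment의 롤아웃을 유발합니다. 이는 kubectl rollout restart가 파드 템플릿 annotation에 타임스탬프를 갱신하는 것과 유사한 방식이라는 가정 아래, 그런 패치를 만들어 보는 스케치입니다(annotation 이름은 예시입니다).

```shell
# (가정) 파드 템플릿 annotation 갱신으로 롤아웃을 유발하는 패치 생성 스케치
ts=$(date -u +%Y-%m-%dT%H:%M:%SZ)          # RFC3339 형식의 현재 시각
patch='{"spec":{"template":{"metadata":{"annotations":{"restartedAt":"'"$ts"'"}}}}}'
echo "$patch"
# 실제 클러스터에서는 kubectl patch deploy vaultdemo -n demo-ns -p "$patch" 와
# 같은 변경이 파드 재생성을 유발합니다.
```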
&lt;p data-ke-size=&quot;size16&quot;&gt;실행된 파드와 서비스를 확인합니다.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;kubectl get po,svc -n demo-ns
NAME                             READY   STATUS    RESTARTS   AGE
pod/vaultdemo-65766f6679-sccdh   1/1     Running   0          4s

NAME                TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
service/vaultdemo   NodePort   10.96.46.22   &amp;lt;none&amp;gt;        8088:30003/TCP   7m37s&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이제 실제 동작을 UI(&lt;a href=&quot;http://localhost:30003&quot;&gt;http://localhost:30003&lt;/a&gt;)에서 확인합니다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1439&quot; data-origin-height=&quot;1320&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/uPcup/btsNi24vetU/TEAZVeQpKk46zx70UGBpAK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/uPcup/btsNi24vetU/TEAZVeQpKk46zx70UGBpAK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/uPcup/btsNi24vetU/TEAZVeQpKk46zx70UGBpAK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FuPcup%2FbtsNi24vetU%2FTEAZVeQpKk46zx70UGBpAK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1439&quot; height=&quot;1320&quot; data-origin-width=&quot;1439&quot; data-origin-height=&quot;1320&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;샘플 애플리케이션에서 노출된 연결 정보를 통해 데이터베이스 접속이 성공하는 것을 알 수 있습니다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1436&quot; data-origin-height=&quot;1363&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/c5NmHf/btsNgwyfJez/KLk9usXTHOpuX4WkD6QMc0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/c5NmHf/btsNgwyfJez/KLk9usXTHOpuX4WkD6QMc0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/c5NmHf/btsNgwyfJez/KLk9usXTHOpuX4WkD6QMc0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fc5NmHf%2FbtsNgwyfJez%2FKLk9usXTHOpuX4WkD6QMc0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1436&quot; height=&quot;1363&quot; data-origin-width=&quot;1436&quot; data-origin-height=&quot;1363&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;세부적인 내용은 CLI를 통해서 살펴보겠습니다.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# Secret이 변경되고, 적용을 위해서 파드가 42~45초 사이로 재생성됨
while true; do kubectl get pod -n demo-ns ; echo; kubectl get secret -n demo-ns vso-db-demo -oyaml |egrep &quot;username|password&quot;; date; sleep 30 ; echo ; done
NAME                         READY   STATUS    RESTARTS   AGE
vaultdemo-7ccfbc4bc5-sv24l   1/1     Running   0          8s

  password: M1h5dkctVXFEZm05TnVvYm1Jamc=
  username: di1rdWJlcm5ldC1kZXYtcG9zdC1kOXZWSnBwWlF5S0RIa2V3VjVEUi0xNzQ0NDc3MDc4
Sun Apr 13 01:58:07 KST 2025

NAME                         READY   STATUS    RESTARTS   AGE
vaultdemo-7ccfbc4bc5-sv24l   1/1     Running   0          39s

  password: M1h5dkctVXFEZm05TnVvYm1Jamc=
  username: di1rdWJlcm5ldC1kZXYtcG9zdC1kOXZWSnBwWlF5S0RIa2V3VjVEUi0xNzQ0NDc3MDc4
Sun Apr 13 01:58:38 KST 2025

NAME                         READY   STATUS    RESTARTS   AGE
vaultdemo-685f97956f-5xn9d   1/1     Running   0          28s

  password: d0YtdzJMSXZocngxbnppLUU1SFo=
  username: di1rdWJlcm5ldC1kZXYtcG9zdC1JNmVsSmxXRHdvY3ZUa0NKUXdtVy0xNzQ0NDc3MTIw
Sun Apr 13 01:59:08 KST 2025

# 실제 postgresql 에 사용자 정보 확인 (계속 추가되고 있음)
# kubectl exec -it -n postgres postgres-postgresql-0 -- sh -c &quot;PGPASSWORD=secret-pass psql -U postgres -h localhost -c '\du'&quot;
                                                  List of roles
                      Role name                      |                         Attributes
-----------------------------------------------------+------------------------------------------------------------
 postgres                                            | Superuser, Create role, Create DB, Replication, Bypass RLS
 v-kubernet-dev-post-1UGLywrLiJTZaaEf8miA-1744476863 | Password valid until 2025-04-12 16:55:28+00
 v-kubernet-dev-post-6UvLI5FPiUF9fMY7QNEp-1744476652 | Password valid until 2025-04-12 16:51:57+00
 v-kubernet-dev-post-7mJgWsGrFamOFToYkZLX-1744476819 | Password valid until 2025-04-12 16:54:44+00
 v-kubernet-dev-post-9heCLQZOM1vEr5F61aC2-1744476733 | Password valid until 2025-04-12 16:53:18+00
 v-kubernet-dev-post-BTeXCNTCA2sRfAXUs84e-1744476390 | Password valid until 2025-04-12 16:47:35+00
 v-kubernet-dev-post-BiTwN5EhluVL7vqbnMng-1744476390 | Password valid until 2025-04-12 16:47:35+00
 v-kubernet-dev-post-F23WnObMJvf1b3EXjMX8-1744476909 | Password valid until 2025-04-12 16:56:14+00
 v-kubernet-dev-post-F66ZR0rV2R9yaoelOtBr-1744476566 | Password valid until 2025-04-12 16:50:31+00
 v-kubernet-dev-post-I6elJlWDwocvTkCJQwmW-1744477120 | Password valid until 2025-04-12 16:59:45+00
 v-kubernet-dev-post-Jh15Ui20lrcxuFo8yH1b-1744476432 | Password valid until 2025-04-12 16:48:17+00
 v-kubernet-dev-post-T8gKAH6Z2h7hPJeDpVs6-1744476521 | Password valid until 2025-04-12 16:49:46+00
...


# 로그 확인
kubectl stern -n demo-ns -l app=vaultdemo
...

kubectl stern -n vault vault-0
...
vault-0 vault 2025-04-10T08:32:09.484Z [INFO]  expiration: revoked lease: lease_id=demo-db/creds/dev-postgres/Ph1sg8efnqssOU5FVIuAqhMW
vault-0 vault 2025-04-10T08:32:54.229Z [INFO]  expiration: revoked lease: lease_id=demo-db/creds/dev-postgres/XKkHZ65ZYLeILvo8GqWfE7F3
vault-0 vault 2025-04-10T08:33:36.804Z [INFO]  expiration: revoked lease: lease_id=demo-db/creds/dev-postgres/rk1KUFsV7SjZPNFsCqCHGZVx
...
&lt;/code&gt;&lt;/pre&gt;
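위 출력에서 Secret의 data 값은 base64로 인코딩되어 있습니다. 예시 값을 직접 디코딩해 보면, Vault database secrets engine이 생성하는 사용자 이름 형식(v-&lt;인증/역할 축약&gt;-&lt;랜덤 문자열&gt;-&lt;생성 시각 epoch&gt;)을 확인할 수 있습니다.

```shell
# 위 출력 예시의 username(base64)을 디코딩해 실제 값을 확인
username=$(printf '%s' 'di1rdWJlcm5ldC1kZXYtcG9zdC1JNmVsSmxXRHdvY3ZUa0NKUXdtVy0xNzQ0NDc3MTIw' | base64 -d)
echo "$username"   # v-kubernet-dev-post-I6elJlWDwocvTkCJQwmW-1744477120
# 마지막 필드는 자격 증명이 생성된 시각(Unix epoch)
epoch=${username##*-}
echo "$epoch"      # 1744477120
```

이 이름이 앞서 psql의 \du 출력에 나타난 role 이름과 일치하는 것도 확인할 수 있습니다.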
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;실습을 마무리하고 kind 클러스터를 삭제합니다.&lt;/p&gt;
&lt;pre class=&quot;properties&quot;&gt;&lt;code&gt;# 클러스터 삭제
kind delete cluster --name myk8s

# 확인
docker ps
cat ~/.kube/config&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;마무리&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;쿠버네티스 환경에서 Secret을 사용하기 위해 Vault를 활용하는 방안을 살펴보았습니다. 실습에서는 간단한 Static Secret과 Dynamic Secret 예제를 다뤘지만, Vault에 Secret 관리를 위임하는 다양한 사례가 있을 수 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Vault를 활용하는 방식 중 Sidecar Injector나 CSI Driver를 사용하는 방식은 간단하지만 기존 워크로드의 변경이 필요하기 때문에 다소 불편할 수 있습니다. 한편 VSO를 사용하는 방법은 구성이 다소 복잡하지만, 개발자 입장에서는 기존 쿠버네티스 Secret 인터페이스를 그대로 사용할 수 있다는 장점이 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;그럼 이번 포스트를 마무리 하겠습니다.&lt;/p&gt;</description>
      <category>Kubernetes</category>
      <category>Secret</category>
      <category>vault</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/45</guid>
      <comments>https://a-person.tistory.com/45#entry45comment</comments>
      <pubDate>Sun, 13 Apr 2025 02:24:14 +0900</pubDate>
    </item>
    <item>
      <title>[8] EKS Upgrade</title>
      <link>https://a-person.tistory.com/44</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;이번 포스트에서는 EKS Upgrade를 실습을 통해서 알아보겠습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;본 실습은 EKS Workshop인 Amazon EKS Upgrades: Strategies and Best Practices를 바탕으로 진행하였음을 알려드립니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;해당 워크샵 링크는 아래와 같습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://catalog.us-east-1.prod.workshops.aws/workshops/693bdee4-bc31-41d5-841f-54e3e54f8f4a/en-US&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://catalog.us-east-1.prod.workshops.aws/workshops/693bdee4-bc31-41d5-841f-54e3e54f8f4a/en-US&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;목차&lt;/h2&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;EKS의 업그레이드와 전략&lt;/li&gt;
&lt;li&gt;실습 환경 개요&lt;/li&gt;
&lt;li&gt;In-place 클러스터 업그레이드&lt;br /&gt;3.1. 컨트롤 플레인 업그레이드 &lt;br /&gt;3.2. Addons 업그레이드&lt;br /&gt;3.3. 관리형 노드 그룹 업그레이드 &lt;br /&gt;3.4. Karpenter 노드 업그레이드&lt;br /&gt;3.5. Self-managed 노드 업그레이드&lt;br /&gt;3.6. Fargate 노드 업그레이드&lt;/li&gt;
&lt;li&gt;Blue/Green 클러스터 업그레이드&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;1. EKS의 업그레이드와 전략&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;쿠버네티스의 버전은 semantic versioning을 따르며, 특정 버전을 x.y.z라고 할 때 각 major.minor.patch 버전을 의미합니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;새로운 쿠버네티스 마이너 버전은 약 4개월마다 릴리즈되며, 모든 버전은 12개월 동안 표준 지원이 제공되고, 한 시점에 3개의 마이너 버전에 대한 표준 지원이 제공됩니다. 표준 지원을 제공한다는 것은 해당 버전에 대해 패치가 지원된다는 의미로 이해하실 수 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Amazon EKS는 쿠버네티스의 릴리즈 사이클을 따릅니다만, 세부적으로는 조금 더 넓은 범위의 지원을 보장합니다. EKS에서 특정 버전이 릴리즈되면 14개월 간 표준 지원이 되며, 또한 총 4개의 마이너 버전에 대한 표준 지원을 제공합니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;현재 지원하는 쿠버네티스 버전에 대해서 아래 Amazon EKS kubernetes 릴리즈 일정을 살펴보시기 바랍니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://docs.aws.amazon.com/ko_kr/eks/latest/userguide/kubernetes-versions.html#kubernetes-release-calendar&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://docs.aws.amazon.com/ko_kr/eks/latest/userguide/kubernetes-versions.html#kubernetes-release-calendar&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1268&quot; data-origin-height=&quot;445&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bhngXa/btsM4kdbhw3/cZvxRIEi0mQdgWuCzlF9D0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bhngXa/btsM4kdbhw3/cZvxRIEi0mQdgWuCzlF9D0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bhngXa/btsM4kdbhw3/cZvxRIEi0mQdgWuCzlF9D0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbhngXa%2FbtsM4kdbhw3%2FcZvxRIEi0mQdgWuCzlF9D0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1268&quot; height=&quot;445&quot; data-origin-width=&quot;1268&quot; data-origin-height=&quot;445&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;EKS는 표준 지원(Standard Support)이 지난 이후에도 12개월의 확장 지원(Extended Support)을 제공하지만, 여기에는 추가 비용이 발생합니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;웹 콘솔에서 EKS 클러스터의 Overview&amp;gt;Kubernetes version settings&amp;gt;Manage를 통해 Upgrade policy를 선택할 수 있습니다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1458&quot; data-origin-height=&quot;542&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/VMhxd/btsM3lXRcnH/jpypNQowdiU7RI6s84o0eK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/VMhxd/btsM3lXRcnH/jpypNQowdiU7RI6s84o0eK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/VMhxd/btsM3lXRcnH/jpypNQowdiU7RI6s84o0eK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FVMhxd%2FbtsM3lXRcnH%2FjpypNQowdiU7RI6s84o0eK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1458&quot; height=&quot;542&quot; data-origin-width=&quot;1458&quot; data-origin-height=&quot;542&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이때 표준 지원을 선택하는 경우, 표준 지원 기간이 종료되면 자동 업그레이드 되는 점을 유의하셔야 합니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;버전 업그레이드에서 고민해야 할 부분이 있습니다. 특정 버전이 14개월 동안 표준 지원되므로 14개월 뒤에 업그레이드하면 된다고 생각하실 수도 있지만, 1개 버전만 업그레이드하는 경우 다음 버전의 EOS가 곧 도래하기 때문에, 몇 단계를 더 업그레이드해야 다시 1년가량 안정적으로 사용하실 수 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;예를 들어, 1.27 버전이 2023/05/24~2024/07/24 까지 표준 지원 기간이지만, 2024년 7월에 1.28로 업그레이드를 해도 2024/12/26일에 다시 EOS가 도래합니다. 그렇기 때문에 실제로는 1.27-&amp;gt;1.28-&amp;gt;1.29-&amp;gt;1.30까지 업그레이드를 해야 이후 1년 정도 EOS 이슈가 없이 사용할 수 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;EKS 업그레이드 과정&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;EKS의 업그레이드 과정은 실제 업그레이드에 대한 검토와 백업과 같은 내용을 제외하고 클러스터 자체를 업그레이드 하는 작업에 대해서만 설명합니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;전반적인 업그레이드 절차는 아래와 같이 이뤄집니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;1) 컨트롤 플레인 업그레이드&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;2) Add-on 업그레이드&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;3) 데이터 플레인 업그레이드&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이때, 데이터 플레인의 형태가 다양한 경우, 세부적으로 데이터 플레인의 업그레이드 방식이 달라질 수 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;EKS 업그레이드 전략&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;EKS의 업그레이드 전략은 In-place 업그레이드와 Blue/Green 업그레이드가 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In-place 업그레이드는 현재 운영 중인 클러스터에서 버전을 업그레이드 하는 것을 의미하며, Blue/Green 업그레이드는 신규 클러스터(Green)를 생성해 워크로드를 생성한 뒤 신규 클러스터로 전환하는 방법을 의미합니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;업그레이드에 대한 세부적인 정보는 아래 문서를 참고하시기 바랍니다.&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Best Practices for Cluster Upgrades&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://docs.aws.amazon.com/eks/latest/best-practices/cluster-upgrades.html&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://docs.aws.amazon.com/eks/latest/best-practices/cluster-upgrades.html&lt;/a&gt;&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Kubernetes cluster upgrade: the blue-green deployment strategy&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://aws.amazon.com/ko/blogs/containers/kubernetes-cluster-upgrade-the-blue-green-deployment-strategy/&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://aws.amazon.com/ko/blogs/containers/kubernetes-cluster-upgrade-the-blue-green-deployment-strategy/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;또한 중요한 사항은 Kubernetes Version Skew 정책입니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://kubernetes.io/releases/version-skew-policy/#supported-version-skew&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://kubernetes.io/releases/version-skew-policy/#supported-version-skew&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Kubernetes version skew 정책의 의미는 주요 컴포넌트(ex. kube-apiserver, kubelet, etc) 간 버전 차이가 얼마나 허용되는지에 대한 규칙입니다. In-place 업그레이드에서 여러 버전을 순차적으로 업그레이드할 수 있는데, 컨트롤 플레인과 데이터 플레인 간 허용되는 버전 내에서 업그레이드를 고려해야 합니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;예를 들어, kube-apiserver의 버전이 1.32일 때 허용되는 kubelet, kube-proxy의 version skew는 1.29까지입니다. 그러므로 데이터 플레인이 1.29인 상태에서 컨트롤 플레인을 1.32까지 업그레이드한 뒤, 노드 그룹의 버전을 순차적으로 업그레이드하시면 됩니다.&lt;/p&gt;
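본문의 version skew 규칙(kubelet/kube-proxy는 kube-apiserver보다 최대 3개 마이너 버전까지 낮을 수 있음)을 간단한 셸 함수로 표현하면 다음과 같습니다(함수 이름은 설명용 예시입니다).

```shell
# kube-apiserver와 kubelet의 마이너 버전을 받아 허용 skew(3) 이내인지 검사
skew_ok() {
  api=$1; kubelet=$2
  diff=$((api - kubelet))
  # kubelet은 apiserver보다 높을 수 없고, 3개 마이너 버전까지만 낮을 수 있음
  [ "$diff" -ge 0 ] && [ "$diff" -le 3 ]
}

skew_ok 32 29 && echo "apiserver 1.32 / kubelet 1.29: 허용"
skew_ok 32 28 || echo "apiserver 1.32 / kubelet 1.28: skew 초과"
```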
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;또한 쿠버네티스의 In-place 업그레이드는 단계적인 버전 업그레이드만 지원된다는 점도 유의해야 합니다. 한 번에 여러 버전을 건너뛰어 업그레이드할 수 없습니다.&lt;/p&gt;
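예를 들어 본 실습처럼 1.25에서 출발해 1.28까지 올린다면 마이너 버전을 하나씩 거치는 경로가 필요합니다. 이를 계산해 보는 간단한 스케치입니다.

```shell
# 시작/목표 마이너 버전 사이의 단계적 업그레이드 경로를 출력
start=25; target=28
path="1.$start"
v=$start
while [ "$v" -lt "$target" ]; do
  v=$((v + 1))
  path="$path -> 1.$v"
done
echo "$path"   # 1.25 -> 1.26 -> 1.27 -> 1.28
```

각 단계마다 컨트롤 플레인, add-on, 데이터 플레인 업그레이드를 반복해야 하므로, 건너뛸 버전 수가 많을수록 Blue/Green 전략이 상대적으로 유리해집니다.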
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;업그레이드 전 사전 검토 과정에서는 Cluster Insight 의 Upgrade insight를 검토해보기 바랍니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;여기서는 아래와 같이 Kubernetes version skew, 클러스터 상태, add-on 버전 호환성, Deprecated API에 대한 검토가 이뤄지는 것을 알 수 있습니다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1776&quot; data-origin-height=&quot;1085&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/uXuwg/btsM3UZ2XBa/bY1Hh35y45jreKSkapHqRK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/uXuwg/btsM3UZ2XBa/bY1Hh35y45jreKSkapHqRK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/uXuwg/btsM3UZ2XBa/bY1Hh35y45jreKSkapHqRK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FuXuwg%2FbtsM3UZ2XBa%2FbY1Hh35y45jreKSkapHqRK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1776&quot; height=&quot;1085&quot; data-origin-width=&quot;1776&quot; data-origin-height=&quot;1085&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이후 실습을 통해서 상세한 내용을 살펴보겠습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;2. 실습 환경 개요&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;본 실습 환경은 EKS 1.25 클러스터이며, Extended upgrade policy에 해당하는 것을 알 수 있습니다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1728&quot; data-origin-height=&quot;91&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bzlMjj/btsM3W4DN2p/3moeuz682R6XeqtQFsGjFk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bzlMjj/btsM3W4DN2p/3moeuz682R6XeqtQFsGjFk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bzlMjj/btsM3W4DN2p/3moeuz682R6XeqtQFsGjFk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbzlMjj%2FbtsM3W4DN2p%2F3moeuz682R6XeqtQFsGjFk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1728&quot; height=&quot;91&quot; data-origin-width=&quot;1728&quot; data-origin-height=&quot;91&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;또한 Compute 정보를 살펴보면 다양한 데이터 플레인 형태를 가지고 있습니다. Nodes를 보면 2개의 Managed node가 있고, 2개의 Self-managed node가 있습니다(실제로 1개는 Karpenter 노드입니다). 그리고 Fargate 노드도 확인됩니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;그러므로 컨트롤 플레인을 업그레이드한 뒤, 각 노드 그룹 유형별로 다른 업그레이드 방식을 실습을 통해 살펴보겠습니다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1776&quot; data-origin-height=&quot;1039&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/c0KuXD/btsM3YH9dwZ/HxBwIdkzzsF5aMd4l2xC90/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/c0KuXD/btsM3YH9dwZ/HxBwIdkzzsF5aMd4l2xC90/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/c0KuXD/btsM3YH9dwZ/HxBwIdkzzsF5aMd4l2xC90/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fc0KuXD%2FbtsM3YH9dwZ%2FHxBwIdkzzsF5aMd4l2xC90%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1776&quot; height=&quot;1039&quot; data-origin-width=&quot;1776&quot; data-origin-height=&quot;1039&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;워크샵에서는 웹 콘솔뿐 아니라 code-server를 제공하며, code-server에 접속하면 terraform 파일(녹색)과 git-ops-repo에 대한 로컬 파일(빨간색)이 저장되어 있습니다. code-server 우측에서는 Terminal 사용이나 파일 편집(파란색)이 가능합니다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1527&quot; data-origin-height=&quot;616&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bklZqw/btsM3Y2tk3z/zzdU1T12PCAzZGt2UkNG00/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bklZqw/btsM3Y2tk3z/zzdU1T12PCAzZGt2UkNG00/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bklZqw/btsM3Y2tk3z/zzdU1T12PCAzZGt2UkNG00/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbklZqw%2FbtsM3Y2tk3z%2FzzdU1T12PCAzZGt2UkNG00%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1527&quot; height=&quot;616&quot; data-origin-width=&quot;1527&quot; data-origin-height=&quot;616&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;git-ops-repo는 CodeCommit이 remote로 지정되어 있으며, Argo CD가 해당 CodeCommit 리포지터리를 바라보도록 구성되어 있습니다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1682&quot; data-origin-height=&quot;331&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/13HH9/btsM3hHUC4p/Z6wKvDeyKHl2NfnNEjBif0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/13HH9/btsM3hHUC4p/Z6wKvDeyKHl2NfnNEjBif0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/13HH9/btsM3hHUC4p/Z6wKvDeyKHl2NfnNEjBif0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2F13HH9%2FbtsM3hHUC4p%2FZ6wKvDeyKHl2NfnNEjBif0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1682&quot; height=&quot;331&quot; data-origin-width=&quot;1682&quot; data-origin-height=&quot;331&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;그리고 EKS에서는 Argo CD에 의해 app of apps 형태로 아래와 같은 파드들이 실행 중입니다.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;ec2-user:~/environment/terraform:$ kubectl get application -A
NAMESPACE   NAME        SYNC STATUS   HEALTH STATUS
argocd      apps        Synced        Healthy
argocd      assets      Synced        Healthy
argocd      carts       Synced        Healthy
argocd      catalog     Synced        Healthy
argocd      checkout    Synced        Healthy
argocd      karpenter   Synced        Healthy
argocd      orders      Synced        Healthy
argocd      other       Synced        Healthy
argocd      rabbitmq    Synced        Healthy
argocd      ui          OutOfSync     Healthy

ec2-user:~/environment/terraform:$ kubectl get po -A |grep -v kube-system
NAMESPACE     NAME                                                        READY   STATUS    RESTARTS       AGE
argocd        argo-cd-argocd-application-controller-0                     1/1     Running   0              2d8h
argocd        argo-cd-argocd-applicationset-controller-74d9c9c5c7-n5k95   1/1     Running   0              2d8h
argocd        argo-cd-argocd-dex-server-6dbbd57479-mst55                  1/1     Running   0              2d8h
argocd        argo-cd-argocd-notifications-controller-fb4b954d5-v9dw7     1/1     Running   0              2d8h
argocd        argo-cd-argocd-redis-76b4c599dc-c8d2j                       1/1     Running   0              2d8h
argocd        argo-cd-argocd-repo-server-6b777b579d-b7ssz                 1/1     Running   0              2d8h
argocd        argo-cd-argocd-server-86bdbd7b89-gzm7d                      1/1     Running   0              2d8h
assets        assets-7ccc84cb4d-2p284                                     1/1     Running   0              2d8h
carts         carts-7ddbc698d8-wl9k9                                      1/1     Running   1 (2d8h ago)   2d8h
carts         carts-dynamodb-6594f86bb9-8gwpf                             1/1     Running   0              2d8h
catalog       catalog-857f89d57d-nrnf7                                    1/1     Running   3 (2d8h ago)   2d8h
catalog       catalog-mysql-0                                             1/1     Running   0              2d8h
checkout      checkout-558f7777c-z5qvh                                    1/1     Running   0              17h
checkout      checkout-redis-f54bf7cb5-r2sdp                              1/1     Running   0              17h
karpenter     karpenter-74c6ffc5d9-8m6mc                                  1/1     Running   0              2d8h
karpenter     karpenter-74c6ffc5d9-nj7lc                                  1/1     Running   0              2d8h
orders        orders-5b97745747-7rwdl                                     1/1     Running   2 (2d8h ago)   2d8h
orders        orders-mysql-b9b997d9d-bnbmn                                1/1     Running   0              2d8h
rabbitmq      rabbitmq-0                                                  1/1     Running   0              2d8h
ui            ui-5dfb7d65fc-nfrjw                                         1/1     Running   0              2d8h&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;환경을 이해하기 위해 CodeCommit으로 push를 해서 Argo CD sync가 이뤄지도록 변경을 수행해보겠습니다.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# service를 nlb로 노출
cat &amp;lt;&amp;lt; EOF &amp;gt; ~/environment/eks-gitops-repo/apps/ui/service-nlb.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-type: external
  labels:
    app.kubernetes.io/instance: ui
    app.kubernetes.io/name: ui
  name: ui-nlb
  namespace: ui
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app.kubernetes.io/instance: ui
    app.kubernetes.io/name: ui
EOF

cat &amp;lt;&amp;lt; EOF &amp;gt; ~/environment/eks-gitops-repo/apps/ui/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: ui
resources:
  - namespace.yaml
  - configMap.yaml
  - serviceAccount.yaml
  - service.yaml
  - deployment.yaml
  - hpa.yaml
  - service-nlb.yaml
EOF

#
cd ~/environment/eks-gitops-repo/
git add apps/ui/service-nlb.yaml apps/ui/kustomization.yaml
git commit -m &quot;Add to ui nlb&quot;
git push
argocd app sync ui
...

# Check the UI URL
kubectl get svc -n ui ui-nlb -o jsonpath='{.status.loadBalancer.ingress[0].hostname}' | awk '{ print &quot;UI URL = http://&quot;$1&quot;&quot;}'&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now let's perform the actual upgrade. A cluster can be upgraded through the web console, the CLI, or IaC tools; in this lab, every upgrade is done with Terraform.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;3. In-place Cluster Upgrade&lt;/h2&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;3.1. Control Plane Upgrade&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;EKS control plane upgrades are known to be performed in a Blue/Green fashion. If an issue occurs during the upgrade, it is rolled back to minimize impact. On rollback, EKS evaluates the cause of the failure and provides guidance for resolving it, so you can fix the problem and retry the upgrade.&lt;/p&gt;
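The progress and outcome of a control plane upgrade can also be observed from the CLI. A minimal sketch, using the cluster name from this lab; the `$UPDATE_ID` variable is a placeholder you fill in from the first command's output:

```shell
# List recent update operations on the cluster
aws eks list-updates --name eksworkshop-eksctl --region us-west-2

# Inspect one update's type, status, and any errors
# (UPDATE_ID is a placeholder taken from the list-updates output above)
aws eks describe-update --name eksworkshop-eksctl --update-id "$UPDATE_ID" \
  --query 'update.{Type:type,Status:status,Errors:errors}'
```

A failed update shows up here with status `Failed` together with the error details that the rollback guidance is based on.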
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Before upgrading the control plane, let's monitor service calls.&lt;/p&gt;
&lt;pre class=&quot;reasonml&quot;&gt;&lt;code&gt;export UI_WEB=$(kubectl get svc -n ui ui-nlb -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')/actuator/health/liveness

while true; do date; curl -s $UI_WEB; echo; aws eks describe-cluster --name eksworkshop-eksctl | egrep 'version|endpoint&quot;|issuer|platformVersion'; echo ; sleep 2; echo; done&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the Terraform code, the EKS version is stored in &lt;code&gt;variables.tf&lt;/code&gt;. Change &lt;code&gt;cluster_version&lt;/code&gt; there from 1.25 to 1.26.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;variable &quot;cluster_version&quot; {
  description = &quot;EKS cluster version.&quot;
  type        = string
  default     = &quot;1.25&quot;
}

variable &quot;mng_cluster_version&quot; {
  description = &quot;EKS cluster mng version.&quot;
  type        = string
  default     = &quot;1.25&quot;
}


variable &quot;ami_id&quot; {
  description = &quot;EKS AMI ID for node groups&quot;
  type        = string
  default     = &quot;&quot;
}&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
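Instead of editing the file, the same defaults can be overridden on the command line; `-var` takes precedence over the defaults declared in `variables.tf`. A sketch of this alternative:

```shell
# Preview the control plane version change without editing variables.tf
terraform plan -var cluster_version=1.26

# Apply it; the mng_cluster_version default is left untouched,
# so the node groups stay on 1.25 for now
terraform apply -auto-approve -var cluster_version=1.26
```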
&lt;p data-ke-size=&quot;size16&quot;&gt;Running the following in the terminal completes in roughly 10 minutes.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;ec2-user:~/environment/terraform:$ terraform apply -auto-approve
aws_iam_user.argocd_user: Refreshing state... [id=argocd-user]
module.vpc.aws_vpc.this[0]: Refreshing state... [id=vpc-0cf5ec98d2e448575]
module.eks.data.aws_partition.current[0]: Reading...
data.aws_caller_identity.current: Reading...
...
Plan: 6 to add, 13 to change, 6 to destroy.
...
...
Apply complete! Resources: 6 added, 2 changed, 6 destroyed.

Outputs:

configure_kubectl = &quot;aws eks --region us-west-2 update-kubeconfig --name eksworkshop-eksctl&quot;&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;You can also see in the web console that the upgrade has been triggered.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1785&quot; data-origin-height=&quot;654&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/Y46pH/btsM4aVTnYT/QvLkbVRG26JRKMj3JXlBYK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/Y46pH/btsM4aVTnYT/QvLkbVRG26JRKMj3JXlBYK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/Y46pH/btsM4aVTnYT/QvLkbVRG26JRKMj3JXlBYK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FY46pH%2FbtsM4aVTnYT%2FQvLkbVRG26JRKMj3JXlBYK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1785&quot; height=&quot;654&quot; data-origin-width=&quot;1785&quot; data-origin-height=&quot;654&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The timestamps and status before and after the control plane upgrade look like this.&lt;/p&gt;
&lt;pre class=&quot;jboss-cli&quot;&gt;&lt;code&gt;# Before the upgrade
Tue Apr  1 14:22:41 UTC 2025
{&quot;status&quot;:&quot;UP&quot;}
        &quot;version&quot;: &quot;1.25&quot;,
        &quot;endpoint&quot;: &quot;https://C55B5928163C30776DEF011A92FE870C.gr7.us-west-2.eks.amazonaws.com&quot;,
                &quot;issuer&quot;: &quot;https://oidc.eks.us-west-2.amazonaws.com/id/C55B5928163C30776DEF011A92FE870C&quot;
        &quot;platformVersion&quot;: &quot;eks.44&quot;,
...

# After the upgrade
Tue Apr  1 14:30:20 UTC 2025
{&quot;status&quot;:&quot;UP&quot;}
        &quot;version&quot;: &quot;1.26&quot;,
        &quot;endpoint&quot;: &quot;https://C55B5928163C30776DEF011A92FE870C.gr7.us-west-2.eks.amazonaws.com&quot;,
                &quot;issuer&quot;: &quot;https://oidc.eks.us-west-2.amazonaws.com/id/C55B5928163C30776DEF011A92FE870C&quot;
        &quot;platformVersion&quot;: &quot;eks.45&quot;,&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;After the upgrade, the control plane version differs from the data plane version, but this is still within the supported Kubernetes version skew. You can review the cluster's upgrade insights as shown below.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1724&quot; data-origin-height=&quot;509&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/TbSa6/btsM3o1nvDJ/MVckNiPpEs3ooj5k2H9guk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/TbSa6/btsM3o1nvDJ/MVckNiPpEs3ooj5k2H9guk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/TbSa6/btsM3o1nvDJ/MVckNiPpEs3ooj5k2H9guk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FTbSa6%2FbtsM3o1nvDJ%2FMVckNiPpEs3ooj5k2H9guk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1724&quot; height=&quot;509&quot; data-origin-width=&quot;1724&quot; data-origin-height=&quot;509&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
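The same upgrade insights shown in the console can be queried from the CLI; a sketch using the lab's cluster name:

```shell
# List upgrade insights and their current status for the cluster
aws eks list-insights --cluster-name eksworkshop-eksctl --region us-west-2 \
  --query 'insights[].{Name:name,Status:insightStatus.status}'
```

Any insight flagged here (for example deprecated API usage) should be resolved before the next minor-version upgrade.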
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;3.2. Add-on Upgrades&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now let's upgrade the add-ons.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;You can check the available upgrade versions with eksctl.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;ec2-user:~/environment:$ eksctl get addon --cluster eksworkshop-eksctl
NAME                    VERSION                 STATUS  ISSUES  IAMROLE                                                                                         UPDATE AVAILABLE                                                                                                                                                   CONFIGURATION VALUES
aws-ebs-csi-driver      v1.41.0-eksbuild.1      ACTIVE  0       arn:aws:iam::181150650881:role/eksworkshop-eksctl-ebs-csi-driver-2025033005221599450000001d
coredns                 v1.8.7-eksbuild.10      ACTIVE  0                                                                                                       v1.9.3-eksbuild.22,v1.9.3-eksbuild.21,v1.9.3-eksbuild.19,v1.9.3-eksbuild.17,v1.9.3-eksbuild.15,v1.9.3-eksbuild.11,v1.9.3-eksbuild.10,v1.9.3-eksbuild.9,v1.9.3-eksbuild.7,v1.9.3-eksbuild.6,v1.9.3-eksbuild.5,v1.9.3-eksbuild.3,v1.9.3-eksbuild.2
kube-proxy              v1.25.16-eksbuild.8     ACTIVE  0                                                                                                       v1.26.15-eksbuild.24,v1.26.15-eksbuild.19,v1.26.15-eksbuild.18,v1.26.15-eksbuild.14,v1.26.15-eksbuild.10,v1.26.15-eksbuild.5,v1.26.15-eksbuild.2,v1.26.13-eksbuild.2,v1.26.11-eksbuild.4,v1.26.11-eksbuild.1,v1.26.9-eksbuild.2,v1.26.7-eksbuild.2,v1.26.6-eksbuild.2,v1.26.6-eksbuild.1,v1.26.4-eksbuild.1,v1.26.2-eksbuild.1
vpc-cni                 v1.19.3-eksbuild.1      ACTIVE  0&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For each add-on, the versions compatible with each EKS release are listed on the pages below.&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Amazon VPC CNI: &lt;a href=&quot;https://docs.aws.amazon.com/eks/latest/userguide/managing-vpc-cni.html&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://docs.aws.amazon.com/eks/latest/userguide/managing-vpc-cni.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;coredns: &lt;a href=&quot;https://docs.aws.amazon.com/eks/latest/userguide/managing-coredns.html&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://docs.aws.amazon.com/eks/latest/userguide/managing-coredns.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;kube-proxy: &lt;a href=&quot;https://docs.aws.amazon.com/eks/latest/userguide/managing-kube-proxy.html&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://docs.aws.amazon.com/eks/latest/userguide/managing-kube-proxy.html&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;You can also query the versions compatible with 1.26 with the commands below. VPC CNI and the EBS CSI driver are already on their latest versions, so we check coredns (v1.8.7-eksbuild.10) and kube-proxy (v1.25.16-eksbuild.8).&lt;/p&gt;
&lt;pre class=&quot;gherkin&quot;&gt;&lt;code&gt;ec2-user:~/environment/terraform:$ aws eks describe-addon-versions --addon-name coredns --kubernetes-version 1.26 --output table \
    --query &quot;addons[].addonVersions[:10].{Version:addonVersion,DefaultVersion:compatibilities[0].defaultVersion}&quot;
------------------------------------------
|          DescribeAddonVersions         |
+-----------------+----------------------+
| DefaultVersion  |       Version        |
+-----------------+----------------------+
|  False          |  v1.9.3-eksbuild.22  |
|  False          |  v1.9.3-eksbuild.21  |
|  False          |  v1.9.3-eksbuild.19  |
|  False          |  v1.9.3-eksbuild.17  |
|  False          |  v1.9.3-eksbuild.15  |
|  False          |  v1.9.3-eksbuild.11  |
|  False          |  v1.9.3-eksbuild.10  |
|  False          |  v1.9.3-eksbuild.9   |
|  True           |  v1.9.3-eksbuild.7   |
|  False          |  v1.9.3-eksbuild.6   |
+-----------------+----------------------+
ec2-user:~/environment/terraform:$ aws eks describe-addon-versions --addon-name kube-proxy --kubernetes-version 1.26 --output table \
    --query &quot;addons[].addonVersions[:10].{Version:addonVersion,DefaultVersion:compatibilities[0].defaultVersion}&quot;
--------------------------------------------
|           DescribeAddonVersions          |
+-----------------+------------------------+
| DefaultVersion  |        Version         |
+-----------------+------------------------+
|  False          |  v1.26.15-eksbuild.24  |
|  False          |  v1.26.15-eksbuild.19  |
|  False          |  v1.26.15-eksbuild.18  |
|  False          |  v1.26.15-eksbuild.14  |
|  False          |  v1.26.15-eksbuild.10  |
|  False          |  v1.26.15-eksbuild.5   |
|  False          |  v1.26.15-eksbuild.2   |
|  False          |  v1.26.13-eksbuild.2   |
|  False          |  v1.26.11-eksbuild.4   |
|  False          |  v1.26.11-eksbuild.1   |
+-----------------+------------------------+&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Open &lt;code&gt;addons.tf&lt;/code&gt; in the Terraform code and change the versions below to the latest compatible ones.&lt;/p&gt;
&lt;pre class=&quot;nix&quot;&gt;&lt;code&gt;  eks_addons = {
    coredns = {
      addon_version = &quot;v1.8.7-eksbuild.10&quot;
    }
    kube-proxy = {
      addon_version = &quot;v1.25.16-eksbuild.8&quot;
    }
    vpc-cni = {
      most_recent = true
    }
    aws-ebs-csi-driver = {
      service_account_role_arn = module.ebs_csi_driver_irsa.iam_role_arn
    }
  }
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Run terraform to apply the upgrade.&lt;/p&gt;
&lt;pre class=&quot;groovy&quot;&gt;&lt;code&gt;ec2-user:~/environment/terraform:$ terraform apply -auto-approve
aws_iam_user.argocd_user: Refreshing state... [id=argocd-user]
data.aws_caller_identity.current: Reading...

...

Apply complete! Resources: 0 added, 2 changed, 0 destroyed.

Outputs:

configure_kubectl = &quot;aws eks --region us-west-2 update-kubeconfig --name eksworkshop-eksctl&quot;&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
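After the apply, it is worth confirming that the add-ons actually report the upgraded versions. A sketch with the add-on names used in this lab:

```shell
# Confirm each add-on now reports the upgraded version and an ACTIVE status
for ADDON in coredns kube-proxy; do
  aws eks describe-addon --cluster-name eksworkshop-eksctl --addon-name "$ADDON" \
    --query 'addon.{Name:addonName,Version:addonVersion,Status:status}'
done
```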
&lt;p data-ke-size=&quot;size16&quot;&gt;It took about 1 minute 30 seconds.&lt;/p&gt;
&lt;pre class=&quot;erlang&quot;&gt;&lt;code&gt;Tue Apr  1 14:58:07 UTC 2025
{&quot;status&quot;:&quot;UP&quot;}
        &quot;version&quot;: &quot;1.26&quot;,
        &quot;endpoint&quot;: &quot;https://C55B5928163C30776DEF011A92FE870C.gr7.us-west-2.eks.amazonaws.com&quot;,
                &quot;issuer&quot;: &quot;https://oidc.eks.us-west-2.amazonaws.com/id/C55B5928163C30776DEF011A92FE870C&quot;
        &quot;platformVersion&quot;: &quot;eks.45&quot;,
...
Tue Apr  1 14:59:36 UTC 2025
{&quot;status&quot;:&quot;UP&quot;}
        &quot;version&quot;: &quot;1.26&quot;,
        &quot;endpoint&quot;: &quot;https://C55B5928163C30776DEF011A92FE870C.gr7.us-west-2.eks.amazonaws.com&quot;,
                &quot;issuer&quot;: &quot;https://oidc.eks.us-west-2.amazonaws.com/id/C55B5928163C30776DEF011A92FE870C&quot;
        &quot;platformVersion&quot;: &quot;eks.45&quot;,&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The affected pods are rolled out. Watching the process, coredns has a PDB, so an old pod only starts Terminating after a new pod is Running. kube-proxy is a DaemonSet, so each pod is terminated and then replaced by a new one.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;ec2-user:~/environment:$ kubectl get pdb -n kube-system
NAME                           MIN AVAILABLE   MAX UNAVAILABLE   ALLOWED DISRUPTIONS   AGE
aws-load-balancer-controller   N/A             1                 1                     2d9h
coredns                        N/A             1                 1                     2d9h
ebs-csi-controller             N/A             1                 1                     2d9h

ec2-user:~/environment:$ kubectl get po -n kube-system -w 
NAME                                            READY   STATUS    RESTARTS   AGE
...
kube-proxy-rdhmw                                1/1     Terminating   0          2d9h
coredns-98f76fbc4-d7l7z                         1/1     Terminating   0          2d9h
kube-proxy-rdhmw                                0/1     Terminating   0          2d9h
kube-proxy-rdhmw                                0/1     Terminating   0          2d9h
kube-proxy-rdhmw                                0/1     Terminating   0          2d9h
coredns-58cc4d964b-5rbmb                        0/1     Pending       0          0s
coredns-58cc4d964b-5rbmb                        0/1     Pending       0          0s
coredns-58cc4d964b-5rbmb                        0/1     ContainerCreating   0          0s
coredns-58cc4d964b-d5zmg                        0/1     Pending             0          0s
coredns-58cc4d964b-d5zmg                        0/1     Pending             0          0s
kube-proxy-gbn46                                0/1     Pending             0          1s
kube-proxy-gbn46                                0/1     Pending             0          1s
coredns-58cc4d964b-d5zmg                        0/1     ContainerCreating   0          0s
kube-proxy-gbn46                                0/1     ContainerCreating   0          1s
coredns-98f76fbc4-d7l7z                         0/1     Terminating         0          2d9h
coredns-98f76fbc4-d7l7z                         0/1     Terminating         0          2d9h
coredns-98f76fbc4-d7l7z                         0/1     Terminating         0          2d9h
coredns-58cc4d964b-d5zmg                        0/1     Running             0          2s
coredns-58cc4d964b-d5zmg                        1/1     Running             0          2s
coredns-98f76fbc4-brtkn                         1/1     Terminating         0          2d9h
coredns-58cc4d964b-5rbmb                        0/1     Running             0          3s
coredns-58cc4d964b-5rbmb                        1/1     Running             0          3s
kube-proxy-gbn46                                1/1     Running             0          3s
kube-proxy-rkvpc                                1/1     Terminating         0          2d9h
coredns-98f76fbc4-brtkn                         0/1     Terminating         0          2d9h
coredns-98f76fbc4-brtkn                         0/1     Terminating         0          2d9h
coredns-98f76fbc4-brtkn                         0/1     Terminating         0          2d9h
kube-proxy-rkvpc                                0/1     Terminating         0          2d9h
kube-proxy-rkvpc                                0/1     Terminating         0          2d9h
kube-proxy-rkvpc                                0/1     Terminating         0          2d9h
kube-proxy-tt8mk                                0/1     Pending             0          0s
kube-proxy-tt8mk                                0/1     Pending             0          0s
kube-proxy-tt8mk                                0/1     ContainerCreating   0          0s
kube-proxy-tt8mk                                1/1     Running             0          2s
kube-proxy-psbfc                                1/1     Terminating         0          2d9h
kube-proxy-psbfc                                0/1     Terminating         0          2d9h
kube-proxy-psbfc                                0/1     Terminating         0          2d9h
kube-proxy-psbfc                                0/1     Terminating         0          2d9h
kube-proxy-vv6cz                                0/1     Pending             0          0s
kube-proxy-vv6cz                                0/1     Pending             0          0s
kube-proxy-vv6cz                                0/1     ContainerCreating   0          0s
kube-proxy-vv6cz                                1/1     Running             0          2s
kube-proxy-sv977                                1/1     Terminating         0          2d9h
kube-proxy-sv977                                0/1     Terminating         0          2d9h
kube-proxy-sv977                                0/1     Terminating         0          2d9h
kube-proxy-sv977                                0/1     Terminating         0          2d9h
kube-proxy-t9xxk                                0/1     Pending             0          0s
kube-proxy-t9xxk                                0/1     Pending             0          0s
kube-proxy-t9xxk                                0/1     ContainerCreating   0          0s
kube-proxy-t9xxk                                1/1     Running             0          2s
kube-proxy-5zz6t                                1/1     Terminating         0          17h
kube-proxy-5zz6t                                0/1     Terminating         0          17h
kube-proxy-5zz6t                                0/1     Terminating         0          17h
kube-proxy-5zz6t                                0/1     Terminating         0          17h
kube-proxy-zh6st                                0/1     Pending             0          0s
kube-proxy-zh6st                                0/1     Pending             0          0s
kube-proxy-zh6st                                0/1     ContainerCreating   0          0s
kube-proxy-zh6st                                1/1     Running             0          2s
kube-proxy-jbwlb                                1/1     Terminating         0          2d9h
kube-proxy-jbwlb                                0/1     Terminating         0          2d9h
kube-proxy-jbwlb                                0/1     Terminating         0          2d9h
kube-proxy-jbwlb                                0/1     Terminating         0          2d9h
kube-proxy-6jlqj                                0/1     Pending             0          0s
kube-proxy-6jlqj                                0/1     Pending             0          0s
kube-proxy-6jlqj                                0/1     ContainerCreating   0          0s
kube-proxy-6jlqj                                1/1     Running             0          2s&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
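Instead of eyeballing the watch output, the rollouts can be checked for convergence directly:

```shell
# Block until the coredns Deployment and kube-proxy DaemonSet rollouts finish
kubectl -n kube-system rollout status deployment/coredns --timeout=5m
kubectl -n kube-system rollout status daemonset/kube-proxy --timeout=5m
```

Both commands exit non-zero on timeout, which makes them convenient gates in an upgrade script.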
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;3.3. Managed Node Group Upgrade&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Even within an in-place cluster upgrade, managed node groups can be upgraded either in-place or Blue/Green.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Managed Node Group In-place Upgrade&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;An in-place upgrade is implemented as an incremental rolling upgrade: new nodes are first added to the ASG, and old nodes are then cordoned, drained, and removed.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The process can be understood as four phases: setup &amp;gt; scale-up &amp;gt; upgrade &amp;gt; scale-down.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;1) Setup phase&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The ASG is updated to use the latest launch template version, and the &lt;code&gt;updateConfig&lt;/code&gt; property determines the maximum number of nodes to upgrade in parallel.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Note that updateConfig can be found among the node group's properties.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;2007&quot; data-origin-height=&quot;444&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/Pnu1o/btsM5Xnr5me/KKqm1U3jmPUuvQUWmQ7tmK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/Pnu1o/btsM5Xnr5me/KKqm1U3jmPUuvQUWmQ7tmK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/Pnu1o/btsM5Xnr5me/KKqm1U3jmPUuvQUWmQ7tmK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FPnu1o%2FbtsM5Xnr5me%2FKKqm1U3jmPUuvQUWmQ7tmK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2007&quot; height=&quot;444&quot; data-origin-width=&quot;2007&quot; data-origin-height=&quot;444&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Here, the Default update strategy adds new nodes first and then removes old nodes, while the Minimal strategy removes old nodes immediately. Node groups where cost matters most can choose Minimal.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;2) Scale-up phase&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The ASG's maximum size and desired size are incremented, by up to twice the number of Availability Zones the group is deployed in.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As the node group scales up in this phase, old nodes are marked unschedulable, and &lt;code&gt;node.kubernetes.io/exclude-from-external-load-balancers=true&lt;/code&gt; is set so the nodes can be removed from load balancers.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;3) Upgrade phase&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Pods are drained from old nodes and the nodes are cordoned, after which a termination request is sent to the ASG. This proceeds in batches of the configured max unavailable, and the upgrade phase repeats until every old node is removed.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;4) Scale-down phase&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The ASG's maximum and desired size are decremented by one at a time until they return to their pre-upgrade values.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
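The parallelism used in the upgrade phase comes from the node group's updateConfig, which can be read and changed from the CLI. A sketch, assuming the lab's cluster and an illustrative node group name (check yours with `aws eks list-nodegroups`):

```shell
# Show the current updateConfig of a node group
# ("initial" is an illustrative nodegroup name; substitute your own)
aws eks describe-nodegroup --cluster-name eksworkshop-eksctl --nodegroup-name initial \
  --query 'nodegroup.updateConfig'

# Allow up to 2 nodes to be upgraded in parallel
aws eks update-nodegroup-config --cluster-name eksworkshop-eksctl --nodegroup-name initial \
  --update-config maxUnavailable=2
```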
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now let's run the in-place upgrade.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In &lt;code&gt;variables.tf&lt;/code&gt;, change the managed node group version to 1.26.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;variable &quot;cluster_version&quot; {
  description = &quot;EKS cluster version.&quot;
  type        = string
  default     = &quot;1.26&quot;
}

variable &quot;mng_cluster_version&quot; {
  description = &quot;EKS cluster mng version.&quot;
  type        = string
  default     = &quot;1.25&quot; # &amp;lt;- 1.26 
}


variable &quot;ami_id&quot; {
  description = &quot;EKS AMI ID for node groups&quot;
  type        = string
  default     = &quot;&quot;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Apply with terraform as before.&lt;/p&gt;
&lt;pre class=&quot;groovy&quot;&gt;&lt;code&gt;ec2-user:~/environment/terraform:$ terraform apply -auto-approve
...

Apply complete! Resources: 3 added, 1 changed, 3 destroyed.

Outputs:

configure_kubectl = &quot;aws eks --region us-west-2 update-kubeconfig --name eksworkshop-eksctl&quot;
ec2-user:~/environment/terraform:$ &lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To watch the scale-up/scale-down and node status during the upgrade, let's monitor as follows.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;while true; do date; kubectl get nodes -o wide --label-columns=eks.amazonaws.com/nodegroup,topology.kubernetes.io/zone |grep initial; sleep 5; echo; done

# Initial state
ec2-user:~/environment:$ while true; do date; kubectl get nodes -o wide --label-columns=eks.amazonaws.com/nodegroup,topology.kubernetes.io/zone |grep initial; sleep 5; echo; done
Tue Apr  1 15:49:52 UTC 2025
ip-10-0-12-239.us-west-2.compute.internal           Ready    &amp;lt;none&amp;gt;   2d10h   v1.25.16-eks-59bf375   10.0.12.239   &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   initial-2025033005225054810000002a    us-west-2a
ip-10-0-32-55.us-west-2.compute.internal            Ready    &amp;lt;none&amp;gt;   2d10h   v1.25.16-eks-59bf375   10.0.32.55    &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   initial-2025033005225054810000002a    us-west-2c


# Scale-up phase (4 nodes added)
Tue Apr  1 15:53:06 UTC 2025
ip-10-0-12-239.us-west-2.compute.internal           Ready    &amp;lt;none&amp;gt;   2d10h   v1.25.16-eks-59bf375   10.0.12.239   &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   initial-2025033005225054810000002a    us-west-2a
ip-10-0-15-190.us-west-2.compute.internal           Ready    &amp;lt;none&amp;gt;   27s     v1.26.15-eks-59bf375   10.0.15.190   &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   initial-2025033005225054810000002a    us-west-2a
ip-10-0-28-191.us-west-2.compute.internal           Ready    &amp;lt;none&amp;gt;   2m27s   v1.26.15-eks-59bf375   10.0.28.191   &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   initial-2025033005225054810000002a    us-west-2b
ip-10-0-30-83.us-west-2.compute.internal            Ready    &amp;lt;none&amp;gt;   2m26s   v1.26.15-eks-59bf375   10.0.30.83    &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   initial-2025033005225054810000002a    us-west-2b
ip-10-0-32-55.us-west-2.compute.internal            Ready    &amp;lt;none&amp;gt;   2d10h   v1.25.16-eks-59bf375   10.0.32.55    &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   initial-2025033005225054810000002a    us-west-2c
ip-10-0-46-150.us-west-2.compute.internal           Ready    &amp;lt;none&amp;gt;   94s     v1.26.15-eks-59bf375   10.0.46.150   &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   initial-2025033005225054810000002a    us-west-2c


# Upgrade phase
# old node cordon
Tue Apr  1 15:53:12 UTC 2025
ip-10-0-12-239.us-west-2.compute.internal           Ready,SchedulingDisabled   &amp;lt;none&amp;gt;   2d10h   v1.25.16-eks-59bf375   10.0.12.239   &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   initial-2025033005225054810000002a    us-west-2a
ip-10-0-15-190.us-west-2.compute.internal           Ready                      &amp;lt;none&amp;gt;   33s     v1.26.15-eks-59bf375   10.0.15.190   &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   initial-2025033005225054810000002a    us-west-2a
ip-10-0-28-191.us-west-2.compute.internal           Ready                      &amp;lt;none&amp;gt;   2m33s   v1.26.15-eks-59bf375   10.0.28.191   &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   initial-2025033005225054810000002a    us-west-2b
ip-10-0-30-83.us-west-2.compute.internal            Ready                      &amp;lt;none&amp;gt;   2m32s   v1.26.15-eks-59bf375   10.0.30.83    &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   initial-2025033005225054810000002a    us-west-2b
ip-10-0-32-55.us-west-2.compute.internal            Ready,SchedulingDisabled   &amp;lt;none&amp;gt;   2d10h   v1.25.16-eks-59bf375   10.0.32.55    &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   initial-2025033005225054810000002a    us-west-2c
ip-10-0-46-150.us-west-2.compute.internal           Ready                      &amp;lt;none&amp;gt;   100s    v1.26.15-eks-59bf375   10.0.46.150   &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   initial-2025033005225054810000002a    us-west-2c

# x.x.x.239 old node NotReady
Tue Apr  1 15:56:07 UTC 2025
ip-10-0-12-239.us-west-2.compute.internal           NotReady,SchedulingDisabled   &amp;lt;none&amp;gt;   2d10h   v1.25.16-eks-59bf375   10.0.12.239   &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   initial-2025033005225054810000002a    us-west-2a
ip-10-0-15-190.us-west-2.compute.internal           Ready                         &amp;lt;none&amp;gt;   3m29s   v1.26.15-eks-59bf375   10.0.15.190   &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   initial-2025033005225054810000002a    us-west-2a
ip-10-0-28-191.us-west-2.compute.internal           Ready                         &amp;lt;none&amp;gt;   5m29s   v1.26.15-eks-59bf375   10.0.28.191   &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   initial-2025033005225054810000002a    us-west-2b
ip-10-0-30-83.us-west-2.compute.internal            Ready                         &amp;lt;none&amp;gt;   5m28s   v1.26.15-eks-59bf375   10.0.30.83    &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   initial-2025033005225054810000002a    us-west-2b
ip-10-0-32-55.us-west-2.compute.internal            Ready,SchedulingDisabled      &amp;lt;none&amp;gt;   2d10h   v1.25.16-eks-59bf375   10.0.32.55    &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   initial-2025033005225054810000002a    us-west-2c
ip-10-0-46-150.us-west-2.compute.internal           Ready                         &amp;lt;none&amp;gt;   4m36s   v1.26.15-eks-59bf375   10.0.46.150   &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   initial-2025033005225054810000002a    us-west-2c

# x.x.x.239 old node removed
Tue Apr  1 15:56:14 UTC 2025
ip-10-0-15-190.us-west-2.compute.internal           Ready                      &amp;lt;none&amp;gt;   3m35s   v1.26.15-eks-59bf375   10.0.15.190   &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   initial-2025033005225054810000002a    us-west-2a
ip-10-0-28-191.us-west-2.compute.internal           Ready                      &amp;lt;none&amp;gt;   5m35s   v1.26.15-eks-59bf375   10.0.28.191   &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   initial-2025033005225054810000002a    us-west-2b
ip-10-0-30-83.us-west-2.compute.internal            Ready                      &amp;lt;none&amp;gt;   5m34s   v1.26.15-eks-59bf375   10.0.30.83    &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   initial-2025033005225054810000002a    us-west-2b
ip-10-0-32-55.us-west-2.compute.internal            Ready,SchedulingDisabled   &amp;lt;none&amp;gt;   2d10h   v1.25.16-eks-59bf375   10.0.32.55    &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   initial-2025033005225054810000002a    us-west-2c
ip-10-0-46-150.us-west-2.compute.internal           Ready                      &amp;lt;none&amp;gt;   4m42s   v1.26.15-eks-59bf375   10.0.46.150   &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   initial-2025033005225054810000002a    us-west-2c

# x.x.x.55 old node NotReady
Tue Apr  1 15:59:08 UTC 2025
ip-10-0-15-190.us-west-2.compute.internal           Ready                         &amp;lt;none&amp;gt;   6m29s   v1.26.15-eks-59bf375   10.0.15.190   &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   initial-2025033005225054810000002a    us-west-2a
ip-10-0-28-191.us-west-2.compute.internal           Ready                         &amp;lt;none&amp;gt;   8m29s   v1.26.15-eks-59bf375   10.0.28.191   &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   initial-2025033005225054810000002a    us-west-2b
ip-10-0-30-83.us-west-2.compute.internal            Ready                         &amp;lt;none&amp;gt;   8m28s   v1.26.15-eks-59bf375   10.0.30.83    &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   initial-2025033005225054810000002a    us-west-2b
ip-10-0-32-55.us-west-2.compute.internal            NotReady,SchedulingDisabled   &amp;lt;none&amp;gt;   2d10h   v1.25.16-eks-59bf375   10.0.32.55    &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   initial-2025033005225054810000002a    us-west-2c
ip-10-0-46-150.us-west-2.compute.internal           Ready                         &amp;lt;none&amp;gt;   7m36s   v1.26.15-eks-59bf375   10.0.46.150   &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   initial-2025033005225054810000002a    us-west-2c

# all old nodes removed and all nodes are 1.26.15
Tue Apr  1 15:59:32 UTC 2025
ip-10-0-15-190.us-west-2.compute.internal           Ready    &amp;lt;none&amp;gt;   6m53s   v1.26.15-eks-59bf375   10.0.15.190   &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   initial-2025033005225054810000002a    us-west-2a
ip-10-0-28-191.us-west-2.compute.internal           Ready    &amp;lt;none&amp;gt;   8m53s   v1.26.15-eks-59bf375   10.0.28.191   &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   initial-2025033005225054810000002a    us-west-2b
ip-10-0-30-83.us-west-2.compute.internal            Ready    &amp;lt;none&amp;gt;   8m52s   v1.26.15-eks-59bf375   10.0.30.83    &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   initial-2025033005225054810000002a    us-west-2b
ip-10-0-46-150.us-west-2.compute.internal           Ready    &amp;lt;none&amp;gt;   8m      v1.26.15-eks-59bf375   10.0.46.150   &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   initial-2025033005225054810000002a    us-west-2c


# scale-down phase
# a new node also goes into cordon (SchedulingDisabled) state
Tue Apr  1 16:00:29 UTC 2025
ip-10-0-15-190.us-west-2.compute.internal           Ready                      &amp;lt;none&amp;gt;   7m50s   v1.26.15-eks-59bf375   10.0.15.190   &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   initial-2025033005225054810000002a    us-west-2a
ip-10-0-28-191.us-west-2.compute.internal           Ready                      &amp;lt;none&amp;gt;   9m50s   v1.26.15-eks-59bf375   10.0.28.191   &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   initial-2025033005225054810000002a    us-west-2b
ip-10-0-30-83.us-west-2.compute.internal            Ready,SchedulingDisabled   &amp;lt;none&amp;gt;   9m49s   v1.26.15-eks-59bf375   10.0.30.83    &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   initial-2025033005225054810000002a    us-west-2b
ip-10-0-46-150.us-west-2.compute.internal           Ready                      &amp;lt;none&amp;gt;   8m57s   v1.26.15-eks-59bf375   10.0.46.150   &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   initial-2025033005225054810000002a    us-west-2c

# nodes: 4 -&amp;gt; 3
Tue Apr  1 16:02:22 UTC 2025
ip-10-0-15-190.us-west-2.compute.internal           Ready    &amp;lt;none&amp;gt;   9m43s   v1.26.15-eks-59bf375   10.0.15.190   &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   initial-2025033005225054810000002a    us-west-2a
ip-10-0-28-191.us-west-2.compute.internal           Ready    &amp;lt;none&amp;gt;   11m     v1.26.15-eks-59bf375   10.0.28.191   &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   initial-2025033005225054810000002a    us-west-2b
ip-10-0-46-150.us-west-2.compute.internal           Ready    &amp;lt;none&amp;gt;   10m     v1.26.15-eks-59bf375   10.0.46.150   &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   initial-2025033005225054810000002a    us-west-2c

# one more node cordoned
Tue Apr  1 16:03:31 UTC 2025
ip-10-0-15-190.us-west-2.compute.internal           Ready,SchedulingDisabled   &amp;lt;none&amp;gt;   10m     v1.26.15-eks-59bf375   10.0.15.190   &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   initial-2025033005225054810000002a    us-west-2a
ip-10-0-28-191.us-west-2.compute.internal           Ready                      &amp;lt;none&amp;gt;   12m     v1.26.15-eks-59bf375   10.0.28.191   &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   initial-2025033005225054810000002a    us-west-2b
ip-10-0-46-150.us-west-2.compute.internal           Ready                      &amp;lt;none&amp;gt;   11m     v1.26.15-eks-59bf375   10.0.46.150   &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   initial-2025033005225054810000002a    us-west-2c

# final state: 2 nodes, upgraded to 1.26
Tue Apr  1 16:05:24 UTC 2025
ip-10-0-28-191.us-west-2.compute.internal           Ready    &amp;lt;none&amp;gt;   14m     v1.26.15-eks-59bf375   10.0.28.191   &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   initial-2025033005225054810000002a    us-west-2b
ip-10-0-46-150.us-west-2.compute.internal           Ready    &amp;lt;none&amp;gt;   13m     v1.26.15-eks-59bf375   10.0.46.150   &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   initial-2025033005225054810000002a    us-west-2c&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;An EKS managed node group upgrade works by adding new nodes, removing the old nodes through a cordon &amp;gt; drain sequence, and finally scaling back down to the desired count. As a result, the old nodes are deleted and only the new nodes created with the new version remain.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Upgrading this node group of two nodes took roughly 15 minutes (15:49:52~16:05:24).&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
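&lt;p data-ke-size=&quot;size16&quot;&gt;For reference, the per-node replacement that the managed node group automation performs can be approximated with kubectl. This is an illustrative sketch of what happens behind the scenes (using a node name from the output above), not an extra step to run:&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# 1. mark the old node unschedulable so no new pods land on it
kubectl cordon ip-10-0-12-239.us-west-2.compute.internal

# 2. evict its pods; the scheduler moves them to the Ready 1.26 nodes
kubectl drain ip-10-0-12-239.us-west-2.compute.internal --ignore-daemonsets --delete-emptydir-data

# 3. the Auto Scaling group then terminates the drained instance
#    and finally scales back to the desired size&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;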
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Managed Node Group Blue/Green Upgrade&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A managed node group can also be upgraded with a Blue/Green strategy.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In this lab, the blue node group is provisioned only in a specific Availability Zone because it runs a stateful workload that uses a PV.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1753&quot; data-origin-height=&quot;430&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cBIt5V/btsM4Qo4onU/FpQ4r6HU8spaJwg75GwGZK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cBIt5V/btsM4Qo4onU/FpQ4r6HU8spaJwg75GwGZK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cBIt5V/btsM4Qo4onU/FpQ4r6HU8spaJwg75GwGZK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcBIt5V%2FbtsM4Qo4onU%2FFpQ4r6HU8spaJwg75GwGZK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1753&quot; height=&quot;430&quot; data-origin-width=&quot;1753&quot; data-origin-height=&quot;430&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In this case the upgrade proceeds by first adding a Green managed node group in Terraform, and then deleting the Blue managed node group.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;First, define the Green node group in &lt;code&gt;base.tf&lt;/code&gt;.&lt;/p&gt;
&lt;pre class=&quot;nix&quot;&gt;&lt;code&gt;  eks_managed_node_groups = {
    initial = {
      instance_types = [&quot;m5.large&quot;, &quot;m6a.large&quot;, &quot;m6i.large&quot;]
      min_size     = 2
      max_size     = 10
      desired_size = 2
      update_config = {
        max_unavailable_percentage = 35
      }
    }

    blue-mng={
      instance_types = [&quot;m5.large&quot;, &quot;m6a.large&quot;, &quot;m6i.large&quot;]
      cluster_version = &quot;1.25&quot;
      min_size     = 1
      max_size     = 2
      desired_size = 1
      update_config = {
        max_unavailable_percentage = 35
      }
      labels = {
        type = &quot;OrdersMNG&quot;
      }
      subnet_ids = [module.vpc.private_subnets[0]] # this MNG runs only in private subnet 1 (an EBS PV is in use)
      taints = [
        {
          key    = &quot;dedicated&quot;
          value  = &quot;OrdersApp&quot;
          effect = &quot;NO_SCHEDULE&quot;
        }
      ]
    }

    green-mng={
      instance_types = [&quot;m5.large&quot;, &quot;m6a.large&quot;, &quot;m6i.large&quot;]
      subnet_ids = [module.vpc.private_subnets[0]]
      min_size     = 1
      max_size     = 2
      desired_size = 1
      update_config = {
        max_unavailable_percentage = 35
      }
      labels = {
        type = &quot;OrdersMNG&quot;
      }
      taints = [
        {
          key    = &quot;dedicated&quot;
          value  = &quot;OrdersApp&quot;
          effect = &quot;NO_SCHEDULE&quot;
        }
      ]
    }
  }
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Then run &lt;code&gt;terraform apply -auto-approve&lt;/code&gt; and confirm that the node count has increased. You can see that the new node is created in the same Availability Zone.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;ec2-user:~/environment:$ while true; do date; kubectl get nodes -o wide --label-columns=eks.amazonaws.com/nodegroup,topology.kubernetes.io/zone |egrep &quot;green|blue&quot;; sleep 5; echo; done
Tue Apr  1 16:24:49 UTC 2025
ip-10-0-3-145.us-west-2.compute.internal            Ready    &amp;lt;none&amp;gt;   2d11h   v1.25.16-eks-59bf375   10.0.3.145    &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   blue-mng-2025033005225055020000002c   us-west-2a
...
Tue Apr  1 16:27:44 UTC 2025
ip-10-0-3-145.us-west-2.compute.internal            Ready    &amp;lt;none&amp;gt;   2d11h   v1.25.16-eks-59bf375   10.0.3.145    &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   blue-mng-2025033005225055020000002c    us-west-2a
ip-10-0-3-227.us-west-2.compute.internal            Ready    &amp;lt;none&amp;gt;   40s     v1.26.15-eks-59bf375   10.0.3.227    &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   green-mng-20250401162553095800000007   us-west-2a&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now delete the blue node group from &lt;code&gt;base.tf&lt;/code&gt; and run &lt;code&gt;terraform apply -auto-approve&lt;/code&gt;.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Note that the orders pods are currently running on the blue node group.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;ec2-user:~/environment:$ kubectl get po -A -owide |grep ip-10-0-3-145.us-west-2.compute.internal
kube-system   aws-node-g9sk9                                              2/2     Running   0               2d11h   10.0.3.145    ip-10-0-3-145.us-west-2.compute.internal            &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   ebs-csi-node-8jqbj                                          3/3     Running   0               2d11h   10.0.0.162    ip-10-0-3-145.us-west-2.compute.internal            &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   efs-csi-node-x546f                                          3/3     Running   0               2d11h   10.0.3.145    ip-10-0-3-145.us-west-2.compute.internal            &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   kube-proxy-6jlqj                                            1/1     Running   0               92m     10.0.3.145    ip-10-0-3-145.us-west-2.compute.internal            &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
orders        orders-5b97745747-7rwdl                                     1/1     Running   2 (2d10h ago)   2d11h   10.0.3.163    ip-10-0-3-145.us-west-2.compute.internal            &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
orders        orders-mysql-b9b997d9d-bnbmn                                1/1     Running   0               2d11h   10.0.7.229    ip-10-0-3-145.us-west-2.compute.internal            &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's monitor the nodes and these pods.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# initial state
ec2-user:~/environment:$  while true; do date; kubectl get nodes -o wide --label-columns=eks.amazonaws.com/nodegroup,topology.kubernetes.io/zone |egrep &quot;green|blue&quot;;echo;  kubectl get po -A -owide |grep orders; sleep 5; echo; done
Tue Apr  1 16:34:39 UTC 2025
ip-10-0-3-145.us-west-2.compute.internal            Ready    &amp;lt;none&amp;gt;   2d11h   v1.25.16-eks-59bf375   10.0.3.145    &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   blue-mng-2025033005225055020000002c    us-west-2a
ip-10-0-3-227.us-west-2.compute.internal            Ready    &amp;lt;none&amp;gt;   7m35s   v1.26.15-eks-59bf375   10.0.3.227    &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   green-mng-20250401162553095800000007   us-west-2a

orders        orders-5b97745747-7rwdl                                     1/1     Running   2 (2d11h ago)   2d11h   10.0.3.163    ip-10-0-3-145.us-west-2.compute.internal            &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
orders        orders-mysql-b9b997d9d-bnbmn                                1/1     Running   0               2d11h   10.0.7.229    ip-10-0-3-145.us-west-2.compute.internal            &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;

# Blue node is cordoned and drained; pods move to the Green node
Tue Apr  1 16:35:04 UTC 2025
ip-10-0-3-145.us-west-2.compute.internal            Ready    &amp;lt;none&amp;gt;   2d11h   v1.25.16-eks-59bf375   10.0.3.145    &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   blue-mng-2025033005225055020000002c    us-west-2a
ip-10-0-3-227.us-west-2.compute.internal            Ready    &amp;lt;none&amp;gt;   8m      v1.26.15-eks-59bf375   10.0.3.227    &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   green-mng-20250401162553095800000007   us-west-2a

orders        orders-5b97745747-7rwdl                                     1/1     Running   2 (2d11h ago)   2d11h   10.0.3.163    ip-10-0-3-145.us-west-2.compute.internal            &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
orders        orders-mysql-b9b997d9d-bnbmn                                1/1     Running   0               2d11h   10.0.7.229    ip-10-0-3-145.us-west-2.compute.internal            &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;

Tue Apr  1 16:35:11 UTC 2025
ip-10-0-3-145.us-west-2.compute.internal            Ready,SchedulingDisabled   &amp;lt;none&amp;gt;   2d11h   v1.25.16-eks-59bf375   10.0.3.145    &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   blue-mng-2025033005225055020000002c    us-west-2a
ip-10-0-3-227.us-west-2.compute.internal            Ready                      &amp;lt;none&amp;gt;   8m7s    v1.26.15-eks-59bf375   10.0.3.227    &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   green-mng-20250401162553095800000007   us-west-2a

orders        orders-5b97745747-7ctj4                                     0/1     ContainerCreating   0               5s      &amp;lt;none&amp;gt;        ip-10-0-3-227.us-west-2.compute.internal            &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
orders        orders-mysql-b9b997d9d-wc9vn                                0/1     ContainerCreating   0               5s      &amp;lt;none&amp;gt;        ip-10-0-3-227.us-west-2.compute.internal            &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;

# final state
Tue Apr  1 16:37:03 UTC 2025
ip-10-0-3-227.us-west-2.compute.internal            Ready    &amp;lt;none&amp;gt;   10m     v1.26.15-eks-59bf375   10.0.3.227    &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25   green-mng-20250401162553095800000007   us-west-2a

orders        orders-5b97745747-7ctj4                                     1/1     Running   2 (81s ago)     118s    10.0.8.104    ip-10-0-3-227.us-west-2.compute.internal            &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
orders        orders-mysql-b9b997d9d-wc9vn                                1/1     Running   0               118s    10.0.12.108   ip-10-0-3-227.us-west-2.compute.internal            &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This concludes the lab on managed node groups.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;3.4. Karpenter Node Upgrade&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As the previous lab showed, with managed node groups there is a lot to consider when creating a new node group, for example when EBS PVs are in use.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Karpenter, by contrast, takes the location of such PVs into account when it adds new nodes, which makes it considerably more convenient.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Karpenter offers features such as Drift and TTL (&lt;code&gt;expireAfter&lt;/code&gt;) for node upgrades. They behave differently, but both start the same way: you update the EC2NodeClass to the target AMI. Drift then steers existing nodes toward the desired spec, while TTL replaces nodes once their lifetime expires.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
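&lt;p data-ke-size=&quot;size16&quot;&gt;For reference, the TTL path is configured on the NodePool rather than the EC2NodeClass. A minimal sketch assuming the lab's &lt;code&gt;default&lt;/code&gt; NodePool (&lt;code&gt;karpenter.sh/v1beta1&lt;/code&gt;; the &lt;code&gt;720h&lt;/code&gt; value is illustrative):&lt;/p&gt;
&lt;pre class=&quot;less&quot;&gt;&lt;code&gt;apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  disruption:
    # Drift needs no extra setting here: once the EC2NodeClass AMI
    # changes, drifted nodes are replaced.
    # expireAfter additionally replaces any node older than 720h,
    # so the fleet keeps rolling onto the current AMI.
    expireAfter: 720h&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;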
&lt;p data-ke-size=&quot;size16&quot;&gt;Accordingly, this lab modifies the EC2NodeClass rather than Terraform.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Since CodeCommit and Argo CD are connected here, we will edit the file locally, push it to CodeCommit, and then sync the karpenter application.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;First, look up the AMI ID for 1.26.&lt;/p&gt;
&lt;pre class=&quot;dsconfig&quot;&gt;&lt;code&gt;## look up the AMI ID
ec2-user:~/environment/terraform:$  aws ssm get-parameter --name /aws/service/eks/optimized-ami/1.26/amazon-linux-2/recommended/image_id \
    --region ${AWS_REGION} --query &quot;Parameter.Value&quot; --output text
ami-086414611b43bb691&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Next, go to the local &lt;code&gt;eks-gitops-repo\apps\karpenter&lt;/code&gt; directory and replace the AMI ID in &lt;code&gt;default-ec2nc.yaml&lt;/code&gt; with the one found above.&lt;/p&gt;
&lt;pre class=&quot;less&quot;&gt;&lt;code&gt;apiVersion: karpenter.k8s.aws/v1beta1
kind: EC2NodeClass
metadata:
  name: default
spec:
  amiFamily: AL2
  amiSelectorTerms:
  - id: &quot;ami-0ee947a6f4880da75&quot; # Latest EKS 1.25 AMI
  role: karpenter-eksworkshop-eksctl
  securityGroupSelectorTerms:
  - tags:
      karpenter.sh/discovery: eksworkshop-eksctl
  subnetSelectorTerms:
  - tags:
      karpenter.sh/discovery: eksworkshop-eksctl
  tags:
    intent: apps
    managed-by: karpenter
    team: checkout&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Push the changed files to CodeCommit and sync Argo CD.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# takes about 10 minutes (estimated), including the lab steps
cd ~/environment/eks-gitops-repo
git add apps/karpenter/default-ec2nc.yaml apps/karpenter/default-np.yaml
git commit -m &quot;disruption changes&quot;
git push --set-upstream origin main
argocd app sync karpenter

# monitoring
while true; do date; kubectl get nodeclaim; echo ; kubectl get nodes -l team=checkout; echo ; kubectl get nodes -l team=checkout -o jsonpath=&quot;{range .items[*]}{.metadata.name} {.spec.taints}{\&quot;\n\&quot;}{end}&quot;; echo ; kubectl get pods -n checkout -o wide; echo ; sleep 1; echo; done

# initial state
ec2-user:~/environment:$ while true; do date; kubectl get nodeclaim; echo ; kubectl get nodes -l team=checkout; echo ; kubectl get nodes -l team=checkout -o jsonpath=&quot;{range .items[*]}{.metadata.name} {.spec.taints}{\&quot;\n\&quot;}{end}&quot;; echo ; kubectl get pods -n checkout -o wide; echo ; sleep 1; echo; done
Tue Apr  1 16:54:08 UTC 2025
NAME            TYPE       ZONE         NODE                                        READY   AGE
default-6css4   c4.large   us-west-2b   ip-10-0-24-100.us-west-2.compute.internal   True    19h

NAME                                        STATUS   ROLES    AGE   VERSION
ip-10-0-24-100.us-west-2.compute.internal   Ready    &amp;lt;none&amp;gt;   19h   v1.25.16-eks-59bf375

ip-10-0-24-100.us-west-2.compute.internal [{&quot;effect&quot;:&quot;NoSchedule&quot;,&quot;key&quot;:&quot;dedicated&quot;,&quot;value&quot;:&quot;CheckoutApp&quot;}]

NAME                             READY   STATUS    RESTARTS   AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES
checkout-558f7777c-z5qvh         1/1     Running   0          19h   10.0.29.195   ip-10-0-24-100.us-west-2.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
checkout-redis-f54bf7cb5-r2sdp   1/1     Running   0          19h   10.0.19.67    ip-10-0-24-100.us-west-2.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;

# new node created
Tue Apr  1 16:58:58 UTC 2025
NAME            TYPE       ZONE         NODE                                        READY     AGE
default-6css4   c4.large   us-west-2b   ip-10-0-24-100.us-west-2.compute.internal   True      19h
default-pflq6   c4.large   us-west-2b                                               Unknown   3s

NAME                                        STATUS   ROLES    AGE   VERSION
ip-10-0-24-100.us-west-2.compute.internal   Ready    &amp;lt;none&amp;gt;   19h   v1.25.16-eks-59bf375

# node Ready
Tue Apr  1 17:00:35 UTC 2025
NAME            TYPE       ZONE         NODE                                        READY   AGE
default-6css4   c4.large   us-west-2b   ip-10-0-24-100.us-west-2.compute.internal   True    19h
default-pflq6   c4.large   us-west-2b   ip-10-0-28-136.us-west-2.compute.internal   True    99s

NAME                                        STATUS   ROLES    AGE   VERSION
ip-10-0-24-100.us-west-2.compute.internal   Ready    &amp;lt;none&amp;gt;   19h   v1.25.16-eks-59bf375
ip-10-0-28-136.us-west-2.compute.internal   Ready    &amp;lt;none&amp;gt;   27s   v1.26.15-eks-59bf375

# pods created on the new node
Tue Apr  1 17:00:35 UTC 2025
NAME            TYPE       ZONE         NODE                                        READY   AGE
default-6css4   c4.large   us-west-2b   ip-10-0-24-100.us-west-2.compute.internal   True    19h
default-pflq6   c4.large   us-west-2b   ip-10-0-28-136.us-west-2.compute.internal   True    99s

NAME                                        STATUS   ROLES    AGE   VERSION
ip-10-0-24-100.us-west-2.compute.internal   Ready    &amp;lt;none&amp;gt;   19h   v1.25.16-eks-59bf375
ip-10-0-28-136.us-west-2.compute.internal   Ready    &amp;lt;none&amp;gt;   27s   v1.26.15-eks-59bf375

ip-10-0-24-100.us-west-2.compute.internal [{&quot;effect&quot;:&quot;NoSchedule&quot;,&quot;key&quot;:&quot;dedicated&quot;,&quot;value&quot;:&quot;CheckoutApp&quot;},{&quot;effect&quot;:&quot;NoSchedule&quot;,&quot;key&quot;:&quot;karpenter.sh/disruption&quot;,&quot;value&quot;:&quot;disrupting&quot;}]
ip-10-0-28-136.us-west-2.compute.internal [{&quot;effect&quot;:&quot;NoSchedule&quot;,&quot;key&quot;:&quot;dedicated&quot;,&quot;value&quot;:&quot;CheckoutApp&quot;}]

NAME                             READY   STATUS    RESTARTS   AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES
checkout-558f7777c-z5qvh         1/1     Running   0          19h   10.0.29.195   ip-10-0-24-100.us-west-2.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
checkout-redis-f54bf7cb5-r2sdp   1/1     Running   0          19h   10.0.19.67    ip-10-0-24-100.us-west-2.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;


Tue Apr  1 17:00:41 UTC 2025
NAME            TYPE       ZONE         NODE                                        READY   AGE
default-6css4   c4.large   us-west-2b   ip-10-0-24-100.us-west-2.compute.internal   True    19h
default-pflq6   c4.large   us-west-2b   ip-10-0-28-136.us-west-2.compute.internal   True    105s

NAME                                        STATUS   ROLES    AGE   VERSION
ip-10-0-24-100.us-west-2.compute.internal   Ready    &amp;lt;none&amp;gt;   19h   v1.25.16-eks-59bf375
ip-10-0-28-136.us-west-2.compute.internal   Ready    &amp;lt;none&amp;gt;   33s   v1.26.15-eks-59bf375

ip-10-0-24-100.us-west-2.compute.internal [{&quot;effect&quot;:&quot;NoSchedule&quot;,&quot;key&quot;:&quot;dedicated&quot;,&quot;value&quot;:&quot;CheckoutApp&quot;},{&quot;effect&quot;:&quot;NoSchedule&quot;,&quot;key&quot;:&quot;karpenter.sh/disruption&quot;,&quot;value&quot;:&quot;disrupting&quot;}]
ip-10-0-28-136.us-west-2.compute.internal [{&quot;effect&quot;:&quot;NoSchedule&quot;,&quot;key&quot;:&quot;dedicated&quot;,&quot;value&quot;:&quot;CheckoutApp&quot;}]

NAME                             READY   STATUS              RESTARTS   AGE   IP       NODE                                        NOMINATED NODE   READINESS GATES
checkout-558f7777c-hddnc         0/1     ContainerCreating   0          2s    &amp;lt;none&amp;gt;   ip-10-0-28-136.us-west-2.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
checkout-redis-f54bf7cb5-tqj6q   0/1     ContainerCreating   0          2s    &amp;lt;none&amp;gt;   ip-10-0-28-136.us-west-2.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;

# old node gone
Tue Apr  1 17:00:46 UTC 2025
NAME            TYPE       ZONE         NODE                                        READY   AGE
default-pflq6   c4.large   us-west-2b   ip-10-0-28-136.us-west-2.compute.internal   True    109s

NAME                                        STATUS   ROLES    AGE   VERSION
ip-10-0-28-136.us-west-2.compute.internal   Ready    &amp;lt;none&amp;gt;   37s   v1.26.15-eks-59bf375

ip-10-0-28-136.us-west-2.compute.internal [{&quot;effect&quot;:&quot;NoSchedule&quot;,&quot;key&quot;:&quot;dedicated&quot;,&quot;value&quot;:&quot;CheckoutApp&quot;}]

NAME                             READY   STATUS              RESTARTS   AGE   IP       NODE                                        NOMINATED NODE   READINESS GATES
checkout-558f7777c-hddnc         0/1     ContainerCreating   0          6s    &amp;lt;none&amp;gt;   ip-10-0-28-136.us-west-2.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
checkout-redis-f54bf7cb5-tqj6q   0/1     ContainerCreating   0          6s    &amp;lt;none&amp;gt;   ip-10-0-28-136.us-west-2.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;

&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The details of Karpenter's behavior can be checked in its logs.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;kubectl -n karpenter logs deployment/karpenter -c controller --tail=33 -f
...

# drift in progress &amp;gt; nodeClaim created &amp;gt; nodeClaim launched
{&quot;level&quot;:&quot;INFO&quot;,&quot;time&quot;:&quot;2025-04-01T16:58:57.282Z&quot;,&quot;logger&quot;:&quot;controller&quot;,&quot;message&quot;:&quot;disrupting via drift replace, terminating 1 nodes (2 pods) ip-10-0-24-100.us-west-2.compute.internal/c4.large/spot and replacing with node from types c5.large, c4.large, m6a.large, r4.large, m6i.large and 40 other(s)&quot;,&quot;commit&quot;:&quot;490ef94&quot;,&quot;controller&quot;:&quot;disruption&quot;,&quot;command-id&quot;:&quot;3a295ac9-2a0d-4ddf-a6cb-e8d08915cff2&quot;}
{&quot;level&quot;:&quot;INFO&quot;,&quot;time&quot;:&quot;2025-04-01T16:58:57.318Z&quot;,&quot;logger&quot;:&quot;controller&quot;,&quot;message&quot;:&quot;created nodeclaim&quot;,&quot;commit&quot;:&quot;490ef94&quot;,&quot;controller&quot;:&quot;disruption&quot;,&quot;NodePool&quot;:{&quot;name&quot;:&quot;default&quot;},&quot;NodeClaim&quot;:{&quot;name&quot;:&quot;default-pflq6&quot;},&quot;requests&quot;:{&quot;cpu&quot;:&quot;430m&quot;,&quot;memory&quot;:&quot;632Mi&quot;,&quot;pods&quot;:&quot;6&quot;},&quot;instance-types&quot;:&quot;c4.2xlarge, c4.4xlarge, c4.8xlarge, c4.large, c4.xlarge and 40 other(s)&quot;}
{&quot;level&quot;:&quot;INFO&quot;,&quot;time&quot;:&quot;2025-04-01T16:59:00.052Z&quot;,&quot;logger&quot;:&quot;controller&quot;,&quot;message&quot;:&quot;launched nodeclaim&quot;,&quot;commit&quot;:&quot;490ef94&quot;,&quot;controller&quot;:&quot;nodeclaim.lifecycle&quot;,&quot;controllerGroup&quot;:&quot;karpenter.sh&quot;,&quot;controllerKind&quot;:&quot;NodeClaim&quot;,&quot;NodeClaim&quot;:{&quot;name&quot;:&quot;default-pflq6&quot;},&quot;namespace&quot;:&quot;&quot;,&quot;name&quot;:&quot;default-pflq6&quot;,&quot;reconcileID&quot;:&quot;a314c3cc-925a-4039-a313-b10e3d762fed&quot;,&quot;provider-id&quot;:&quot;aws:///us-west-2b/i-05f149a8dcf7d844d&quot;,&quot;instance-type&quot;:&quot;c4.large&quot;,&quot;zone&quot;:&quot;us-west-2b&quot;,&quot;capacity-type&quot;:&quot;spot&quot;,&quot;allocatable&quot;:{&quot;cpu&quot;:&quot;1930m&quot;,&quot;ephemeral-storage&quot;:&quot;17Gi&quot;,&quot;memory&quot;:&quot;2878Mi&quot;,&quot;pods&quot;:&quot;29&quot;}}

# node registered &amp;gt; initialized
{&quot;level&quot;:&quot;INFO&quot;,&quot;time&quot;:&quot;2025-04-01T17:00:10.779Z&quot;,&quot;logger&quot;:&quot;controller&quot;,&quot;message&quot;:&quot;registered nodeclaim&quot;,&quot;commit&quot;:&quot;490ef94&quot;,&quot;controller&quot;:&quot;nodeclaim.lifecycle&quot;,&quot;controllerGroup&quot;:&quot;karpenter.sh&quot;,&quot;controllerKind&quot;:&quot;NodeClaim&quot;,&quot;NodeClaim&quot;:{&quot;name&quot;:&quot;default-pflq6&quot;},&quot;namespace&quot;:&quot;&quot;,&quot;name&quot;:&quot;default-pflq6&quot;,&quot;reconcileID&quot;:&quot;5a16a55e-9a6f-434f-b59e-c7daf0a93bf3&quot;,&quot;provider-id&quot;:&quot;aws:///us-west-2b/i-05f149a8dcf7d844d&quot;,&quot;Node&quot;:{&quot;name&quot;:&quot;ip-10-0-28-136.us-west-2.compute.internal&quot;}}
{&quot;level&quot;:&quot;INFO&quot;,&quot;time&quot;:&quot;2025-04-01T17:00:33.021Z&quot;,&quot;logger&quot;:&quot;controller&quot;,&quot;message&quot;:&quot;initialized nodeclaim&quot;,&quot;commit&quot;:&quot;490ef94&quot;,&quot;controller&quot;:&quot;nodeclaim.lifecycle&quot;,&quot;controllerGroup&quot;:&quot;karpenter.sh&quot;,&quot;controllerKind&quot;:&quot;NodeClaim&quot;,&quot;NodeClaim&quot;:{&quot;name&quot;:&quot;default-pflq6&quot;},&quot;namespace&quot;:&quot;&quot;,&quot;name&quot;:&quot;default-pflq6&quot;,&quot;reconcileID&quot;:&quot;c4e40956-a02a-4337-bd69-6b9be1d72d5f&quot;,&quot;provider-id&quot;:&quot;aws:///us-west-2b/i-05f149a8dcf7d844d&quot;,&quot;Node&quot;:{&quot;name&quot;:&quot;ip-10-0-28-136.us-west-2.compute.internal&quot;},&quot;allocatable&quot;:{&quot;cpu&quot;:&quot;1930m&quot;,&quot;ephemeral-storage&quot;:&quot;18242267924&quot;,&quot;hugepages-1Gi&quot;:&quot;0&quot;,&quot;hugepages-2Mi&quot;:&quot;0&quot;,&quot;memory&quot;:&quot;3119300Ki&quot;,&quot;pods&quot;:&quot;29&quot;}}
{&quot;level&quot;:&quot;INFO&quot;,&quot;time&quot;:&quot;2025-04-01T17:00:42.921Z&quot;,&quot;logger&quot;:&quot;controller&quot;,&quot;message&quot;:&quot;command succeeded&quot;,&quot;commit&quot;:&quot;490ef94&quot;,&quot;controller&quot;:&quot;disruption.queue&quot;,&quot;command-id&quot;:&quot;3a295ac9-2a0d-4ddf-a6cb-e8d08915cff2&quot;}

# old node deleted
{&quot;level&quot;:&quot;INFO&quot;,&quot;time&quot;:&quot;2025-04-01T17:00:42.963Z&quot;,&quot;logger&quot;:&quot;controller&quot;,&quot;message&quot;:&quot;tainted node&quot;,&quot;commit&quot;:&quot;490ef94&quot;,&quot;controller&quot;:&quot;node.termination&quot;,&quot;controllerGroup&quot;:&quot;&quot;,&quot;controllerKind&quot;:&quot;Node&quot;,&quot;Node&quot;:{&quot;name&quot;:&quot;ip-10-0-24-100.us-west-2.compute.internal&quot;},&quot;namespace&quot;:&quot;&quot;,&quot;name&quot;:&quot;ip-10-0-24-100.us-west-2.compute.internal&quot;,&quot;reconcileID&quot;:&quot;73867503-c843-4373-b03e-a3406d6f60b3&quot;}
{&quot;level&quot;:&quot;INFO&quot;,&quot;time&quot;:&quot;2025-04-01T17:00:45.441Z&quot;,&quot;logger&quot;:&quot;controller&quot;,&quot;message&quot;:&quot;deleted node&quot;,&quot;commit&quot;:&quot;490ef94&quot;,&quot;controller&quot;:&quot;node.termination&quot;,&quot;controllerGroup&quot;:&quot;&quot;,&quot;controllerKind&quot;:&quot;Node&quot;,&quot;Node&quot;:{&quot;name&quot;:&quot;ip-10-0-24-100.us-west-2.compute.internal&quot;},&quot;namespace&quot;:&quot;&quot;,&quot;name&quot;:&quot;ip-10-0-24-100.us-west-2.compute.internal&quot;,&quot;reconcileID&quot;:&quot;03122384-44e7-4a0b-b28b-372ec6e10f1b&quot;}
{&quot;level&quot;:&quot;INFO&quot;,&quot;time&quot;:&quot;2025-04-01T17:00:45.808Z&quot;,&quot;logger&quot;:&quot;controller&quot;,&quot;message&quot;:&quot;deleted nodeclaim&quot;,&quot;commit&quot;:&quot;490ef94&quot;,&quot;controller&quot;:&quot;nodeclaim.termination&quot;,&quot;controllerGroup&quot;:&quot;karpenter.sh&quot;,&quot;controllerKind&quot;:&quot;NodeClaim&quot;,&quot;NodeClaim&quot;:{&quot;name&quot;:&quot;default-6css4&quot;},&quot;namespace&quot;:&quot;&quot;,&quot;name&quot;:&quot;default-6css4&quot;,&quot;reconcileID&quot;:&quot;85396b01-d267-4a9a-b995-fcabaaf5e423&quot;,&quot;Node&quot;:{&quot;name&quot;:&quot;ip-10-0-24-100.us-west-2.compute.internal&quot;},&quot;provider-id&quot;:&quot;aws:///us-west-2b/i-0cc37fd17692cedac&quot;}&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The upgrade using Karpenter's drift mechanism is now complete.&lt;/p&gt;
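Since Karpenter emits structured JSON logs, the disruption lifecycle shown above can also be filtered programmatically. Below is a minimal, hypothetical Python sketch (the function name and keyword list are assumptions; the field names `time` and `message` match the log excerpt above):

```python
import json

def summarize_karpenter_logs(lines):
    """Return (timestamp, message) pairs for lifecycle-related Karpenter log entries."""
    interesting = ("disrupting", "created nodeclaim", "launched nodeclaim",
                   "registered nodeclaim", "initialized nodeclaim",
                   "tainted node", "deleted node", "deleted nodeclaim")
    events = []
    for line in lines:
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip non-JSON lines (e.g. startup banners)
        msg = entry.get("message", "")
        if any(key in msg for key in interesting):
            events.append((entry.get("time"), msg))
    return events

sample = ['{"level":"INFO","time":"2025-04-01T16:58:57.318Z","message":"created nodeclaim"}']
print(summarize_karpenter_logs(sample))
```

You could pipe `kubectl -n karpenter logs deployment/karpenter -c controller` into a script like this to get a compact timeline of a drift replacement.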
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;3.5. Self-managed node upgrade&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For self-managed nodes, you must update the AMI yourself; the upgrade is then performed by applying the new AMI ID.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Change the ami_id of the self-managed node group in &lt;code&gt;base.tf&lt;/code&gt;, then apply it with &lt;code&gt;terraform apply -auto-approve&lt;/code&gt;.&lt;/p&gt;
&lt;pre class=&quot;nix&quot;&gt;&lt;code&gt;self_managed_node_groups = {
  self-managed-group = {
    instance_type = &quot;m5.large&quot;

...

    # Additional configurations
    ami_id           = &quot;ami-086414611b43bb691&quot; # Replace with the latest AMI ID for EKS 1.26
    subnet_ids       = module.vpc.private_subnets
    .
    .
    .
    launch_template_use_name_prefix = true
  }
}&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;You can then confirm that the nodes have been replaced. Note that although terraform apply appears to finish, the node group actually takes a while longer to be recreated. The difference from managed node groups is timing: a managed node group upgrade completes when terraform apply finishes, whereas a self-managed node upgrade finishes some time after terraform apply returns.&lt;/p&gt;
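As a reminder of where "the latest AMI ID" comes from: AWS publishes the recommended EKS-optimized AMI ID under a well-known SSM parameter path. The helper below (a hypothetical name, but the path format is the documented one) just builds that parameter name for a given Kubernetes version, which you would pass to `aws ssm get-parameter --query Parameter.Value --output text`:

```python
def eks_ami_ssm_parameter(k8s_version: str, flavor: str = "amazon-linux-2") -> str:
    """SSM parameter name holding the recommended EKS-optimized AMI ID for a version."""
    return f"/aws/service/eks/optimized-ami/{k8s_version}/{flavor}/recommended/image_id"

# e.g. aws ssm get-parameter --name "$(...)" --query Parameter.Value --output text
print(eks_ami_ssm_parameter("1.26"))
```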
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;3.6. Fargate node upgrade&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;With Fargate there is no group of virtual machines to provision or manage directly. To upgrade, you therefore simply restart the pods so that the Fargate controller schedules them onto the latest Kubernetes version.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Proceed as follows.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# initial state
ec2-user:~/environment:$ kubectl get pods -n assets -o wide
NAME                      READY   STATUS    RESTARTS   AGE     IP            NODE                                                NOMINATED NODE   READINESS GATES
assets-7ccc84cb4d-2p284   1/1     Running   0          2d11h   10.0.37.152   fargate-ip-10-0-37-152.us-west-2.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
ec2-user:~/environment:$ kubectl get node $(kubectl get pods -n assets -o jsonpath='{.items[0].spec.nodeName}') -o wide
NAME                                                STATUS   ROLES    AGE     VERSION                INTERNAL-IP   EXTERNAL-IP   OS-IMAGE         KERNEL-VERSION                  CONTAINER-RUNTIME
fargate-ip-10-0-37-152.us-west-2.compute.internal   Ready    &amp;lt;none&amp;gt;   2d11h   v1.25.16-eks-2d5f260   10.0.37.152   &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25

# restart the deployment
ec2-user:~/environment:$ kubectl rollout restart deployment assets -n assets
deployment.apps/assets restarted&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Once the new pod reaches Running, you can see that its node has also moved to 1.26.15.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;ec2-user:~/environment:$ kubectl get pods -n assets -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP           NODE                                               NOMINATED NODE   READINESS GATES
assets-66c4799cfc-4s7s6   1/1     Running   0          78s   10.0.28.67   fargate-ip-10-0-28-67.us-west-2.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
ec2-user:~/environment:$ kubectl get node $(kubectl get pods -n assets -o jsonpath='{.items[0].spec.nodeName}') -o wide
NAME                                               STATUS   ROLES    AGE   VERSION                INTERNAL-IP   EXTERNAL-IP   OS-IMAGE         KERNEL-VERSION                  CONTAINER-RUNTIME
fargate-ip-10-0-28-67.us-west-2.compute.internal   Ready    &amp;lt;none&amp;gt;   33s   v1.26.15-eks-2d5f260   10.0.28.67    &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;That concludes the in-place cluster upgrade.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;All nodes have been upgraded to version 1.26.15.&lt;/p&gt;
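The "are we done yet" check can also be scripted. A small sketch, assuming you feed it the VERSION column from `kubectl get no` (the function name is illustrative):

```python
def all_nodes_upgraded(versions, target_minor="v1.26"):
    """True if every node's kubelet version starts with the target minor version."""
    return all(v.startswith(target_minor) for v in versions)

versions = ["v1.26.15-eks-2d5f260", "v1.26.15-eks-59bf375"]
print(all_nodes_upgraded(versions))                              # every node upgraded
print(all_nodes_upgraded(versions + ["v1.25.16-eks-2d5f260"]))   # one straggler remains
```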
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;ec2-user:~/environment:$ kubectl get no 
NAME                                               STATUS   ROLES    AGE    VERSION
fargate-ip-10-0-28-67.us-west-2.compute.internal   Ready    &amp;lt;none&amp;gt;   10m    v1.26.15-eks-2d5f260
ip-10-0-28-136.us-west-2.compute.internal          Ready    &amp;lt;none&amp;gt;   27m    v1.26.15-eks-59bf375
ip-10-0-28-191.us-west-2.compute.internal          Ready    &amp;lt;none&amp;gt;   96m    v1.26.15-eks-59bf375
ip-10-0-3-227.us-west-2.compute.internal           Ready    &amp;lt;none&amp;gt;   60m    v1.26.15-eks-59bf375
ip-10-0-31-15.us-west-2.compute.internal           Ready    &amp;lt;none&amp;gt;   7m9s   v1.26.15-eks-59bf375
ip-10-0-35-1.us-west-2.compute.internal            Ready    &amp;lt;none&amp;gt;   12m    v1.26.15-eks-59bf375
ip-10-0-46-150.us-west-2.compute.internal          Ready    &amp;lt;none&amp;gt;   95m    v1.26.15-eks-59bf375&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;4. Blue/Green cluster upgrade&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A Blue/Green cluster upgrade works the same way as the Blue/Green upgrade of managed node groups we saw earlier: create a new Green cluster, then route traffic to it to complete the upgrade.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Compared to in-place, the advantage is that because Blue and Green are separate clusters, you can jump straight to the desired version in a single step, and keeping the old cluster makes rollback simple. On the other hand, running two clusters at once can incur extra cost, and migrating stateful workloads and routing traffic to the new cluster add complexity.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Since the hands-on steps themselves are not difficult, I will only outline the workshop at a high level.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;1) Create the Green cluster: create and deploy it with Terraform code. Beforehand, update it to the appropriate Kubernetes version and the matching add-ons.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;2) Migrate stateless workloads: deploy stateless applications to the new cluster. First check whether the target version removes any deprecated APIs and update the manifests in advance.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;3) Migrate stateful workloads: stateful workloads have data synchronization issues, so you must synchronize storage or data beforehand so the new cluster holds the same state. This part is not trivial and deserves careful planning.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;4) Switch traffic: once the new cluster is ready, route traffic to Green.&lt;/p&gt;
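One common way to perform step 4 is a weighted, gradual shift rather than an instant cutover. The sketch below is purely illustrative (the step sizes are my assumption, not from the workshop): it yields Blue/Green weight pairs you could apply to, for example, weighted DNS records:

```python
def weight_schedule(steps=(0, 10, 50, 100)):
    """Yield (blue_weight, green_weight) pairs for a gradual Blue/Green cutover."""
    for green in steps:
        yield (100 - green, green)

for blue, green in weight_schedule():
    print(f"blue={blue} green={green}")
```

Keeping Blue at a nonzero weight until Green is validated is what makes the rollback story simple: shifting the weights back restores the old cluster.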
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This wraps up the post on EKS upgrades.&lt;/p&gt;
      <category>EKS</category>
      <category>upgrade</category>
      <category>업그레이드</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/44</guid>
      <comments>https://a-person.tistory.com/44#entry44comment</comments>
      <pubDate>Wed, 2 Apr 2025 02:49:38 +0900</pubDate>
    </item>
    <item>
      <title>Jenkins와 Argo CD를 활용한 Kubernetes 환경 CI/CD 구성</title>
      <link>https://a-person.tistory.com/43</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;이번 포스트에서는 Kubernetes 환경에서 애플리케이션 배포를 위한 CI(Continous Intergration)/CD(Continous Deployment) 구성을 예제를 통해서 살펴보겠습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In a Kubernetes environment, pushing source code to a code repository triggers a container image build, and the workload is then deployed to Kubernetes from the new container image.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Doing each of these steps by hand on a developer's PC is tedious, and with frequent code changes the repetition can mean spending more time deploying than actually developing the application. Worse, every manual step is a chance for human error, and a deployment mistake that causes a service incident is a serious problem.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In this post we will set up CI with Jenkins and CD with Argo CD, and I hope it gives you ideas on how to make the CI/CD process simple and automated.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Table of Contents&lt;/h2&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;Lab environment setup&lt;/li&gt;
&lt;li&gt;CI with Jenkins&lt;/li&gt;
&lt;li&gt;CD with Argo CD&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;1. Lab environment setup&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The goal of the lab environment is to complete the following flow.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1155&quot; data-origin-height=&quot;570&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/dJwYPt/btsM2ZyMvxy/do8RpnlZEIKXZKd429IPX0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/dJwYPt/btsM2ZyMvxy/do8RpnlZEIKXZKd429IPX0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/dJwYPt/btsM2ZyMvxy/do8RpnlZEIKXZKd429IPX0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FdJwYPt%2FbtsM2ZyMvxy%2Fdo8RpnlZEIKXZKd429IPX0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1155&quot; height=&quot;570&quot; data-origin-width=&quot;1155&quot; data-origin-height=&quot;570&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;When a developer pushes code to the dev team repository, Jenkins, acting as CI, fetches the code, builds a container image, pushes it to the container registry (Docker Hub), and commits the resulting change to the DevOps team's source repository.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Argo CD, in charge of CD, picks up changes from the DevOps team repository, checks whether the target Kubernetes cluster matches the desired state, and syncs it. At that point the newly uploaded container image is pulled and the new deployment takes place.&lt;/p&gt;
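Conceptually, Argo CD keeps comparing the desired state in Git against the live state in the cluster and syncs whenever they diverge. A toy sketch of that comparison (dictionaries stand in for manifests; everything here is illustrative, not Argo CD's actual implementation):

```python
def diff_state(desired: dict, live: dict) -> dict:
    """Return the fields whose desired value differs from the live value."""
    return {k: v for k, v in desired.items() if live.get(k) != v}

# Git says the deployment should run image 0.0.2; the cluster still runs 0.0.1.
desired = {"image": "myrepo/dev-app:0.0.2", "replicas": 2}
live = {"image": "myrepo/dev-app:0.0.1", "replicas": 2}
print(diff_state(desired, live))  # the out-of-sync fields a sync would apply
```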
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The components used in this lab are listed below.&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;SCM(Source Code Management): Gogs(&lt;a href=&quot;https://gogs.io/&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://gogs.io/&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Container image registry: Docker Hub(&lt;a href=&quot;https://hub.docker.com/&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://hub.docker.com/&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;CI(Continuous Integration): Jenkins(&lt;a href=&quot;https://www.jenkins.io/&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://www.jenkins.io/&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;CD(Continuous Deployment): Argo CD(&lt;a href=&quot;https://argo-cd.readthedocs.io/&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://argo-cd.readthedocs.io/&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For the lab, we will keep the Kubernetes environment simple by using kind.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Gogs and Jenkins will run via docker compose, and Argo CD will be installed in-cluster on Kubernetes.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Docker Hub setup&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;First, create a dev-app repository on Docker Hub, which will serve as the container image registry, and issue a token.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1034&quot; data-origin-height=&quot;676&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/3WkmC/btsM1dE2lKK/K21pZuIQULkqJ7mAt4U1j1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/3WkmC/btsM1dE2lKK/K21pZuIQULkqJ7mAt4U1j1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/3WkmC/btsM1dE2lKK/K21pZuIQULkqJ7mAt4U1j1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2F3WkmC%2FbtsM1dE2lKK%2FK21pZuIQULkqJ7mAt4U1j1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1034&quot; height=&quot;676&quot; data-origin-width=&quot;1034&quot; data-origin-height=&quot;676&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Generate the token from the account menu in the top right, under Account settings, via Personal access tokens.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1911&quot; data-origin-height=&quot;447&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/c9Rv8C/btsM2aguClQ/TfNGtnk4TTV3muQhMKwMd0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/c9Rv8C/btsM2aguClQ/TfNGtnk4TTV3muQhMKwMd0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/c9Rv8C/btsM2aguClQ/TfNGtnk4TTV3muQhMKwMd0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fc9Rv8C%2FbtsM2aguClQ%2FTfNGtnk4TTV3muQhMKwMd0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1911&quot; height=&quot;447&quot; data-origin-width=&quot;1911&quot; data-origin-height=&quot;447&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Set the Access permissions to Read, Write, Delete.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Record the generated token somewhere safe.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Kubernetes environment setup&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;First, set up WSL on Windows for the lab and create a Kubernetes cluster with kind.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# before deploying the cluster, prepare a working directory
mkdir cicd-labs
cd ~/cicd-labs

# use the WSL2 eth0 IP
ip -br -c a

MyIP=&amp;lt;your WSL2 eth0 IP&amp;gt;
MyIP=172.28.157.42

# create the file below in the cicd-labs directory
cat &amp;gt; kind-3node.yaml &amp;lt;&amp;lt;EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  apiServerAddress: &quot;$MyIP&quot;
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30000
    hostPort: 30000
  - containerPort: 30001
    hostPort: 30001
  - containerPort: 30002
    hostPort: 30002
  - containerPort: 30003
    hostPort: 30003
- role: worker
- role: worker
EOF
kind create cluster --config kind-3node.yaml --name myk8s --image kindest/node:v1.32.2

# verify
kind get nodes --name myk8s
myk8s-control-plane
myk8s-worker
myk8s-worker2

kubectl get no
NAME                  STATUS     ROLES           AGE   VERSION
myk8s-control-plane   Ready      control-plane   36s   v1.32.2
myk8s-worker          NotReady   &amp;lt;none&amp;gt;          19s   v1.32.2
myk8s-worker2         NotReady   &amp;lt;none&amp;gt;          19s   v1.32.2


# check the k8s API address
kubectl cluster-info
Kubernetes control plane is running at https://172.28.157.42:33215
CoreDNS is running at https://172.28.157.42:33215/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Create a docker-registry secret in this Kubernetes cluster so it can pull the images pushed to the Docker Hub repository created earlier.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# k8s secret: configure Docker credentials
kubectl get secret -A  # the type is specified at creation time

DHUSER=&amp;lt;Docker Hub username&amp;gt;
DHPASS=&amp;lt;Docker Hub password or token&amp;gt;
echo $DHUSER $DHPASS

kubectl create secret docker-registry dockerhub-secret \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=$DHUSER \
  --docker-password=$DHPASS&lt;/code&gt;&lt;/pre&gt;
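Under the hood, `kubectl create secret docker-registry` stores a `.dockerconfigjson` payload in the secret. The sketch below reproduces that structure so you can see what actually ends up in the cluster (the credentials here are dummies, and the helper name is mine):

```python
import base64
import json

def dockerconfigjson(server: str, user: str, password: str) -> dict:
    """Build the .dockerconfigjson structure that kubectl generates for a registry secret."""
    # The "auth" field is base64("user:password"), which container runtimes use directly.
    auth = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"auths": {server: {"username": user, "password": password, "auth": auth}}}

cfg = dockerconfigjson("https://index.docker.io/v1/", "demo", "secret")
print(json.dumps(cfg, indent=2))
```

The whole JSON document is then itself base64-encoded into the secret's `.dockerconfigjson` data key.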
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Installing Gogs and Jenkins&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;When the cluster was created with kind in the previous step, a docker network bridge named kind was created.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Install Gogs and Jenkins on this kind network as shown below. Note that in this lab, Kubernetes, Gogs, and Jenkins all share the kind network, and the exposed IP is set to the WSL eth0 address so they can reach each other.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# move to the working directory
cd cicd-labs

# install kind first so the docker network (kind) exists before creating Jenkins and gogs below
# check docker networks: we use kind
docker network ls
...
d91da96d2114   kind      bridge    local
...

# write docker-compose.yaml
cat &amp;lt;&amp;lt;EOT &amp;gt; docker-compose.yaml
services:

  jenkins:
    container_name: jenkins
    image: jenkins/jenkins
    restart: unless-stopped
    networks:
      - kind
    ports:
      - &quot;8080:8080&quot;
      - &quot;50000:50000&quot;
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - jenkins_home:/var/jenkins_home

  gogs:
    container_name: gogs
    image: gogs/gogs
    restart: unless-stopped
    networks:
      - kind
    ports:
      - &quot;10022:22&quot;
      - &quot;3000:3000&quot;
    volumes:
      - gogs-data:/data

volumes:
  jenkins_home:
  gogs-data:

networks:
  kind:
    external: true
EOT


# deploy
docker compose up -d
[+] Running 21/21
 ✔ gogs Pulled                                                                                                                                                                        12.3s
 ✔ jenkins Pulled                                                                                                                                                                     33.5s

[+] Running 4/4
 ✔ Volume &quot;cicd-labs_jenkins_home&quot;  Created                                                                                                                                            0.0s
 ✔ Volume &quot;cicd-labs_gogs-data&quot;     Created                                                                                                                                            0.0s
 ✔ Container gogs                   Started                                                                                                                                            2.8s
 ✔ Container jenkins                Started

docker compose ps
NAME      IMAGE             COMMAND                  SERVICE   CREATED          STATUS                    PORTS
gogs      gogs/gogs         &quot;/app/gogs/docker/st&amp;hellip;&quot;   gogs      58 seconds ago   Up 55 seconds (healthy)   0.0.0.0:3000-&amp;gt;3000/tcp, :::3000-&amp;gt;3000/tcp, 0.0.0.0:10022-&amp;gt;22/tcp, [::]:10022-&amp;gt;22/tcp
jenkins   jenkins/jenkins   &quot;/usr/bin/tini -- /u&amp;hellip;&quot;   jenkins   58 seconds ago   Up 55 seconds             0.0.0.0:8080-&amp;gt;8080/tcp, :::8080-&amp;gt;8080/tcp, 0.0.0.0:50000-&amp;gt;50000/tcp, :::50000-&amp;gt;50000/tcp
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Gogs initial setup&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To initialize the installed Gogs, open the following URL in a browser.&lt;/p&gt;
&lt;pre class=&quot;dts&quot;&gt;&lt;code&gt;http://127.0.0.1:3000/install &lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A screen like the one below appears for the initial setup.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1253&quot; data-origin-height=&quot;410&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/b2AcWG/btsM2pkpevW/z7w7gKKD5WJFfqzi66SZKk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/b2AcWG/btsM2pkpevW/z7w7gKKD5WJFfqzi66SZKk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/b2AcWG/btsM2pkpevW/z7w7gKKD5WJFfqzi66SZKk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fb2AcWG%2FbtsM2pkpevW%2Fz7w7gKKD5WJFfqzi66SZKk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1253&quot; height=&quot;410&quot; data-origin-width=&quot;1253&quot; data-origin-height=&quot;410&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;I used the following values for the initial configuration.&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Database: SQLite3&lt;/li&gt;
&lt;li&gt;Application URL: http://&amp;lt;WSL eth0 IP&amp;gt;:3000/&lt;/li&gt;
&lt;li&gt;Default Branch: main&lt;/li&gt;
&lt;li&gt;Admin Account Settings: enter Username, Password, and Admin Email&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Then click [Install Gogs] to proceed, and once the screen changes, log in with the admin account.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Create the dev team and DevOps team repositories described earlier, then generate a token to use for authentication. (Some UI translations can be awkward, so consider switching the language setting at the bottom to English before proceeding.)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Creating the repositories&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Create the two repositories needed for the lab.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1255&quot; data-origin-height=&quot;379&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/b968Bz/btsM04Ix7IY/4UWrFEiaXPKv5cCdkyuK9K/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/b968Bz/btsM04Ix7IY/4UWrFEiaXPKv5cCdkyuK9K/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/b968Bz/btsM04Ix7IY/4UWrFEiaXPKv5cCdkyuK9K/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fb968Bz%2FbtsM04Ix7IY%2F4UWrFEiaXPKv5cCdkyuK9K%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1255&quot; height=&quot;379&quot; data-origin-width=&quot;1255&quot; data-origin-height=&quot;379&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;[Dev team repository]&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Repository Name: dev-app&lt;/li&gt;
&lt;li&gt;Visibility: Private&lt;/li&gt;
&lt;li&gt;.gitignore: Python&lt;/li&gt;
&lt;li&gt;Readme: leave as Default and check 'initialize this repository with selected files and template'&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;[DevOps team repository]&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Repository Name: ops-deploy&lt;/li&gt;
&lt;li&gt;Visibility: Private&lt;/li&gt;
&lt;li&gt;.gitignore: Python&lt;/li&gt;
&lt;li&gt;Readme: leave as Default and check 'initialize this repository with selected files and template'&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Once both are created, the environment looks like this.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1259&quot; data-origin-height=&quot;770&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/BB9WK/btsM2JwaC9Q/lnJ0gBfReCCwFLMqPpuDTK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/BB9WK/btsM2JwaC9Q/lnJ0gBfReCCwFLMqPpuDTK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/BB9WK/btsM2JwaC9Q/lnJ0gBfReCCwFLMqPpuDTK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FBB9WK%2FbtsM2JwaC9Q%2FlnJ0gBfReCCwFLMqPpuDTK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1259&quot; height=&quot;770&quot; data-origin-width=&quot;1259&quot; data-origin-height=&quot;770&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Token generation&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Generate a token so we can access Gogs with git from the local PC.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Click the account icon in the top right of Gogs, go to Your Settings&amp;gt;Application&amp;gt;Generate New Token, create a token as shown below, and record the token value.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1248&quot; data-origin-height=&quot;648&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/dmIG3w/btsM1o7C8Bj/7d1QsIYfFqhWU1l97iekd1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/dmIG3w/btsM1o7C8Bj/7d1QsIYfFqhWU1l97iekd1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/dmIG3w/btsM1o7C8Bj/7d1QsIYfFqhWU1l97iekd1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FdmIG3w%2FbtsM1o7C8Bj%2F7d1QsIYfFqhWU1l97iekd1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1248&quot; height=&quot;648&quot; data-origin-width=&quot;1248&quot; data-origin-height=&quot;648&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now we will write the base application code and push it to the development team repository (dev-app).&lt;/p&gt;
&lt;pre class=&quot;livecodeserver&quot;&gt;&lt;code&gt;TOKEN=&amp;lt;your Gogs token&amp;gt;
TOKEN=8cdf5569aedd230503abea67b0794b4d1e931c10 

MyIP=&amp;lt;your PC IP&amp;gt; # Windows (WSL2) users: enter the eth0 IP of your WSL2 Ubuntu!
MyIP=172.28.157.42

git clone http://devops:$TOKEN@$MyIP:3000/devops/dev-app.git
Cloning into 'dev-app'...
...

cd dev-app

# initial git configuration
git --no-pager config --local --list
git config --local user.name &quot;devops&quot;
git config --local user.email &quot;a@a.com&quot;
git config --local init.defaultBranch main
git config --local credential.helper store
git --no-pager config --local --list

# check the configuration
git --no-pager branch
* main
git remote -v
origin  http://devops:8cdf5569aedd230503abea67b0794b4d1e931c10@172.28.157.42:3000/devops/dev-app.git (fetch)
origin  http://devops:8cdf5569aedd230503abea67b0794b4d1e931c10@172.28.157.42:3000/devops/dev-app.git (push)

# write server.py
cat &amp;gt; server.py &amp;lt;&amp;lt;EOF
from http.server import ThreadingHTTPServer, BaseHTTPRequestHandler
from datetime import datetime
import socket

class RequestHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        match self.path:
            case '/':
                now = datetime.now()
                hostname = socket.gethostname()
                response_string = now.strftime(&quot;The time is %-I:%M:%S %p, VERSION 0.0.1\n&quot;)
                response_string += f&quot;Server hostname: {hostname}\n&quot;                
                self.respond_with(200, response_string)
            case '/healthz':
                self.respond_with(200, &quot;Healthy&quot;)
            case _:
                self.respond_with(404, &quot;Not Found&quot;)

    def respond_with(self, status_code: int, content: str) -&amp;gt; None:
        self.send_response(status_code)
        self.send_header('Content-type', 'text/plain')
        self.end_headers()
        self.wfile.write(bytes(content, &quot;utf-8&quot;)) 

def startServer():
    try:
        server = ThreadingHTTPServer(('', 80), RequestHandler)
        print(&quot;Listening on &quot; + &quot;:&quot;.join(map(str, server.server_address)))
        server.serve_forever()
    except KeyboardInterrupt:
        server.shutdown()

if __name__ == &quot;__main__&quot;:
    startServer()
EOF


# (Note) verify with python: a simple web server that responds on / and /healthz as below
python3 server.py
Listening on 0.0.0.0:80
127.0.0.1 - - [29/Mar/2025 23:56:19] &quot;GET / HTTP/1.1&quot; 200 -
127.0.0.1 - - [29/Mar/2025 23:56:27] &quot;GET /healthz HTTP/1.1&quot; 200 -


# create the Dockerfile
cat &amp;gt; Dockerfile &amp;lt;&amp;lt;EOF
FROM python:3.12
ENV PYTHONUNBUFFERED 1
COPY . /app
WORKDIR /app 
CMD python3 server.py
EOF


# create the VERSION file
echo &quot;0.0.1&quot; &amp;gt; VERSION

# check the resulting files
tree
.
├── Dockerfile
├── README.md
├── VERSION
└── server.py

# push to the remote
git status
git add .
git commit -m &quot;Add dev-app&quot;
git push -u origin main
...&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Once the push completes, you can confirm the files are reflected in the repository as shown below.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1240&quot; data-origin-height=&quot;319&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/b3bIMb/btsM14t3pq7/LJrIw8pic027Exo4ikxzCk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/b3bIMb/btsM14t3pq7/LJrIw8pic027Exo4ikxzCk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/b3bIMb/btsM14t3pq7/LJrIw8pic027Exo4ikxzCk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fb3bIMb%2FbtsM14t3pq7%2FLJrIw8pic027Exo4ikxzCk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1240&quot; height=&quot;319&quot; data-origin-width=&quot;1240&quot; data-origin-height=&quot;319&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Initial Jenkins Setup&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Check the initial password inside the created Jenkins container and log in.&lt;/p&gt;
&lt;pre class=&quot;vala&quot;&gt;&lt;code&gt;# move to the working directory
cd cicd-labs

# check the initial Jenkins password
docker compose exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword
cf7605f7b5ff45349b65e9fc682ab5ca

# Open the Jenkins web UI &amp;gt; enter the initial password &amp;gt; install plugins &amp;gt; enter admin account details
# &amp;gt; enter the WSL eth0 IP as the Jenkins URL
# open http://127.0.0.1:8080 in a web browser

# (Note) check the logs to follow the plugin installation
docker compose logs jenkins -f&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Because this lab runs docker build from Jenkins, we will install the Docker CLI inside the Jenkins container.&lt;/p&gt;
&lt;pre class=&quot;properties&quot;&gt;&lt;code&gt;# install the docker executable inside the Jenkins container
docker compose exec --privileged -u root jenkins bash
-----------------------------------------------------
id

install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc
chmod a+r /etc/apt/keyrings/docker.asc
echo \
  &quot;deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian \
  $(. /etc/os-release &amp;amp;&amp;amp; echo &quot;$VERSION_CODENAME&quot;) stable&quot; | \
  tee /etc/apt/sources.list.d/docker.list &amp;gt; /dev/null
apt-get update &amp;amp;&amp;amp; apt install docker-ce-cli curl tree jq yq -y

# verify (the commands below should succeed)
docker info
docker ps
which docker

# grant the non-root jenkins user permission to run docker inside the container
groupadd -g 1001 -f docker  # Windows WSL2 (container) &amp;gt;&amp;gt; use the docker group ID shown by cat /etc/group

chgrp docker /var/run/docker.sock
ls -l /var/run/docker.sock
usermod -aG docker jenkins
cat /etc/group | grep docker
docker:x:1001:jenkins

exit
--------------------------------------------

# restart the Jenkins container so the settings above also apply to the Jenkins app
docker compose restart jenkins
[+] Restarting 1/1
 ✔ Container jenkins  Started     

# confirm docker commands run as the jenkins user
docker compose exec jenkins id
uid=1000(jenkins) gid=1000(jenkins) groups=1000(jenkins),1001(docker)
docker compose exec jenkins docker info
docker compose exec jenkins docker ps&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Next, open the Jenkins web UI and install the plugins used in this lab.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Go to Manage Jenkins &amp;gt; Plugins in the left menu and install Pipeline Stage View, Docker Pipeline, and Gogs.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1853&quot; data-origin-height=&quot;583&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/DaDPW/btsM2qwROoa/vtbpvKTkKs850kom8EyRT0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/DaDPW/btsM2qwROoa/vtbpvKTkKs850kom8EyRT0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/DaDPW/btsM2qwROoa/vtbpvKTkKs850kom8EyRT0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FDaDPW%2FbtsM2qwROoa%2FvtbpvKTkKs850kom8EyRT0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1853&quot; height=&quot;583&quot; data-origin-width=&quot;1853&quot; data-origin-height=&quot;583&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For example, search Available plugins as shown below, select the plugin, and proceed with Install.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1431&quot; data-origin-height=&quot;286&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cx8bwi/btsM1P4YlYa/o9UgH1vCIVxdRnkrH8S5Y0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cx8bwi/btsM1P4YlYa/o9UgH1vCIVxdRnkrH8S5Y0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cx8bwi/btsM1P4YlYa/o9UgH1vCIVxdRnkrH8S5Y0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fcx8bwi%2FbtsM1P4YlYa%2Fo9UgH1vCIVxdRnkrH8S5Y0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1431&quot; height=&quot;286&quot; data-origin-width=&quot;1431&quot; data-origin-height=&quot;286&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Finally, we will configure credentials in Jenkins. These credentials hold the authentication information Jenkins uses to access Gogs, Docker Hub, and Kubernetes.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Go back to Manage Jenkins &amp;gt; Credentials in the left menu, then select global under Domains.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1577&quot; data-origin-height=&quot;196&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/b9LhMs/btsM2h7KZcg/IAmdTJlkRvmxNwkez0OF2K/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/b9LhMs/btsM2h7KZcg/IAmdTJlkRvmxNwkez0OF2K/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/b9LhMs/btsM2h7KZcg/IAmdTJlkRvmxNwkez0OF2K/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fb9LhMs%2FbtsM2h7KZcg%2FIAmdTJlkRvmxNwkez0OF2K%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1577&quot; height=&quot;196&quot; data-origin-width=&quot;1577&quot; data-origin-height=&quot;196&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Create each credential via Add Credentials.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1570&quot; data-origin-height=&quot;244&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/USeu3/btsM1KCkYwJ/N6TPcHCzzpkMYQsxgvTB7K/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/USeu3/btsM1KCkYwJ/N6TPcHCzzpkMYQsxgvTB7K/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/USeu3/btsM1KCkYwJ/N6TPcHCzzpkMYQsxgvTB7K/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FUSeu3%2FbtsM1KCkYwJ%2FN6TPcHCzzpkMYQsxgvTB7K%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1570&quot; height=&quot;244&quot; data-origin-width=&quot;1570&quot; data-origin-height=&quot;244&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;1) Gogs credentials (gogs-crd)&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Kind: &lt;b&gt;Username with password&lt;/b&gt;&lt;/li&gt;
&lt;li&gt;Username: devops&lt;/li&gt;
&lt;li&gt;Password: *&amp;lt;token&amp;gt;*&lt;/li&gt;
&lt;li&gt;ID: &lt;b&gt;gogs-crd&lt;/b&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;2) Docker Hub credentials (dockerhub-crd)&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Kind: &lt;b&gt;Username with password&lt;/b&gt;&lt;/li&gt;
&lt;li&gt;Username: *&amp;lt;Docker account name&amp;gt;*&lt;/li&gt;
&lt;li&gt;Password: *&amp;lt;token&amp;gt;*&lt;/li&gt;
&lt;li&gt;&lt;b&gt;ID&lt;/b&gt;: &lt;b&gt;dockerhub-crd&lt;/b&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;3) Kubernetes (kind) credentials (k8s-crd)&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Kind: &lt;b&gt;Secret file&lt;/b&gt;&lt;/li&gt;
&lt;li&gt;File: *&amp;lt;kubeconfig file&amp;gt;*&lt;/li&gt;
&lt;li&gt;ID: &lt;b&gt;k8s-crd&lt;/b&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The credentials are now created as shown below.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1573&quot; data-origin-height=&quot;380&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bWTEpw/btsM0Jq7X7k/FyndnF8qaKUkfOgEJyK0C0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bWTEpw/btsM0Jq7X7k/FyndnF8qaKUkfOgEJyK0C0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bWTEpw/btsM0Jq7X7k/FyndnF8qaKUkfOgEJyK0C0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbWTEpw%2FbtsM0Jq7X7k%2FFyndnF8qaKUkfOgEJyK0C0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1573&quot; height=&quot;380&quot; data-origin-width=&quot;1573&quot; data-origin-height=&quot;380&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Initial Argo CD Setup&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Deploy Argo CD to the installed kind cluster as follows.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# create the namespace and write the values file
cd cicd-labs

kubectl create ns argocd
cat &amp;lt;&amp;lt;EOF &amp;gt; argocd-values.yaml
dex:
  enabled: false

server:
  service:
    type: NodePort
    nodePortHttps: 30002
  extraArgs:
    - --insecure  # use HTTP instead of HTTPS
EOF

# install
helm repo add argo https://argoproj.github.io/argo-helm
helm install argocd argo/argo-cd --version 7.8.13 -f argocd-values.yaml --namespace argocd # 7.7.10

# verify
kubectl get pod,svc -n argocd
NAME                                                   READY   STATUS    RESTARTS   AGE
pod/argocd-application-controller-0                    1/1     Running   0          3m51s
pod/argocd-applicationset-controller-cccb64dc8-wsd7w   1/1     Running   0          3m51s
pod/argocd-notifications-controller-7cd4d88cd4-4s789   1/1     Running   0          3m51s
pod/argocd-redis-6c5698fc46-njwtf                      1/1     Running   0          3m51s
pod/argocd-repo-server-5f6c4f4cf4-d4twk                1/1     Running   0          3m51s
pod/argocd-server-7cb958f5fb-str77                     1/1     Running   0          3m51s

NAME                                       TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
service/argocd-applicationset-controller   ClusterIP   10.96.90.65    &amp;lt;none&amp;gt;        7000/TCP                     3m52s
service/argocd-redis                       ClusterIP   10.96.155.55   &amp;lt;none&amp;gt;        6379/TCP                     3m52s
service/argocd-repo-server                 ClusterIP   10.96.3.228    &amp;lt;none&amp;gt;        8081/TCP                     3m52s
service/argocd-server                      NodePort    10.96.84.116   &amp;lt;none&amp;gt;        80:30080/TCP,443:30002/TCP   3m52s

kubectl get crd | grep argo
applications.argoproj.io      2025-03-29T15:40:20Z
applicationsets.argoproj.io   2025-03-29T15:40:20Z
appprojects.argoproj.io       2025-03-29T15:40:20Z

# check the initial login password
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath=&quot;{.data.password}&quot; | base64 -d ;echo
LM991sRTVwk7xTPu&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Open &lt;a href=&quot;http://127.0.0.1:30002&quot;&gt;http://127.0.0.1:30002&lt;/a&gt; in a web browser and log in as admin with the password obtained above.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;After logging in, go to Settings&amp;gt;Repositories and use CONNECT REPO to connect the ops-deploy repository created earlier in Gogs.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;678&quot; data-origin-height=&quot;335&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/vnfWD/btsMZ511jDQ/4TmDjf4U5Z23pNEhz8Yee1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/vnfWD/btsMZ511jDQ/4TmDjf4U5Z23pNEhz8Yee1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/vnfWD/btsMZ511jDQ/4TmDjf4U5Z23pNEhz8Yee1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FvnfWD%2FbtsMZ511jDQ%2F4TmDjf4U5Z23pNEhz8Yee1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;678&quot; height=&quot;335&quot; data-origin-width=&quot;678&quot; data-origin-height=&quot;335&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Enter the connection information as shown below. For the password, use the token generated earlier in Gogs.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;491&quot; data-origin-height=&quot;706&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/dPqN7b/btsMZ511jEp/6CzpnskCY3vkXgHWbJqTG0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/dPqN7b/btsMZ511jEp/6CzpnskCY3vkXgHWbJqTG0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/dPqN7b/btsMZ511jEp/6CzpnskCY3vkXgHWbJqTG0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FdPqN7b%2FbtsMZ511jEp%2F6CzpnskCY3vkXgHWbJqTG0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;491&quot; height=&quot;706&quot; data-origin-width=&quot;491&quot; data-origin-height=&quot;706&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The connection should complete successfully as shown below.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1604&quot; data-origin-height=&quot;249&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/SDP64/btsM2epEkxi/8pAVLywToO939tIbVKfjM0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/SDP64/btsM2epEkxi/8pAVLywToO939tIbVKfjM0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/SDP64/btsM2epEkxi/8pAVLywToO939tIbVKfjM0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FSDP64%2FbtsM2epEkxi%2F8pAVLywToO939tIbVKfjM0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1604&quot; height=&quot;249&quot; data-origin-width=&quot;1604&quot; data-origin-height=&quot;249&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;So far we have created a repository on Docker Hub, then installed Kubernetes, Gogs, Jenkins, and Argo CD and completed their initial setup.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;2. Configuring CI with Jenkins&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's review basic Jenkins terminology using the Jenkins screen below.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;425&quot; data-origin-height=&quot;565&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/b3fjHh/btsM05Htve9/cUCSy6ePqY9MC4NdGSKtIK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/b3fjHh/btsM05Htve9/cUCSy6ePqY9MC4NdGSKtIK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/b3fjHh/btsM05Htve9/cUCSy6ePqY9MC4NdGSKtIK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fb3fjHh%2FbtsM05Htve9%2FcUCSy6ePqY9MC4NdGSKtIK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;425&quot; height=&quot;565&quot; data-origin-width=&quot;425&quot; data-origin-height=&quot;565&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;You will see terms such as Item and build.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In Jenkins, the basic unit of work is called an &lt;b&gt;Item&lt;/b&gt;; it is also referred to as a Project, Job, or Pipeline.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;An Item contains instructions such as the following.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;1) Trigger: when the work runs (specifies when the task starts)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;2) Build step: the tasks that make up the work, organized as individual steps&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;3) Post-build action: commands to run after the task completes&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In Jenkins, a '&lt;b&gt;build&lt;/b&gt;' is a specific run of an Item. Each time a job runs, it is assigned a unique build number, and details of that run, such as generated artifacts and console logs, are stored under that build number.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size23&quot;&gt;Creating an Item for Manual Builds&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To run CI in Jenkins, click [+ New Item] in the web UI to create an Item.&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Item Name: pipeline-ci&lt;/li&gt;
&lt;li&gt;Item type: Pipeline&lt;/li&gt;
&lt;li&gt;pipeline script:&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class=&quot;dart&quot;&gt;&lt;code&gt;pipeline {
    agent any
    environment {
        DOCKER_IMAGE = '&amp;lt;your Docker Hub account&amp;gt;/dev-app' // Docker image name
    }
    stages {
        stage('Checkout') {
            steps {
                 git branch: 'main',
                 url: 'http://&amp;lt;your PC IP&amp;gt;:3000/devops/dev-app.git',  // check out the code from Git
                 credentialsId: 'gogs-crd'  // Credentials ID
            }
        }
        stage('Read VERSION') {
            steps {
                script {
                    // read the VERSION file
                    def version = readFile('VERSION').trim()
                    echo &quot;Version found: ${version}&quot;
                    // set the environment variable
                    env.DOCKER_TAG = version
                }
            }
        }
        stage('Docker Build and Push') {
            steps {
                script {
                    docker.withRegistry('https://index.docker.io/v1/', 'dockerhub-crd') {
                        // use DOCKER_TAG
                        def appImage = docker.build(&quot;${DOCKER_IMAGE}:${DOCKER_TAG}&quot;)
                        appImage.push()
                        appImage.push(&quot;latest&quot;)
                    }
                }
            }
        }
    }
    post {
        success {
            echo &quot;Docker image ${DOCKER_IMAGE}:${DOCKER_TAG} has been built and pushed successfully!&quot;
        }
        failure {
            echo &quot;Pipeline failed. Please check the logs.&quot;
        }
    }
}&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The pipeline is divided into environment, stages, and post sections. The environment block defines environment variables, and post defines the Post-build actions described earlier.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The stages run as Checkout &amp;gt; Read VERSION &amp;gt; Docker Build and Push: the pipeline checks out the specified Git repository, reads the VERSION file to obtain the version used as DOCKER_TAG, and finally runs docker build and push.&lt;/p&gt;
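&lt;p data-ke-size=&quot;size16&quot;&gt;The Read VERSION stage amounts to only a few lines of logic. The Python sketch below mirrors that flow outside Jenkins (the devops/dev-app image name is illustrative, not from the pipeline):&lt;/p&gt;

```python
# Sketch of the "Read VERSION" -> tag logic used by the pipeline above.
from pathlib import Path
import tempfile

workdir = Path(tempfile.mkdtemp())
(workdir / "VERSION").write_text("0.0.1\n")   # as committed to dev-app

version = (workdir / "VERSION").read_text().strip()  # readFile('VERSION').trim()
docker_image = "devops/dev-app"                      # illustrative account/repo
docker_tag = version                                 # env.DOCKER_TAG = version

print(f"{docker_image}:{docker_tag}")  # devops/dev-app:0.0.1
```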
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;You can also see that the credentials created during the initial setup are referenced via credentialsId.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Edit the script above, enter it into the Pipeline script field under Pipeline at the bottom, and save.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1275&quot; data-origin-height=&quot;710&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/tm6RY/btsM21XGFfI/ouDxw54wZ7PcDv2899ukZK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/tm6RY/btsM21XGFfI/ouDxw54wZ7PcDv2899ukZK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/tm6RY/btsM21XGFfI/ouDxw54wZ7PcDv2899ukZK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Ftm6RY%2FbtsM21XGFfI%2FouDxw54wZ7PcDv2899ukZK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1275&quot; height=&quot;710&quot; data-origin-width=&quot;1275&quot; data-origin-height=&quot;710&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;From the created Item, click [Build Now] to run the pipeline manually.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1419&quot; data-origin-height=&quot;780&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/budUCi/btsM1P4YlYG/eI7RAqa9WPprdDEB6WZKJ0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/budUCi/btsM1P4YlYG/eI7RAqa9WPprdDEB6WZKJ0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/budUCi/btsM1P4YlYG/eI7RAqa9WPprdDEB6WZKJ0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbudUCi%2FbtsM1P4YlYG%2FeI7RAqa9WPprdDEB6WZKJ0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1419&quot; height=&quot;780&quot; data-origin-width=&quot;1419&quot; data-origin-height=&quot;780&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The newly uploaded image is also visible on Docker Hub.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;881&quot; data-origin-height=&quot;503&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cQ6h0L/btsM1PKEptB/N0BazojmRBduHK4FHGZnz1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cQ6h0L/btsM1PKEptB/N0BazojmRBduHK4FHGZnz1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cQ6h0L/btsM1PKEptB/N0BazojmRBduHK4FHGZnz1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcQ6h0L%2FbtsM1PKEptB%2FN0BazojmRBduHK4FHGZnz1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;881&quot; height=&quot;503&quot; data-origin-width=&quot;881&quot; data-origin-height=&quot;503&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As a test, create a Deployment and verify that the Pods run correctly.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: timeserver
spec:
  replicas: 2
  selector:
    matchLabels:
      pod: timeserver-pod
  template:
    metadata:
      labels:
        pod: timeserver-pod
    spec:
      containers:
      - name: timeserver-container
        image: docker.io/$DHUSER/dev-app:0.0.1
        livenessProbe:
          initialDelaySeconds: 30
          periodSeconds: 30
          httpGet:
            path: /healthz
            port: 80
            scheme: HTTP
          timeoutSeconds: 5
          failureThreshold: 3
          successThreshold: 1
      imagePullSecrets:
      - name: dockerhub-secret
EOF

kubectl get po -w
NAME                          READY   STATUS              RESTARTS   AGE
timeserver-565559b4bf-pbd2q   0/1     ContainerCreating   0          14s
timeserver-565559b4bf-sd76g   0/1     ContainerCreating   0          14s
timeserver-565559b4bf-pbd2q   1/1     Running             0          62s
timeserver-565559b4bf-sd76g   1/1     Running             0          63s

kubectl get po -owide
NAME                          READY   STATUS    RESTARTS   AGE    IP           NODE            NOMINATED NODE   READINESS GATES
timeserver-565559b4bf-pbd2q   1/1     Running   0          110s   10.244.1.5   myk8s-worker2   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
timeserver-565559b4bf-sd76g   1/1     Running   0          110s   10.244.2.6   myk8s-worker    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;


# connectivity test
kubectl run curl-pod --image=curlimages/curl:latest --command -- sh -c &quot;while true; do sleep 3600; done&quot;

kubectl exec -it curl-pod -- curl 10.244.1.5
The time is 4:57:46 PM, VERSION 0.0.1
Server hostname: timeserver-565559b4bf-pbd2q&lt;/code&gt;&lt;/pre&gt;
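&lt;p data-ke-size=&quot;size16&quot;&gt;As a rough sanity check on the livenessProbe settings in the manifest above, you can estimate the worst-case time before the kubelet restarts a container that is unhealthy from startup. This is a back-of-the-envelope sketch, not an exact kubelet guarantee:&lt;/p&gt;

```python
# Rough worst-case seconds before a restart, given the livenessProbe above.
initial_delay = 30     # initialDelaySeconds
period = 30            # periodSeconds
timeout = 5            # timeoutSeconds
failure_threshold = 3  # failureThreshold

# The first probe fires after initial_delay; subsequent probes fire every
# period seconds, and the last failing probe may take up to timeout seconds.
worst_case = initial_delay + (failure_threshold - 1) * period + timeout
print(worst_case)  # 95
```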
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The image produced by the manual build works correctly.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size23&quot;&gt;Creating an Item with Automatic Builds&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now we will configure the Gogs development repository to trigger the Jenkins Item via a webhook whenever a change is pushed, so builds run automatically.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;First, create a new Item in Jenkins and configure it as follows.&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Item name: SCM-Pipeline&lt;/li&gt;
&lt;li&gt;Item type: Pipeline&lt;/li&gt;
&lt;li&gt;GitHub project: &lt;a href=&quot;http://172.28.157.42:3000/devops/dev-app&quot;&gt;http://172.28.157.42:3000/devops/dev-app&lt;/a&gt; (the Gogs repository)&lt;/li&gt;
&lt;li&gt;Gogs Webhook&amp;gt;Use Gogs secret: set to an arbitrary value&lt;/li&gt;
&lt;li&gt;Triggers&amp;gt;Build when a change is pushed to Gogs: checked&lt;/li&gt;
&lt;li&gt;Pipeline: set to Pipeline script from SCM so that the repository's own Jenkinsfile is used.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1368&quot; data-origin-height=&quot;1579&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/lvuRE/btsM03bTkmP/384Gh3YSO57KS0TdeT3MH0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/lvuRE/btsM03bTkmP/384Gh3YSO57KS0TdeT3MH0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/lvuRE/btsM03bTkmP/384Gh3YSO57KS0TdeT3MH0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FlvuRE%2FbtsM03bTkmP%2F384Gh3YSO57KS0TdeT3MH0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1368&quot; height=&quot;1579&quot; data-origin-width=&quot;1368&quot; data-origin-height=&quot;1579&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As an aside, the following step is specific to this lab setup: to allow Gogs to deliver webhooks to the same IP, configure it as below.&lt;/p&gt;
&lt;pre class=&quot;ini&quot;&gt;&lt;code&gt;# Edit /data/gogs/conf/app.ini inside the gogs container
[security]
INSTALL_LOCK = true
SECRET_KEY   = j2xaUPQcbAEwpIu
LOCAL_NETWORK_ALLOWLIST = 172.28.157.42 # WSL2 Ubuntu eth0 IP&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Unfortunately, the gogs image does not ship with a shell, so this cannot be done via docker exec. Edit the file through VS Code's Docker extension instead.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;920&quot; data-origin-height=&quot;507&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/b0UDAZ/btsM0JScvbf/rqHkRP4O4XAKF0W4md4CE0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/b0UDAZ/btsM0JScvbf/rqHkRP4O4XAKF0W4md4CE0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/b0UDAZ/btsM0JScvbf/rqHkRP4O4XAKF0W4md4CE0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fb0UDAZ%2FbtsM0JScvbf%2FrqHkRP4O4XAKF0W4md4CE0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;920&quot; height=&quot;507&quot; data-origin-width=&quot;920&quot; data-origin-height=&quot;507&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Then restart gogs with the command below.&lt;/p&gt;
&lt;pre class=&quot;ebnf&quot;&gt;&lt;code&gt;docker compose restart gogs&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now add a webhook in Gogs under Settings&amp;gt;Webhooks as follows.&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Payload URL: http://**:8080/gogs-webhook/?job=SCM-Pipeline/&lt;/li&gt;
&lt;li&gt;Secret: set to an arbitrary value (Gogs and Jenkins must be configured with the same secret so they trust each other)&lt;/li&gt;
&lt;/ul&gt;
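&lt;p data-ke-size=&quot;size16&quot;&gt;For context on why the secret must match: as far as I know, Gogs signs each webhook delivery with an HMAC-SHA256 of the payload using that shared secret (sent in the X-Gogs-Signature header), and the receiver recomputes it to verify the sender. A minimal verification sketch in Python (the function name is ours, not part of any plugin):&lt;/p&gt;

```python
import hashlib
import hmac

def verify_gogs_signature(secret: str, payload: bytes, signature_hex: str) -> bool:
    """Recompute the HMAC-SHA256 of the payload and compare it to the header value."""
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels when comparing secrets
    return hmac.compare_digest(expected, signature_hex)
```

If the two sides were configured with different secrets, this check fails and the delivery should be rejected.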
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1254&quot; data-origin-height=&quot;966&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cTjVDL/btsM2GGdM4a/ry9VKFqQOfy83hYIozKCE0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cTjVDL/btsM2GGdM4a/ry9VKFqQOfy83hYIozKCE0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cTjVDL/btsM2GGdM4a/ry9VKFqQOfy83hYIozKCE0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcTjVDL%2FbtsM2GGdM4a%2Fry9VKFqQOfy83hYIozKCE0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1254&quot; height=&quot;966&quot; data-origin-width=&quot;1254&quot; data-origin-height=&quot;966&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Finally, let's actually write a Jenkinsfile in the repository, push it, and confirm that the pipeline runs correctly through the configured webhook.&lt;/p&gt;
&lt;pre class=&quot;vala&quot;&gt;&lt;code&gt;# Create an empty Jenkinsfile
touch Jenkinsfile

# VERSION file: bump to 0.0.3
# server.py: bump to 0.0.3&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Write the following in the Jenkinsfile.&lt;/p&gt;
&lt;pre class=&quot;dart&quot;&gt;&lt;code&gt;pipeline {
    agent any
    environment {
        DOCKER_IMAGE = '&amp;lt;your Docker Hub account&amp;gt;/dev-app' // Docker image name
    }
    stages {
        stage('Checkout') {
            steps {
                 git branch: 'main',
                 url: 'http://&amp;lt;your home IP&amp;gt;:3000/devops/dev-app.git',  // Check out code from Git
                 credentialsId: 'gogs-crd'  // Credentials ID
            }
        }
        stage('Read VERSION') {
            steps {
                script {
                    // Read the VERSION file
                    def version = readFile('VERSION').trim()
                    echo &quot;Version found: ${version}&quot;
                    // Set the environment variable
                    env.DOCKER_TAG = version
                }
            }
        }
        stage('Docker Build and Push') {
            steps {
                script {
                    docker.withRegistry('https://index.docker.io/v1/', 'dockerhub-crd') {
                        // Use DOCKER_TAG
                        def appImage = docker.build(&quot;${DOCKER_IMAGE}:${DOCKER_TAG}&quot;)
                        appImage.push()
                        appImage.push(&quot;latest&quot;)
                    }
                }
            }
        }
    }
    post {
        success {
            echo &quot;Docker image ${DOCKER_IMAGE}:${DOCKER_TAG} has been built and pushed successfully!&quot;
        }
        failure {
            echo &quot;Pipeline failed. Please check the logs.&quot;
        }
    }
}&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Push the file and verify that the Job runs.&lt;/p&gt;
&lt;pre class=&quot;dockerfile&quot;&gt;&lt;code&gt;git add . &amp;amp;&amp;amp; git commit -m &quot;VERSION $(cat VERSION) Changed&quot; &amp;amp;&amp;amp; git push -u origin main&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;It runs successfully.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1581&quot; data-origin-height=&quot;1000&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bhMhN8/btsM0GabiUU/d2plucvJbgG0g2HdY0LeY1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bhMhN8/btsM0GabiUU/d2plucvJbgG0g2HdY0LeY1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bhMhN8/btsM0GabiUU/d2plucvJbgG0g2HdY0LeY1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbhMhN8%2FbtsM0GabiUU%2Fd2plucvJbgG0g2HdY0LeY1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1581&quot; height=&quot;1000&quot; data-origin-width=&quot;1581&quot; data-origin-height=&quot;1000&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The built image was also uploaded successfully.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;796&quot; data-origin-height=&quot;260&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bNSWuN/btsM2JCWBaj/Xakw0MjKjcJJbI81XADEJk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bNSWuN/btsM2JCWBaj/Xakw0MjKjcJJbI81XADEJk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bNSWuN/btsM2JCWBaj/Xakw0MjKjcJJbI81XADEJk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbNSWuN%2FbtsM2JCWBaj%2FXakw0MjKjcJJbI81XADEJk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;796&quot; height=&quot;260&quot; data-origin-width=&quot;796&quot; data-origin-height=&quot;260&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;With this, CI now runs automatically through Jenkins whenever a change occurs in the development team's repository on Gogs.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The current file layout of dev-app is as follows.&lt;/p&gt;
&lt;pre class=&quot;css&quot;&gt;&lt;code&gt;tree
.
├── Dockerfile
├── Jenkinsfile
├── README.md
├── VERSION
└── server.py&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;If the Jenkinsfile does not need to live in the development team's repository, the Item's Pipeline script from SCM setting could just as well reference a separate repository.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;3. CD with Argo CD&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the CI test above, we deployed the Deployment to the Kubernetes cluster manually. This step could also be handled as CD through Jenkins.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For example, it would look like adding a &lt;code&gt;k8s deployment blue version&lt;/code&gt; stage to the stages, as below.&lt;/p&gt;
&lt;pre class=&quot;puppet&quot;&gt;&lt;code&gt;pipeline {
    agent any

    environment {
        KUBECONFIG = credentials('k8s-crd')
    }

    stages {
        stage('Checkout') {
            steps {
                 git branch: 'main',
                 url: 'http://&amp;lt;your home IP&amp;gt;:3000/devops/dev-app.git',  // Check out code from Git
                 credentialsId: 'gogs-crd'  // Credentials ID
            }
        }

        stage('container image build') {
            steps {
                echo &quot;container image build&quot; // omitted
            }
        }

        stage('container image upload') {
            steps {
                echo &quot;container image upload&quot; // omitted
            }
        }

        stage('k8s deployment blue version') {
            steps {
                sh &quot;kubectl apply -f ./deploy/echo-server-blue.yaml --kubeconfig $KUBECONFIG&quot;
                sh &quot;kubectl apply -f ./deploy/echo-server-service.yaml --kubeconfig $KUBECONFIG&quot;
            }
        }

        stage('approve green version') {
            steps {
                input message: 'approve green version', ok: &quot;Yes&quot;
            }
        }

        stage('k8s deployment green version') {
            steps {
                sh &quot;kubectl apply -f ./deploy/echo-server-green.yaml --kubeconfig $KUBECONFIG&quot;
            }
        }

        stage('approve version switching') {
            steps {
                script {
                    returnValue = input message: 'Green switching?', ok: &quot;Yes&quot;, parameters: [booleanParam(defaultValue: true, name: 'IS_SWITCHED')]
                    if (returnValue) {
                        sh &quot;kubectl patch svc echo-server-service -p '{\&quot;spec\&quot;: {\&quot;selector\&quot;: {\&quot;version\&quot;: \&quot;green\&quot;}}}' --kubeconfig $KUBECONFIG&quot;
                    }
                }
            }
        }

        stage('Blue Rollback') {
            steps {
                script {
                    returnValue = input message: 'Blue Rollback?', parameters: [choice(choices: ['done', 'rollback'], name: 'IS_ROLLBACk')]
                    if (returnValue == &quot;done&quot;) {
                        sh &quot;kubectl delete -f ./deploy/echo-server-blue.yaml --kubeconfig $KUBECONFIG&quot;
                    }
                    if (returnValue == &quot;rollback&quot;) {
                        sh &quot;kubectl patch svc echo-server-service -p '{\&quot;spec\&quot;: {\&quot;selector\&quot;: {\&quot;version\&quot;: \&quot;blue\&quot;}}}' --kubeconfig $KUBECONFIG&quot;
                    }
                }
            }
        }
    }
}&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;However, this approach requires binaries such as kubectl on the Jenkins side, or additional plugins, which can be inconvenient.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Also, since the yaml files for each resource end up in the development team's repository, ownership of the files in that single repository is no longer cleanly separated.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;From another angle, you may also want to rule out users arbitrarily modifying objects in the cluster.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For example, once a release is finished and in operation, if a user modifies an object such as the Deployment, the cluster drifts away from the state it had at deploy time.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In other words, from a GitOps perspective we need a way to define the declarative state in a Git repository (the Desired Manifest) and continuously keep the running state (the Live Manifest) aligned with it.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Argo CD lets us implement exactly this approach.&lt;/p&gt;
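&lt;p data-ke-size=&quot;size16&quot;&gt;The desired-vs-live idea can be sketched as a toy reconcile loop; this is illustration only, not Argo CD's actual implementation:&lt;/p&gt;

```python
# Toy model of GitOps reconciliation (hypothetical, not Argo CD's code):
# a controller repeatedly compares the desired manifest from Git with the
# live state in the cluster, and re-applies the desired state on any drift.
def reconcile(desired: dict, live: dict) -> dict:
    """One reconcile pass: return the live state after syncing."""
    if live != desired:
        # In the real system this is where the manifest would be applied.
        live = dict(desired)
    return live

desired = {"image": "dev-app:0.0.2", "replicas": 2}   # Desired Manifest (Git)
live = {"image": "dev-app:0.0.1", "replicas": 2}      # Live Manifest (drifted)
live = reconcile(desired, live)                       # back in sync with Git
```

A manual `kubectl edit` in this model is just another mutation of `live`; the next pass reverts it to what Git declares.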
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;First, add a webhook to the ops-deploy repository in Gogs under Settings&amp;gt;Webhooks.&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Payload URL: http://:30002/api/webhook&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1254&quot; data-origin-height=&quot;993&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/chUsKn/btsM2qcyHtf/5Zyh7NHdHFbiK7TzcPCAf1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/chUsKn/btsM2qcyHtf/5Zyh7NHdHFbiK7TzcPCAf1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/chUsKn/btsM2qcyHtf/5Zyh7NHdHFbiK7TzcPCAf1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FchUsKn%2FbtsM2qcyHtf%2F5Zyh7NHdHFbiK7TzcPCAf1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1254&quot; height=&quot;993&quot; data-origin-width=&quot;1254&quot; data-origin-height=&quot;993&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Then write the files needed for the lab in the DevOps team's repository on Gogs and push them.&lt;/p&gt;
&lt;pre class=&quot;properties&quot;&gt;&lt;code&gt;cd cicd-labs

TOKEN=&amp;lt;&amp;gt;
MyIP=172.28.157.42
git clone http://devops:$TOKEN@$MyIP:3000/devops/ops-deploy.git
cd ops-deploy

# Basic git configuration
git --no-pager config --local --list
git config --local user.name &quot;devops&quot;
git config --local user.email &quot;a@a.com&quot;
git config --local init.defaultBranch main
git config --local credential.helper store
git --no-pager config --local --list
cat .git/config

# Verify git
git --no-pager branch -v
* main

git remote -v
origin  http://devops:8cdf5569aedd230503abea67b0794b4d1e931c10@172.28.157.42:3000/devops/ops-deploy.git (fetch)
origin  http://devops:8cdf5569aedd230503abea67b0794b4d1e931c10@172.28.157.42:3000/devops/ops-deploy.git (push)


# Create the folder
mkdir dev-app

# Docker account info
DHUSER=&amp;lt;your Docker Hub account&amp;gt;

# Version info
VERSION=0.0.1

# Create the VERSION and yaml files
cat &amp;gt; dev-app/VERSION &amp;lt;&amp;lt;EOF
$VERSION
EOF

cat &amp;gt; dev-app/timeserver.yaml &amp;lt;&amp;lt;EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: timeserver
spec:
  replicas: 2
  selector:
    matchLabels:
      pod: timeserver-pod
  template:
    metadata:
      labels:
        pod: timeserver-pod
    spec:
      containers:
      - name: timeserver-container
        image: docker.io/$DHUSER/dev-app:$VERSION
        livenessProbe:
          initialDelaySeconds: 30
          periodSeconds: 30
          httpGet:
            path: /healthz
            port: 80
            scheme: HTTP
          timeoutSeconds: 5
          failureThreshold: 3
          successThreshold: 1
      imagePullSecrets:
      - name: dockerhub-secret
EOF

cat &amp;gt; dev-app/service.yaml &amp;lt;&amp;lt;EOF
apiVersion: v1
kind: Service
metadata:
  name: timeserver
spec:
  selector:
    pod: timeserver-pod
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    nodePort: 30000
  type: NodePort
EOF

# Git Push
git add . &amp;amp;&amp;amp; git commit -m &quot;Add dev-app deployment yaml&quot; &amp;amp;&amp;amp; git push -u origin main
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Argo CD uses a CRD called Application to define the declarative configuration to deploy to a Kubernetes cluster, and how to synchronize it.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Create an Application that points at ops-deploy, as below.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Create the Application
cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: timeserver
  namespace: argocd
  finalizers:
  - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    path: dev-app
    repoURL: http://$MyIP:3000/devops/ops-deploy
    targetRevision: HEAD
  syncPolicy:
    automated:
      prune: true
    syncOptions:
    - CreateNamespace=true
  destination:
    namespace: default
    server: https://kubernetes.default.svc
EOF&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As soon as the Application is created, you can see Argo CD start syncing based on the yaml in ops-deploy.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;2158&quot; data-origin-height=&quot;775&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/mHyr6/btsM18XhlHQ/jzUlykRd7W1iNFmwJR98Ik/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/mHyr6/btsM18XhlHQ/jzUlykRd7W1iNFmwJR98Ik/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/mHyr6/btsM18XhlHQ/jzUlykRd7W1iNFmwJR98Ik/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FmHyr6%2FbtsM18XhlHQ%2FjzUlykRd7W1iNFmwJR98Ik%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2158&quot; height=&quot;775&quot; data-origin-width=&quot;2158&quot; data-origin-height=&quot;775&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;You can confirm that everything is running correctly with the commands below.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# Verify
kubectl get applications -n argocd timeserver
NAME         SYNC STATUS   HEALTH STATUS
timeserver   Synced        Healthy

# Service test
curl http://127.0.0.1:30000
The time is 6:40:06 PM, VERSION 0.0.1
Server hostname: timeserver-565559b4bf-sd76g
curl http://127.0.0.1:30000/healthz
Healthy&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size23&quot;&gt;Taking it a step further&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Going one step further, let's wire things up end to end: a change to the development team's repository propagates to the DevOps team's repository, and that change is then synced to the Kubernetes cluster.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1155&quot; data-origin-height=&quot;570&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/duWGKI/btsM0xcZmLW/2to6ems7rmfK162JjAZ2ck/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/duWGKI/btsM0xcZmLW/2to6ems7rmfK162JjAZ2ck/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/duWGKI/btsM0xcZmLW/2to6ems7rmfK162JjAZ2ck/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FduWGKI%2FbtsM0xcZmLW%2F2to6ems7rmfK162JjAZ2ck%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1155&quot; height=&quot;570&quot; data-origin-width=&quot;1155&quot; data-origin-height=&quot;570&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Modify the Jenkinsfile in the development team's repository (dev-app) as follows.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Compared to the previous pipeline, you can see that the &lt;code&gt;ops-deploy Checkout&lt;/code&gt; stage checks out ops-deploy, and the &lt;code&gt;ops-deploy version update push&lt;/code&gt; stage applies the changed VERSION and pushes it to ops-deploy.&lt;/p&gt;
&lt;pre class=&quot;dart&quot;&gt;&lt;code&gt;pipeline {
    agent any
    environment {
        DOCKER_IMAGE = '&amp;lt;your Docker Hub account&amp;gt;/dev-app' // Docker image name
        GOGSCRD = credentials('gogs-crd')
    }
    stages {
        stage('dev-app Checkout') {
            steps {
                 git branch: 'main',
                 url: 'http://&amp;lt;your home IP&amp;gt;:3000/devops/dev-app.git',  // Check out code from Git
                 credentialsId: 'gogs-crd'  // Credentials ID
            }
        }
        stage('Read VERSION') {
            steps {
                script {
                    // Read the VERSION file
                    def version = readFile('VERSION').trim()
                    echo &quot;Version found: ${version}&quot;
                    // Set the environment variable
                    env.DOCKER_TAG = version
                }
            }
        }
        stage('Docker Build and Push') {
            steps {
                script {
                    docker.withRegistry('https://index.docker.io/v1/', 'dockerhub-crd') {
                        // Use DOCKER_TAG
                        def appImage = docker.build(&quot;${DOCKER_IMAGE}:${DOCKER_TAG}&quot;)
                        appImage.push()
                        appImage.push(&quot;latest&quot;)
                    }
                }
            }
        }
        stage('ops-deploy Checkout') {
            steps {
                 git branch: 'main',
                 url: 'http://&amp;lt;your home IP&amp;gt;:3000/devops/ops-deploy.git',  // Check out code from Git
                 credentialsId: 'gogs-crd'  // Credentials ID
            }
        }
        stage('ops-deploy version update push') {
            steps {
                sh '''
                OLDVER=$(cat dev-app/VERSION)
                NEWVER=$(echo ${DOCKER_TAG})
                sed -i &quot;s/$OLDVER/$NEWVER/&quot; dev-app/timeserver.yaml
                sed -i &quot;s/$OLDVER/$NEWVER/&quot; dev-app/VERSION
                git add ./dev-app
                git config user.name &quot;devops&quot;
                git config user.email &quot;a@a.com&quot;
                git commit -m &quot;version update ${DOCKER_TAG}&quot;
                git push http://${GOGSCRD_USR}:${GOGSCRD_PSW}@&amp;lt;your home IP&amp;gt;:3000/devops/ops-deploy.git
                '''
            }
        }
    }
    post {
        success {
            echo &quot;Docker image ${DOCKER_IMAGE}:${DOCKER_TAG} has been built and pushed successfully!&quot;
        }
        failure {
            echo &quot;Pipeline failed. Please check the logs.&quot;
        }
    }
}&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A code push by the development team triggers the Jenkins Job; the Jenkins pipeline builds the container image and then pushes the version change to ops-deploy, and finally Argo CD syncs it to the Kubernetes cluster.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now update the VERSION as well and push, as below.&lt;/p&gt;
&lt;pre class=&quot;dockerfile&quot;&gt;&lt;code&gt;# Edit the VERSION file: 0.0.3
# Edit server.py: 0.0.3

# git push: VERSION, server.py, Jenkinsfile
git add . &amp;amp;&amp;amp; git commit -m &quot;VERSION $(cat VERSION) Changed&quot; &amp;amp;&amp;amp; git push -u origin main&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A few errors came up during testing, but in the end the change is applied correctly.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;2148&quot; data-origin-height=&quot;928&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/OKI4W/btsM0NzPDic/ikOllCPK9NT9wKm4b7Uq10/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/OKI4W/btsM0NzPDic/ikOllCPK9NT9wKm4b7Uq10/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/OKI4W/btsM0NzPDic/ikOllCPK9NT9wKm4b7Uq10/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FOKI4W%2FbtsM0NzPDic%2FikOllCPK9NT9wKm4b7Uq10%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2148&quot; height=&quot;928&quot; data-origin-width=&quot;2148&quot; data-origin-height=&quot;928&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In Argo CD you can also see a new ReplicaSet being created through the sync.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;873&quot; data-origin-height=&quot;366&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/KkGZi/btsM2ane8Pw/juMlJd6nNeYlVriX2ltK0K/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/KkGZi/btsM2ane8Pw/juMlJd6nNeYlVriX2ltK0K/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/KkGZi/btsM2ane8Pw/juMlJd6nNeYlVriX2ltK0K/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FKkGZi%2FbtsM2ane8Pw%2FjuMlJd6nNeYlVriX2ltK0K%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;873&quot; height=&quot;366&quot; data-origin-width=&quot;873&quot; data-origin-height=&quot;366&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This concludes the hands-on exercise.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Wrap-up&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In this lab we built CI with Jenkins and CD with Argo CD.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;I hope it gives you a rough idea of how the CI/CD process can be kept simple and automated in a Kubernetes environment.&lt;/p&gt;</description>
      <category>Kubernetes</category>
      <category>argo cd</category>
      <category>CICD</category>
      <category>Jenkins</category>
      <category>kubernetes</category>
      <category>배포</category>
      <category>쿠버네티스</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/43</guid>
      <comments>https://a-person.tistory.com/43#entry43comment</comments>
      <pubDate>Sun, 30 Mar 2025 04:37:35 +0900</pubDate>
    </item>
    <item>
      <title>[7] EKS Fargate</title>
      <link>https://a-person.tistory.com/42</link>
<description>&lt;p data-ke-size=&quot;size16&quot;&gt;In this post, we'll take a look at EKS Fargate.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;EKS Fargate runs containers on a serverless compute engine instead of on EKS node groups.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;We'll first look at EKS Fargate, then at the similar Virtual Nodes feature in AKS, to see how each managed Kubernetes service runs containers without nodes, and confirm it hands-on.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;목차&lt;/h2&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;EKS Fargate&lt;/li&gt;
&lt;li&gt;AKS Virtual Nodes&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;1. EKS Fargate&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Normally in EKS you create node groups to get worker nodes. Among the compute options EKS offers, EKS Fargate is the one that does not rely on EC2 instances as nodes.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To understand AWS Fargate, let's first look at Amazon ECS.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;One way AWS runs containers is Amazon ECS (Elastic Container Service), a fully managed container orchestration service. With Amazon ECS, users can easily deploy and manage containerized applications.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Amazon ECS has the three layers shown below, and you can see that AWS Fargate sits in the Capacity options layer, the infrastructure on which ECS runs.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;841&quot; data-origin-height=&quot;552&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/L0VNQ/btsMSUMAVZm/VLnKdoisqOQmkeL5t9PoJk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/L0VNQ/btsMSUMAVZm/VLnKdoisqOQmkeL5t9PoJk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/L0VNQ/btsMSUMAVZm/VLnKdoisqOQmkeL5t9PoJk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FL0VNQ%2FbtsMSUMAVZm%2FVLnKdoisqOQmkeL5t9PoJk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;841&quot; height=&quot;552&quot; data-origin-width=&quot;841&quot; data-origin-height=&quot;552&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://docs.aws.amazon.com/ko_kr/AmazonECS/latest/developerguide/Welcome.html&quot;&gt;https://docs.aws.amazon.com/ko_kr/AmazonECS/latest/developerguide/Welcome.html&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;If you choose EC2 as the ECS capacity option, containers run on actual EC2 instances. Fargate, by contrast, is a serverless, pay-as-you-go compute engine: no virtual machine is provisioned by the user, which makes it lightweight.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;EKS Fargate works the same way. EKS normally provides worker nodes as EC2 instances through node groups, but the serverless compute engine Fargate can be used instead.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As shown below, Fargate is a separate option for the data plane where EKS pods run.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;800&quot; data-origin-height=&quot;435&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/vFzXs/btsMS93Rk3G/wDzUtX42wKTImZGyWKBJp1/img.webp&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/vFzXs/btsMS93Rk3G/wDzUtX42wKTImZGyWKBJp1/img.webp&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/vFzXs/btsMS93Rk3G/wDzUtX42wKTImZGyWKBJp1/img.webp&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FvFzXs%2FbtsMS93Rk3G%2FwDzUtX42wKTImZGyWKBJp1%2Fimg.webp&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;800&quot; height=&quot;435&quot; data-origin-width=&quot;800&quot; data-origin-height=&quot;435&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://www.eksworkshop.com/docs/fundamentals/fargate/&quot;&gt;https://www.eksworkshop.com/docs/fundamentals/fargate/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Compute options like Fargate generally suit stateless workloads that do not need to run continuously. If a workload runs a specific job and then exits, or needs fast deployment and can be terminated when no longer needed, the serverless compute engine behind Fargate is worth considering.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's explore EKS Fargate further with a hands-on exercise.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This exercise follows an example from Amazon EKS Blueprints for Terraform.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://aws-ia.github.io/terraform-aws-eks-blueprints/&quot;&gt;https://aws-ia.github.io/terraform-aws-eks-blueprints/&lt;/a&gt;&lt;/p&gt;
&lt;pre class=&quot;properties&quot;&gt;&lt;code&gt;# Fetch the Terraform code
git clone https://github.com/aws-ia/terraform-aws-eks-blueprints
cd terraform-aws-eks-blueprints/patterns/fargate-serverless

# Initialize Terraform
terraform init

# Review the Terraform plan
terraform plan

# Deploy with Terraform
# Creates EKS, add-ons, and the Fargate profiles - takes about 13 minutes
terraform apply -auto-approve


# Verify after deployment
terraform state list
module.eks.data.aws_caller_identity.current
...

terraform output
...&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Looking at the created resources, four Fargate-type nodes are visible, each with a pod running on it.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Note that each pod's IP equals its node's IP: in EKS Fargate, one Fargate node is launched per pod.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# Fetch kubeconfig
aws eks --region us-west-2 update-kubeconfig --name fargate-serverless

# Check node and pod information
kubectl get no -o wide
NAME                                                STATUS   ROLES    AGE   VERSION               INTERNAL-IP   EXTERNAL-IP   OS-IMAGE         KERNEL-VERSION                  CONTAINER-RUNTIME
fargate-ip-10-0-1-239.us-west-2.compute.internal    Ready    &amp;lt;none&amp;gt;   48m   v1.30.8-eks-2d5f260   10.0.1.239    &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25
fargate-ip-10-0-18-94.us-west-2.compute.internal    Ready    &amp;lt;none&amp;gt;   48m   v1.30.8-eks-2d5f260   10.0.18.94    &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25
fargate-ip-10-0-20-74.us-west-2.compute.internal    Ready    &amp;lt;none&amp;gt;   48m   v1.30.8-eks-2d5f260   10.0.20.74    &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25
fargate-ip-10-0-35-232.us-west-2.compute.internal   Ready    &amp;lt;none&amp;gt;   48m   v1.30.8-eks-2d5f260   10.0.35.232   &amp;lt;none&amp;gt;        Amazon Linux 2   5.10.234-225.910.amzn2.x86_64   containerd://1.7.25

kubectl get pod -A -o wide
NAMESPACE     NAME                                           READY   STATUS    RESTARTS   AGE   IP            NODE                                                NOMINATED NODE   READINESS GATES
kube-system   aws-load-balancer-controller-c946d85dd-2n65t   1/1     Running   0          48m   10.0.35.232   fargate-ip-10-0-35-232.us-west-2.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   aws-load-balancer-controller-c946d85dd-2t662   1/1     Running   0          48m   10.0.18.94    fargate-ip-10-0-18-94.us-west-2.compute.internal    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   coredns-69fd949db7-95njt                       1/1     Running   0          49m   10.0.20.74    fargate-ip-10-0-20-74.us-west-2.compute.internal    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   coredns-69fd949db7-b5jpf                       1/1     Running   0          49m   10.0.1.239    fargate-ip-10-0-1-239.us-west-2.compute.internal    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;&lt;/code&gt;&lt;/pre&gt;
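&lt;p data-ke-size=&quot;size16&quot;&gt;As a quick sanity check, the one-node-per-pod observation can be verified by cross-checking the IPs copied from the output above (a hand-built Python snippet for illustration, not part of the exercise):&lt;/p&gt;

```python
# Node internal IPs from the kubectl get no output above
nodes = {
    "fargate-ip-10-0-1-239.us-west-2.compute.internal": "10.0.1.239",
    "fargate-ip-10-0-18-94.us-west-2.compute.internal": "10.0.18.94",
    "fargate-ip-10-0-20-74.us-west-2.compute.internal": "10.0.20.74",
    "fargate-ip-10-0-35-232.us-west-2.compute.internal": "10.0.35.232",
}

# (pod IP, node) pairs from the kubectl get pod output above
pods = [
    ("10.0.35.232", "fargate-ip-10-0-35-232.us-west-2.compute.internal"),
    ("10.0.18.94", "fargate-ip-10-0-18-94.us-west-2.compute.internal"),
    ("10.0.20.74", "fargate-ip-10-0-20-74.us-west-2.compute.internal"),
    ("10.0.1.239", "fargate-ip-10-0-1-239.us-west-2.compute.internal"),
]

# Every pod IP matches its Fargate node's internal IP
assert all(nodes[node] == ip for ip, node in pods)
print("each pod IP matches its Fargate node IP")
```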
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Inspecting the node details shows that a Label and a Taint for the compute-type are applied.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;kubectl describe node | grep -A 3 Labels
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    eks.amazonaws.com/compute-type=fargate
...
kubectl describe node | grep Taints
Taints:             eks.amazonaws.com/compute-type=fargate:NoSchedule
...&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To use Fargate in EKS you must create a Fargate Profile. The profile declares in advance, via selectors, the namespaces and labels of the resources that will run on Fargate. It also includes the subnets where pods are placed and the pod execution IAM role.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;879&quot; data-origin-height=&quot;483&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bUxCE8/btsMT4nbDqu/MTGJeBs6RhUsqW3I6DWv40/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bUxCE8/btsMT4nbDqu/MTGJeBs6RhUsqW3I6DWv40/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bUxCE8/btsMT4nbDqu/MTGJeBs6RhUsqW3I6DWv40/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbUxCE8%2FbtsMT4nbDqu%2FMTGJeBs6RhUsqW3I6DWv40%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;879&quot; height=&quot;483&quot; data-origin-width=&quot;879&quot; data-origin-height=&quot;483&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://aws.amazon.com/ko/blogs/containers/use-cloudformation-to-automate-management-of-the-fargate-profile-in-amazon-eks/&quot;&gt;https://aws.amazon.com/ko/blogs/containers/use-cloudformation-to-automate-management-of-the-fargate-profile-in-amazon-eks/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the exercise's Terraform code, the Fargate profiles are declared as follows.&lt;/p&gt;
&lt;pre class=&quot;nix&quot;&gt;&lt;code&gt;
  fargate_profiles = {
    app_wildcard = {
      selectors = [
        { namespace = &quot;app-*&quot; }
      ]
    }
    kube_system = {
      name = &quot;kube-system&quot;
      selectors = [
        { namespace = &quot;kube-system&quot; }
      ]
    }
  }
&lt;/code&gt;&lt;/pre&gt;
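&lt;p data-ke-size=&quot;size16&quot;&gt;As a rough illustration of how these selectors work, the Python sketch below mimics the matching rule (a namespace pattern such as app-* plus optional label equality). This is only a model for intuition; the real logic runs inside the EKS control plane.&lt;/p&gt;

```python
import fnmatch

def matches_fargate_profile(pod_namespace, pod_labels, selectors):
    """Return True if the pod matches any selector of a Fargate profile.

    A selector matches when the namespace matches (treating patterns
    like "app-*" as wildcards) and every label in the selector is
    present on the pod with the same value.
    """
    for sel in selectors:
        ns_ok = fnmatch.fnmatch(pod_namespace, sel.get("namespace", "*"))
        labels_ok = all(pod_labels.get(k) == v
                        for k, v in sel.get("labels", {}).items())
        if ns_ok and labels_ok:
            return True
    return False

# Selectors mirroring the Terraform example above
app_wildcard = [{"namespace": "app-*"}]
kube_system = [{"namespace": "kube-system"}]

print(matches_fargate_profile("app-frontend", {}, app_wildcard))  # True
print(matches_fargate_profile("default", {}, app_wildcard))       # False
print(matches_fargate_profile("kube-system", {}, kube_system))    # True
```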
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the web console they appear as follows.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1752&quot; data-origin-height=&quot;230&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cecW8X/btsMUjEonRL/1sKKr11SwNTNtlZrBKrrCK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cecW8X/btsMUjEonRL/1sKKr11SwNTNtlZrBKrrCK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cecW8X/btsMUjEonRL/1sKKr11SwNTNtlZrBKrrCK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcecW8X%2FbtsMUjEonRL%2F1sKKr11SwNTNtlZrBKrrCK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1752&quot; height=&quot;230&quot; data-origin-width=&quot;1752&quot; data-origin-height=&quot;230&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Because of the kube-system profile, even the kube-system workloads run entirely on Fargate instead of a managed node group.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;When a pod that matches Fargate is submitted to the API server, a mutating webhook invoked by the admission controllers rewrites the pod so that it is scheduled onto Fargate.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In detail: when the pod is requested, the mutating webhook attaches the Fargate Profile information and sets schedulerName to fargate-scheduler. Based on this, the Fargate scheduler places the pod in the Fargate environment and the pod starts running.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1636&quot; data-origin-height=&quot;797&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/btR3ft/btsMS7kETyr/5Lyp7V4LcJbGmFjxoKXIoK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/btR3ft/btsMS7kETyr/5Lyp7V4LcJbGmFjxoKXIoK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/btR3ft/btsMS7kETyr/5Lyp7V4LcJbGmFjxoKXIoK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbtR3ft%2FbtsMS7kETyr%2F5Lyp7V4LcJbGmFjxoKXIoK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1636&quot; height=&quot;797&quot; data-origin-width=&quot;1636&quot; data-origin-height=&quot;797&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://aws.amazon.com/ko/blogs/containers/the-role-of-aws-fargate-in-the-container-world/&quot;&gt;https://aws.amazon.com/ko/blogs/containers/the-role-of-aws-fargate-in-the-container-world/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Checking a coredns pod confirms both the fargate-profile annotation and the schedulerName set as described.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;kubectl get po -n kube-system   coredns-69fd949db7-95njt -oyaml |grep fargate
    eks.amazonaws.com/fargate-profile: kube-system
  nodeName: fargate-ip-10-0-20-74.us-west-2.compute.internal
  schedulerName: fargate-scheduler&lt;/code&gt;&lt;/pre&gt;
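&lt;p data-ke-size=&quot;size16&quot;&gt;The mutation step can be sketched roughly as follows. The function and its pod/profile dictionaries are hypothetical stand-ins for what the EKS admission webhook does internally, shown only to make the flow concrete.&lt;/p&gt;

```python
def mutate_for_fargate(pod, profiles):
    """Sketch of the webhook's effect: if the pod matches a Fargate
    profile, record the profile on the pod and hand the pod over to
    the fargate-scheduler; otherwise leave it to the default scheduler."""
    for name, selectors in profiles.items():
        for sel in selectors:
            if pod["namespace"] == sel["namespace"]:
                pod["labels"]["eks.amazonaws.com/fargate-profile"] = name
                pod["schedulerName"] = "fargate-scheduler"
                return pod
    pod.setdefault("schedulerName", "default-scheduler")
    return pod

pod = {"namespace": "kube-system", "labels": {}}
profiles = {"kube-system": [{"namespace": "kube-system"}]}

mutated = mutate_for_fargate(pod, profiles)
print(mutated["schedulerName"])  # fargate-scheduler
print(mutated["labels"])         # {'eks.amazonaws.com/fargate-profile': 'kube-system'}
```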
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Because scheduling is driven by whether the requested pod matches the Fargate Profile selectors, resources scheduled onto Fargate are never placed on regular nodes.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Workloads on regular nodes and workloads on Fargate are mutually exclusive in scheduling. For example, even when nodes run out of capacity, pods cannot burst onto Fargate.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Since Fargate itself is not a node resource you created, no instance appears in the EC2 console.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Its network interface is visible, however. Note in the details below that the owner of the network interface differs from the owner of the instance.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1770&quot; data-origin-height=&quot;914&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/RvnXr/btsMS2qjiJb/RVCfgvsP4JJqj1oK3bklFk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/RvnXr/btsMS2qjiJb/RVCfgvsP4JJqj1oK3bklFk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/RvnXr/btsMS2qjiJb/RVCfgvsP4JJqj1oK3bklFk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FRvnXr%2FbtsMS2qjiJb%2FRVCfgvsP4JJqj1oK3bklFk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1770&quot; height=&quot;914&quot; data-origin-width=&quot;1770&quot; data-origin-height=&quot;914&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;EKS Fargate connects to your VPC in the following way.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1461&quot; data-origin-height=&quot;795&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/nWWXb/btsMUsut34N/Wh3HZwWP9REl59xI1UcO8k/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/nWWXb/btsMUsut34N/Wh3HZwWP9REl59xI1UcO8k/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/nWWXb/btsMUsut34N/Wh3HZwWP9REl59xI1UcO8k/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FnWWXb%2FbtsMUsut34N%2FWh3HZwWP9REl59xI1UcO8k%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1461&quot; height=&quot;795&quot; data-origin-width=&quot;1461&quot; data-origin-height=&quot;795&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://www.kiranjthomas.com/posts/fargate-under-the-hood/&quot;&gt;https://www.kiranjthomas.com/posts/fargate-under-the-hood/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;1) The EC2 instance backing Fargate runs in a separate, Fargate-owned VPC.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;2) The instance's primary network interface stays in the Fargate VPC and carries traffic for the container runtime, the Fargate agent, and the guest kernel &amp;amp; OS.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;3) The instance's secondary network interface attaches to your VPC and carries traffic such as container-to-container communication and image pulling.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Although the diagram and description above show Fargate as EC2, it actually runs on Firecracker, the lightweight-VM technology.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;You might expect EKS Fargate to be cost-effective since no EC2 instances are kept running, but Fargate is generally priced higher than EC2 of the same capacity. This is because each pod's node also runs the kube-proxy, containerd, and kubelet components, which consume some additional resources.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The slide below shows roughly 256 MB of memory being added for these components.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1106&quot; data-origin-height=&quot;525&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/4ezR1/btsMSU0cd2z/lMGaK2o46AZxrEaxnXuHk0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/4ezR1/btsMSU0cd2z/lMGaK2o46AZxrEaxnXuHk0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/4ezR1/btsMSU0cd2z/lMGaK2o46AZxrEaxnXuHk0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2F4ezR1%2FbtsMSU0cd2z%2FlMGaK2o46AZxrEaxnXuHk0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1106&quot; height=&quot;525&quot; data-origin-width=&quot;1106&quot; data-origin-height=&quot;525&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://www.youtube.com/watch?v=N0uLK5syctU&quot;&gt;https://www.youtube.com/watch?v=N0uLK5syctU&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In addition, these resources are rounded up to the nearest supported Fargate size, so the Fargate capacity actually used is larger than the requests in the pod spec.&lt;/p&gt;
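&lt;p data-ke-size=&quot;size16&quot;&gt;A sketch of this rounding behavior, using an assumed vCPU/memory combination table (verify against the current Fargate documentation, as these values may change) plus the roughly 256 MB overhead mentioned above:&lt;/p&gt;

```python
# Assumed vCPU -> memory (GB) combinations for Fargate; check the
# current AWS documentation, as the supported sizes may change.
FARGATE_COMBOS = [
    (0.25, [0.5, 1, 2]),
    (0.5,  [1, 2, 3, 4]),
    (1,    [2, 3, 4, 5, 6, 7, 8]),
    (2,    list(range(4, 17))),
    (4,    list(range(8, 31))),
]

OVERHEAD_GB = 0.25  # ~256 MB reserved for kubelet/kube-proxy/containerd

def fargate_size(cpu_request, mem_request_gb):
    """Round a pod's requests up to the smallest Fargate configuration
    that fits, after adding the ~256 MB memory overhead."""
    needed_mem = mem_request_gb + OVERHEAD_GB
    for vcpu, mems in FARGATE_COMBOS:
        if vcpu < cpu_request:
            continue
        for mem in mems:
            if mem >= needed_mem:
                return vcpu, mem
    raise ValueError("request exceeds the largest Fargate size")

# A 200m CPU / 0.4 GB pod is billed as a 0.25 vCPU / 1 GB task
print(fargate_size(0.2, 0.4))  # (0.25, 1)
# A 1 vCPU / 2 GB pod needs 2.25 GB, so it rounds up to 3 GB
print(fargate_size(1, 2))      # (1, 3)
```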
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Therefore, the EKS Fargate option should be judged on whether the workload fits a serverless model rather than on cost.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;EKS Fargate also comes with a number of considerations, so review the constraints in the documentation beforehand.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://docs.aws.amazon.com/ko_kr/eks/latest/userguide/fargate.html#fargate-considerations&quot;&gt;https://docs.aws.amazon.com/ko_kr/eks/latest/userguide/fargate.html#fargate-considerations&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's clean up the resources as follows and wrap up the exercise.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;terraform destroy -auto-approve&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;2. AKS Virtual Nodes&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In AKS, you can run pods without regular nodes by using Virtual Nodes.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Azure offers ACI (Azure Container Instances), a serverless container service comparable to AWS ECS. When you run a pod through Virtual Nodes in AKS, the pod effectively runs as an ACI container.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://learn.microsoft.com/ko-kr/azure/container-instances/container-instances-overview&quot;&gt;https://learn.microsoft.com/ko-kr/azure/container-instances/container-instances-overview&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;When Virtual Nodes are enabled in AKS, an extra virtual node appears in the node list. This is powered by the open-source Virtual Kubelet project.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Virtual Kubelet behaves like a kubelet while bridging Kubernetes to other APIs. Through this mechanism, services such as ACI or AWS Fargate can be used as if they were nodes.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The diagram below shows how Virtual Kubelet works: like a kubelet, it registers itself as a node and implements the APIs needed for pods to actually be scheduled onto that virtual node.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://github.com/virtual-kubelet/virtual-kubelet/blob/master/website/static/img/diagram.svg&quot;&gt;&lt;img src=&quot;https://github.com/virtual-kubelet/virtual-kubelet/raw/master/website/static/img/diagram.svg&quot; alt=&quot;diagram&quot; /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://github.com/virtual-kubelet/virtual-kubelet?tab=readme-ov-file&quot;&gt;https://github.com/virtual-kubelet/virtual-kubelet?tab=readme-ov-file&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;With Virtual Nodes in AKS, pods are scheduled onto the virtual node, and the virtual kubelet runs them by calling into ACI.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;AKS supports Virtual Nodes as an add-on.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's walk through the exercise document below while examining AKS Virtual Nodes.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://docs.azure.cn/en-us/aks/virtual-nodes-cli&quot;&gt;https://docs.azure.cn/en-us/aks/virtual-nodes-cli&lt;/a&gt;&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# Declare variables
PREFIX=aks-vn
RG=${PREFIX}-rg
AKSNAME=${PREFIX}
LOC=koreacentral
VNET=aks-vnet
AKSSUBNET=aks-subnet
VNSUBNET=vn-subnet

# Create a resource group
az group create --name $RG --location $LOC -o none

az network vnet create --resource-group $RG --name $VNET --address-prefixes 10.0.0.0/8 --subnet-name $AKSSUBNET --subnet-prefix 10.240.0.0/16 -o none
az network vnet subnet create --resource-group $RG --vnet-name $VNET --name $VNSUBNET --address-prefixes 10.241.0.0/16 -o none

SUBNET_ID=$(az network vnet subnet show --resource-group $RG --vnet-name $VNET --name $AKSSUBNET --query id -o tsv)

# Create the AKS cluster
az aks create --resource-group $RG --name $AKSNAME --node-count 2 --network-plugin azure --vnet-subnet-id $SUBNET_ID --generate-ssh-keys

# Check node information
az aks get-credentials --resource-group $RG --name $AKSNAME
kubectl get nodes
NAME                                STATUS   ROLES    AGE    VERSION
aks-nodepool1-14565790-vmss000000   Ready    &amp;lt;none&amp;gt;   100s   v1.30.9
aks-nodepool1-14565790-vmss000001   Ready    &amp;lt;none&amp;gt;   100s   v1.30.9&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;After creating the AKS cluster, two default nodes appear. Unlike EKS, where even add-on components can run on Fargate, AKS still needs regular nodes for its basic system components.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now enable the Virtual Nodes add-on and check the nodes again; a virtual node appears.&lt;/p&gt;
&lt;pre class=&quot;crmsh&quot;&gt;&lt;code&gt;# Enable the Virtual Nodes addon
az aks enable-addons --resource-group $RG --name $AKSNAME --addons virtual-node --subnet-name $VNSUBNET

# Check node information
kubectl get nodes
NAME                                STATUS   ROLES    AGE     VERSION
aks-nodepool1-14565790-vmss000000   Ready    &amp;lt;none&amp;gt;   14m     v1.30.9
aks-nodepool1-14565790-vmss000001   Ready    &amp;lt;none&amp;gt;   14m     v1.30.9
virtual-node-aci-linux              Ready    agent    2m51s   v1.25.0-vk-azure-aci-1.6.2&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Among the running pods you will find aci-connector-linux, which plays the virtual kubelet role and serves as the bridge between the AKS cluster and the ACI management API.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The commands below show that aci-connector-linux and the virtual node share the same IP, 10.240.0.32.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;kubectl get po -A -owide
NAMESPACE     NAME                                   READY   STATUS    RESTARTS   AGE     IP            NODE                                NOMINATED NODE   READINESS GATES
kube-system   aci-connector-linux-79d9bf8946-7hv8s   1/1     Running   0          17m     10.240.0.32   aks-nodepool1-14565790-vmss000001   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
..

kubectl get no -A -owide
NAME                                STATUS   ROLES    AGE   VERSION                      INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
..
virtual-node-aci-linux              Ready    agent    15m   v1.25.0-vk-azure-aci-1.6.2   10.240.0.32   &amp;lt;none&amp;gt;        &amp;lt;unknown&amp;gt;            &amp;lt;unknown&amp;gt;           &amp;lt;unknown&amp;gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The portal also shows that, unlike the AKS node subnet, the subnet created for Virtual Nodes is delegated to Azure Container Instances, since deployments there actually go through ACI.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1702&quot; data-origin-height=&quot;143&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cl5aMn/btsMTOdOF3T/tnA7MyKC5eAw6ROlGw6VE0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cl5aMn/btsMTOdOF3T/tnA7MyKC5eAw6ROlGw6VE0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cl5aMn/btsMTOdOF3T/tnA7MyKC5eAw6ROlGw6VE0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fcl5aMn%2FbtsMTOdOF3T%2FtnA7MyKC5eAw6ROlGw6VE0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1702&quot; height=&quot;143&quot; data-origin-width=&quot;1702&quot; data-origin-height=&quot;143&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In EKS, you create a Fargate Profile, and pods that match it are placed on Fargate by the Fargate scheduler.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Virtual Nodes, by contrast, carry the taint below by default, and ordinary taints and tolerations determine whether a pod lands on a regular node or on the virtual node. This is no different from standard scheduling techniques.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;$ kubectl describe no virtual-node-aci-linux |grep -A 1 -B 1 Taint
CreationTimestamp:  Sat, 22 Mar 2025 15:57:33 +0000
Taints:             virtual-kubelet.io/provider=azure:NoSchedule
Unschedulable:      false&lt;/code&gt;&lt;/pre&gt;
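&lt;p data-ke-size=&quot;size16&quot;&gt;The taint/toleration decision can be sketched in a few lines of Python. This is a simplified model of the Kubernetes matching rules (Exists and Equal operators only), not the scheduler's actual code:&lt;/p&gt;

```python
def tolerates(taint, tolerations):
    """Simplified Kubernetes taint/toleration matching: a toleration
    with an empty key/effect matches any key/effect, and the Exists
    operator ignores the taint's value."""
    for t in tolerations:
        op = t.get("operator", "Equal")
        if t.get("key") not in (None, taint["key"]):
            continue
        if t.get("effect") not in (None, taint["effect"]):
            continue
        if op == "Exists" or t.get("value") == taint.get("value"):
            return True
    return False

# The taint shown on virtual-node-aci-linux above
vk_taint = {"key": "virtual-kubelet.io/provider",
            "value": "azure", "effect": "NoSchedule"}

# Tolerations from the sample Deployment later in this post
tols = [{"key": "virtual-kubelet.io/provider", "operator": "Exists"},
        {"key": "azure.com/aci", "effect": "NoSchedule"}]

print(tolerates(vk_taint, tols))  # True: the Exists toleration matches
print(tolerates(vk_taint, []))    # False: cannot land on the virtual node
```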
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Therefore, workloads that run on Virtual Nodes need a matching toleration. And unless scheduling is forced onto the virtual node, such pods can still run on regular nodes.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's deploy the sample application below and see where the pods actually land.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: aci-helloworld
spec:
  replicas: 4
  selector:
    matchLabels:
      app: aci-helloworld
  template:
    metadata:
      labels:
        app: aci-helloworld
    spec:
      containers:
      - name: aci-helloworld
        image: mcr.microsoft.com/azuredocs/aci-helloworld
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 200m
      tolerations:
      - key: virtual-kubelet.io/provider
        operator: Exists
      - key: azure.com/aci
        effect: NoSchedule
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              preference:
                matchExpressions:
                  - key: type
                    operator: NotIn
                    values:
                      - virtual-kubelet&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The pod's tolerations and affinity are worth a closer look. First, a toleration for the Virtual Nodes taint is specified.&lt;/p&gt;
&lt;pre class=&quot;less&quot;&gt;&lt;code&gt;      tolerations:
      - key: virtual-kubelet.io/provider
        operator: Exists
      - key: azure.com/aci
        effect: NoSchedule&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Since this alone would let pods be placed straight onto the virtual node, a nodeAffinity preference was added as below.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              preference:
                matchExpressions:
                  - key: type
                    operator: NotIn
                    values:
                      - virtual-kubelet&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;With this deployment, the nodeAffinity schedules pods onto non-virtual-kubelet nodes first, and the remaining pod that could not be placed lands on the virtual node.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;kubectl get po -owide
NAME                              READY   STATUS    RESTARTS   AGE   IP            NODE                                NOMINATED NODE   READINESS GATES
aci-helloworld-86c987d849-9pw8r   1/1     Running   0          52s   10.240.0.55   aks-nodepool1-14565790-vmss000000   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
aci-helloworld-86c987d849-hp5nv   1/1     Running   0          53s   10.240.0.8    aks-nodepool1-14565790-vmss000001   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
aci-helloworld-86c987d849-rh9tx   1/1     Running   0          52s   10.241.0.4    virtual-node-aci-linux              &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
aci-helloworld-86c987d849-v8kdx   1/1     Running   0          52s   10.240.0.18   aks-nodepool1-14565790-vmss000001   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In other words, because the pod carries the toleration it may also run on the virtual node, so unschedulable pods spill over to it. This gives you elasticity through Virtual Nodes without using the Cluster Autoscaler.&lt;/p&gt;
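&lt;p data-ke-size=&quot;size16&quot;&gt;The spillover behavior can be modeled with a small Python sketch (a hypothetical helper, not AKS code): regular nodes are tried first, and the virtual node absorbs whatever no longer fits.&lt;/p&gt;

```python
def schedule(pods_cpu, node_capacity_cpu, virtual_node=True):
    """Place pods on regular nodes first (mirroring the nodeAffinity
    preference); pods that no longer fit land on the virtual node,
    which behaves as effectively unbounded capacity."""
    placements = []
    free = dict(node_capacity_cpu)  # remaining CPU per regular node
    for cpu in pods_cpu:
        target = next((n for n, c in free.items() if c >= cpu), None)
        if target is not None:
            free[target] -= cpu
        elif virtual_node:
            target = "virtual-node-aci-linux"
        else:
            target = "Pending"
        placements.append(target)
    return placements

# Four pods requesting 200m each, two nodes with 300m free each:
# two pods fit on the regular nodes, the rest spill over to ACI
print(schedule([0.2] * 4, {"node0": 0.3, "node1": 0.3}))
# ['node0', 'node1', 'virtual-node-aci-linux', 'virtual-node-aci-linux']
```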
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Looking at the aci-connector-linux pod logs, you can see ACI actually creating a container group, and at the end the Started event for the container relayed back from ACI.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;time=&quot;2025-03-22T16:05:41Z&quot; level=info msg=&quot;creating container group with name: default-aci-helloworld-6d49f9cfbc-h76bc&quot; addedViaRedirty=false azure.region=koreacentral azure.resourceGroup=MC_aks-vn-rg_aks-vn_koreacentral delayedViaRateLimit=5ms key=default/aci-helloworld-6d49f9cfbc-h76bc method=CreateContainerGroup name=aci-helloworld-6d49f9cfbc-h76bc namespace=default originallyAdded=&quot;2025-03-22 16:05:41.362846244 +0000 UTC m=+488.605163852&quot; phase=Pending plannedForWork=&quot;2025-03-22 16:05:41.367846244 +0000 UTC m=+488.610163852&quot; pod=aci-helloworld-6d49f9cfbc-h76bc queue=syncPodsFromKubernetes reason= requeues=0 uid=d6a836b6-6b7d-4b57-90db-a5c109d17d6a workerId=49
...
time=&quot;2025-03-22T16:05:43Z&quot; level=warning msg=&quot;cannot fetch aci events for pod aci-helloworld-6d49f9cfbc-h76bc in namespace default&quot; error=&quot;cg is not found&quot; method=PodsTracker.processPodUpdates
time=&quot;2025-03-22T16:05:43Z&quot; level=info msg=&quot;Created pod in provider&quot; addedViaRedirty=false delayedViaRateLimit=5ms key=default/aci-helloworld-6d49f9cfbc-h76bc method=createOrUpdatePod name=aci-helloworld-6d49f9cfbc-h76bc namespace=default originallyAdded=&quot;2025-03-22 16:05:41.362846244 +0000 UTC m=+488.605163852&quot; phase=Pending plannedForWork=&quot;2025-03-22 16:05:41.367846244 +0000 UTC m=+488.610163852&quot; pod=aci-helloworld-6d49f9cfbc-h76bc queue=syncPodsFromKubernetes reason= requeues=0 uid=d6a836b6-6b7d-4b57-90db-a5c109d17d6a workerId=49
time=&quot;2025-03-22T16:05:43Z&quot; level=info msg=&quot;Event(v1.ObjectReference{Kind:\&quot;Pod\&quot;, Namespace:\&quot;default\&quot;, Name:\&quot;aci-helloworld-6d49f9cfbc-h76bc\&quot;, UID:\&quot;d6a836b6-6b7d-4b57-90db-a5c109d17d6a\&quot;, APIVersion:\&quot;v1\&quot;, ResourceVersion:\&quot;5821\&quot;, FieldPath:\&quot;\&quot;}): type: 'Normal' reason: 'ProviderCreateSuccess' Create pod in provider successfully&quot;
E0322 16:05:43.818182       1 event.go:346] &quot;Server rejected event (will not retry!)&quot; err=&quot;events is forbidden: User \&quot;system:serviceaccount:kube-system:aci-connector-linux\&quot; cannot create resource \&quot;events\&quot; in API group \&quot;\&quot; in the namespace \&quot;default\&quot;&quot; event=&quot;&amp;amp;Event{ObjectMeta:{aci-helloworld-6d49f9cfbc-h76bc.182f2b9f418838d9  default    0 0001-01-01 00:00:00 +0000 UTC &amp;lt;nil&amp;gt; &amp;lt;nil&amp;gt; map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:default,Name:aci-helloworld-6d49f9cfbc-h76bc,UID:d6a836b6-6b7d-4b57-90db-a5c109d17d6a,APIVersion:v1,ResourceVersion:5821,FieldPath:,},Reason:ProviderCreateSuccess,Message:Create pod in provider successfully,Source:EventSource{Component:virtual-node-aci-linux/pod-controller,Host:,},FirstTimestamp:2025-03-22 16:05:43.814912217 +0000 UTC m=+491.057229925,LastTimestamp:2025-03-22 16:05:43.814912217 +0000 UTC m=+491.057229925,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:virtual-node-aci-linux/pod-controller,ReportingInstance:,}&quot;
...
time=&quot;2025-03-22T16:06:50Z&quot; level=error msg=&quot;failed to retrieve pod aci-helloworld-6d49f9cfbc-h76bc status from provider&quot; error=&quot;container aci-helloworld properties CurrentState StartTime cannot be nil&quot; method=PodsTracker.processPodUpdates
time=&quot;2025-03-22T16:06:55Z&quot; level=info msg=&quot;Event(v1.ObjectReference{Kind:\&quot;Pod\&quot;, Namespace:\&quot;default\&quot;, Name:\&quot;aci-helloworld-6d49f9cfbc-h76bc\&quot;, UID:\&quot;d6a836b6-6b7d-4b57-90db-a5c109d17d6a\&quot;, APIVersion:\&quot;v1\&quot;, ResourceVersion:\&quot;5821\&quot;, FieldPath:\&quot;spec.containers{aci-helloworld}\&quot;}): type: 'Normal' reason: 'Pulling' pulling image \&quot;mcr.microsoft.com/azuredocs/aci-helloworld@sha256:b9cec4d6b50c6bf25e3f7f93bdc1628e5dca972cf132d38ed8f5bc955bb179c3\&quot;&quot;
time=&quot;2025-03-22T16:06:55Z&quot; level=info msg=&quot;Event(v1.ObjectReference{Kind:\&quot;Pod\&quot;, Namespace:\&quot;default\&quot;, Name:\&quot;aci-helloworld-6d49f9cfbc-h76bc\&quot;, UID:\&quot;d6a836b6-6b7d-4b57-90db-a5c109d17d6a\&quot;, APIVersion:\&quot;v1\&quot;, ResourceVersion:\&quot;5821\&quot;, FieldPath:\&quot;spec.containers{aci-helloworld}\&quot;}): type: 'Normal' reason: 'Pulled' Successfully pulled image \&quot;mcr.microsoft.com/azuredocs/aci-helloworld@sha256:b9cec4d6b50c6bf25e3f7f93bdc1628e5dca972cf132d38ed8f5bc955bb179c3\&quot;&quot;
time=&quot;2025-03-22T16:06:55Z&quot; level=info msg=&quot;Event(v1.ObjectReference{Kind:\&quot;Pod\&quot;, Namespace:\&quot;default\&quot;, Name:\&quot;aci-helloworld-6d49f9cfbc-h76bc\&quot;, UID:\&quot;d6a836b6-6b7d-4b57-90db-a5c109d17d6a\&quot;, APIVersion:\&quot;v1\&quot;, ResourceVersion:\&quot;5821\&quot;, FieldPath:\&quot;spec.containers{aci-helloworld}\&quot;}): type: 'Normal' reason: 'Started' Started container&quot;
...&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As we saw earlier, workloads on EKS Fargate are scheduled exclusively and never land on regular nodes, whereas AKS Virtual Nodes complement the regular nodes: pods are deployed to regular nodes first, and when more resources are needed, Virtual Nodes can absorb the overflow even without the Cluster Autoscaler.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Of course, if you have specific requirements, you can use scheduling techniques such as the nodeSelector below so that pods are deployed only to Virtual Nodes. Note that even with just the toleration set, pods may be scheduled onto Virtual Nodes first.&lt;/p&gt;
&lt;pre class=&quot;lua&quot;&gt;&lt;code&gt;...
      nodeSelector:
        kubernetes.io/role: agent
        beta.kubernetes.io/os: linux
        type: virtual-kubelet
...&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Whereas EKS maps each Fargate pod to its own node 1:1, AKS Virtual Nodes remain a single node no matter how many pods run on them.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Scale the deployment to 6 replicas as below so that multiple pods are placed on the virtual node. Checking the node list confirms that there is still only one virtual node.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;kubectl scale deployment aci-helloworld --replicas 6
deployment.apps/aci-helloworld scaled

kubectl get po -owide
NAME                              READY   STATUS    RESTARTS   AGE     IP            NODE                                NOMINATED NODE   READINESS GATES
aci-helloworld-86c987d849-9dbdx   1/1     Running   0          79s     10.240.0.42   aks-nodepool1-14565790-vmss000000   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
aci-helloworld-86c987d849-9pw8r   1/1     Running   0          7m49s   10.240.0.55   aks-nodepool1-14565790-vmss000000   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
aci-helloworld-86c987d849-fchwx   1/1     Running   0          79s     10.241.0.5    virtual-node-aci-linux              &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
aci-helloworld-86c987d849-rh9tx   1/1     Running   0          7m49s   10.241.0.4    virtual-node-aci-linux              &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
..

kubectl get no
NAME                                STATUS   ROLES    AGE   VERSION
aks-nodepool1-14565790-vmss000000   Ready    &amp;lt;none&amp;gt;   41m   v1.30.9
aks-nodepool1-14565790-vmss000001   Ready    &amp;lt;none&amp;gt;   41m   v1.30.9
virtual-node-aci-linux              Ready    agent    29m   v1.25.0-vk-azure-aci-1.6.2&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Finally, checking the portal, you can see that the pods on the Virtual Nodes are running as ACI container groups in the AKS infrastructure resource group.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1282&quot; data-origin-height=&quot;372&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/zWpXJ/btsMSQXYov3/RY1byeCZQIFLtvzy5KAkJK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/zWpXJ/btsMSQXYov3/RY1byeCZQIFLtvzy5KAkJK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/zWpXJ/btsMSQXYov3/RY1byeCZQIFLtvzy5KAkJK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FzWpXJ%2FbtsMSQXYov3%2FRY1byeCZQIFLtvzy5KAkJK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1282&quot; height=&quot;372&quot; data-origin-width=&quot;1282&quot; data-origin-height=&quot;372&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Because Virtual Node pods run as ACI instances from Azure's point of view, you can use the ACI UI in the portal to view logs, attach to a console, and so on.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;AKS Virtual Nodes also come with several limitations. Some are inherited from ACI itself; for example, DaemonSets and init containers are not supported.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Please review the limitations section of the AKS Virtual Nodes documentation.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://learn.microsoft.com/en-us/azure/aks/virtual-nodes#limitations&quot;&gt;https://learn.microsoft.com/en-us/azure/aks/virtual-nodes#limitations&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's clean up the resources and wrap up the hands-on exercise.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;az group delete --name $RG&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Wrap-up&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;We have looked at how EKS and AKS can run pods without managing nodes directly.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;EKS Fargate routes pods through an admission controller to the Fargate scheduler, whereas AKS registers Virtual Nodes via a virtual kubelet and steers scheduling through the taint on that node.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;That concludes this post.&lt;/p&gt;&lt;/description&gt;
      <category>EKS</category>
      <category>AKS</category>
      <category>fargate</category>
      <category>virtual nodes</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/42</guid>
      <comments>https://a-person.tistory.com/42#entry42comment</comments>
      <pubDate>Sun, 23 Mar 2025 02:39:19 +0900</pubDate>
    </item>
    <item>
      <title>CKA Exam Review (Renewed February 18, 2025)</title>
      <link>https://a-person.tistory.com/41</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;This post is a review of the CKA (Certified Kubernetes Administrator) exam.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;241&quot; data-origin-height=&quot;240&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bsnkpz/btsMSB0Ydou/euYlhmAUsSyUhzQJ7xwX0k/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bsnkpz/btsMSB0Ydou/euYlhmAUsSyUhzQJ7xwX0k/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bsnkpz/btsMSB0Ydou/euYlhmAUsSyUhzQJ7xwX0k/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fbsnkpz%2FbtsMSB0Ydou%2FeuYlhmAUsSyUhzQJ7xwX0k%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;241&quot; height=&quot;240&quot; data-origin-width=&quot;241&quot; data-origin-height=&quot;240&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;I first earned the CKA in 2018, so it expired long ago. While renewing my Kubernetes certifications recently, I retook the expired CKA exam, passed on March 19, and am sharing this review.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Basic Information&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The CKA is a hands-on exam: 16 questions in 2 hours.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The exam runs inside a Secure Browser environment. The exam UI looks like the screenshot below: the questions are on the left, and on the right you can open windows for a browser and a terminal.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1902&quot; data-origin-height=&quot;844&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/dbCxMV/btsMSBNpPk1/9TzQBPR26TzuQMf2WZYDbk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/dbCxMV/btsMSBNpPk1/9TzQBPR26TzuQMf2WZYDbk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/dbCxMV/btsMSBNpPk1/9TzQBPR26TzuQMf2WZYDbk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FdbCxMV%2FbtsMSBNpPk1%2F9TzQBPR26TzuQMf2WZYDbk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1902&quot; height=&quot;844&quot; data-origin-width=&quot;1902&quot; data-origin-height=&quot;844&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://docs.linuxfoundation.org/tc-docs/certification/tips-cka-and-ckad#adjusting-font-and-windows-in-the-examui&quot;&gt;https://docs.linuxfoundation.org/tc-docs/certification/tips-cka-and-ckad#adjusting-font-and-windows-in-the-examui&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The exam UI is rather sluggish, and the provided notepad in particular was so poor as to be unusable in practice, so you cannot really rely on it for notes. Because of this, you need to be comfortable with vi.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Beyond that the UI has no surprises, except that copy/paste in the terminal requires ctrl+shift+c and ctrl+shift+v.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A small screen puts you at a disadvantage, so take the exam on a monitor if possible. Dual monitors are not allowed, but a single external monitor is. Note that using an external monitor requires a webcam (for the proctor to monitor your exam environment).&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the past you switched kubeconfig contexts to use a different context per question; the current exam system instead has you ssh into a VM for each question. Each question provides the corresponding ssh command and a link to the relevant documentation topic.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Once you ssh into the node, kubectl and the other tools are available. After solving a question, you return to the base terminal and ssh into the next VM, repeating this cycle.&lt;/p&gt;
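&lt;p data-ke-size=&quot;size16&quot;&gt;The per-question flow looks roughly like this (the host name here is purely illustrative):&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# ssh to the VM designated for the question (illustrative host name)
ssh cka000001

# solve the question with kubectl on that host
kubectl get nodes

# return to the base terminal before moving on to the next question
exit&lt;/code&gt;&lt;/pre&gt;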
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Content Renewal&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;If you are preparing for the CKA, you already know that its topics and questions were renewed on February 18, 2025. (I should have taken the exam before the renewal; it caught me off guard.)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the renewed CKA, only about 2~3 of the previously known question types seem to remain; the rest covered different topics and formats. The easy questions that merely had you look up an object's details and write them down are mostly gone.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The renewed exam feels somewhat harder than the previous version.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Looking at feedback such as the threads below, the consensus among people who took the renewed exam after February 18 is that it has become considerably harder.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://kodekloud.com/community/t/just-took-the-latest-version-of-the-cka-exam-failed-miserably-need-some-advice/474751&quot;&gt;https://kodekloud.com/community/t/just-took-the-latest-version-of-the-cka-exam-failed-miserably-need-some-advice/474751&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://www.reddit.com/r/CKAExam/comments/1jbi4iw/discussion_of_the_updated_feb_18th_2025_cka_exam/&quot;&gt;https://www.reddit.com/r/CKAExam/comments/1jbi4iw/discussion_of_the_updated_feb_18th_2025_cka_exam/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;First, see the official announcement below for the exam changes.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://training.linuxfoundation.org/certified-kubernetes-administrator-cka-program-changes/&quot;&gt;https://training.linuxfoundation.org/certified-kubernetes-administrator-cka-program-changes/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The video below also offers some insight into the changes.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=fvvgM3QmKGo&quot;&gt;https://www.youtube.com/watch?v=fvvgM3QmKGo&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Based on the Linux Foundation announcement, let's look at the topics added to each domain.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Overall the domains are unchanged; some topics were added within each domain, and even for existing topics the question formats were completely revised.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1005&quot; data-origin-height=&quot;275&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cYTfQr/btsMT3uVoK6/FKQyDQQEDGvYuXf3bS8eC0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cYTfQr/btsMT3uVoK6/FKQyDQQEDGvYuXf3bS8eC0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cYTfQr/btsMT3uVoK6/FKQyDQQEDGvYuXf3bS8eC0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcYTfQr%2FbtsMT3uVoK6%2FFKQyDQQEDGvYuXf3bS8eC0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1005&quot; height=&quot;275&quot; data-origin-width=&quot;1005&quot; data-origin-height=&quot;275&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Topics such as StorageClass and the Gateway API were added.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;652&quot; data-origin-height=&quot;427&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bCa2Hc/btsMUlWrUAO/i40FO94XnRSaxKKlddz3X0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bCa2Hc/btsMUlWrUAO/i40FO94XnRSaxKKlddz3X0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bCa2Hc/btsMUlWrUAO/i40FO94XnRSaxKKlddz3X0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbCa2Hc%2FbtsMUlWrUAO%2Fi40FO94XnRSaxKKlddz3X0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;652&quot; height=&quot;427&quot; data-origin-width=&quot;652&quot; data-origin-height=&quot;427&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;You need a working understanding of application deployment tooling such as Helm and Kustomize, and of basic installation and configuration of CNI, CSI, and CRI. Topics such as CRDs were also added.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;You also need to understand installing Kubernetes on nodes, basic troubleshooting commands, and how the control plane components are managed.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Since I cannot disclose the question types or details, please focus your study on the newly added topics.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Of course, since the CKA allows a retake, one approach is to survey the topics carefully on the first attempt, study further, and pass on the second. If you burn time on an unsolvable question in the first attempt, you may not even get to see the remaining topics.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;I hope this helps with your exam. That's all for this post.&lt;/p&gt;&lt;/description&gt;
      <category>Misc</category>
      <category>CKA</category>
      <category>kubernetes</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/41</guid>
      <comments>https://a-person.tistory.com/41#entry41comment</comments>
      <pubDate>Sat, 22 Mar 2025 16:23:10 +0900</pubDate>
    </item>
    <item>
      <title>[6] EKS Security - EKS Authentication/Authorization and Pod IAM Role Assignment</title>
      <link>https://a-person.tistory.com/40</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;In this post we look at security in EKS.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Kubernetes security also covers areas such as image security and node security, but here we examine two themes: how Kubernetes authentication and authorization are applied in EKS, and how workloads (pods) gain secure access to AWS resources.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;First we will walk through the EKS authentication/authorization flow using kubeconfig, and then look at how to assign IAM roles to pods so that workloads can access AWS resources.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Table of Contents&lt;/h2&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;Authentication/authorization in Kubernetes&lt;/li&gt;
&lt;li&gt;Authentication/authorization in EKS&lt;/li&gt;
&lt;li&gt;Authentication/authorization in AKS&lt;/li&gt;
&lt;li&gt;Pod permissions in Kubernetes&lt;/li&gt;
&lt;li&gt;Assigning pod permissions in EKS&lt;/li&gt;
&lt;li&gt;Assigning pod permissions in AKS&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;1. Authentication/Authorization in Kubernetes&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Kubernetes controls access to its API as follows.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A user or a pod (service account) passes through Authentication -&amp;gt; Authorization -&amp;gt; Admission Control before it can reach the Kubernetes API.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;2009&quot; data-origin-height=&quot;859&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/GUIyF/btsMLxjOlch/2JoFDqTbjIqvU6IlSmqKgk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/GUIyF/btsMLxjOlch/2JoFDqTbjIqvU6IlSmqKgk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/GUIyF/btsMLxjOlch/2JoFDqTbjIqvU6IlSmqKgk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FGUIyF%2FbtsMLxjOlch%2F2JoFDqTbjIqvU6IlSmqKgk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2009&quot; height=&quot;859&quot; data-origin-width=&quot;2009&quot; data-origin-height=&quot;859&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://kubernetes.io/docs/concepts/security/controlling-access/&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://kubernetes.io/docs/concepts/security/controlling-access/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;However, Kubernetes itself does not implement user storage and authentication directly; instead, it delegates user authentication to external identity systems.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The authorization stage then checks whether the authenticated principal has appropriate access to the Kubernetes resource. Finally, Admission Control is designed to run additional steps on the request itself, such as validation or mutation.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As the diagram shows, each stage is pluggable: multiple forms of authentication, authorization, and admission control can be added like puzzle pieces.&lt;/p&gt;
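&lt;p data-ke-size=&quot;size16&quot;&gt;For reference, each stage is selected through kube-apiserver flags along the following lines (the values are only examples, not a complete configuration):&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;kube-apiserver \
  --oidc-issuer-url=https://issuer.example.com \    # authentication (OIDC, one of several options)
  --authorization-mode=Node,RBAC \                  # authorization plugins, evaluated in order
  --enable-admission-plugins=NodeRestriction \      # admission controllers
  ...&lt;/code&gt;&lt;/pre&gt;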
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the EKS section that follows, we explain how IAM, AWS's identity and access management service, drives Kubernetes authentication and authorization; that is, how a principal (user) valid in AWS passes through Kubernetes authentication/authorization and gains access to the cluster.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;2. Authentication/Authorization in EKS&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To understand the EKS authentication/authorization flow for a user, let's trace the execution of a kubectl command through the diagram below.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1204&quot; data-origin-height=&quot;622&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/kz8ag/btsMMfvWBsL/0ScAjEIHI9hfFQRCaQGRyk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/kz8ag/btsMMfvWBsL/0ScAjEIHI9hfFQRCaQGRyk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/kz8ag/btsMMfvWBsL/0ScAjEIHI9hfFQRCaQGRyk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fkz8ag%2FbtsMMfvWBsL%2F0ScAjEIHI9hfFQRCaQGRyk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1204&quot; height=&quot;622&quot; data-origin-width=&quot;1204&quot; data-origin-height=&quot;622&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://www.youtube.com/watch?v=bksogA-WXv8&amp;amp;t=600s&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://www.youtube.com/watch?v=bksogA-WXv8&amp;amp;t=600s&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;1) Run the kubectl get node command&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;2) The aws eks get-token command defined in kubeconfig requests an authentication token for the Amazon EKS cluster from the AWS STS regional endpoint&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;3) The aws eks get-token response contains a token (base64-decoding it reveals a pre-signed URL that calls GetCallerIdentity on STS (Security Token Service))&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;lt;&amp;lt; Up to this point, no authentication request has been sent to the EKS API endpoint &amp;gt;&amp;gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;4) kubectl sends the pre-signed URL as a bearer token to the EKS cluster API endpoint&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;5) The API server sends a TokenReview request to the aws-iam-authenticator server (Webhook Token Authentication)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;6) The aws-iam-authenticator server calls sts GetCallerIdentity&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;7) AWS IAM validates the token, completes authentication, and returns the ARN of the IAM user or role&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;8) The aws-auth ConfigMap, which maps IAM users/roles to Kubernetes groups, identifies the Kubernetes security principal&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;9) The aws-iam-authenticator server (Webhook Token Authentication) returns the username and Kubernetes groups in a TokenReview response&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;lt;&amp;lt; Up to this point is authentication &amp;gt;&amp;gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;10) Based on this information, Kubernetes RBAC authorization proceeds&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;lt;&amp;lt; Up to this point is authorization &amp;gt;&amp;gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;11) If authorized, the result of kubectl get node is returned&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In short, running kubectl authenticates the user through IAM and authorizes the request through Kubernetes RBAC.&lt;/p&gt;
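&lt;p data-ke-size=&quot;size16&quot;&gt;For reference, the kubeconfig generated by aws eks update-kubeconfig contains an exec section along these lines, which is what invokes aws eks get-token in step 2 (the cluster name and region here are illustrative):&lt;/p&gt;
&lt;pre class=&quot;lua&quot;&gt;&lt;code&gt;users:
- name: my-eks-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args:
      - eks
      - get-token
      - --cluster-name
      - my-cluster
      - --region
      - ap-northeast-2&lt;/code&gt;&lt;/pre&gt;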
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's condense this into the following four steps.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;1) A kubectl request uses the AWS credentials to obtain an authentication token for the EKS cluster&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;2) Authentication proceeds through IAM via Webhook Token Authentication&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;3) After authentication, the returned ARN is mapped to Kubernetes groups, in one of two ways:&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;aws-auth ConfigMap (to be deprecated)&lt;/li&gt;
&lt;li&gt;EKS API&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;4) Authorization proceeds through Kubernetes RBAC based on the authenticated IAM identity&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For reference, in the description above, the aws-auth ConfigMap holds the mapping between IAM role/user ARNs and Kubernetes permission groups. You map an IAM user to a cluster group with &lt;code&gt;eksctl create iamidentitymapping&lt;/code&gt;, and the mapping is reflected in the ConfigMap.&lt;/p&gt;
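&lt;p data-ke-size=&quot;size16&quot;&gt;A mapping command looks roughly like this (the cluster name, account ID, and group are placeholders; adjust to your environment):&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# Illustrative: map an IAM user to a Kubernetes group via the aws-auth ConfigMap
eksctl create iamidentitymapping \
  --cluster my-cluster \
  --arn arn:aws:iam::111122223333:user/testuser \
  --group system:masters \
  --username testuser&lt;/code&gt;&lt;/pre&gt;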
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;However, because the ConfigMap is exposed inside the cluster, editing it incorrectly can cause cluster issues, among other problems, so EKS recently introduced the EKS API mode.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In EKS API mode the ConfigMap goes away: access entries managed through the EKS API map IAM roles/users to access policies. After IAM authentication, the returned ARN is checked against the access entry mappings via the EKS API, and then Kubernetes RBAC authorization follows.&lt;/p&gt;
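&lt;p data-ke-size=&quot;size16&quot;&gt;Managed through the AWS CLI, this looks roughly as follows (the cluster name, account ID, and chosen access policy are placeholders):&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# Illustrative: create an access entry for an IAM principal
aws eks create-access-entry \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::111122223333:user/testuser

# Illustrative: attach an EKS access policy to that principal
aws eks associate-access-policy \
  --cluster-name my-cluster \
  --principal-arn arn:aws:iam::111122223333:user/testuser \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
  --access-scope type=cluster&lt;/code&gt;&lt;/pre&gt;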
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;879&quot; data-origin-height=&quot;427&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bznXBD/btsMLQKczkb/gTLKkV6ouOQyPIad556m6K/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bznXBD/btsMLQKczkb/gTLKkV6ouOQyPIad556m6K/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bznXBD/btsMLQKczkb/gTLKkV6ouOQyPIad556m6K/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbznXBD%2FbtsMLQKczkb%2FgTLKkV6ouOQyPIad556m6K%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;879&quot; height=&quot;427&quot; data-origin-width=&quot;879&quot; data-origin-height=&quot;427&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://aws.amazon.com/ko/blogs/containers/a-deep-dive-into-simplified-amazon-eks-access-management-controls/&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://aws.amazon.com/ko/blogs/containers/a-deep-dive-into-simplified-amazon-eks-access-management-controls/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the web console, the Access tab of the EKS cluster shows the Authentication mode. A newly created EKS cluster defaults to &lt;code&gt;EKS API and ConfigMap&lt;/code&gt;. In this mode, when both the EKS API and the ConfigMap define a mapping, the EKS API takes precedence.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1755&quot; data-origin-height=&quot;241&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cowI4e/btsMMdx7BYP/FnRW1IpWz1qyWlhJ0Wzkhk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cowI4e/btsMMdx7BYP/FnRW1IpWz1qyWlhJ0Wzkhk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cowI4e/btsMMdx7BYP/FnRW1IpWz1qyWlhJ0Wzkhk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcowI4e%2FbtsMMdx7BYP%2FFnRW1IpWz1qyWlhJ0Wzkhk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1755&quot; height=&quot;241&quot; data-origin-width=&quot;1755&quot; data-origin-height=&quot;241&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;해당 페이지의 &lt;code&gt;Manage access&lt;/code&gt;를 통해 아래와 같이 인증 모드를 변경할 수 있습니다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;2046&quot; data-origin-height=&quot;504&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/5eSNN/btsMMBlb5B5/76z8HxTQnbJ1ugRVEfkGsk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/5eSNN/btsMMBlb5B5/76z8HxTQnbJ1ugRVEfkGsk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/5eSNN/btsMMBlb5B5/76z8HxTQnbJ1ugRVEfkGsk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2F5eSNN%2FbtsMMBlb5B5%2F76z8HxTQnbJ1ugRVEfkGsk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2046&quot; height=&quot;504&quot; data-origin-width=&quot;2046&quot; data-origin-height=&quot;504&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
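&lt;p data-ke-size=&quot;size16&quot;&gt;앞서 설명한 &quot;EKS API 우선 적용&quot; 동작을 개념적으로 표현하면 아래와 같습니다. 실제 EKS 내부 구현이 아니라 이해를 돕기 위한 가정의 파이썬 스케치이며, ARN 값 등은 예시입니다.&lt;/p&gt;

```python
# EKS API 및 ConfigMap 모드에서 동일 주체가 중복 설정된 경우의
# 우선순위를 단순화한 개념 모델입니다. (실제 EKS 구현이 아닌 가정의 스케치)

def resolve_principal(arn, access_entries, aws_auth_configmap):
    """동일한 ARN이 양쪽에 있으면 EKS API(access entry)가 우선 적용됩니다."""
    if arn in access_entries:           # 1순위: EKS API access entry
        return access_entries[arn]
    return aws_auth_configmap.get(arn)  # 2순위: aws-auth ConfigMap (없으면 None)

# 예시 데이터 (ARN은 가상의 값)
access_entries = {"arn:aws:iam::111122223333:user/testuser": {"groups": ["admin-group"]}}
aws_auth_configmap = {"arn:aws:iam::111122223333:user/testuser": {"groups": ["system:masters"]}}

# 중복 설정 시 access entry 쪽 결과가 반환됩니다.
print(resolve_principal("arn:aws:iam::111122223333:user/testuser",
                        access_entries, aws_auth_configmap))
```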
&lt;p data-ke-size=&quot;size16&quot;&gt;이제 실제 EKS 환경에서 동작을 확인해보겠습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;클러스터 액세스: ConfigMap&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;해당 옵션의 동작을 살펴보기 위해 아래와 같이 testuser를 생성하고, EKS 클러스터에 접근하기 위한 권한을 할당하는 과정을 진행하겠습니다.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# testuser 사용자 생성
aws iam create-user --user-name testuser

# 사용자에게 프로그래밍 방식 액세스 권한 부여
aws iam create-access-key --user-name testuser
{
    &quot;AccessKey&quot;: {
        &quot;UserName&quot;: &quot;testuser&quot;,
        &quot;AccessKeyId&quot;: &quot;AKIA5ILF2##&quot;,
        &quot;Status&quot;: &quot;Active&quot;,
        &quot;SecretAccessKey&quot;: &quot;TxhhwsU8##&quot;,
        &quot;CreateDate&quot;: &quot;2023-05-23T07:40:09+00:00&quot;
    }
}
# testuser 사용자에 정책을 추가
aws iam attach-user-policy --policy-arn arn:aws:iam::aws:policy/AdministratorAccess --user-name testuser

# 아래 실습은 kubectl을 신규로 세팅하기 위해 기존 aws configure가 되지 않은 VM에서 진행합니다.
# testuser 자격증명 설정
aws configure
AWS Access Key ID [None]: ...
AWS Secret Access Key [None]: ....
Default region name [None]: ap-northeast-2

# get-caller-identity 확인
aws sts get-caller-identity --query Arn
&quot;arn:aws:iam::911283464785:user/testuser&quot;

# testuser에 대한 kubeconfig를 획득합니다.
CLUSTER_NAME=myeks
aws eks update-kubeconfig --name $CLUSTER_NAME --user-alias testuser

# kubectl 시도 
kubectl get node
E0315 22:46:29.480897    1795 memcache.go:265] &quot;Unhandled Error&quot; err=&quot;couldn't get current server API group list: the server has asked for the client to provide credentials&quot;
E0315 22:46:30.514466    1795 memcache.go:265] &quot;Unhandled Error&quot; err=&quot;couldn't get current server API group list: the server has asked for the client to provide credentials&quot;
E0315 22:46:31.629986    1795 memcache.go:265] &quot;Unhandled Error&quot; err=&quot;couldn't get current server API group list: the server has asked for the client to provide credentials&quot;
E0315 22:46:32.658748    1795 memcache.go:265] &quot;Unhandled Error&quot; err=&quot;couldn't get current server API group list: the server has asked for the client to provide credentials&quot;
E0315 22:46:33.649009    1795 memcache.go:265] &quot;Unhandled Error&quot; err=&quot;couldn't get current server API group list: the server has asked for the client to provide credentials&quot;
error: You must be logged in to the server (the server has asked for the client to provide credentials)

# kubeconfig 확인
cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ~
    server: ~
contexts:
- context:
    cluster: arn:aws:eks:ap-northeast-2:xx:cluster/myeks
    user: testuser
  name: testuser
current-context: testuser
kind: Config
preferences: {}
users:
- name: testuser
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - --region
      - ap-northeast-2
      - eks
      - get-token
      - --cluster-name
      - myeks
      - --output
      - json
      command: aws

kubectl get cm -n kube-system aws-auth -o yaml

apiVersion: v1
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::xx:role/eksctl-myeks-nodegroup-ng1-NodeInstanceRole-e7CGUnBoQC96
      username: system:node:{{EC2PrivateDNSName}}
kind: ConfigMap
metadata:
  creationTimestamp: &quot;2025-03-15T13:10:23Z&quot;
  name: aws-auth
  namespace: kube-system
  resourceVersion: &quot;2028&quot;
  uid: 13151df2-4cd2-4fc2-92dc-2b0289a1be55&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;testuser도 AdministratorAccess 권한을 가지고 있지만, 실제로 EKS의 API Server에 인증되는 권한은 없습니다. (단, ConfigMap에는 정보가 없지만 EKS를 생성한 admin 계정은 EKS API의 Access Entry에 등록되어 권한이 있습니다.)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
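&lt;p data-ke-size=&quot;size16&quot;&gt;참고로 위 kubeconfig의 exec 플러그인이 실행하는 &lt;code&gt;aws eks get-token&lt;/code&gt;은 STS GetCallerIdentity 사전 서명 URL을 인코딩한 베어러 토큰을 생성합니다. 아래는 토큰 형식만을 흉내 낸 파이썬 스케치이며, URL 값은 예시입니다.&lt;/p&gt;

```python
import base64

# 가정: 아래 presigned_url은 실제 STS 사전 서명 URL이 아니라 형식만 흉내 낸 예시 값입니다.
presigned_url = "https://sts.ap-northeast-2.amazonaws.com/?Action=GetCallerIdentity&Version=2011-06-15"

def make_eks_token(url: str) -> str:
    """aws eks get-token이 생성하는 베어러 토큰의 형식:
    URL-safe base64로 인코딩한 뒤 패딩(=)을 제거하고 k8s-aws-v1. 접두사를 붙입니다."""
    encoded = base64.urlsafe_b64encode(url.encode()).decode().rstrip("=")
    return "k8s-aws-v1." + encoded

token = make_eks_token(presigned_url)
print(token[:30])  # k8s-aws-v1. 로 시작하는 토큰
```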
&lt;p data-ke-size=&quot;size16&quot;&gt;아래와 같이 iamidentitymapping을 생성하면 aws-auth ConfigMap이 업데이트됩니다.&lt;/p&gt;
&lt;pre class=&quot;vim&quot;&gt;&lt;code&gt;# Creates a mapping from IAM role or user to Kubernetes user and groups
eksctl get iamidentitymapping --cluster $CLUSTER_NAME
ARN                                                                                     USERNAME                                GROUPS                          ACCOUNT
arn:aws:iam::xx:role/eksctl-myeks-nodegroup-ng1-NodeInstanceRole-e7CGUnBoQC96 system:node:{{EC2PrivateDNSName}}       system:bootstrappers,system:nodes

# IAM Identity Mapping 생성
ACCOUNT_ID=$(aws sts get-caller-identity --query 'Account' --output text)
eksctl create iamidentitymapping --cluster $CLUSTER_NAME --username testuser --group system:masters --arn arn:aws:iam::$ACCOUNT_ID:user/testuser

# 확인
eksctl get iamidentitymapping --cluster $CLUSTER_NAME
ARN                                                                                     USERNAME                                GROUPS                          ACCOUNT
arn:aws:iam::xx:role/eksctl-myeks-nodegroup-ng1-NodeInstanceRole-e7CGUnBoQC96 system:node:{{EC2PrivateDNSName}}       system:bootstrappers,system:nodes
arn:aws:iam::xx:user/testuser                                                 testuser                                system:masters

kubectl get cm -n kube-system aws-auth -o yaml
...
  mapUsers: |
    - groups:
      - system:masters
      userarn: arn:aws:iam::xx:user/testuser
      username: testuser
...&lt;/code&gt;&lt;/pre&gt;
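&lt;p data-ke-size=&quot;size16&quot;&gt;위 mapUsers 항목이 인증 시 어떻게 해석되는지를 단순화하면 아래와 같습니다. aws-iam-authenticator의 실제 구현이 아니라, 개념 이해를 위한 가정의 스케치입니다.&lt;/p&gt;

```python
# aws-auth ConfigMap의 mapUsers 항목이 해석되는 과정을 단순화한 모델입니다.
# (aws-iam-authenticator의 실제 구현이 아닌 가정의 개념 코드, ARN은 예시)
map_users = [
    {"userarn": "arn:aws:iam::111122223333:user/testuser",
     "username": "testuser",
     "groups": ["system:masters"]},
]

def resolve_identity(caller_arn):
    """호출자의 IAM ARN을 쿠버네티스 username/groups로 변환합니다."""
    for entry in map_users:
        if entry["userarn"] == caller_arn:
            return entry["username"], entry["groups"]
    return None  # 매핑이 없으면 인증 실패(Unauthorized)
```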
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;다만 IAM Identity Mapping 생성 직후에 바로 kubectl이 가능해지는 것은 아니며, 반영에 다소 시간이 걸릴 수 있습니다.&lt;/p&gt;
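&lt;p data-ke-size=&quot;size16&quot;&gt;이런 반영 지연을 감안하면 아래와 같은 재시도 패턴으로 확인할 수 있습니다. check 자리에 실제로는 kubectl 호출이 들어간다고 가정한 스케치입니다.&lt;/p&gt;

```python
import time

def wait_until(check, timeout=60.0, interval=5.0):
    """check()가 True를 반환할 때까지 interval 간격으로 재시도합니다.
    실제 사용 시에는 check 자리에 kubectl 호출(예: subprocess 실행)을 넣는다고 가정합니다."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

# 예시: 세 번째 호출에서 성공하는 가짜 check 함수
attempts = {"n": 0}
def fake_check():
    attempts["n"] += 1
    return attempts["n"] >= 3

print(wait_until(fake_check, timeout=5.0, interval=0.01))  # True
```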
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이후에 실행해보면 비로소 EKS에서 kubectl이 성공합니다.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# 시도
kubectl get node 
NAME                                               STATUS   ROLES    AGE   VERSION
ip-192-168-1-41.ap-northeast-2.compute.internal    Ready    &amp;lt;none&amp;gt;   64m   v1.31.5-eks-5d632ec
ip-192-168-2-79.ap-northeast-2.compute.internal    Ready    &amp;lt;none&amp;gt;   65m   v1.31.5-eks-5d632ec
ip-192-168-3-202.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   65m   v1.31.5-eks-5d632ec
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;실습을 마무리하고, 다음 실습을 위해서 iamidentitymapping을 삭제하겠습니다.&lt;/p&gt;
&lt;pre class=&quot;elixir&quot;&gt;&lt;code&gt;# testuser IAM 맵핑 삭제
eksctl delete iamidentitymapping --cluster $CLUSTER_NAME --arn  arn:aws:iam::$ACCOUNT_ID:user/testuser

# Get IAM identity mapping(s)
eksctl get iamidentitymapping --cluster $CLUSTER_NAME
kubectl get cm -n kube-system aws-auth -o yaml&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;클러스터 액세스: EKS API&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;웹 콘솔의 EKS&amp;gt;Access&amp;gt;IAM access entries를 보면 현재 할당된 권한을 확인할 수 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;현재 EKS API and ConfigMap 모드에서 해당 클러스터를 생성한 관리자 계정은 이미 AmazonEKSClusterAdminPolicy를 할당받은 것으로 확인됩니다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1753&quot; data-origin-height=&quot;652&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/ObJpu/btsML2cxkN8/EKLaB3UocgQd3oYL1xcPp1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/ObJpu/btsML2cxkN8/EKLaB3UocgQd3oYL1xcPp1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/ObJpu/btsML2cxkN8/EKLaB3UocgQd3oYL1xcPp1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FObJpu%2FbtsML2cxkN8%2FEKLaB3UocgQd3oYL1xcPp1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1753&quot; height=&quot;652&quot; data-origin-width=&quot;1753&quot; data-origin-height=&quot;652&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;먼저 아래 명령으로 EKS API 액세스 모드로 변경합니다. 한번 변경하면 다시 기존 옵션으로 원복이 불가능한 점에 유의해야 합니다.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# EKS API 액세스모드로 변경
aws eks update-cluster-config --name $CLUSTER_NAME --access-config authenticationMode=API
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;웹 콘솔에서도 변경된 것을 확인할 수 있습니다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1760&quot; data-origin-height=&quot;230&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/kUCTz/btsML0eIAaO/hbguKoAUoO9ZRITKvCBWV0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/kUCTz/btsML0eIAaO/hbguKoAUoO9ZRITKvCBWV0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/kUCTz/btsML0eIAaO/hbguKoAUoO9ZRITKvCBWV0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FkUCTz%2FbtsML0eIAaO%2FhbguKoAUoO9ZRITKvCBWV0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1760&quot; height=&quot;230&quot; data-origin-width=&quot;1760&quot; data-origin-height=&quot;230&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;참고로 아래 문서에서 EKS를 위해 제공되는 Access Policy와 각 Policy에 할당된 권한을 확인할 수 있습니다. CLI에서는 &lt;code&gt;aws eks list-access-policies&lt;/code&gt;를 통해 확인 가능합니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://docs.aws.amazon.com/eks/latest/userguide/access-policy-permissions.html&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://docs.aws.amazon.com/eks/latest/userguide/access-policy-permissions.html&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;357&quot; data-origin-height=&quot;517&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/u2AeH/btsMLPEupHQ/ZdzveHWqePfFnKKRyoyGz0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/u2AeH/btsMLPEupHQ/ZdzveHWqePfFnKKRyoyGz0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/u2AeH/btsMLPEupHQ/ZdzveHWqePfFnKKRyoyGz0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fu2AeH%2FbtsMLPEupHQ%2FZdzveHWqePfFnKKRyoyGz0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;357&quot; height=&quot;517&quot; data-origin-width=&quot;357&quot; data-origin-height=&quot;517&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;현재 생성된 Access Entry를 확인할 수 있습니다.&lt;/p&gt;
&lt;pre class=&quot;dsconfig&quot;&gt;&lt;code&gt;# 현재 생성된 Access Entry 확인
aws eks list-access-entries --cluster-name $CLUSTER_NAME | jq
{
  &quot;accessEntries&quot;: [
    &quot;arn:aws:iam::xx:role/aws-service-role/eks.amazonaws.com/AWSServiceRoleForAmazonEKS&quot;,
    &quot;arn:aws:iam::xx:role/eksctl-myeks-nodegroup-ng1-NodeInstanceRole-e7CGUnBoQC96&quot;,
    &quot;arn:aws:iam::xx:user/eksadmin&quot;
  ]
}

# admin 계정의 Associated Access Policy 확인 -&amp;gt; AmazonEKSClusterAdminPolicy
export ACCOUNT_ID=$(aws sts get-caller-identity --query 'Account' --output text)
aws eks list-associated-access-policies --cluster-name $CLUSTER_NAME --principal-arn arn:aws:iam::$ACCOUNT_ID:user/eksadmin | jq
{
    &quot;associatedAccessPolicies&quot;: [
        {
            &quot;policyArn&quot;: &quot;arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy&quot;,
            &quot;accessScope&quot;: {
                &quot;type&quot;: &quot;cluster&quot;,
                &quot;namespaces&quot;: []
            },
            &quot;associatedAt&quot;: &quot;2025-03-15T21:56:26.361000+09:00&quot;,
            &quot;modifiedAt&quot;: &quot;2025-03-15T21:56:26.361000+09:00&quot;
        }
    ],
    &quot;clusterName&quot;: &quot;myeks&quot;,
    &quot;principalArn&quot;: &quot;arn:aws:iam::xx:user/eksadmin&quot;
}&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;앞서 생성한 testuser에 대해서 Access Entry를 생성하고 Associated Access Policy를 연결합니다.&lt;/p&gt;
&lt;pre class=&quot;dsconfig&quot;&gt;&lt;code&gt;# testuser 의 access entry 생성
aws eks create-access-entry --cluster-name $CLUSTER_NAME --principal-arn arn:aws:iam::$ACCOUNT_ID:user/testuser
aws eks list-access-entries --cluster-name $CLUSTER_NAME | jq -r .accessEntries[]

# testuser에 AmazonEKSClusterAdminPolicy 연동
aws eks associate-access-policy --cluster-name $CLUSTER_NAME --principal-arn arn:aws:iam::$ACCOUNT_ID:user/testuser \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy --access-scope type=cluster

#  Associated Access Policy 확인
aws eks list-associated-access-policies --cluster-name $CLUSTER_NAME --principal-arn arn:aws:iam::$ACCOUNT_ID:user/testuser
{
    &quot;associatedAccessPolicies&quot;: [
        {
            &quot;policyArn&quot;: &quot;arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy&quot;,
            &quot;accessScope&quot;: {
                &quot;type&quot;: &quot;cluster&quot;,
                &quot;namespaces&quot;: []
            },
            &quot;associatedAt&quot;: &quot;2025-03-15T23:30:17.290000+09:00&quot;,
            &quot;modifiedAt&quot;: &quot;2025-03-15T23:30:17.290000+09:00&quot;
        }
    ],
    &quot;clusterName&quot;: &quot;myeks&quot;,
    &quot;principalArn&quot;: &quot;arn:aws:iam::xx:user/testuser&quot;
}&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;기존 testuser에서 EKS API를 통해서도 정상적으로 kubectl 수행이 가능합니다.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# kubectl 시도
kubectl get node 
NAME                                               STATUS   ROLES    AGE   VERSION
ip-192-168-1-41.ap-northeast-2.compute.internal    Ready    &amp;lt;none&amp;gt;   79m   v1.31.5-eks-5d632ec
ip-192-168-2-79.ap-northeast-2.compute.internal    Ready    &amp;lt;none&amp;gt;   79m   v1.31.5-eks-5d632ec
ip-192-168-3-202.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   79m   v1.31.5-eks-5d632ec

# 현재는 AmazonEKSClusterAdminPolicy 이기 때문에 해당 작업이 가능함
kubectl auth can-i delete pods --all-namespaces
yes

# 컨피그 맵에 값이 반영되지 않는 것을 알 수 있습니다.
kubectl get cm -n kube-system aws-auth -o yaml
...
  mapUsers: |
    []
...&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;한편 Access Entry 자체를 쿠버네티스 그룹과 직접 매핑할 수도 있습니다. 먼저 앞서 생성한 Access Entry를 제거하고, 쿠버네티스 그룹과 매핑해 보겠습니다.&lt;/p&gt;
&lt;pre class=&quot;dsconfig&quot;&gt;&lt;code&gt;# 기존 testuser access entry 제거
aws eks delete-access-entry --cluster-name $CLUSTER_NAME --principal-arn arn:aws:iam::$ACCOUNT_ID:user/testuser
aws eks list-access-entries --cluster-name $CLUSTER_NAME | jq -r .accessEntries[]

# 확인
(testuser:N/A) [root@operator-host-2 ~]# kubectl get no
error: You must be logged in to the server (Unauthorized)

# Cluster Role 생성
cat &amp;lt;&amp;lt;EoF&amp;gt; ~/pod-viewer-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-viewer-role
rules:
- apiGroups: [&quot;&quot;]
  resources: [&quot;pods&quot;]
  verbs: [&quot;list&quot;, &quot;get&quot;, &quot;watch&quot;]
EoF

kubectl apply -f ~/pod-viewer-role.yaml

# Cluster Rolebinding 생성
kubectl create clusterrolebinding viewer-role-binding --clusterrole=pod-viewer-role --group=pod-viewer


# Access Entry 생성 (--kubernetes-group 옵션 추가)
aws eks create-access-entry --cluster-name $CLUSTER_NAME --principal-arn arn:aws:iam::$ACCOUNT_ID:user/testuser --kubernetes-group pod-viewer

# access policy 목록에서는 연결된 정보가 보이지 않습니다.
aws eks list-associated-access-policies --cluster-name $CLUSTER_NAME --principal-arn arn:aws:iam::$ACCOUNT_ID:user/testuser

{
    &quot;associatedAccessPolicies&quot;: [],
    &quot;clusterName&quot;: &quot;myeks&quot;,
    &quot;principalArn&quot;: &quot;arn:aws:iam::xx:user/testuser&quot;
}

aws eks describe-access-entry --cluster-name $CLUSTER_NAME --principal-arn arn:aws:iam::$ACCOUNT_ID:user/testuser | jq
{
  &quot;accessEntry&quot;: {
    &quot;clusterName&quot;: &quot;myeks&quot;,
    &quot;principalArn&quot;: &quot;arn:aws:iam::xx:user/testuser&quot;,
    &quot;kubernetesGroups&quot;: [
      &quot;pod-viewer&quot;
    ],
    &quot;accessEntryArn&quot;: &quot;arn:aws:eks:ap-northeast-2:xx:access-entry/myeks/user/xx/testuser/4ccacd1e-2a2e-fd71-93e2-94ead13e95e3&quot;,
    &quot;createdAt&quot;: &quot;2025-03-15T23:36:39.977000+09:00&quot;,
    &quot;modifiedAt&quot;: &quot;2025-03-15T23:36:39.977000+09:00&quot;,
    &quot;tags&quot;: {},
    &quot;username&quot;: &quot;arn:aws:iam::xx:user/testuser&quot;,
    &quot;type&quot;: &quot;STANDARD&quot;
  }
}&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;아래와 같이 정보를 확인해봅니다.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# kubectl 시도 (node 조회는 불가, po는 조회 가능)
(testuser:N/A) [root@operator-host-2 ~]# kubectl get no
Error from server (Forbidden): nodes is forbidden: User &quot;arn:aws:iam::xx:user/testuser&quot; cannot list resource &quot;nodes&quot; in API group &quot;&quot; at the cluster scope
(testuser:N/A) [root@operator-host-2 ~]# kubectl get po
No resources found in default namespace.

# can-i 로 확인
kubectl auth can-i get pods --all-namespaces
yes
kubectl auth can-i delete pods --all-namespaces
no&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;EKS의 인증/인가에 대한 실습을 마무리하겠습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;3. AKS의 인증/인가&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Azure에서는 Microsoft Entra ID(이전 명칭: Azure AD(Azure Active Directory))라는 ID 및 액세스 관리 서비스를 제공합니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;AKS의 인증 또한 Entra ID를 이용할 수 있으며 Azure RBAC을 함께 사용하여 다양한 옵션을 제공하고 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Azure Portal에서 AKS의 Settings&amp;gt;Security configuration을 확인해 보면 AKS에서 선택 가능한 인증/인가 방식을 확인할 수 있습니다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1013&quot; data-origin-height=&quot;327&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/Z4Rim/btsMLJYJfHg/uwTRMOuZZcEmOzmZgVRo3K/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/Z4Rim/btsMLJYJfHg/uwTRMOuZZcEmOzmZgVRo3K/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/Z4Rim/btsMLJYJfHg/uwTRMOuZZcEmOzmZgVRo3K/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FZ4Rim%2FbtsMLJYJfHg%2FuwTRMOuZZcEmOzmZgVRo3K%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1013&quot; height=&quot;327&quot; data-origin-width=&quot;1013&quot; data-origin-height=&quot;327&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;사실 이 분류를 체계적으로 설명하는 문서가 없고, 아래 공식 문서 또한 각 용어를 산발적으로 설명하고 있어 이해하기 쉽지 않습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://learn.microsoft.com/en-us/azure/aks/concepts-identity&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://learn.microsoft.com/en-us/azure/aks/concepts-identity&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이것을 설명하는 데 있어서도 어려운 부분이라 단계별로 설명을 이어나가 보겠습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;사전 지식1&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;Microsoft Entra ID&lt;/b&gt;는 Azure와 M365를 포괄하는 ID 인증 관리 체계로 이해할 수 있습니다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;354&quot; data-origin-height=&quot;276&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/dyhx1P/btsMMHFrzeN/f0nmFQyljjPt3ba4SgBaBK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/dyhx1P/btsMMHFrzeN/f0nmFQyljjPt3ba4SgBaBK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/dyhx1P/btsMMHFrzeN/f0nmFQyljjPt3ba4SgBaBK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fdyhx1P%2FbtsMMHFrzeN%2Ff0nmFQyljjPt3ba4SgBaBK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;354&quot; height=&quot;276&quot; data-origin-width=&quot;354&quot; data-origin-height=&quot;276&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;Azure RBAC&lt;/b&gt;은 Azure 수준의 권한 부여 방식이라고 간단히 이해하고 넘어가겠습니다. Azure RBAC을 이용한 역할 할당은 각 수준별(관리 그룹, 구독, 리소스 그룹, 리소스) &lt;code&gt;액세스 제어(IAM)&lt;/code&gt; 메뉴에서 가능합니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://learn.microsoft.com/ko-kr/azure/role-based-access-control/media/shared/sub-role-assignments.png#lightbox&quot;&gt;&lt;img src=&quot;https://learn.microsoft.com/ko-kr/azure/role-based-access-control/media/shared/sub-role-assignments.png&quot; alt=&quot;Azure Portal의 액세스 제어(IAM) 페이지 스크린샷&quot; /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;출처: &lt;a href=&quot;https://learn.microsoft.com/ko-kr/azure/role-based-access-control/rbac-and-directory-admin-roles&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://learn.microsoft.com/ko-kr/azure/role-based-access-control/rbac-and-directory-admin-roles&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Azure 관점에서는 각 Azure 리소스의 액세스 제어(IAM)에 Microsoft Entra ID의 주체(사용자, 애플리케이션)와 Role을 할당하는 것으로 간단히 생각할 수 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;사전 지식2&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;AWS와 Azure의 IAM 부분에서 용어나 관점의 차이가 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;AWS의 Role은 사용자와 같은 주체의 의미입니다. Azure에서 이러한 주체는 Service Principal(SP)이나 Managed Identity(MI)라고 합니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Azure의 Role은 권한(Action)들의 집합을 의미합니다. AWS에서는 이 개념을 Policy라고 합니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;AWS의 IAM에서는 사용자나 Role에 Policy를 할당합니다. AWS의 IAM은 주체 관점의 RBAC(주체에 리소스+권한을 할당)을 구현하고 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Azure의 RBAC에서는 대상(리소스)에 사용자나 주체(SP, MI)와 Role을 매핑합니다. 즉 Azure는 리소스 관점의 RBAC(리소스에 사용자+권한을 할당)을 구현하고 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;예를 들어 특정 testuser에게 가상 머신의 관리자 권한을 준다고 할 때, AWS와 Azure의 방식은 아래와 같습니다.&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;AWS는 신규 Policy에 가상머신을 선택하고 관리자 권한을 부여하고, testuser에게 이 Policy를 할당합니다.&lt;/li&gt;
&lt;li&gt;Azure는 가상머신에서 testuser와 관리자 Role을 맵핑하여 권한을 할당합니다.&lt;/li&gt;
&lt;/ul&gt;
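&lt;p data-ke-size=&quot;size16&quot;&gt;위 두 방식의 차이를 단순화한 모델로 스케치하면 아래와 같습니다. 실제 AWS/Azure API가 아니라 관점 차이를 표현한 가정의 코드입니다.&lt;/p&gt;

```python
# AWS: 주체(사용자)에 정책(리소스+권한)을 연결
# Azure: 리소스에 (주체, 역할) 할당을 연결
# 두 RBAC 관점의 차이를 단순화한 개념 모델입니다. (이름/값은 예시)

# AWS 관점: testuser에게 vm-1에 대한 모든 작업을 허용하는 정책을 붙입니다.
aws_policies = {"testuser": [{"resource": "vm-1", "actions": {"*"}}]}

def aws_allowed(user, resource, action):
    return any(p["resource"] == resource and ("*" in p["actions"] or action in p["actions"])
               for p in aws_policies.get(user, []))

# Azure 관점: vm-1이라는 리소스에 (testuser, Administrator) 역할 할당을 붙입니다.
azure_assignments = {"vm-1": [("testuser", "Administrator")]}
role_actions = {"Administrator": {"*"}}

def azure_allowed(user, resource, action):
    return any(u == user and ("*" in role_actions[r] or action in role_actions[r])
               for (u, r) in azure_assignments.get(resource, []))
```

결과적으로 허용 여부는 같지만, 권한 정보를 "누가" 들고 있는지(주체 vs 리소스)가 다르다는 점이 핵심입니다.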
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;사전 지식3&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;AKS를 위한 Azure RBAC에서는 &lt;b&gt;AKS 리소스를 위한 Role&lt;/b&gt;과 &lt;b&gt;Kubernetes를 위한 Role&lt;/b&gt;이 구분되어 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;AKS 리소스를 위한 Role&lt;/b&gt;이라는 것은 Azure 리소스 차원에서 AKS에 대한 CRUD(클러스터 설정 변경, 노드 풀 스케일링 등)에 대한 권한입니다. 또한 &lt;code&gt;az aks get-credentials&lt;/code&gt;를 통한 kubeconfig를 획득하기 위한 권한도 별도로 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;Kubernetes를 위한 Role&lt;/b&gt;은 쿠버네티스 내부 리소스에 대한 CRUD(Deployment 생성, ConfigMap 조회 등)를 의미하며, Microsoft Entra ID authentication with Azure RBAC 모드에서는 Azure RBAC을 통해 Kubernetes에 대한 권한을 부여할 수 있다는 의미입니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;사전 지식4&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;그 다음은 &lt;b&gt;Local accounts&lt;/b&gt;라는 개념으로, 활성화된 경우 admin에 해당하는 local account가 기본적으로 생성됩니다. 이는 Microsoft Entra ID를 사용하는 경우에도 존재할 수 있으며, &lt;code&gt;az aks get-credentials&lt;/code&gt;에 &lt;code&gt;--admin&lt;/code&gt; 플래그를 사용하면 kubeconfig admin credentials을 획득할 수 있습니다. 이를 통해 관리자가 Entra ID 인증 없이 쿠버네티스에 접근할 수 있으나, 보안에 취약할 수 있어 local accounts를 비활성화할 수 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;참고: &lt;a href=&quot;https://learn.microsoft.com/en-us/azure/aks/manage-local-accounts-managed-azure-ad&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://learn.microsoft.com/en-us/azure/aks/manage-local-accounts-managed-azure-ad&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;사전 지식을 바탕으로 아래에 대해서 설명을 이어 나가겠습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Local accounts with Kubernetes RBAC&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Microsoft Entra ID와 인증을 연동하지 않은 모드입니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이 구성에서 &lt;code&gt;Azure Kubernetes Service Cluster User Role&lt;/code&gt;을 부여받은 사용자나 그룹은 &lt;code&gt;az aks get-credentials&lt;/code&gt;를 통해 kubeconfig를 획득할 수 있습니다. 이 kubeconfig는 Kubernetes의 admin 권한을 가집니다. 이후 Kubernetes RBAC을 통해서 인가를 구성할 수 있습니다. 이 구성에서는 user credentials이 admin 권한을 가지기 때문에 &lt;code&gt;--admin&lt;/code&gt; 플래그와 차이가 없습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이 방식은 Microsoft Entra ID를 통한 쿠버네티스 인증이 없습니다. 단순히 kubeconfig 를 획득할 수 있는 Role을 부여하거나 부여하지 않는 방식으로 사용자를 구분할 수 있지만, 획득한 kubeconfig는 모두 동일하게 쿠버네티스에 대한 admin 권한을 가집니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Microsoft Entra ID authentication with Kubernetes RBAC&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이 구성은 Microsoft Entra ID로 인증을 연동하고, 또한 Entra ID의 사용자나 그룹을 Kubernetes RBAC의 주체로 사용할 수 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;아래는 Microsoft Entra ID authentication with Kubernetes RBAC를 선택한 화면으로, Kubernetes의 admin에 해당하는 ClusterRoleBinding에 연결할 Entra ID 그룹을 지정할 수 있습니다. 지정된 그룹의 사용자는 쿠버네티스의 admin에 해당하는 권한을 할당받습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이때 사용자 시나리오는 관리자 그룹이 Kubernetes의 role/binding을 관리하고, 나머지 사용자나 그룹에 Kubernetes RBAC을 부여하여 사용하는 방식입니다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1012&quot; data-origin-height=&quot;123&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/q6Ho1/btsMLthFjlK/PdXU8bp2kKThnPRDLVruGK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/q6Ho1/btsMLthFjlK/PdXU8bp2kKThnPRDLVruGK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/q6Ho1/btsMLthFjlK/PdXU8bp2kKThnPRDLVruGK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fq6Ho1%2FbtsMLthFjlK%2FPdXU8bp2kKThnPRDLVruGK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1012&quot; height=&quot;123&quot; data-origin-width=&quot;1012&quot; data-origin-height=&quot;123&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;code&gt;Azure Kubernetes Service Cluster Admin Role&lt;/code&gt;을 가지는 사용자는 Kubernetes에 대한 admin 권한을 가진 kubeconfig를 획득할 수 있습니다. 이 때문에 local account를 비활성화하는 옵션이 아래에 표시되어 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이 구성에서도 &lt;code&gt;Azure Kubernetes Service Cluster User Role&lt;/code&gt;을 부여 받은 사용자나 그룹만 &lt;code&gt;az aks get-credentials&lt;/code&gt;을 통해 kubeconfig를 획득할 수 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이후 획득한 kubeconfig는 Microsoft Entra ID의 인증과 연동하도록 구성되어 있으며, 또한 Kubernetes RBAC 모드에서는 RoleBinding에서 Azure Entra ID의 사용자(UPN, User Principal Name)나 그룹(Object ID)을 지정할 수 있습니다.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dev-user-access
  namespace: dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: dev-user-full-access
subjects:
- kind: Group # 일반 유저인 경우 kind: User
  namespace: dev
  name: groupObjectId # 일반 유저인 경우 user principal name (UPN) 입력&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이 방식은 EKS의 인증/인가에서 ConfigMap을 사용하는 방식(IAM User를 쿠버네티스 그룹과 매핑하는 방식)과 유사합니다. 한편, EKS API의 access entry에서도 IAM User와 쿠버네티스 그룹을 매핑할 수 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Microsoft Entra ID authentication with Azure RBAC&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In this configuration, authentication is integrated with Microsoft Entra ID, and Kubernetes authorization is handled through Azure RBAC.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Here too, only users or groups granted the &lt;code&gt;Azure Kubernetes Service Cluster User Role&lt;/code&gt; can obtain a kubeconfig via &lt;code&gt;az aks get-credentials&lt;/code&gt;.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The kubeconfig obtained this way authenticates against Microsoft Entra ID, and Kubernetes authorization is configured through Azure RBAC.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;You can use one of the built-in roles below, or create a separate custom role.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://learn.microsoft.com/en-us/azure/aks/manage-azure-rbac?tabs=azure-cli#aks-built-in-roles&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://learn.microsoft.com/en-us/azure/aks/manage-azure-rbac?tabs=azure-cli#aks-built-in-roles&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;These roles are then assigned to users/groups through Azure RBAC. Since &lt;code&gt;--scope&lt;/code&gt; can target a specific namespace, it is clear that Azure RBAC supports management all the way down to the Kubernetes level.&lt;/p&gt;
&lt;pre class=&quot;dsconfig&quot;&gt;&lt;code&gt;# Assign a role at the AKS cluster scope
az role assignment create --role &quot;Azure Kubernetes Service RBAC Admin&quot; --assignee &amp;lt;AAD-ENTITY-ID&amp;gt; --scope $AKS_ID

# Assign a role scoped to a specific namespace
az role assignment create --role &quot;Azure Kubernetes Service RBAC Reader&quot; --assignee &amp;lt;AAD-ENTITY-ID&amp;gt; --scope $AKS_ID/namespaces/&amp;lt;namespace-name&amp;gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Note that a user with the &lt;code&gt;Azure Kubernetes Service Cluster Admin Role&lt;/code&gt; can still obtain a kubeconfig with admin privileges on the cluster. This is why local accounts can be disabled.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The advantage of Microsoft Entra ID authentication with Azure RBAC is that Kubernetes permissions are managed purely by assigning roles on the AKS resource, so every permission granted on AKS (both resource-level and Kubernetes-level) can be reviewed at the Azure level.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This approach resembles EKS's API-based authentication/authorization, which maps AWS IAM users to predefined policies. One difference is that EKS API access entries can also map to Kubernetes groups.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Summary of AKS authentication/authorization&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In summary, AKS requires that the permission to obtain a kubeconfig be granted only to specific users/groups.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Without Microsoft Entra ID integration, the cluster is used through an admin-privileged kubeconfig.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Kubernetes authentication can be integrated with Microsoft Entra ID, and for authorization you can choose between Kubernetes RBAC and Azure RBAC.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Finally, once integrated with Microsoft Entra ID, local accounts can be disabled.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Compared with EKS, the differences are that AKS has a separate permission for obtaining a kubeconfig, offers a local-account mode that skips Microsoft Entra ID authentication, and can handle authorization through Azure RBAC.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;4. Pod permissions in Kubernetes&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In Kubernetes, you assign a ServiceAccount to a pod and configure RBAC with that ServiceAccount as the subject, which grants the pod permissions on Kubernetes resources. For example, if a pod is responsible for deployments and its ServiceAccount is allowed CRUD on deployments, the pod can deploy Deployment resources.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
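The deployment example above can be sketched as a Role and RoleBinding granting a pod's ServiceAccount CRUD on Deployments (the namespace and names here are hypothetical):

```yaml
# Hypothetical Role/RoleBinding: grants a deployer pod's ServiceAccount CRUD on Deployments.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-manager
  namespace: dev
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployer-binding
  namespace: dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: deployment-manager
subjects:
- kind: ServiceAccount
  name: deployer-sa        # assigned to the pod via spec.serviceAccountName
  namespace: dev
```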
&lt;p data-ke-size=&quot;size16&quot;&gt;However, a pod accessing cloud resources themselves, rather than Kubernetes resources, is a different matter. For example, a pod might list AWS S3 buckets or upload files. This goes beyond what Kubernetes can authenticate/authorize: the pod must be authenticated/authorized by the cloud provider's identity and access management solution.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The earlier section on EKS authentication/authorization asked how a subject valid in AWS can use Kubernetes authentication/authorization; the topic here is the reverse: how a subject valid in Kubernetes can use AWS authentication/authorization.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;EKS provides IRSA (IAM Roles for Service Accounts) and Pod Identity for this purpose, which we will examine in detail in the next section on assigning pod permissions in EKS.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;5. Assigning pod permissions in EKS&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A pod with no permissions assigned inherits the node's permissions. On AWS, this means the IAM Role attached to the instance.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This can be verified as follows.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# Create awscli pods
cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: awscli-pod
spec:
  replicas: 2
  selector:
    matchLabels:
      app: awscli-pod
  template:
    metadata:
      labels:
        app: awscli-pod
    spec:
      containers:
      - name: awscli-pod
        image: amazon/aws-cli
        command: [&quot;tail&quot;]
        args: [&quot;-f&quot;, &quot;/dev/null&quot;]
      terminationGracePeriodSeconds: 0
EOF

# Verify the pods were created
kubectl get pod -owide

# Store the pod names in variables
APODNAME1=$(kubectl get pod -l app=awscli-pod -o jsonpath=&quot;{.items[0].metadata.name}&quot;)
APODNAME2=$(kubectl get pod -l app=awscli-pod -o jsonpath=&quot;{.items[1].metadata.name}&quot;)
echo $APODNAME1, $APODNAME2

# The awscli pods report the ARN of the EC2 InstanceProfile (IAM Role)
kubectl exec -it $APODNAME1 -- aws sts get-caller-identity --query Arn
&quot;arn:aws:sts::xx:assumed-role/eksctl-myeks-nodegroup-ng1-NodeInstanceRole-e7CGUnBoQC96/i-0e6fd9f697b0a4c2f&quot;
kubectl exec -it $APODNAME2 -- aws sts get-caller-identity --query Arn
&quot;arn:aws:sts::xx:assumed-role/eksctl-myeks-nodegroup-ng1-NodeInstanceRole-e7CGUnBoQC96/i-04ed117980b8faf7f&quot;

# Fails because the IAM Role lacks the permission
kubectl exec -it $APODNAME1 -- aws s3 ls
An error occurred (AccessDenied) when calling the ListBuckets operation: User: arn:aws:sts::xx:assumed-role/eksctl-myeks-nodegroup-ng1-NodeInstanceRole-e7CGUnBoQC96/i-0e6fd9f697b0a4c2f is not authorized to perform: s3:ListAllMyBuckets because no identity-based policy allows the s3:ListAllMyBuckets action
command terminated with exit code 254&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;One might think permissions could simply be granted through the instance's IAM Role, but then every pod running on the node would receive the same permissions, violating the principle of least privilege.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For reference, the ClusterConfig that creates the EKS cluster can specify IAM settings for the managed node group as follows.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;...
managedNodeGroups:
- amiFamily: AmazonLinux2023
  desiredCapacity: 3
  iam:
    withAddonPolicies:
      autoScaler: true
      certManager: true
      externalDNS: true
  instanceType: t3.medium
...&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the web console, following the IAM role attached to the instance shows the following.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1724&quot; data-origin-height=&quot;778&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bGmk6q/btsMMeDMIl8/BF6bspHruKCzODMbyG6NEK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bGmk6q/btsMMeDMIl8/BF6bspHruKCzODMbyG6NEK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bGmk6q/btsMMeDMIl8/BF6bspHruKCzODMbyG6NEK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbGmk6q%2FbtsMMeDMIl8%2FBF6bspHruKCzODMbyG6NEK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1724&quot; height=&quot;778&quot; data-origin-width=&quot;1724&quot; data-origin-height=&quot;778&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For this reason, a way to attach an IAM Role only to the pods that need it is required. Let's look at IRSA and Pod Identity, which EKS provides for assigning pod permissions.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;IRSA(IAM Roles for Service Accounts)&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;IRSA assigns an IAM Role with the required permissions to a ServiceAccount; a pod using that ServiceAccount then authenticates with AWS to access AWS resources. To make this work, the cluster's OIDC issuer issues a JWT, and a trust relationship with IAM allows AWS to verify that the token was issued by the cluster.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;IRSA works through the following steps.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;2048&quot; data-origin-height=&quot;1171&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/urupv/btsMLuU6rd4/g7vSNvH2PuR4vXcOFSVT60/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/urupv/btsMLuU6rd4/g7vSNvH2PuR4vXcOFSVT60/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/urupv/btsMLuU6rd4/g7vSNvH2PuR4vXcOFSVT60/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Furupv%2FbtsMLuU6rd4%2Fg7vSNvH2PuR4vXcOFSVT60%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2048&quot; height=&quot;1171&quot; data-origin-width=&quot;2048&quot; data-origin-height=&quot;1171&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://github.com/awskrug/security-group/blob/main/files/AWSKRUG_2024_02_EKS_ROLE_MANAGEMENT.pdf&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://github.com/awskrug/security-group/blob/main/files/AWSKRUG_2024_02_EKS_ROLE_MANAGEMENT.pdf&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
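The trust relationship mentioned above lives in the IAM Role's trust policy. A trust policy of the kind eksctl generates for IRSA looks roughly like this (the account ID and OIDC provider ID are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::111122223333:oidc-provider/oidc.eks.ap-northeast-2.amazonaws.com/id/EXAMPLE"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.ap-northeast-2.amazonaws.com/id/EXAMPLE:sub": "system:serviceaccount:default:my-sa",
          "oidc.eks.ap-northeast-2.amazonaws.com/id/EXAMPLE:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}
```

The `sub` condition is what scopes the role to a single namespace/ServiceAccount pair; leaving it broad (e.g. a wildcard) is the "not scoped precisely" security risk discussed later.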
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's walk through this hands-on.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Create an iamserviceaccount - AWS IAM role bound to a Kubernetes service account
eksctl create iamserviceaccount \
  --name my-sa \
  --namespace default \
  --cluster $CLUSTER_NAME \
  --approve \
  --attach-policy-arn $(aws iam list-policies --query 'Policies[?PolicyName==`AmazonS3ReadOnlyAccess`].Arn' --output text)

2025-03-15 23:58:25 [ℹ]  1 existing iamserviceaccount(s) (kube-system/aws-load-balancer-controller) will be excluded
2025-03-15 23:58:25 [ℹ]  1 iamserviceaccount (default/my-sa) was included (based on the include/exclude rules)
2025-03-15 23:58:25 [!]  serviceaccounts that exist in Kubernetes will be excluded, use --override-existing-serviceaccounts to override
2025-03-15 23:58:25 [ℹ]  1 task: {
    2 sequential sub-tasks: {
        create IAM role for serviceaccount &quot;default/my-sa&quot;,
        create serviceaccount &quot;default/my-sa&quot;,
    } }2025-03-15 23:58:25 [ℹ]  building iamserviceaccount stack &quot;eksctl-myeks-addon-iamserviceaccount-default-my-sa&quot;
2025-03-15 23:58:25 [ℹ]  deploying stack &quot;eksctl-myeks-addon-iamserviceaccount-default-my-sa&quot;
2025-03-15 23:58:25 [ℹ]  waiting for CloudFormation stack &quot;eksctl-myeks-addon-iamserviceaccount-default-my-sa&quot;
2025-03-15 23:58:56 [ℹ]  waiting for CloudFormation stack &quot;eksctl-myeks-addon-iamserviceaccount-default-my-sa&quot;
2025-03-15 23:58:56 [ℹ]  created serviceaccount &quot;default/my-sa&quot;

# Check the ServiceAccount
kubectl get sa
NAME      SECRETS   AGE
default   0         122m
my-sa     0         4m37s

kubectl describe sa my-sa
Name:                my-sa
Namespace:           default
Labels:              app.kubernetes.io/managed-by=eksctl
Annotations:         eks.amazonaws.com/role-arn: arn:aws:iam::xx:role/eksctl-myeks-addon-iamserviceaccount-default--Role1-MYPji4gGE3x2
Image pull secrets:  &amp;lt;none&amp;gt;
Mountable secrets:   &amp;lt;none&amp;gt;
Tokens:              &amp;lt;none&amp;gt;
Events:              &amp;lt;none&amp;gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A new ServiceAccount was created in Kubernetes, with the IAM Role ARN set as an annotation.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The command above also triggers CloudFormation, which creates the IAM Role. Inspecting the stack's resources in CloudFormation confirms that the IAM Role was created.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Next, create a pod that uses IRSA through the generated ServiceAccount.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# Create the pod
cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: eks-iam-test3
spec:
  serviceAccountName: my-sa
  containers:
    - name: my-aws-cli
      image: amazon/aws-cli:latest
      command: ['sleep', '36000']
  restartPolicy: Never
  terminationGracePeriodSeconds: 0
EOF

# Check the created IAM service accounts
eksctl get iamserviceaccount --cluster $CLUSTER_NAME
NAMESPACE       NAME                            ROLE ARN
default         my-sa                           arn:aws:iam::xx:role/eksctl-myeks-addon-iamserviceaccount-default--Role1-MYPji4gGE3x2
kube-system     aws-load-balancer-controller    arn:aws:iam::xx:role/eksctl-myeks-addon-iamserviceaccount-kube-sys-Role1-oyUqvqXumqCT

kubectl exec -it eks-iam-test3 -- aws sts get-caller-identity --query Arn
&quot;arn:aws:sts::xx:assumed-role/eksctl-myeks-addon-iamserviceaccount-default--Role1-MYPji4gGE3x2/botocore-session-1742051277&quot;

# Allowed by the attached policy (no error occurs)
kubectl exec -it eks-iam-test3 -- aws s3 ls

# Not allowed by the attached policy
kubectl exec -it eks-iam-test3 -- aws ec2 describe-instances --region ap-northeast-2
An error occurred (UnauthorizedOperation) when calling the DescribeInstances operation: You are not authorized to perform this operation. User: arn:aws:sts::xx:assumed-role/eksctl-myeks-addon-iamserviceaccount-default--Role1-MYPji4gGE3x2/botocore-session-1742051277 is not authorized to perform: ec2:DescribeInstances because no identity-based policy allows the ec2:DescribeInstances action
command terminated with exit code 254&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The pod spec only names the ServiceAccount, but the Admission Controller's mutating webhook registers the additional settings needed, as shown below.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# When a pod uses this SA, the mutating webhook adds env vars and a volume, automatically injecting the AWS IAM role into the pod
kubectl get mutatingwebhookconfigurations pod-identity-webhook -o yaml
...
webhooks:
- admissionReviewVersions:
  - v1beta1
  clientConfig:
    caBundle: xxx
    url: https://127.0.0.1:23443/mutate
  failurePolicy: Ignore
  matchPolicy: Equivalent
  name: iam-for-pods.amazonaws.com
  namespaceSelector: {}
  objectSelector:
    matchExpressions:
    - key: eks.amazonaws.com/skip-pod-identity-webhook
      operator: DoesNotExist
  reinvocationPolicy: IfNeeded
  rules:
  - apiGroups:
    - &quot;&quot;
    apiVersions:
    - v1
    operations:
    - CREATE
    resources:
    - pods
    scope: '*'
  sideEffects: None
  timeoutSeconds: 10
...

# The Pod Identity Webhook's mutation adds the env vars below and one volume
kubectl get pod eks-iam-test3
kubectl get pod eks-iam-test3 -o yaml
...
    env:
    - name: AWS_STS_REGIONAL_ENDPOINTS
      value: regional
    - name: AWS_DEFAULT_REGION
      value: ap-northeast-2
    - name: AWS_REGION
      value: ap-northeast-2
    - name: AWS_ROLE_ARN
      value: arn:aws:iam::xx:role/eksctl-myeks-addon-iamserviceaccount-default--Role1-MYPji4gGE3x2
    - name: AWS_WEB_IDENTITY_TOKEN_FILE
      value: /var/run/secrets/eks.amazonaws.com/serviceaccount/token
...
    volumeMounts: 
    - mountPath: /var/run/secrets/eks.amazonaws.com/serviceaccount
      name: aws-iam-token
      readOnly: true
...
  volumes:
  - name: aws-iam-token
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          audience: sts.amazonaws.com
          expirationSeconds: 86400
          path: token
...&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's inspect the mounted token.&lt;/p&gt;
&lt;pre class=&quot;awk&quot;&gt;&lt;code&gt;# Check the token
kubectl exec -it eks-iam-test3 -- cat /var/run/secrets/eks.amazonaws.com/serviceaccount/token ; echo&lt;/code&gt;&lt;/pre&gt;
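The printed value is a JWT, so its claims can be inspected with jwt.io or a short helper like the one below (a minimal sketch; it only base64-decodes the payload and performs no signature verification):

```python
import base64
import json


def decode_jwt_payload(token: str) -> dict:
    """Decode the payload segment of a JWT without verifying its signature.

    Good enough for inspecting claims like iss/aud/sub;
    never use this in place of real verification.
    """
    payload_b64 = token.split(".")[1]
    # JWT segments are URL-safe base64 without padding; restore padding first.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```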
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Decoding it as a JWT reveals the following information.&lt;/p&gt;
&lt;pre class=&quot;json&quot;&gt;&lt;code&gt;{
  &quot;aud&quot;: [
    &quot;sts.amazonaws.com&quot;
  ],
  &quot;exp&quot;: 1742137620,
  &quot;iat&quot;: 1742051220,
  &quot;iss&quot;: &quot;https://oidc.eks.ap-northeast-2.amazonaws.com/id/EF882B...&quot;,
  &quot;jti&quot;: &quot;585eab74-0dd1-4047-a8d5-2181c3db9c13&quot;,
  &quot;kubernetes.io&quot;: {
    &quot;namespace&quot;: &quot;default&quot;,
    &quot;node&quot;: {
      &quot;name&quot;: &quot;ip-192-168-1-41.ap-northeast-2.compute.internal&quot;,
      &quot;uid&quot;: &quot;fb4be118-8152-452a-a0aa-eaff394022e2&quot;
    },
    &quot;pod&quot;: {
      &quot;name&quot;: &quot;eks-iam-test3&quot;,
      &quot;uid&quot;: &quot;784ef7a7-0acd-4394-bcd7-c4f71f5c101f&quot;
    },
    &quot;serviceaccount&quot;: {
      &quot;name&quot;: &quot;my-sa&quot;,
      &quot;uid&quot;: &quot;bd180c18-c38d-4974-8f34-8d9b2f7b37c8&quot;
    }
  },
  &quot;nbf&quot;: 1742051220,
  &quot;sub&quot;: &quot;system:serviceaccount:default:my-sa&quot;
}&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The iss claim of the JWT matches the OpenID Connect provider URL shown for the EKS cluster in the web console.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To wrap up, let's delete the IRSA-related resources.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# After verifying, delete the pods and remove IRSA
kubectl delete deploy awscli-pod
kubectl delete pod eks-iam-test3
eksctl delete iamserviceaccount --cluster $CLUSTER_NAME --name my-sa --namespace default

# Verify
eksctl get iamserviceaccount --cluster $CLUSTER_NAME
kubectl get sa&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Due to its management complexity and the security risk when ServiceAccounts are not scoped precisely, IRSA is now considered the classic approach, and Pod Identity was introduced later.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Pod Identity&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;IRSA was introduced in 2019, whereas Pod Identity is a relatively recent approach introduced in 2023, with many improvements in security and usability.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;With Pod Identity, credentials are issued through the EKS Pod Identity Agent, and authentication is handled by the EKS Auth API. See the flow below.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;844&quot; data-origin-height=&quot;846&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cW8lAg/btsMM66cY5B/K8ooWmvOAo7edFzRg2qN6k/img.jpg&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cW8lAg/btsMM66cY5B/K8ooWmvOAo7edFzRg2qN6k/img.jpg&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cW8lAg/btsMM66cY5B/K8ooWmvOAo7edFzRg2qN6k/img.jpg&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcW8lAg%2FbtsMM66cY5B%2FK8ooWmvOAo7edFzRg2qN6k%2Fimg.jpg&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;844&quot; height=&quot;846&quot; data-origin-width=&quot;844&quot; data-origin-height=&quot;846&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://aws.amazon.com/ko/blogs/containers/amazon-eks-pod-identity-a-new-way-for-applications-on-eks-to-obtain-iam-credentials/&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://aws.amazon.com/ko/blogs/containers/amazon-eks-pod-identity-a-new-way-for-applications-on-eks-to-obtain-iam-credentials/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's run through the following hands-on.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Pod Identity can be installed as an EKS add-on.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# Check available Pod Identity add-on versions
ADDON=eks-pod-identity-agent
aws eks describe-addon-versions \
    --addon-name $ADDON \
    --kubernetes-version 1.31 \
    --query &quot;addons[].addonVersions[].[addonVersion, compatibilities[].defaultVersion]&quot; \
    --output text
v1.3.5-eksbuild.2
False
v1.3.4-eksbuild.1
True
v1.3.2-eksbuild.2
False
v1.3.0-eksbuild.1
False
v1.2.0-eksbuild.1
False
v1.1.0-eksbuild.1
False
v1.0.0-eksbuild.1
False

# Install
eksctl create addon --cluster $CLUSTER_NAME --name eks-pod-identity-agent --version 1.3.5

# Verify
eksctl get addon --cluster $CLUSTER_NAME

NAME                    VERSION                 STATUS          ISSUES  IAMROLE                                                                                 UPDATE AVAILABLE CONFIGURATION VALUES            POD IDENTITY ASSOCIATION ROLES
aws-ebs-csi-driver      v1.40.1-eksbuild.1      ACTIVE          0       arn:aws:iam::xx:role/eksctl-myeks-addon-aws-ebs-csi-driver-Role1-15a6w33Xm4wR
coredns                 v1.11.4-eksbuild.2      ACTIVE          0
eks-pod-identity-agent  v1.3.5-eksbuild.2       CREATING        0
kube-proxy              v1.31.3-eksbuild.2      ACTIVE          0
metrics-server          v0.7.2-eksbuild.2       ACTIVE          0
vpc-cni                 v1.19.3-eksbuild.1      ACTIVE          0       arn:aws:iam::xx:role/eksctl-myeks-addon-vpc-cni-Role1-RS9uYpCia7T9            enableNetworkPolicy: &quot;true&quot;

# Installed as a DaemonSet
kubectl -n kube-system get daemonset eks-pod-identity-agent

NAME                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
eks-pod-identity-agent   3         3         3       3            3           &amp;lt;none&amp;gt;          33s&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Next, create a Pod Identity Association as follows.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# Create the Pod Identity Association
eksctl create podidentityassociation \
--cluster $CLUSTER_NAME \
--namespace default \
--service-account-name s3-sa \
--role-name s3-eks-pod-identity-role \
--permission-policy-arns arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess \
--region ap-northeast-2

2025-03-16 00:22:39 [ℹ]  1 task: {
    2 sequential sub-tasks: {
        create IAM role for pod identity association for service account &quot;default/s3-sa&quot;,
        create pod identity association for service account &quot;default/s3-sa&quot;,
    } }2025-03-16 00:22:39 [ℹ]  deploying stack &quot;eksctl-myeks-podidentityrole-default-s3-sa&quot;
2025-03-16 00:22:40 [ℹ]  waiting for CloudFormation stack &quot;eksctl-myeks-podidentityrole-default-s3-sa&quot;
2025-03-16 00:23:10 [ℹ]  waiting for CloudFormation stack &quot;eksctl-myeks-podidentityrole-default-s3-sa&quot;
2025-03-16 00:23:11 [ℹ]  created pod identity association for service account &quot;s3-sa&quot; in namespace &quot;default&quot;
2025-03-16 00:23:11 [ℹ]  all tasks were completed successfully

# Verify
kubectl get sa

NAME      SECRETS   AGE
default   0         142m

eksctl get podidentityassociation --cluster $CLUSTER_NAME
ASSOCIATION ARN                                                                                 NAMESPACE       SERVICE ACCOUNT NAME    IAM ROLE ARN            OWNER ARN
arn:aws:eks:ap-northeast-2:xx:podidentityassociation/myeks/a-8zp14caxh5ask7ed0        default         s3-sa                   arn:aws:iam::xx:role/s3-eks-pod-identity-role

aws eks list-pod-identity-associations --cluster-name $CLUSTER_NAME | jq
{
  &quot;associations&quot;: [
    {
      &quot;clusterName&quot;: &quot;myeks&quot;,
      &quot;namespace&quot;: &quot;default&quot;,
      &quot;serviceAccount&quot;: &quot;s3-sa&quot;,
      &quot;associationArn&quot;: &quot;arn:aws:eks:ap-northeast-2:xx:podidentityassociation/myeks/a-8zp14caxh5ask7ed0&quot;,
      &quot;associationId&quot;: &quot;a-8zp14caxh5ask7ed0&quot;
    }
  ]
}&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;code&gt;eksctl create podidentityassociation&lt;/code&gt; likewise works by running CloudFormation, and the ServiceAccount is not created automatically.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the web console, the information is visible under Pod Identity associations. IRSA, by contrast, is not exposed in the web console.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1756&quot; data-origin-height=&quot;262&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/mcZr2/btsMNcrG2Za/TKW4FfS31eeLjKSk3aHA7K/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/mcZr2/btsMNcrG2Za/TKW4FfS31eeLjKSk3aHA7K/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/mcZr2/btsMNcrG2Za/TKW4FfS31eeLjKSk3aHA7K/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FmcZr2%2FbtsMNcrG2Za%2FTKW4FfS31eeLjKSk3aHA7K%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1756&quot; height=&quot;262&quot; data-origin-width=&quot;1756&quot; data-origin-height=&quot;262&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A pod using Pod Identity likewise only needs to reference the associated ServiceAccount name.&lt;/p&gt;
&lt;pre class=&quot;sql&quot;&gt;&lt;code&gt;# Create the ServiceAccount and the pod
kubectl create sa s3-sa

cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: eks-pod-identity
spec:
  serviceAccountName: s3-sa
  containers:
    - name: my-aws-cli
      image: amazon/aws-cli:latest
      command: ['sleep', '36000']
  restartPolicy: Never
  terminationGracePeriodSeconds: 0
EOF

# Inspect the pod
kubectl get pod eks-pod-identity -o yaml 
...
    env:
    - name: AWS_STS_REGIONAL_ENDPOINTS
      value: regional
    - name: AWS_DEFAULT_REGION
      value: ap-northeast-2
    - name: AWS_REGION
      value: ap-northeast-2
    - name: AWS_CONTAINER_CREDENTIALS_FULL_URI
      value: http://169.254.170.23/v1/credentials
    - name: AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE
      value: /var/run/secrets/pods.eks.amazonaws.com/serviceaccount/eks-pod-identity-token
...
    - mountPath: /var/run/secrets/pods.eks.amazonaws.com/serviceaccount
      name: eks-pod-identity-token
      readOnly: true
...
  volumes:
  - name: eks-pod-identity-token
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          audience: pods.eks.amazonaws.com
          expirationSeconds: 86400
          path: eks-pod-identity-token
...

# Check the caller identity via Pod Identity
kubectl exec -it eks-pod-identity -- aws sts get-caller-identity --query Arn
&quot;arn:aws:sts::xx:assumed-role/s3-eks-pod-identity-role/eks-myeks-eks-pod-id-0382fb7d-1b2b-45c2-84bb-d5b123292589&quot;

# No error occurs
kubectl exec -it eks-pod-identity -- aws s3 ls

# Error occurs
kubectl exec -it eks-pod-identity -- aws ec2 describe-instances --region ap-northeast-2
An error occurred (UnauthorizedOperation) when calling the DescribeInstances operation: You are not authorized to perform this operation. User: arn:aws:sts::xx:assumed-role/s3-eks-pod-identity-role/eks-myeks-eks-pod-id-0382fb7d-1b2b-45c2-84bb-d5b123292589 is not authorized to perform: ec2:DescribeInstances because no identity-based policy allows the ec2:DescribeInstances action
command terminated with exit code 254&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
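The two injected variables, AWS_CONTAINER_CREDENTIALS_FULL_URI and AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE, are what an AWS SDK's container credential provider consumes: it reads the projected token and presents it as the Authorization header when calling the agent's credentials endpoint. A minimal sketch of that exchange (not the SDK's actual code; the endpoint URI and response fields follow the container-credentials convention):

```python
import json
import os
import urllib.request


def fetch_pod_identity_credentials(uri=None, token_file=None):
    """Fetch temporary credentials the way a container credential provider would:
    read the projected token and send it as the Authorization header to the
    Pod Identity Agent's credentials endpoint."""
    uri = uri or os.environ["AWS_CONTAINER_CREDENTIALS_FULL_URI"]
    token_file = token_file or os.environ["AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE"]
    with open(token_file) as f:
        token = f.read().strip()
    req = urllib.request.Request(uri, headers={"Authorization": token})
    with urllib.request.urlopen(req) as resp:
        # The agent responds with JSON containing fields such as
        # AccessKeyId, SecretAccessKey, Token, and Expiration.
        return json.load(resp)
```

Because the agent runs as a DaemonSet on every node and the endpoint is a link-local address, the exchange never leaves the node.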
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's check the token here as well.&lt;/p&gt;
&lt;pre class=&quot;awk&quot;&gt;&lt;code&gt;# Check the token
kubectl exec -it eks-pod-identity -- cat /var/run/secrets/pods.eks.amazonaws.com/serviceaccount/eks-pod-identity-token; echo&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;You can see that the aud (audience) differs: pods.eks.amazonaws.com instead of sts.amazonaws.com.&lt;/p&gt;
&lt;pre class=&quot;json&quot;&gt;&lt;code&gt;{
  &quot;aud&quot;: [
    &quot;pods.eks.amazonaws.com&quot;
  ],
  &quot;exp&quot;: 1742138781,
  &quot;iat&quot;: 1742052381,
  &quot;iss&quot;: &quot;https://oidc.eks.ap-northeast-2.amazonaws.com/id/EF882B...&quot;,
  &quot;jti&quot;: &quot;50985ae4-392b-4caa-ae3c-c4de13f58e3c&quot;,
  &quot;kubernetes.io&quot;: {
    &quot;namespace&quot;: &quot;default&quot;,
    &quot;node&quot;: {
      &quot;name&quot;: &quot;ip-192-168-2-79.ap-northeast-2.compute.internal&quot;,
      &quot;uid&quot;: &quot;9e28f27b-c643-4c76-9e33-3658ca4014ed&quot;
    },
    &quot;pod&quot;: {
      &quot;name&quot;: &quot;eks-pod-identity&quot;,
      &quot;uid&quot;: &quot;276bf7da-851f-4eec-87d5-07488e972f2a&quot;
    },
    &quot;serviceaccount&quot;: {
      &quot;name&quot;: &quot;s3-sa&quot;,
      &quot;uid&quot;: &quot;7438f7de-ec70-435f-8d34-de7a239a955e&quot;
    }
  },
  &quot;nbf&quot;: 1742052381,
  &quot;sub&quot;: &quot;system:serviceaccount:default:s3-sa&quot;
}&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;With the hands-on complete, let's delete the resources.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;eksctl delete podidentityassociation --cluster $CLUSTER_NAME --namespace default --service-account-name s3-sa
kubectl delete pod eks-pod-identity
kubectl delete sa s3-sa&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;6. Assigning pod permissions in AKS&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In AKS, Workload Identity is the mechanism for granting pods access to Azure resources.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In this approach, you create a Managed Identity in Azure, manage its permissions through Azure RBAC, and have a ServiceAccount use the Managed Identity.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Workload Identity on AKS is enabled with the &lt;code&gt;--enable-oidc-issuer&lt;/code&gt; and &lt;code&gt;--enable-workload-identity&lt;/code&gt; options.&lt;/p&gt;
&lt;pre class=&quot;jboss-cli&quot;&gt;&lt;code&gt;az aks create --resource-group &quot;${RESOURCE_GROUP}&quot; --name &quot;${CLUSTER_NAME}&quot; --enable-oidc-issuer --enable-workload-identity --generate-ssh-keys&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The document below walks through how AKS Workload Identity works.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://learn.microsoft.com/en-us/azure/aks/workload-identity-overview?tabs=dotnet&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://learn.microsoft.com/en-us/azure/aks/workload-identity-overview?tabs=dotnet&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;2815&quot; data-origin-height=&quot;1658&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/mzL1J/btsMMzgDaX3/27tiUMNRrozViYJ420vlwK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/mzL1J/btsMMzgDaX3/27tiUMNRrozViYJ420vlwK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/mzL1J/btsMMzgDaX3/27tiUMNRrozViYJ420vlwK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FmzL1J%2FbtsMMzgDaX3%2F27tiUMNRrozViYJ420vlwK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2815&quot; height=&quot;1658&quot; data-origin-width=&quot;2815&quot; data-origin-height=&quot;1658&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The following document explains how to configure this on an actual cluster.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://learn.microsoft.com/en-us/azure/aks/workload-identity-deploy-cluster&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://learn.microsoft.com/en-us/azure/aks/workload-identity-deploy-cluster&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The procedure can be summarized as follows.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;1) Enable Workload Identity and the OIDC issuer on the cluster&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;2) Create a Managed Identity&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;3) Create a Kubernetes ServiceAccount (annotated with the Managed Identity's Client ID)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;4) Create a Federated Identity Credential&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Links the Managed Identity, the OIDC issuer, and the subject (ServiceAccount)&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;export FEDERATED_IDENTITY_CREDENTIAL_NAME=&quot;myFedIdentity$RANDOM_ID&quot;
az identity federated-credential create --name ${FEDERATED_IDENTITY_CREDENTIAL_NAME} --identity-name &quot;${USER_ASSIGNED_IDENTITY_NAME}&quot; --resource-group &quot;${RESOURCE_GROUP}&quot; --issuer &quot;${AKS_OIDC_ISSUER}&quot; --subject system:serviceaccount:&quot;${SERVICE_ACCOUNT_NAMESPACE}&quot;:&quot;${SERVICE_ACCOUNT_NAME}&quot; --audience api://AzureADTokenExchange&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;5) Assign Azure resource permissions to the Managed Identity (omitted here)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;6) Create the application (with the &lt;code&gt;azure.workload.identity/use: &quot;true&quot;&lt;/code&gt; label and the ServiceAccount specified)&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;kubectl apply -f - &amp;lt;&amp;lt;EOF
apiVersion: v1
kind: Pod
metadata:
    name: sample-workload-identity-key-vault
    namespace: ${SERVICE_ACCOUNT_NAMESPACE}
    labels:
        azure.workload.identity/use: &quot;true&quot;
spec:
    serviceAccountName: ${SERVICE_ACCOUNT_NAME}
    containers:
      - image: ghcr.io/azure/azure-workload-identity/msal-go
        name: oidc
        env:
          - name: KEYVAULT_URL
            value: ${KEYVAULT_URL}
          - name: SECRET_NAME
            value: ${KEYVAULT_SECRET_NAME}
    nodeSelector:
        kubernetes.io/os: linux
EOF&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Compared to EKS with eksctl, this process does feel somewhat more involved. One way to see it: EKS added new APIs to make the feature simple to use, while AKS reuses existing Azure principals and handles everything through existing Azure APIs.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;마무리&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In this post we looked at EKS authentication/authorization and the ways to grant IAM permissions to pods.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Along the way, we saw how EKS integrates with IAM, AWS's identity and access management service.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This can be understood as authentication federated between two different systems. Hopefully it answers both how a principal valid in AWS can use Kubernetes authentication/authorization, and how a principal valid in Kubernetes can use AWS authentication/authorization.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To briefly summarize: an AWS user is authenticated against IAM through the API server's token webhook authentication, and authorization is then performed by Kubernetes RBAC after the verified ARN is mapped to a Kubernetes group (or access policy).&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A valid Kubernetes principal, in turn, federates with AWS IAM through OIDC or the Pod Identity Agent, obtains valid credentials, and can then access AWS resources.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the next post, we will look at using Fargate and hybrid nodes with EKS.&lt;/p&gt;
      <category>EKS</category>
      <category>AKS</category>
      <category>authentication</category>
      <category>Authorization</category>
      <category>aws</category>
      <category>Azure</category>
      <category>IRSA</category>
      <category>Pod Identity</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/40</guid>
      <comments>https://a-person.tistory.com/40#entry40comment</comments>
      <pubDate>Sun, 16 Mar 2025 00:54:50 +0900</pubDate>
    </item>
    <item>
      <title>[5-2] EKS의 오토스케일링 Part2</title>
      <link>https://a-person.tistory.com/39</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;본 포스트에서는 기본적인 쿠버네티스 환경의 스케일링 기술을 살펴보겠습니다. 이후 EKS의 오토스케일링 옵션을 살펴보고, 각 옵션을 실습을 통해 살펴도록 하겠습니다. 마지막으로 AKS의 오토스케일링 옵션을 EKS와 비교해 보겠습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This post is Part 2 on EKS autoscaling, picking up from the previous post with the Cluster Autoscaler.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Table of Contents&lt;/h2&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;EKS Autoscaling Part 1 (&lt;a href=&quot;https://a-person.tistory.com/38&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://a-person.tistory.com/38&lt;/a&gt;)&lt;/h4&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;Scaling in a Kubernetes environment&lt;/li&gt;
&lt;li&gt;Overview of EKS autoscaling&lt;/li&gt;
&lt;li&gt;Creating the lab environment&lt;/li&gt;
&lt;li&gt;HPA(Horizontal Pod Autoscaler)&lt;/li&gt;
&lt;li&gt;KEDA(Kubernetes Event-driven Autoscaler)&lt;/li&gt;
&lt;li&gt;VPA(Vertical Pod Autoscaler)&lt;/li&gt;
&lt;/ol&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;EKS Autoscaling Part 2&lt;/h4&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;&lt;b&gt;CA(Cluster Autoscaler)&lt;/b&gt;&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Karpenter&lt;/b&gt;&lt;/li&gt;
&lt;li&gt;&lt;b&gt;AKS autoscaling&lt;/b&gt;&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Autoscaling caveats&lt;/b&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;1. CA(Cluster Autoscaler)&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's look at CA (Cluster Autoscaler), which scales the nodes themselves.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Because most people's mental model of cloud autoscaling is driven by compute metrics, it is a common misconception that CPU/memory utilization of the underlying VM set (e.g., ASG, VMSS) is what triggers CA.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In reality, the Kubernetes CA acts in the following situations.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-is-cluster-autoscaler&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-is-cluster-autoscaler&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Cluster Autoscaler increases the size of the cluster when:&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;there are pods that failed to schedule on any of the current nodes due to insufficient resources.&lt;/li&gt;
&lt;li&gt;adding a node similar to the nodes currently present in the cluster would help.&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Cluster Autoscaler decreases the size of the cluster when some nodes are consistently unneeded for a significant amount of time. A node is unneeded when it has low utilization and all of its important pods can be moved elsewhere.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In other words, CA increases the node count when &lt;b&gt;pods cannot be scheduled&lt;/b&gt; because the current nodes lack resources. The node count therefore grows in response to Pending pods, or more precisely, unschedulable pods.&lt;/p&gt;
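This trigger can be illustrated with a minimal sketch (a hypothetical model, not CA's actual code; the data shapes are assumptions): a scale-up is warranted only when some pending pod fits on no existing node.

```python
def needs_scale_up(pending_pods, node_allocatable):
    """Return True if some pending pod cannot fit on any current node."""
    for pod in pending_pods:
        fits_somewhere = any(
            node["cpu"] >= pod["cpu"] and node["memory"] >= pod["memory"]
            for node in node_allocatable
        )
        if not fits_somewhere:
            return True  # an unschedulable pod exists, so add a node
    return False

# High utilization alone never triggers CA; only unschedulable pods do.
nodes = [{"cpu": 500, "memory": 512}]  # free allocatable per node (m / MiB)
print(needs_scale_up([{"cpu": 250, "memory": 256}], nodes))  # fits: False
print(needs_scale_up([{"cpu": 800, "memory": 256}], nodes))  # no fit: True
```

Note that a fully utilized but pod-free cluster returns False here, mirroring why VM-set metrics alone never cause CA to scale.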
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The following slide explains HPA and CA in EKS.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1361&quot; data-origin-height=&quot;720&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/A2G3k/btsMB0Ad90m/RBa65P1j33dzKv7oqc74u1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/A2G3k/btsMB0Ad90m/RBa65P1j33dzKv7oqc74u1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/A2G3k/btsMB0Ad90m/RBa65P1j33dzKv7oqc74u1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FA2G3k%2FbtsMB0Ad90m%2FRBa65P1j33dzKv7oqc74u1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1361&quot; height=&quot;720&quot; data-origin-width=&quot;1361&quot; data-origin-height=&quot;720&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://www.youtube.com/watch?v=jLuVZX6WQsw&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://www.youtube.com/watch?v=jLuVZX6WQsw&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;EKS's CA operates only on nodes that carry the two tags below. You can check this up front as follows.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# EKS nodes already carry the tags below
# k8s.io/cluster-autoscaler/enabled : true
# k8s.io/cluster-autoscaler/myeks : owned
aws ec2 describe-instances  --filters Name=tag:Name,Values=$CLUSTER_NAME-ng1-Node --query &quot;Reservations[*].Instances[*].Tags[*]&quot; --output yaml
...
- Key: k8s.io/cluster-autoscaler/myeks
      Value: owned
- Key: k8s.io/cluster-autoscaler/enabled
      Value: 'true'
...&lt;/code&gt;&lt;/pre&gt;
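The tag filtering implied by auto-discovery can be mimicked with a small sketch (illustrative only; the tag set and data shapes are assumptions, not CA's real implementation):

```python
# CA's --node-group-auto-discovery only manages ASGs carrying both
# required tags; this toy filter mimics that check.
REQUIRED_TAGS = {"k8s.io/cluster-autoscaler/enabled",
                 "k8s.io/cluster-autoscaler/myeks"}

def discoverable(asg_tags):
    """True when the ASG carries every tag the auto-discovery filter expects."""
    return REQUIRED_TAGS.issubset(tag["Key"] for tag in asg_tags)

tags = [{"Key": "k8s.io/cluster-autoscaler/enabled", "Value": "true"},
        {"Key": "k8s.io/cluster-autoscaler/myeks", "Value": "owned"}]
print(discoverable(tags))      # both tags present: True
print(discoverable(tags[:1]))  # cluster tag missing: False
```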
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;So that CA can scale out, raise the ASG's MaxSize to 6 beforehand.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# Check the current Auto Scaling group (ASG) settings
aws autoscaling describe-auto-scaling-groups \
    --query &quot;AutoScalingGroups[? Tags[? (Key=='eks:cluster-name') &amp;amp;&amp;amp; Value=='myeks']].[AutoScalingGroupName, MinSize, MaxSize,DesiredCapacity]&quot; \
    --output table
-----------------------------------------------------------------
|                   DescribeAutoScalingGroups                   |
+------------------------------------------------+----+----+----+
|  eks-ng1-70cab5c8-890d-c414-cc6d-c0d2eac06322  |  3 |  3 |  3 |
+------------------------------------------------+----+----+----+

# Raise MaxSize to 6
export ASG_NAME=$(aws autoscaling describe-auto-scaling-groups --query &quot;AutoScalingGroups[? Tags[? (Key=='eks:cluster-name') &amp;amp;&amp;amp; Value=='myeks']].AutoScalingGroupName&quot; --output text)
aws autoscaling update-auto-scaling-group --auto-scaling-group-name ${ASG_NAME} --min-size 3 --desired-capacity 3 --max-size 6

# Verify
aws autoscaling describe-auto-scaling-groups --query &quot;AutoScalingGroups[? Tags[? (Key=='eks:cluster-name') &amp;amp;&amp;amp; Value=='myeks']].[AutoScalingGroupName, MinSize, MaxSize,DesiredCapacity]&quot; --output table
-----------------------------------------------------------------
|                   DescribeAutoScalingGroups                   |
+------------------------------------------------+----+----+----+
|  eks-ng1-70cab5c8-890d-c414-cc6d-c0d2eac06322  |  3 |  6 |  3 |
+------------------------------------------------+----+----+----+&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now let's install CA on the cluster.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# Deploy the Cluster Autoscaler (CA)
curl -s -O https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml
...
            - ./cluster-autoscaler
            - --v=4
            - --stderrthreshold=info
            - --cloud-provider=aws
            - --skip-nodes-with-local-storage=false # whether nodes with local storage may be scaled down; false means they may
            - --expander=least-waste # how to choose which node group to expand; least-waste picks the option that minimizes wasted resources
            - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/&amp;lt;YOUR CLUSTER NAME&amp;gt;
...

sed -i -e &quot;s|&amp;lt;YOUR CLUSTER NAME&amp;gt;|$CLUSTER_NAME|g&quot; cluster-autoscaler-autodiscover.yaml
kubectl apply -f cluster-autoscaler-autodiscover.yaml&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;You can confirm that the cluster-autoscaler pod (Deployment) is running on a node.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# Verify
kubectl get pod -n kube-system | grep cluster-autoscaler
cluster-autoscaler-6df6d76b9f-ss5gd           1/1     Running   0          11s

# You can check the asg:tag values used by node-group-auto-discovery
kubectl describe deployments.apps -n kube-system cluster-autoscaler | grep node-group-auto-discovery
      --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/myeks

# (Optional) keep the worker node running the cluster-autoscaler pod from being evicted (scaled down)
kubectl -n kube-system annotate deployment.apps/cluster-autoscaler cluster-autoscaler.kubernetes.io/safe-to-evict=&quot;false&quot;&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Use the example below to observe CA in action.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# Monitor nodes
while true; do date; kubectl get node; echo &quot;------------------------------&quot; ; sleep 5; done

# Deploy a Sample App
# We will deploy a sample nginx application as a ReplicaSet of 1 Pod
cat &amp;lt;&amp;lt; EOF &amp;gt; nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-to-scaleout
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        service: nginx
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx-to-scaleout
        resources:
          limits:
            cpu: 500m
            memory: 512Mi
          requests:
            cpu: 500m
            memory: 512Mi
EOF
kubectl apply -f nginx.yaml
kubectl get deployment/nginx-to-scaleout

# Scale our ReplicaSet
# Let&amp;rsquo;s scale out the replicaset to 15
kubectl scale --replicas=15 deployment/nginx-to-scaleout &amp;amp;&amp;amp; date

deployment.apps/nginx-to-scaleout scaled
Thu Mar  6 23:48:09 KST 2025

# Verify
kubectl get po |grep Pending
nginx-to-scaleout-7cfb655fb5-4vtb9   0/1     Pending   0          20s
nginx-to-scaleout-7cfb655fb5-6z6lk   0/1     Pending   0          20s
nginx-to-scaleout-7cfb655fb5-9g7s6   0/1     Pending   0          20s
nginx-to-scaleout-7cfb655fb5-ckph6   0/1     Pending   0          20s
nginx-to-scaleout-7cfb655fb5-lqbhc   0/1     Pending   0          20s
nginx-to-scaleout-7cfb655fb5-vk5bb   0/1     Pending   0          20s
nginx-to-scaleout-7cfb655fb5-vwnv7   0/1     Pending   0          20s

# Confirm the node count increases automatically
kubectl get nodes
aws autoscaling describe-auto-scaling-groups \
    --query &quot;AutoScalingGroups[? Tags[? (Key=='eks:cluster-name') &amp;amp;&amp;amp; Value=='myeks']].[AutoScalingGroupName, MinSize, MaxSize,DesiredCapacity]&quot; \
    --output table
-----------------------------------------------------------------
|                   DescribeAutoScalingGroups                   |
+------------------------------------------------+----+----+----+
|  eks-ng1-70cab5c8-890d-c414-cc6d-c0d2eac06322  |  3 |  6 |  6 |
+------------------------------------------------+----+----+----+

# [Ops server EC2] Check CreateFleet API calls in the last hour - Link
# https://ap-northeast-2.console.aws.amazon.com/cloudtrailv2/home?region=ap-northeast-2#/events?EventName=CreateFleet
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=CreateFleet \
  --start-time &quot;$(date -d '1 hour ago' --utc +%Y-%m-%dT%H:%M:%SZ)&quot; \
  --end-time &quot;$(date --utc +%Y-%m-%dT%H:%M:%SZ)&quot;

{
    &quot;Events&quot;: [
        {
            &quot;EventId&quot;: &quot;d16d3ea9-58ef-4d1e-8776-6172f2ea0d4a&quot;,
            &quot;EventName&quot;: &quot;CreateFleet&quot;,
            &quot;ReadOnly&quot;: &quot;false&quot;,
            &quot;EventTime&quot;: &quot;2025-03-06T23:48:25+09:00&quot;,
            &quot;EventSource&quot;: &quot;ec2.amazonaws.com&quot;,
            &quot;Username&quot;: &quot;AutoScaling&quot;,
            &quot;Resources&quot;: [],
         ...

# (Reference) Event name : UpdateAutoScalingGroup
# https://ap-northeast-2.console.aws.amazon.com/cloudtrailv2/home?region=ap-northeast-2#/events?EventName=UpdateAutoScalingGroup&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;We measured how long EKS took to create nodes after Pending pods appeared, ran a similar test on AKS, and compared the Pending-pod-to-node-creation timings.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;EKS used t3.medium (2 vCPU, 4 GB), and AKS used the comparable burstable size Standard_B2s (2 vCPU, 4 GB).&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;EKS CA Test&lt;/h4&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# Scale out the application
kubectl scale --replicas=15 deployment/nginx-to-scaleout &amp;amp;&amp;amp; date
deployment.apps/nginx-to-scaleout scaled
Thu Mar  6 23:48:09 KST 2025

# Before new nodes appear
------------------------------
Thu Mar  6 23:49:03 KST 2025
NAME                                               STATUS   ROLES    AGE    VERSION
ip-192-168-1-87.ap-northeast-2.compute.internal    Ready    &amp;lt;none&amp;gt;   100m   v1.31.5-eks-5d632ec
ip-192-168-2-195.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   100m   v1.31.5-eks-5d632ec
ip-192-168-3-136.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   100m   v1.31.5-eks-5d632ec
...
# Nodes added -&amp;gt; took roughly 1m10s
------------------------------
Thu Mar  6 23:49:17 KST 2025
NAME                                               STATUS     ROLES    AGE    VERSION
ip-192-168-1-67.ap-northeast-2.compute.internal    NotReady   &amp;lt;none&amp;gt;   5s     v1.31.5-eks-5d632ec
ip-192-168-1-87.ap-northeast-2.compute.internal    Ready      &amp;lt;none&amp;gt;   100m   v1.31.5-eks-5d632ec
ip-192-168-2-195.ap-northeast-2.compute.internal   Ready      &amp;lt;none&amp;gt;   100m   v1.31.5-eks-5d632ec
ip-192-168-2-246.ap-northeast-2.compute.internal   NotReady   &amp;lt;none&amp;gt;   10s    v1.31.5-eks-5d632ec
ip-192-168-3-136.ap-northeast-2.compute.internal   Ready      &amp;lt;none&amp;gt;   100m   v1.31.5-eks-5d632ec
ip-192-168-3-229.ap-northeast-2.compute.internal   NotReady   &amp;lt;none&amp;gt;   4s     v1.31.5-eks-5d632ec
...
# All nodes Ready -&amp;gt; took roughly 1m30s
------------------------------
Thu Mar  6 23:49:32 KST 2025
NAME                                               STATUS   ROLES    AGE    VERSION
ip-192-168-1-67.ap-northeast-2.compute.internal    Ready    &amp;lt;none&amp;gt;   19s    v1.31.5-eks-5d632ec
ip-192-168-1-87.ap-northeast-2.compute.internal    Ready    &amp;lt;none&amp;gt;   100m   v1.31.5-eks-5d632ec
ip-192-168-2-195.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   100m   v1.31.5-eks-5d632ec
ip-192-168-2-246.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   24s    v1.31.5-eks-5d632ec
ip-192-168-3-136.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   100m   v1.31.5-eks-5d632ec
ip-192-168-3-229.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   18s    v1.31.5-eks-5d632ec&lt;/code&gt;&lt;/pre&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&amp;nbsp;&lt;/h4&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;AKS CA Test&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;On AKS, autoscaling was likewise configured with a range of 3 to 6 nodes.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;694&quot; data-origin-height=&quot;495&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/MWLsV/btsMC00A5hA/ij553aLDj0DQOodkVpI7LK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/MWLsV/btsMC00A5hA/ij553aLDj0DQOodkVpI7LK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/MWLsV/btsMC00A5hA/ij553aLDj0DQOodkVpI7LK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FMWLsV%2FbtsMC00A5hA%2Fij553aLDj0DQOodkVpI7LK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;694&quot; height=&quot;495&quot; data-origin-width=&quot;694&quot; data-origin-height=&quot;495&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Looking at the results and timings, there does not appear to be a meaningful difference: on both EKS and AKS the nodes were added in roughly a minute and a half. This test only captures rough timings, so please treat it as a reference.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# Scale out the application
$ kubectl scale --replicas=15 deployment/nginx-to-scaleout &amp;amp;&amp;amp; date
deployment.apps/nginx-to-scaleout scaled
Thu Mar  6 15:08:17 UTC 2025

# Before new nodes appear
------------------------------
Thu Mar  6 15:09:41 UTC 2025
aks-userpool-13024277-vmss000000    Ready    &amp;lt;none&amp;gt;   9m56s   v1.31.4
aks-userpool-13024277-vmss000001    Ready    &amp;lt;none&amp;gt;   9m51s   v1.31.4
aks-userpool-13024277-vmss000002    Ready    &amp;lt;none&amp;gt;   9m57s   v1.31.4
# Nodes added -&amp;gt; took roughly 1m30s
------------------------------
Thu Mar  6 15:09:46 UTC 2025
aks-userpool-13024277-vmss000000    Ready      &amp;lt;none&amp;gt;   10m     v1.31.4
aks-userpool-13024277-vmss000001    Ready      &amp;lt;none&amp;gt;   9m57s   v1.31.4
aks-userpool-13024277-vmss000002    Ready      &amp;lt;none&amp;gt;   10m     v1.31.4
aks-userpool-13024277-vmss000003    NotReady   &amp;lt;none&amp;gt;   1s      v1.31.4
aks-userpool-13024277-vmss000004    Ready      &amp;lt;none&amp;gt;   2s      v1.31.4
aks-userpool-13024277-vmss000005    Ready      &amp;lt;none&amp;gt;   1s      v1.31.4
# All nodes Ready -&amp;gt; took roughly 1m35s
------------------------------
Thu Mar  6 15:09:52 UTC 2025
aks-userpool-13024277-vmss000000    Ready    &amp;lt;none&amp;gt;   10m   v1.31.4
aks-userpool-13024277-vmss000001    Ready    &amp;lt;none&amp;gt;   10m   v1.31.4
aks-userpool-13024277-vmss000002    Ready    &amp;lt;none&amp;gt;   10m   v1.31.4
aks-userpool-13024277-vmss000003    Ready    &amp;lt;none&amp;gt;   6s    v1.31.4
aks-userpool-13024277-vmss000004    Ready    &amp;lt;none&amp;gt;   7s    v1.31.4
aks-userpool-13024277-vmss000005    Ready    &amp;lt;none&amp;gt;   6s    v1.31.4
------------------------------&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's delete all the resources before the next exercise.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# Delete the deployment from the exercise above, wait about 10 minutes to watch the node count shrink, then run the deletions below. If you delete CA right away instead, the worker nodes will stay at 4, so scale them down manually.
kubectl delete -f nginx.yaml

# Restore the ASG size
aws autoscaling update-auto-scaling-group --auto-scaling-group-name ${ASG_NAME} --min-size 3 --desired-capacity 3 --max-size 3
aws autoscaling describe-auto-scaling-groups --query &quot;AutoScalingGroups[? Tags[? (Key=='eks:cluster-name') &amp;amp;&amp;amp; Value=='myeks']].[AutoScalingGroupName, MinSize, MaxSize,DesiredCapacity]&quot; --output table

# Delete the Cluster Autoscaler
kubectl delete -f cluster-autoscaler-autodiscover.yaml&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The Karpenter exercise follows the official guide and uses a new cluster, so once this exercise is done we will also delete the lab environment created above, as shown below.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# eksctl delete cluster --name $CLUSTER_NAME &amp;amp;&amp;amp; aws cloudformation delete-stack --stack-name $CLUSTER_NAME
nohup sh -c &quot;eksctl delete cluster --name $CLUSTER_NAME &amp;amp;&amp;amp; aws cloudformation delete-stack --stack-name $CLUSTER_NAME&quot; &amp;gt; /root/delete.log 2&amp;gt;&amp;amp;1 &amp;amp;

# (Optional) watch the deletion progress
tail -f /root/delete.log&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For more on CA, you can refer to the AWS workshop below.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/ko-KR/100-scaling/200-cluster-scaling&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://catalog.us-east-1.prod.workshops.aws/workshops/9c0aa9ab-90a9-44a6-abe1-8dff360ae428/ko-KR/100-scaling/200-cluster-scaling&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;2. Karpenter&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;So far we have examined CA and walked through how it operates. CA scales nodes through the VM sets the CSP provides (e.g., ASG, VMSS).&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;However, because CA scales against user-defined node groups, it has the following limitations.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;First, things get complicated when many node groups are created for different requirements, and scaling happens from the node's perspective rather than from the pods' capacity (requests) perspective. In addition, because CA controls EC2 instances indirectly through an Auto Scaling group, some latency is to be expected.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1405&quot; data-origin-height=&quot;743&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bEj6Jt/btsMB7MFFPc/5ekYekODYcmdnkgYt1Brl0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bEj6Jt/btsMB7MFFPc/5ekYekODYcmdnkgYt1Brl0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bEj6Jt/btsMB7MFFPc/5ekYekODYcmdnkgYt1Brl0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbEj6Jt%2FbtsMB7MFFPc%2F5ekYekODYcmdnkgYt1Brl0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1405&quot; height=&quot;743&quot; data-origin-width=&quot;1405&quot; data-origin-height=&quot;743&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://www.youtube.com/watch?v=jLuVZX6WQsw&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://www.youtube.com/watch?v=jLuVZX6WQsw&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Karpenter was introduced to overcome CA's complexity and latency. Originally developed by AWS, Karpenter has since been open-sourced and can be used on other CSPs as well.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Karpenter is a high-performance, intelligent Kubernetes scaling tool. Unlike CA, it chooses an appropriately sized node based on the capacity of the Pending pods.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1156&quot; data-origin-height=&quot;609&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/JmdyK/btsMD3IzxIO/k5mjmlp4LzTQPS1l2f6pOk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/JmdyK/btsMD3IzxIO/k5mjmlp4LzTQPS1l2f6pOk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/JmdyK/btsMD3IzxIO/k5mjmlp4LzTQPS1l2f6pOk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FJmdyK%2FbtsMD3IzxIO%2Fk5mjmlp4LzTQPS1l2f6pOk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1156&quot; height=&quot;609&quot; data-origin-width=&quot;1156&quot; data-origin-height=&quot;609&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://www.youtube.com/watch?v=yMOaOlPvrgY&amp;amp;t=717s&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://www.youtube.com/watch?v=yMOaOlPvrgY&amp;amp;t=717s&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;It also requests instances through the EC2 Fleet API and monitors Pending pods through the Watch API.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1342&quot; data-origin-height=&quot;696&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/dmynWf/btsMC2KB1p9/ydfjRzZq6HvsQkDXsVyq8k/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/dmynWf/btsMC2KB1p9/ydfjRzZq6HvsQkDXsVyq8k/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/dmynWf/btsMC2KB1p9/ydfjRzZq6HvsQkDXsVyq8k/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FdmynWf%2FbtsMC2KB1p9%2FydfjRzZq6HvsQkDXsVyq8k%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1342&quot; height=&quot;696&quot; data-origin-width=&quot;1342&quot; data-origin-height=&quot;696&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://www.youtube.com/watch?v=jLuVZX6WQsw&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://www.youtube.com/watch?v=jLuVZX6WQsw&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In summary, CA and Karpenter differ in the following ways.&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;CA checks for Pending (unschedulable) pod events once every 10 seconds, whereas Karpenter can detect them immediately through a Watch.&lt;/li&gt;
&lt;li&gt;CA goes through an extra layer (CA -&amp;gt; ASG -&amp;gt; EC2 Fleet API), whereas Karpenter does not depend on ASGs and calls the EC2 Fleet API directly, which makes it faster. (If Pending Pods appear across multiple node groups, CA processes them sequentially, so it can be even slower.)&lt;/li&gt;
&lt;li&gt;CA scales up by the capacity configured on the node group rather than in proportion to the capacity of the Pending pods, so the new nodes can hardly be called right-sized. Karpenter, by contrast, can evaluate Pending pods as a batch and pick an instance size that fits their aggregate capacity.&lt;/li&gt;
&lt;/ul&gt;
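&lt;p data-ke-size=&quot;size16&quot;&gt;The right-sizing idea in the last bullet can be sketched as follows. This is an illustrative simulation, not Karpenter's actual code, and the instance catalog is a hypothetical subset of EC2 types.&lt;/p&gt;

```python
# Illustrative sketch (not Karpenter's real algorithm): batch the CPU requests
# of pending pods and pick the smallest instance type that fits them all.
# The catalog below is a hypothetical subset: (type name, vCPU count).
CATALOG = [
    ('c5.large', 2), ('c5.xlarge', 4), ('c5.2xlarge', 8), ('c5.4xlarge', 16),
]

def right_size(pending_cpu_requests):
    total = sum(pending_cpu_requests)
    # Choose the smallest type whose vCPU count covers the batched requests.
    for name, vcpu in sorted(CATALOG, key=lambda t: t[1]):
        if vcpu >= total:
            return name
    return None  # no single type fits the batch

# Five pods requesting 1 vCPU each fit on a single 8-vCPU node.
print(right_size([1, 1, 1, 1, 1]))  # c5.2xlarge
```

A node-group autoscaler, by contrast, would add a node of whatever fixed size the group was configured with, regardless of the batch.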
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Karpenter's workflow can be understood as shown below. In short, it consists of detection (watch) -&amp;gt; evaluation -&amp;gt; a Fleet request.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1369&quot; data-origin-height=&quot;717&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/RAUiF/btsMDxpT9ZA/6431ClhbP8OG1H2K5OruAk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/RAUiF/btsMDxpT9ZA/6431ClhbP8OG1H2K5OruAk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/RAUiF/btsMDxpT9ZA/6431ClhbP8OG1H2K5OruAk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FRAUiF%2FbtsMDxpT9ZA%2F6431ClhbP8OG1H2K5OruAk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1369&quot; height=&quot;717&quot; data-origin-width=&quot;1369&quot; data-origin-height=&quot;717&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://www.youtube.com/watch?v=jLuVZX6WQsw&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://www.youtube.com/watch?v=jLuVZX6WQsw&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To follow along, let's create a new EKS cluster.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# Set variables
export KARPENTER_NAMESPACE=&quot;kube-system&quot;
export KARPENTER_VERSION=&quot;1.2.1&quot;
export K8S_VERSION=&quot;1.32&quot;
export AWS_PARTITION=&quot;aws&quot; 
export CLUSTER_NAME=&quot;karpenter-demo&quot; # ${USER}-karpenter-demo
export AWS_DEFAULT_REGION=&quot;ap-northeast-2&quot;
export AWS_ACCOUNT_ID=&quot;$(aws sts get-caller-identity --query Account --output text)&quot;
export TEMPOUT=&quot;$(mktemp)&quot;
export ALIAS_VERSION=&quot;$(aws ssm get-parameter --name &quot;/aws/service/eks/optimized-ami/${K8S_VERSION}/amazon-linux-2023/x86_64/standard/recommended/image_id&quot; --query Parameter.Value | xargs aws ec2 describe-images --query 'Images[0].Name' --image-ids | sed -r 's/^.*(v[[:digit:]]+).*$/\1/')&quot;

# Verify
echo &quot;${KARPENTER_NAMESPACE}&quot; &quot;${KARPENTER_VERSION}&quot; &quot;${K8S_VERSION}&quot; &quot;${CLUSTER_NAME}&quot; &quot;${AWS_DEFAULT_REGION}&quot; &quot;${AWS_ACCOUNT_ID}&quot; &quot;${TEMPOUT}&quot; &quot;${ALIAS_VERSION}&quot;

# Create the IAM Policy/Role, SQS, and Event/Rule resources via a CloudFormation stack: takes about 3 minutes
## IAM Policy : KarpenterControllerPolicy-gasida-karpenter-demo
## IAM Role : KarpenterNodeRole-gasida-karpenter-demo
curl -fsSL https://raw.githubusercontent.com/aws/karpenter-provider-aws/v&quot;${KARPENTER_VERSION}&quot;/website/content/en/preview/getting-started/getting-started-with-karpenter/cloudformation.yaml  &amp;gt; &quot;${TEMPOUT}&quot; \
&amp;amp;&amp;amp; aws cloudformation deploy \
  --stack-name &quot;Karpenter-${CLUSTER_NAME}&quot; \
  --template-file &quot;${TEMPOUT}&quot; \
  --capabilities CAPABILITY_NAMED_IAM \
  --parameter-overrides &quot;ClusterName=${CLUSTER_NAME}&quot;


# Create the cluster: EKS cluster creation takes about 15 minutes
eksctl create cluster -f - &amp;lt;&amp;lt;EOF
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: ${CLUSTER_NAME}
  region: ${AWS_DEFAULT_REGION}
  version: &quot;${K8S_VERSION}&quot;
  tags:
    karpenter.sh/discovery: ${CLUSTER_NAME}

iam:
  withOIDC: true
  podIdentityAssociations:
  - namespace: &quot;${KARPENTER_NAMESPACE}&quot;
    serviceAccountName: karpenter
    roleName: ${CLUSTER_NAME}-karpenter
    permissionPolicyARNs:
    - arn:${AWS_PARTITION}:iam::${AWS_ACCOUNT_ID}:policy/KarpenterControllerPolicy-${CLUSTER_NAME}

iamIdentityMappings:
- arn: &quot;arn:${AWS_PARTITION}:iam::${AWS_ACCOUNT_ID}:role/KarpenterNodeRole-${CLUSTER_NAME}&quot;
  username: system:node:{{EC2PrivateDNSName}}
  groups:
  - system:bootstrappers
  - system:nodes
  ## If you intend to run Windows workloads, the kube-proxy group should be specified.
  # For more information, see https://github.com/aws/karpenter/issues/5099.
  # - eks:kube-proxy-windows

managedNodeGroups:
- instanceType: m5.large
  amiFamily: AmazonLinux2023
  name: ${CLUSTER_NAME}-ng
  desiredCapacity: 2
  minSize: 1
  maxSize: 10
  iam:
    withAddonPolicies:
      externalDNS: true

addons:
- name: eks-pod-identity-agent
EOF


# Verify the EKS deployment
eksctl get cluster
NAME            REGION          EKSCTL CREATED
karpenter-demo  ap-northeast-2  True

eksctl get nodegroup --cluster $CLUSTER_NAME
CLUSTER         NODEGROUP               STATUS  CREATED                 MIN SIZE        MAX SIZE        DESIRED CAPACITY        INSTANCE TYPE   IMAGE ID                ASG NAME                                                   TYPE
karpenter-demo  karpenter-demo-ng       ACTIVE  2025-03-06T15:38:46Z    1               10              2                       m5.large        AL2023_x86_64_STANDARD  eks-karpenter-demo-ng-96cab60d-f4b7-28dd-a83d-8366de887a29 managed


# Check the Kubernetes resources
kubectl get node --label-columns=node.kubernetes.io/instance-type,eks.amazonaws.com/capacityType,topology.kubernetes.io/zone
NAME                                                STATUS   ROLES    AGE     VERSION               INSTANCE-TYPE   CAPACITYTYPE   ZONE
ip-192-168-33-227.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   5m25s   v1.32.1-eks-5d632ec   m5.large        ON_DEMAND      ap-northeast-2a
ip-192-168-91-227.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   5m25s   v1.32.1-eks-5d632ec   m5.large        ON_DEMAND      ap-northeast-2b

kubectl get po -A
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   aws-node-9nppw                    2/2     Running   0          5m30s
kube-system   aws-node-x9ffn                    2/2     Running   0          5m30s
kube-system   coredns-844d8f59bb-j9jf9          1/1     Running   0          9m33s
kube-system   coredns-844d8f59bb-pqgpf          1/1     Running   0          9m33s
kube-system   eks-pod-identity-agent-bnshb      1/1     Running   0          5m30s
kube-system   eks-pod-identity-agent-f49wd      1/1     Running   0          5m30s
kube-system   kube-proxy-qqtss                  1/1     Running   0          5m29s
kube-system   kube-proxy-vk86h                  1/1     Running   0          5m30s
kube-system   metrics-server-74b6cb4f8f-dg8qk   1/1     Running   0          9m35s
kube-system   metrics-server-74b6cb4f8f-rkrhr   1/1     Running   0          9m35s&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To observe node creation during the exercise, let's also install kube-ops-view.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# kube-ops-view
helm repo add geek-cookbook https://geek-cookbook.github.io/charts/
helm install kube-ops-view geek-cookbook/kube-ops-view --version 1.2.2 --set service.main.type=LoadBalancer --set env.TZ=&quot;Asia/Seoul&quot; --namespace kube-system

# Access
echo -e &quot;http://$(kubectl get svc -n kube-system kube-ops-view -o jsonpath=&quot;{.status.loadBalancer.ingress[0].hostname}&quot;):8080/#scale=1.5&quot;&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now let's install Karpenter.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# Logout of helm registry to perform an unauthenticated pull against the public ECR
helm registry logout public.ecr.aws

# Set and verify variables for the Karpenter installation
export CLUSTER_ENDPOINT=&quot;$(aws eks describe-cluster --name &quot;${CLUSTER_NAME}&quot; --query &quot;cluster.endpoint&quot; --output text)&quot;
export KARPENTER_IAM_ROLE_ARN=&quot;arn:${AWS_PARTITION}:iam::${AWS_ACCOUNT_ID}:role/${CLUSTER_NAME}-karpenter&quot;
echo &quot;${CLUSTER_ENDPOINT} ${KARPENTER_IAM_ROLE_ARN}&quot;

# Install Karpenter
helm upgrade --install karpenter oci://public.ecr.aws/karpenter/karpenter --version &quot;${KARPENTER_VERSION}&quot; --namespace &quot;${KARPENTER_NAMESPACE}&quot; --create-namespace \
  --set &quot;settings.clusterName=${CLUSTER_NAME}&quot; \
  --set &quot;settings.interruptionQueue=${CLUSTER_NAME}&quot; \
  --set controller.resources.requests.cpu=1 \
  --set controller.resources.requests.memory=1Gi \
  --set controller.resources.limits.cpu=1 \
  --set controller.resources.limits.memory=1Gi \
  --wait

# Verify
helm list -n kube-system
NAME            NAMESPACE       REVISION        UPDATED                                 STATUS          CHART                   APP VERSION
karpenter       kube-system     1               2025-03-07 00:48:49.238176978 +0900 KST deployed        karpenter-1.2.1         1.2.1
kube-ops-view   kube-system     1               2025-03-07 00:47:14.936078967 +0900 KST deployed        kube-ops-view-1.2.2     20.4.0

kubectl get pod -n $KARPENTER_NAMESPACE |grep karpenter
karpenter-5bdb74ddd6-kx7bq        1/1     Running   0          113s
karpenter-5bdb74ddd6-qpzvh        1/1     Running   0          113s

kubectl get crd | grep karpenter
ec2nodeclasses.karpenter.k8s.aws             2025-03-06T15:48:48Z
nodeclaims.karpenter.sh                      2025-03-06T15:48:48Z
nodepools.karpenter.sh                       2025-03-06T15:48:48Z&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Next, create a NodePool and an EC2NodeClass.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# Check the variable
echo $ALIAS_VERSION
v20250228

# Create the NodePool and EC2NodeClass
cat &amp;lt;&amp;lt;EOF | envsubst | kubectl apply -f -
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: [&quot;amd64&quot;]
        - key: kubernetes.io/os
          operator: In
          values: [&quot;linux&quot;]
        - key: karpenter.sh/capacity-type
          operator: In
          values: [&quot;on-demand&quot;]
        - key: karpenter.k8s.aws/instance-category
          operator: In
          values: [&quot;c&quot;, &quot;m&quot;, &quot;r&quot;]
        - key: karpenter.k8s.aws/instance-generation
          operator: Gt
          values: [&quot;2&quot;]
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
      expireAfter: 720h # 30 * 24h = 720h
  limits:
    cpu: 1000
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
    consolidateAfter: 1m
---
apiVersion: karpenter.k8s.aws/v1
kind: EC2NodeClass
metadata:
  name: default
spec:
  role: &quot;KarpenterNodeRole-${CLUSTER_NAME}&quot; # replace with your cluster name
  amiSelectorTerms:
    - alias: &quot;al2023@${ALIAS_VERSION}&quot; # ex) al2023@latest
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: &quot;${CLUSTER_NAME}&quot; # replace with your cluster name
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: &quot;${CLUSTER_NAME}&quot; # replace with your cluster name
EOF

# Verify (no nodeclaims yet)
kubectl get nodepool,ec2nodeclass,nodeclaims
NAME                            NODECLASS   NODES   READY   AGE
nodepool.karpenter.sh/default   default     0       True    12s

NAME                                     READY   AGE
ec2nodeclass.karpenter.k8s.aws/default   True    12s&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Here, NodePool and NodeClass mean the following.&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;NodePool: defines the configuration and behavior of a node group (the selection criteria and boundaries for nodes), such as instance types, capacity types, worker-node spec requirements, scaling policy, and node lifecycle management -&amp;gt; defines what kind of node is needed&lt;/li&gt;
&lt;li&gt;NodeClass: the concrete EC2 instance settings, such as node image, subnets, security groups, IAM role, and tags -&amp;gt; defines how the node is created in AWS&lt;/li&gt;
&lt;/ul&gt;
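&lt;p data-ke-size=&quot;size16&quot;&gt;The way NodePool requirements narrow the instance-type search space can be sketched as a simple filter. The instance attributes below are hypothetical examples, not a real EC2 catalog, and the predicate mirrors the NodePool manifest above.&lt;/p&gt;

```python
# Hypothetical instance attributes, shaped like the karpenter.k8s.aws labels.
INSTANCES = [
    {'name': 'c5.2xlarge', 'category': 'c', 'generation': 5, 'arch': 'amd64'},
    {'name': 'm4.2xlarge', 'category': 'm', 'generation': 4, 'arch': 'amd64'},
    {'name': 't3.large',   'category': 't', 'generation': 3, 'arch': 'amd64'},
    {'name': 'c2.xlarge',  'category': 'c', 'generation': 2, 'arch': 'amd64'},
]

def matches(inst):
    # Mirrors the NodePool requirements above:
    # arch In [amd64], instance-category In [c, m, r], instance-generation Gt 2.
    return (inst['arch'] == 'amd64'
            and inst['category'] in ('c', 'm', 'r')
            and inst['generation'] > 2)

print([i['name'] for i in INSTANCES if matches(i)])  # ['c5.2xlarge', 'm4.2xlarge']
```

Types failing any requirement (wrong category, generation 2 or lower) are excluded before Karpenter ever calls the Fleet API.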
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Karpenter creates and manages nodes through an object called a NodeClaim. It monitors the NodePool and NodeClass, and when a new pod's requirements cannot be met by the resources or constraints of the existing nodes, it creates a NodeClaim to provision a new node with an appropriate spec. As a result, each node in the cluster is mapped 1:1 to a unique NodeClaim.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This process is illustrated in the figure below.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;4784&quot; data-origin-height=&quot;4016&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/ejOeGl/btsMC0M5wSR/uLKpKDcMZIvfBAf1SLzodK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/ejOeGl/btsMC0M5wSR/uLKpKDcMZIvfBAf1SLzodK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/ejOeGl/btsMC0M5wSR/uLKpKDcMZIvfBAf1SLzodK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FejOeGl%2FbtsMC0M5wSR%2FuLKpKDcMZIvfBAf1SLzodK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;4784&quot; height=&quot;4016&quot; data-origin-width=&quot;4784&quot; data-origin-height=&quot;4016&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://karpenter.sh/docs/concepts/nodeclaims/&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://karpenter.sh/docs/concepts/nodeclaims/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Node creation proceeds through the stages below.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a title=&quot;Karpenter 로그를 통해 EKS Worker Node의 Lifecycle Event를 추출하고 한눈에 파악하는 방안&quot; href=&quot;https://repost.aws/ko/articles/ARLmKuAa3FT9yMjdpq9krOTg/karpenter-%EB%A1%9C%EA%B7%B8%EB%A5%BC-%ED%86%B5%ED%95%B4-eks-worker-node%EC%9D%98-lifecycle-event%EB%A5%BC-%EC%B6%94%EC%B6%9C%ED%95%98%EA%B3%A0-%ED%95%9C%EB%88%88%EC%97%90-%ED%8C%8C%EC%95%85%ED%95%98%EB%8A%94-%EB%B0%A9%EC%95%88&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://repost.aws/ko/articles/ARLmKuAa3FT9yMjdpq9krOTg/karpenter-%EB%A1%9C%EA%B7%B8%EB%A5%BC-%ED%86%B5%ED%95%B4-eks-worker-node%EC%9D%98-lifecycle-event%EB%A5%BC-%EC%B6%94%EC%B6%9C%ED%95%98%EA%B3%A0-%ED%95%9C%EB%88%88%EC%97%90-%ED%8C%8C%EC%95%85%ED%95%98%EB%8A%94-%EB%B0%A9%EC%95%88&lt;/a&gt;&lt;/p&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;&lt;b&gt;Create NodeClaim&lt;/b&gt;: Karpenter creates a new NodeClaim in response to a provisioning or disruption need.&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Launch NodeClaim&lt;/b&gt;: Karpenter calls the CreateFleet API to create a new EC2 instance in AWS.&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Register NodeClaim&lt;/b&gt;: once the EC2 instance is created and registered with the cluster, the Node is linked to the NodeClaim.&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Initialize NodeClaim&lt;/b&gt;: Karpenter waits until the Node becomes Ready.&lt;/li&gt;
&lt;/ol&gt;
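&lt;p data-ke-size=&quot;size16&quot;&gt;The four stages above can be modeled as a simple linear state machine. This is a conceptual sketch of the lifecycle, not Karpenter's implementation.&lt;/p&gt;

```python
# Conceptual model of the NodeClaim lifecycle: each stage advances to the next
# once its condition is met (CreateFleet succeeded, Node joined, Node Ready).
TRANSITIONS = {
    'Created':    'Launched',     # CreateFleet API call succeeded
    'Launched':   'Registered',   # EC2 instance joined the cluster as a Node
    'Registered': 'Initialized',  # Node reached Ready
}

def lifecycle(start='Created'):
    states = [start]
    while states[-1] in TRANSITIONS:
        states.append(TRANSITIONS[states[-1]])
    return states

print(lifecycle())  # ['Created', 'Launched', 'Registered', 'Initialized']
```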
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;After each stage completes, Karpenter records the details of the operation in its logs; below is an example of such a log entry.&lt;/p&gt;
&lt;pre class=&quot;json&quot;&gt;&lt;code&gt;## Create NodeClaim
{&quot;level&quot;:&quot;INFO&quot;,&quot;time&quot;:&quot;2024-12-31T09:50:28.720Z&quot;,&quot;logger&quot;:&quot;controller&quot;,&quot;message&quot;:&quot;created nodeclaim&quot;,&quot;commit&quot;:&quot;0a85efb&quot;,&quot;controller&quot;:&quot;provisioner&quot;,&quot;namespace&quot;:&quot;&quot;,&quot;name&quot;:&quot;&quot;,&quot;reconcileID&quot;:&quot;63c2695c-4c54-4a9b-9b64-1804d9ddbb82&quot;,&quot;NodePool&quot;:{&quot;name&quot;:&quot;default&quot;},&quot;NodeClaim&quot;:{&quot;name&quot;:&quot;default-abcde&quot;},&quot;requests&quot;:{&quot;cpu&quot;:&quot;1516m&quot;,&quot;memory&quot;:&quot;1187Mi&quot;,&quot;pods&quot;:&quot;17&quot;},&quot;instance-types&quot;:&quot;c4.large, c4.xlarge, c5.large, c5.xlarge, c5a.2xlarge and 55 other(s)&quot;}&lt;/code&gt;&lt;/pre&gt;
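&lt;p data-ke-size=&quot;size16&quot;&gt;Because every stage is logged with a timestamp, provisioning latency can be measured by parsing these structured logs. A minimal sketch, with the two log lines abbreviated from the sample output in this post:&lt;/p&gt;

```python
import json
from datetime import datetime

# Two abbreviated Karpenter log lines (timestamps from the sample output above).
LOGS = [
    '{"time": "2025-03-06T15:57:00.344Z", "message": "created nodeclaim"}',
    '{"time": "2025-03-06T15:57:02.422Z", "message": "launched nodeclaim"}',
]

def ts(line):
    # Parse one structured log line into (timestamp, message).
    rec = json.loads(line)
    return datetime.fromisoformat(rec['time'].replace('Z', '+00:00')), rec['message']

(t0, _), (t1, _) = ts(LOGS[0]), ts(LOGS[1])
# Seconds between "created nodeclaim" and "launched nodeclaim".
print(round((t1 - t0).total_seconds(), 3))
```

The same approach extends to the registered/initialized entries to build a full per-node lifecycle timeline, as the referenced re:Post article does.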
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now let's deploy a sample application for testing.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Deploy a Deployment so each pause pod gets a guaranteed minimum of 1 CPU (replicas: 0 for now)
cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inflate
spec:
  replicas: 0
  selector:
    matchLabels:
      app: inflate
  template:
    metadata:
      labels:
        app: inflate
    spec:
      terminationGracePeriodSeconds: 0
      securityContext:
        runAsUser: 1000
        runAsGroup: 3000
        fsGroup: 2000
      containers:
      - name: inflate
        image: public.ecr.aws/eks-distro/kubernetes/pause:3.7
        resources:
          requests:
            cpu: 1
        securityContext:
          allowPrivilegeEscalation: false
EOF

# Scale up
kubectl scale deployment inflate --replicas 5; date
deployment.apps/inflate scaled
Fri Mar  7 00:56:59 KST 2025&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's take a closer look at the node-creation process driven by Karpenter.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# Check the Karpenter pod logs
kubectl logs -f -n &quot;${KARPENTER_NAMESPACE}&quot; -l app.kubernetes.io/name=karpenter -c controller
...
{&quot;level&quot;:&quot;INFO&quot;,&quot;time&quot;:&quot;2025-03-06T15:57:00.326Z&quot;,&quot;logger&quot;:&quot;controller&quot;,&quot;message&quot;:&quot;found provisionable pod(s)&quot;,&quot;commit&quot;:&quot;058c665&quot;,&quot;controller&quot;:&quot;provisioner&quot;,&quot;namespace&quot;:&quot;&quot;,&quot;name&quot;:&quot;&quot;,&quot;reconcileID&quot;:&quot;529ce301-0064-436f-9275-6020da23c7b5&quot;,&quot;Pods&quot;:&quot;default/inflate-5c5f75666d-gbgst, default/inflate-5c5f75666d-p6zt9, default/inflate-5c5f75666d-85csz, default/inflate-5c5f75666d-fjkhh, default/inflate-5c5f75666d-pncp9&quot;,&quot;duration&quot;:&quot;74.997844ms&quot;}
# Starts computing a nodeclaim that fits the pods
{&quot;level&quot;:&quot;INFO&quot;,&quot;time&quot;:&quot;2025-03-06T15:57:00.326Z&quot;,&quot;logger&quot;:&quot;controller&quot;,&quot;message&quot;:&quot;computed new nodeclaim(s) to fit pod(s)&quot;,&quot;commit&quot;:&quot;058c665&quot;,&quot;controller&quot;:&quot;provisioner&quot;,&quot;namespace&quot;:&quot;&quot;,&quot;name&quot;:&quot;&quot;,&quot;reconcileID&quot;:&quot;529ce301-0064-436f-9275-6020da23c7b5&quot;,&quot;nodeclaims&quot;:1,&quot;pods&quot;:5}
# Creates the nodeclaim
{&quot;level&quot;:&quot;INFO&quot;,&quot;time&quot;:&quot;2025-03-06T15:57:00.344Z&quot;,&quot;logger&quot;:&quot;controller&quot;,&quot;message&quot;:&quot;created nodeclaim&quot;,&quot;commit&quot;:&quot;058c665&quot;,&quot;controller&quot;:&quot;provisioner&quot;,&quot;namespace&quot;:&quot;&quot;,&quot;name&quot;:&quot;&quot;,&quot;reconcileID&quot;:&quot;529ce301-0064-436f-9275-6020da23c7b5&quot;,&quot;NodePool&quot;:{&quot;name&quot;:&quot;default&quot;},&quot;NodeClaim&quot;:{&quot;name&quot;:&quot;default-n4xc5&quot;},&quot;requests&quot;:{&quot;cpu&quot;:&quot;5150m&quot;,&quot;pods&quot;:&quot;8&quot;},&quot;instance-types&quot;:&quot;c4.2xlarge, c4.4xlarge, c5.2xlarge, c5.4xlarge, c5a.2xlarge and 55 other(s)&quot;}
# Launches the nodeclaim
{&quot;level&quot;:&quot;INFO&quot;,&quot;time&quot;:&quot;2025-03-06T15:57:02.422Z&quot;,&quot;logger&quot;:&quot;controller&quot;,&quot;message&quot;:&quot;launched nodeclaim&quot;,&quot;commit&quot;:&quot;058c665&quot;,&quot;controller&quot;:&quot;nodeclaim.lifecycle&quot;,&quot;controllerGroup&quot;:&quot;karpenter.sh&quot;,&quot;controllerKind&quot;:&quot;NodeClaim&quot;,&quot;NodeClaim&quot;:{&quot;name&quot;:&quot;default-n4xc5&quot;},&quot;namespace&quot;:&quot;&quot;,&quot;name&quot;:&quot;default-n4xc5&quot;,&quot;reconcileID&quot;:&quot;d273db5c-0284-4fd1-9246-6d68fcb0c06b&quot;,&quot;provider-id&quot;:&quot;aws:///ap-northeast-2a/i-0068e4889e1e71961&quot;,&quot;instance-type&quot;:&quot;c5a.2xlarge&quot;,&quot;zone&quot;:&quot;ap-northeast-2a&quot;,&quot;capacity-type&quot;:&quot;on-demand&quot;,&quot;allocatable&quot;:{&quot;cpu&quot;:&quot;7910m&quot;,&quot;ephemeral-storage&quot;:&quot;17Gi&quot;,&quot;memory&quot;:&quot;14162Mi&quot;,&quot;pods&quot;:&quot;58&quot;,&quot;vpc.amazonaws.com/pod-eni&quot;:&quot;38&quot;}}
# Registers the nodeclaim
{&quot;level&quot;:&quot;INFO&quot;,&quot;time&quot;:&quot;2025-03-06T15:57:21.500Z&quot;,&quot;logger&quot;:&quot;controller&quot;,&quot;message&quot;:&quot;registered nodeclaim&quot;,&quot;commit&quot;:&quot;058c665&quot;,&quot;controller&quot;:&quot;nodeclaim.lifecycle&quot;,&quot;controllerGroup&quot;:&quot;karpenter.sh&quot;,&quot;controllerKind&quot;:&quot;NodeClaim&quot;,&quot;NodeClaim&quot;:{&quot;name&quot;:&quot;default-n4xc5&quot;},&quot;namespace&quot;:&quot;&quot;,&quot;name&quot;:&quot;default-n4xc5&quot;,&quot;reconcileID&quot;:&quot;e49f377e-7e6c-4969-8231-b3b2657bd624&quot;,&quot;provider-id&quot;:&quot;aws:///ap-northeast-2a/i-0068e4889e1e71961&quot;,&quot;Node&quot;:{&quot;name&quot;:&quot;ip-192-168-149-58.ap-northeast-2.compute.internal&quot;}}
# Initializes the nodeclaim
{&quot;level&quot;:&quot;INFO&quot;,&quot;time&quot;:&quot;2025-03-06T15:57:31.030Z&quot;,&quot;logger&quot;:&quot;controller&quot;,&quot;message&quot;:&quot;initialized nodeclaim&quot;,&quot;commit&quot;:&quot;058c665&quot;,&quot;controller&quot;:&quot;nodeclaim.lifecycle&quot;,&quot;controllerGroup&quot;:&quot;karpenter.sh&quot;,&quot;controllerKind&quot;:&quot;NodeClaim&quot;,&quot;NodeClaim&quot;:{&quot;name&quot;:&quot;default-n4xc5&quot;},&quot;namespace&quot;:&quot;&quot;,&quot;name&quot;:&quot;default-n4xc5&quot;,&quot;reconcileID&quot;:&quot;e3481aac-a971-49ab-b670-bd8c788faff7&quot;,&quot;provider-id&quot;:&quot;aws:///ap-northeast-2a/i-0068e4889e1e71961&quot;,&quot;Node&quot;:{&quot;name&quot;:&quot;ip-192-168-149-58.ap-northeast-2.compute.internal&quot;},&quot;allocatable&quot;:{&quot;cpu&quot;:&quot;7910m&quot;,&quot;ephemeral-storage&quot;:&quot;18181869946&quot;,&quot;hugepages-1Gi&quot;:&quot;0&quot;,&quot;hugepages-2Mi&quot;:&quot;0&quot;,&quot;memory&quot;:&quot;15140112Ki&quot;,&quot;pods&quot;:&quot;58&quot;}}
..

# Can also be inspected as JSON
kubectl logs -f -n &quot;${KARPENTER_NAMESPACE}&quot; -l app.kubernetes.io/name=karpenter -c controller | jq '.'

kubectl logs -n &quot;${KARPENTER_NAMESPACE}&quot; -l app.kubernetes.io/name=karpenter -c controller | grep 'launched nodeclaim' | jq '.'
{
  &quot;level&quot;: &quot;INFO&quot;,
  &quot;time&quot;: &quot;2025-03-06T15:57:02.422Z&quot;,
  &quot;logger&quot;: &quot;controller&quot;,
  &quot;message&quot;: &quot;launched nodeclaim&quot;,
  &quot;commit&quot;: &quot;058c665&quot;,
  &quot;controller&quot;: &quot;nodeclaim.lifecycle&quot;,
  &quot;controllerGroup&quot;: &quot;karpenter.sh&quot;,
  &quot;controllerKind&quot;: &quot;NodeClaim&quot;,
  &quot;NodeClaim&quot;: {
    &quot;name&quot;: &quot;default-n4xc5&quot;
  },
  &quot;namespace&quot;: &quot;&quot;,
  &quot;name&quot;: &quot;default-n4xc5&quot;,
  &quot;reconcileID&quot;: &quot;d273db5c-0284-4fd1-9246-6d68fcb0c06b&quot;,
  &quot;provider-id&quot;: &quot;aws:///ap-northeast-2a/i-0068e4889e1e71961&quot;,
  &quot;instance-type&quot;: &quot;c5a.2xlarge&quot;,
  &quot;zone&quot;: &quot;ap-northeast-2a&quot;,
  &quot;capacity-type&quot;: &quot;on-demand&quot;,
  &quot;allocatable&quot;: {
    &quot;cpu&quot;: &quot;7910m&quot;,
    &quot;ephemeral-storage&quot;: &quot;17Gi&quot;,
    &quot;memory&quot;: &quot;14162Mi&quot;,
    &quot;pods&quot;: &quot;58&quot;,
    &quot;vpc.amazonaws.com/pod-eni&quot;: &quot;38&quot;
  }
}

# Monitor the nodes
kubectl scale deployment inflate --replicas 5; date
deployment.apps/inflate scaled
Fri Mar  7 00:56:59 KST 2025

while true; do date; kubectl get node; echo &quot;------------------------------&quot; ; sleep 5; done
...
# Before the node is created
------------------------------
Fri Mar  7 00:57:16 KST 2025
NAME                                                STATUS   ROLES    AGE   VERSION
ip-192-168-33-227.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   17m   v1.32.1-eks-5d632ec
ip-192-168-91-227.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   17m   v1.32.1-eks-5d632ec
------------------------------
# Node added: 24 seconds
Fri Mar  7 00:57:23 KST 2025
NAME                                                STATUS     ROLES    AGE   VERSION
ip-192-168-149-58.ap-northeast-2.compute.internal   NotReady   &amp;lt;none&amp;gt;   4s    v1.32.1-eks-5d632ec
ip-192-168-33-227.ap-northeast-2.compute.internal   Ready      &amp;lt;none&amp;gt;   17m   v1.32.1-eks-5d632ec
ip-192-168-91-227.ap-northeast-2.compute.internal   Ready      &amp;lt;none&amp;gt;   17m   v1.32.1-eks-5d632ec
------------------------------
# Node Ready: 31 seconds
Fri Mar  7 00:57:30 KST 2025
NAME                                                STATUS   ROLES    AGE   VERSION
ip-192-168-149-58.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   11s   v1.32.1-eks-5d632ec
ip-192-168-33-227.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   17m   v1.32.1-eks-5d632ec
ip-192-168-91-227.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   17m   v1.32.1-eks-5d632ec


# A nodeClaim is created
kubectl get nodeclaims -w
NAME            TYPE   CAPACITY   ZONE   NODE   READY   AGE
default-n4xc5                                           0s
default-n4xc5                                           0s
default-n4xc5                                   Unknown   0s
default-n4xc5   c5a.2xlarge   on-demand   ap-northeast-2a          Unknown   2s
default-n4xc5   c5a.2xlarge   on-demand   ap-northeast-2a          Unknown   2s
default-n4xc5   c5a.2xlarge   on-demand   ap-northeast-2a   ip-192-168-149-58.ap-northeast-2.compute.internal   Unknown   21s
default-n4xc5   c5a.2xlarge   on-demand   ap-northeast-2a   ip-192-168-149-58.ap-northeast-2.compute.internal   Unknown   22s
default-n4xc5   c5a.2xlarge   on-demand   ap-northeast-2a   ip-192-168-149-58.ap-northeast-2.compute.internal   Unknown   30s
default-n4xc5   c5a.2xlarge   on-demand   ap-northeast-2a   ip-192-168-149-58.ap-northeast-2.compute.internal   True      31s
default-n4xc5   c5a.2xlarge   on-demand   ap-northeast-2a   ip-192-168-149-58.ap-northeast-2.compute.internal   True      36s


# Inspect the nodeClaim
kubectl describe nodeclaims
Name:         default-n4xc5
Namespace:
Labels:       karpenter.k8s.aws/ec2nodeclass=default
              karpenter.k8s.aws/instance-category=c
              karpenter.k8s.aws/instance-cpu=8
              karpenter.k8s.aws/instance-cpu-manufacturer=amd
              karpenter.k8s.aws/instance-cpu-sustained-clock-speed-mhz=3300
              karpenter.k8s.aws/instance-ebs-bandwidth=3170
              karpenter.k8s.aws/instance-encryption-in-transit-supported=true
              karpenter.k8s.aws/instance-family=c5a
              karpenter.k8s.aws/instance-generation=5
              karpenter.k8s.aws/instance-hypervisor=nitro
              karpenter.k8s.aws/instance-memory=16384
              karpenter.k8s.aws/instance-network-bandwidth=2500
              karpenter.k8s.aws/instance-size=2xlarge
              karpenter.sh/capacity-type=on-demand
              karpenter.sh/nodepool=default
              kubernetes.io/arch=amd64
              kubernetes.io/os=linux
              node.kubernetes.io/instance-type=c5a.2xlarge
              topology.k8s.aws/zone-id=apne2-az1
              topology.kubernetes.io/region=ap-northeast-2
              topology.kubernetes.io/zone=ap-northeast-2a
Annotations:  compatibility.karpenter.k8s.aws/cluster-name-tagged: true
              karpenter.k8s.aws/ec2nodeclass-hash: 15535182697325354914
              karpenter.k8s.aws/ec2nodeclass-hash-version: v4
              karpenter.k8s.aws/tagged: true
              karpenter.sh/nodepool-hash: 6821555240594823858
              karpenter.sh/nodepool-hash-version: v3
API Version:  karpenter.sh/v1
Kind:         NodeClaim
Metadata:
  Creation Timestamp:  2025-03-06T15:57:00Z
  Finalizers:
    karpenter.sh/termination
  Generate Name:  default-
  Generation:     1
  Owner References:
    API Version:           karpenter.sh/v1
    Block Owner Deletion:  true
    Kind:                  NodePool
    Name:                  default
    UID:                   9342267c-6f75-488c-b067-9005999e31ef
  Resource Version:        5525
  UID:                     3bd4c5ab-c393-4b28-bb46-96531f0d1fc8
Spec:
  Expire After:  720h
  Node Class Ref:
    Group:  karpenter.k8s.aws
    Kind:   EC2NodeClass
    Name:   default
  Requirements:
    Key:       karpenter.sh/nodepool
    Operator:  In
    Values:
      default
    Key:       node.kubernetes.io/instance-type
    Operator:  In
    Values:
      c4.2xlarge
      c4.4xlarge
      c5.2xlarge
      c5.4xlarge
      c5a.2xlarge
      c5a.4xlarge
      c5a.8xlarge
      c5d.2xlarge
      c5d.4xlarge
      c5n.2xlarge
      c5n.4xlarge
      c6i.2xlarge
      c6i.4xlarge
      c6id.2xlarge
      c6id.4xlarge
      c6in.2xlarge
      c6in.4xlarge
      c7i-flex.2xlarge
      c7i-flex.4xlarge
      c7i.2xlarge
      c7i.4xlarge
      m4.2xlarge
      m4.4xlarge
      m5.2xlarge
      m5.4xlarge
      m5a.2xlarge
      m5a.4xlarge
      m5ad.2xlarge
      m5ad.4xlarge
      m5d.2xlarge
      m5d.4xlarge
      m5zn.2xlarge
      m5zn.3xlarge
      m6i.2xlarge
      m6i.4xlarge
      m6id.2xlarge
      m6id.4xlarge
      m7i-flex.2xlarge
      m7i-flex.4xlarge
      m7i.2xlarge
      m7i.4xlarge
      r3.2xlarge
      r4.2xlarge
      r4.4xlarge
      r5.2xlarge
      r5.4xlarge
      r5a.2xlarge
      r5a.4xlarge
      r5ad.2xlarge
      r5ad.4xlarge
      r5b.2xlarge
      r5d.2xlarge
      r5d.4xlarge
      r5dn.2xlarge
      r5n.2xlarge
      r6i.2xlarge
      r6i.4xlarge
      r6id.2xlarge
      r7i.2xlarge
      r7i.4xlarge
    Key:       kubernetes.io/os
    Operator:  In
    Values:
      linux
    Key:       karpenter.sh/capacity-type
    Operator:  In
    Values:
      on-demand
    Key:       karpenter.k8s.aws/instance-category
    Operator:  In
    Values:
      c
      m
      r
    Key:       karpenter.k8s.aws/instance-generation
    Operator:  Gt
    Values:
      2
    Key:       kubernetes.io/arch
    Operator:  In
    Values:
      amd64
    Key:       karpenter.k8s.aws/ec2nodeclass
    Operator:  In
    Values:
      default
  Resources:
    Requests:
      Cpu:   5150m
      Pods:  8
Status:
  Allocatable:
    Cpu:                        7910m
    Ephemeral - Storage:        17Gi
    Memory:                     14162Mi
    Pods:                       58
    vpc.amazonaws.com/pod-eni:  38
  Capacity:
    Cpu:                        8
    Ephemeral - Storage:        20Gi
    Memory:                     15155Mi
    Pods:                       58
    vpc.amazonaws.com/pod-eni:  38
  Conditions:
    Last Transition Time:  2025-03-06T15:57:02Z
    Message:
    Observed Generation:   1
    Reason:                Launched
    Status:                True
    Type:                  Launched
    Last Transition Time:  2025-03-06T15:57:21Z
    Message:
    Observed Generation:   1
    Reason:                Registered
    Status:                True
    Type:                  Registered
    Last Transition Time:  2025-03-06T15:57:31Z
    Message:
    Observed Generation:   1
    Reason:                Initialized
    Status:                True
    Type:                  Initialized
    Last Transition Time:  2025-03-06T15:58:36Z
    Message:
    Observed Generation:   1
    Reason:                Consolidatable
    Status:                True
    Type:                  Consolidatable
    Last Transition Time:  2025-03-06T15:57:31Z
    Message:
    Observed Generation:   1
    Reason:                Ready
    Status:                True
    Type:                  Ready
  Image ID:                ami-089f1bf55c5291efd
  Last Pod Event Time:     2025-03-06T15:57:36Z
  Node Name:               ip-192-168-149-58.ap-northeast-2.compute.internal
  Provider ID:             aws:///ap-northeast-2a/i-0068e4889e1e71961
Events:
  Type    Reason             Age    From       Message
  ----    ------             ----   ----       -------
  Normal  Launched           4m35s  karpenter  Status condition transitioned, Type: Launched, Status: Unknown -&amp;gt; True, Reason: Launched
  Normal  DisruptionBlocked  4m31s  karpenter  Nodeclaim does not have an associated node
  Normal  Registered         4m16s  karpenter  Status condition transitioned, Type: Registered, Status: Unknown -&amp;gt; True, Reason: Registered
  Normal  Initialized        4m6s   karpenter  Status condition transitioned, Type: Initialized, Status: Unknown -&amp;gt; True, Reason: Initialized
  Normal  Ready              4m6s   karpenter  Status condition transitioned, Type: Ready, Status: Unknown -&amp;gt; True, Reason: Ready
  Normal  Unconsolidatable   3m     karpenter  Can't replace with a cheaper node
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Karpenter는 노드 용량 추적을 위해 클러스터의 CloudProvider 머신과 CustomResources 간의 매핑을 만듭니다. 이 매핑이 일관되도록 하기 위해 Karpenter는 다음 태그 키를 활용합니다.&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;code&gt;karpenter.sh/managed-by&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;karpenter.sh/nodepool&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kubernetes.io/cluster/${CLUSTER_NAME}&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
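&lt;p data-ke-size=&quot;size16&quot;&gt;예를 들어 아래와 같이 태그 키로 Karpenter가 생성한 EC2 인스턴스를 조회해 볼 수 있습니다. 노드풀 이름 &lt;code&gt;default&lt;/code&gt;는 예시 값입니다.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# karpenter.sh/nodepool 태그가 달린 실행 중인 인스턴스의 ID와 타입 조회
aws ec2 describe-instances \
    --filters &quot;Name=tag:karpenter.sh/nodepool,Values=default&quot; &quot;Name=instance-state-name,Values=running&quot; \
    --query &quot;Reservations[].Instances[].[InstanceId,InstanceType]&quot; --output text&lt;/code&gt;&lt;/pre&gt;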
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Karpenter에 의해 등록된 노드에 추가 레이블이 등록된 것을 확인할 수 있습니다.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;kubectl get node -l karpenter.sh/registered=true -o jsonpath=&quot;{.items[0].metadata.labels}&quot; | jq '.'
...
  &quot;karpenter.sh/initialized&quot;: &quot;true&quot;,
  &quot;karpenter.sh/nodepool&quot;: &quot;default&quot;,
  &quot;karpenter.sh/registered&quot;: &quot;true&quot;,
...&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;생성된 노드 &lt;code&gt;ip-192-168-149-58.ap-northeast-2.compute.internal&lt;/code&gt;은 기존 노드와 다른 c5a.2xlarge로 생성된 것을 확인할 수 있습니다.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;kubectl get no
NAME                                                STATUS   ROLES    AGE   VERSION
ip-192-168-149-58.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   11m   v1.32.1-eks-5d632ec
ip-192-168-33-227.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   29m   v1.32.1-eks-5d632ec
ip-192-168-91-227.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   29m   v1.32.1-eks-5d632ec&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;웹 콘솔에서 확인하였습니다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;999&quot; data-origin-height=&quot;161&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bwOmJ2/btsMCYodOun/pdPHLSwQIrLpkBi4GEtRk0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bwOmJ2/btsMCYodOun/pdPHLSwQIrLpkBi4GEtRk0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bwOmJ2/btsMCYodOun/pdPHLSwQIrLpkBi4GEtRk0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbwOmJ2%2FbtsMCYodOun%2FpdPHLSwQIrLpkBi4GEtRk0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;999&quot; height=&quot;161&quot; data-origin-width=&quot;999&quot; data-origin-height=&quot;161&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Karpenter는 스케줄링이 필요한 모든 파드를 수용할 수 있는 하나의 노드를 생성하였고, 또한 CA에 비해서 더 빠른 프로비저닝 속도를 확인할 수 있었습니다.&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Pending Pod 발생&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;638&quot; data-origin-height=&quot;292&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/nkW6g/btsMDhAOeDQ/RGyRvMdknUEQoLQxm9uDCK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/nkW6g/btsMDhAOeDQ/RGyRvMdknUEQoLQxm9uDCK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/nkW6g/btsMDhAOeDQ/RGyRvMdknUEQoLQxm9uDCK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FnkW6g%2FbtsMDhAOeDQ%2FRGyRvMdknUEQoLQxm9uDCK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;638&quot; height=&quot;292&quot; data-origin-width=&quot;638&quot; data-origin-height=&quot;292&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;노드 생성 이후&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;654&quot; data-origin-height=&quot;289&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/coHarP/btsMEosaz4z/xkzNP2TXcdRsOenWlxdoV0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/coHarP/btsMEosaz4z/xkzNP2TXcdRsOenWlxdoV0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/coHarP/btsMEosaz4z/xkzNP2TXcdRsOenWlxdoV0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcoHarP%2FbtsMEosaz4z%2FxkzNP2TXcdRsOenWlxdoV0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;654&quot; height=&quot;289&quot; data-origin-width=&quot;654&quot; data-origin-height=&quot;289&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Karpenter 실습을 마무리하고 리소스를 정리하겠습니다.&lt;/p&gt;
&lt;pre class=&quot;dsconfig&quot;&gt;&lt;code&gt;# Karpenter helm 삭제 
helm uninstall karpenter --namespace &quot;${KARPENTER_NAMESPACE}&quot;

# Karpenter IAM Role 등 생성한 CloudFormation 삭제
aws cloudformation delete-stack --stack-name &quot;Karpenter-${CLUSTER_NAME}&quot;

# EC2 Launch Template 삭제
aws ec2 describe-launch-templates --filters &quot;Name=tag:karpenter.k8s.aws/cluster,Values=${CLUSTER_NAME}&quot; |
    jq -r &quot;.LaunchTemplates[].LaunchTemplateName&quot; |
    xargs -I{} aws ec2 delete-launch-template --launch-template-name {}

# 클러스터 삭제
eksctl delete cluster --name &quot;${CLUSTER_NAME}&quot;&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;참고로 디플로이먼트를 스케일링 다운해 Karpenter에 의해 생성된 노드가 삭제된 이후 클러스터를 삭제하셔야 합니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;바로 클러스터를 삭제했더니 &lt;span style=&quot;color: #ee2323;&quot;&gt;Karpenter에 의해 생성된 노드는 삭제되지 않고 EC2 인스턴스로 남아 있는 현상&lt;/span&gt;을 발견했습니다. Karpenter가 직접 생성한 노드이다 보니 EKS가 관리하는 리소스로 정리되지 않는 것으로 보입니다. 먼저 스케일링 다운으로 Karpenter가 생성한 노드가 삭제된 것을 확인한 후 클러스터 삭제를 진행하시기 바랍니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;추가로 클러스터 삭제 이후에도 CloudFormation으로 생성한 Karpenter IAM Role이 삭제되지 않는 경우, AWS CloudFormation 관리 콘솔에서 해당 스택을 직접 삭제하시기 바랍니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;3. AKS의 오토스케일링&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;AKS에서도 EKS의 오토스케일링 옵션에 대응하는 솔루션을 제공하고 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;앞서 살펴본 바와 같이 EKS의 오토스케일링 옵션은 사용자가 직접 해당 컴포넌트를 설치하는 방식으로 제공되고 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;AKS에서는 오토스케일링 옵션을 애드온 혹은 기능으로 제공하고 있기 때문에 클러스터 생성 시점에 필요한 옵션을 사용하면 해당 기능을 사용할 수 있습니다(혹은 설치된 클러스터에 기능을 활성화할 수 있음).&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;앞서 살펴본 바와 같이 HPA는 쿠버네티스 환경에서 기본으로 제공되기 때문에 어떤 환경의 쿠버네티스에서도 사용이 가능합니다. 그 외 AKS에서 제공하는 오토스케일링 기능은 아래와 같습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;KEDA: add-on으로 제공&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://learn.microsoft.com/en-us/azure/aks/keda-about&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://learn.microsoft.com/en-us/azure/aks/keda-about&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;클러스터 생성 시 &lt;code&gt;--enable-keda&lt;/code&gt; 옵션을 통해서 KEDA를 활성화할 수 있습니다.&lt;/p&gt;
&lt;pre class=&quot;verilog&quot;&gt;&lt;code&gt;az aks create --resource-group myResourceGroup --name myAKSCluster --enable-keda --generate-ssh-keys&lt;/code&gt;&lt;/pre&gt;
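&lt;p data-ke-size=&quot;size16&quot;&gt;참고로 KEDA 활성화 이후에는 아래와 같은 ScaledObject를 생성하여 스케일링 대상을 정의할 수 있습니다. 대상 디플로이먼트 이름(&lt;code&gt;my-app&lt;/code&gt;)과 CPU 임계치는 예시 값입니다.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-app-scaledobject
spec:
  scaleTargetRef:
    name: my-app        # 스케일링 대상 디플로이먼트(예시)
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
  - type: cpu
    metricType: Utilization
    metadata:
      value: &quot;50&quot;       # 평균 CPU 사용률 50% 기준
EOF&lt;/code&gt;&lt;/pre&gt;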
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;VPA: &lt;code&gt;--enable-vpa&lt;/code&gt; 옵션&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://learn.microsoft.com/en-us/azure/aks/use-vertical-pod-autoscaler&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://learn.microsoft.com/en-us/azure/aks/use-vertical-pod-autoscaler&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;클러스터 생성 시 &lt;code&gt;--enable-vpa&lt;/code&gt; 옵션을 통해서 VPA 기능을 활성화할 수 있습니다.&lt;/p&gt;
&lt;pre class=&quot;verilog&quot;&gt;&lt;code&gt;az aks create --resource-group myResourceGroup --name myAKSCluster --enable-vpa --generate-ssh-keys&lt;/code&gt;&lt;/pre&gt;
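&lt;p data-ke-size=&quot;size16&quot;&gt;활성화 후에는 아래와 같은 VerticalPodAutoscaler 오브젝트로 대상을 지정할 수 있습니다. 대상 디플로이먼트 이름(&lt;code&gt;my-app&lt;/code&gt;)은 예시이며, &lt;code&gt;updateMode: &quot;Off&quot;&lt;/code&gt;로 두면 파드를 재시작하지 않고 권고 값만 확인할 수 있습니다.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app        # 대상 디플로이먼트(예시)
  updatePolicy:
    updateMode: &quot;Off&quot;   # 권고 값만 생성하고 파드를 재시작하지 않음
EOF&lt;/code&gt;&lt;/pre&gt;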
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Cluster Autoscaler: &lt;code&gt;--enable-cluster-autoscaler&lt;/code&gt; 옵션&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://learn.microsoft.com/en-us/azure/aks/cluster-autoscaler?tabs=azure-cli&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://learn.microsoft.com/en-us/azure/aks/cluster-autoscaler?tabs=azure-cli&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;클러스터 생성 시 &lt;code&gt;--enable-cluster-autoscaler&lt;/code&gt; 옵션으로 활성화할 수 있으며, &lt;code&gt;--min-count&lt;/code&gt;와 &lt;code&gt;--max-count&lt;/code&gt;로 최소/최대 노드 수를 지정할 수 있습니다.&lt;/p&gt;
&lt;pre class=&quot;n1ql&quot;&gt;&lt;code&gt;az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 1 --vm-set-type VirtualMachineScaleSets --load-balancer-sku standard --enable-cluster-autoscaler --min-count 1 --max-count 3 --generate-ssh-keys&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;추가로 CA의 &lt;code&gt;scan interval&lt;/code&gt;, &lt;code&gt;expander&lt;/code&gt;와 같은 옵션을 &lt;code&gt;cluster autoscaler profile&lt;/code&gt;로 정의할 수 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;아래 문서를 통해 지원 가능한 옵션을 살펴보실 수 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://learn.microsoft.com/en-us/azure/aks/cluster-autoscaler?tabs=azure-cli#use-the-cluster-autoscaler-profile&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://learn.microsoft.com/en-us/azure/aks/cluster-autoscaler?tabs=azure-cli#use-the-cluster-autoscaler-profile&lt;/a&gt;&lt;/p&gt;
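&lt;p data-ke-size=&quot;size16&quot;&gt;예를 들어 아래와 같이 &lt;code&gt;scan-interval&lt;/code&gt;과 &lt;code&gt;expander&lt;/code&gt;를 조정할 수 있습니다. 지정한 값은 예시입니다.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# cluster autoscaler profile로 스캔 주기와 expander 전략 변경(예시 값)
az aks update --resource-group myResourceGroup --name myAKSCluster \
    --cluster-autoscaler-profile scan-interval=30s expander=least-waste&lt;/code&gt;&lt;/pre&gt;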
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Karpenter: NAP(Node Autoprovisioning)으로 활성화&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://learn.microsoft.com/en-us/azure/aks/node-autoprovision?tabs=azure-cli&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://learn.microsoft.com/en-us/azure/aks/node-autoprovision?tabs=azure-cli&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;클러스터 생성 시 &lt;code&gt;--node-provisioning-mode Auto&lt;/code&gt; 옵션을 사용하여 Node Autoprovisioning을 활성화할 수 있습니다. NAP는 2025년 03월 기준 Preview 상태입니다.&lt;/p&gt;
&lt;pre class=&quot;brainfuck&quot;&gt;&lt;code&gt;az aks create --name $CLUSTER_NAME --resource-group $RESOURCE_GROUP_NAME --node-provisioning-mode Auto --network-plugin azure --network-plugin-mode overlay --network-dataplane cilium --generate-ssh-keys&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;4. 오토스케일링에 대한 주의사항&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이러한 오토스케일링을 CSP에서 사용할 때는 아래와 같은 일반적인 주의사항이 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;파드 스케일링에 사용되는 VPA, HPA를 동시에 사용하는 것은 권장되지 않습니다.&lt;/li&gt;
&lt;li&gt;VPA에 의해 신규로 생성된 파드는 노드의 사용 가능한 리소스를 초과할 수 있고, 이 경우 파드가 Pending 상태가 될 수 있습니다. 이 때문에 VPA는 CA와 함께 사용해야 할 수 있습니다. (혹은 VPA를 Off 모드로 두고 적정 사이징 권고 값을 확인하는 용도로만 활용할 수도 있습니다)&lt;/li&gt;
&lt;li&gt;CA와 Karpenter를 동시에 사용하지 말아야 합니다.&lt;/li&gt;
&lt;li&gt;CA를 가상 머신 스케일링 메커니즘(예를 들어, CPU 사용량에 따른 가상 머신 스케일링 설정)과 동시에 설정하지 말아야 합니다. 이는 의도치 않은 결과를 만들어 낼 수 있습니다.&lt;/li&gt;
&lt;li&gt;노드 스케일링 옵션에서 Scale down은 의도치 않은 파드의 eviction을 발생시킬 수 있으므로, 필요한 경우 파드에 annotation을 지정해 evict되지 않도록 설정하거나(&lt;code&gt;cluster-autoscaler.kubernetes.io/safe-to-evict=&quot;false&quot;&lt;/code&gt;), PDB(Pod Disruption Budget)로 안정적인 eviction을 유도할 수 있습니다.&lt;/li&gt;
&lt;li&gt;빈번한 Scale up/down이 발생하는 경우 오히려 애플리케이션의 안전성이 무너질 수 있으므로 모니터링을 통해 리소스 사용을 안정화 할 필요가 있습니다. 혹은 Scale up/down에 조정 시간을 주는 옵션을 검토해야 합니다.&lt;/li&gt;
&lt;li&gt;노드 스케일링으로 API 요청이 빈번하게 발생하는 경우 API throttling이 발생할 수 있고, 이 경우 요청이 정상 처리 되지 않을 수 있는 점도 유의하실 필요가 있습니다.&lt;/li&gt;
&lt;/ul&gt;
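&lt;p data-ke-size=&quot;size16&quot;&gt;앞서 언급한 PDB는 예를 들어 아래와 같이 정의할 수 있습니다. 대상 레이블(&lt;code&gt;app: my-app&lt;/code&gt;)과 &lt;code&gt;minAvailable&lt;/code&gt; 값은 예시입니다.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 1       # 항상 최소 1개 파드를 유지
  selector:
    matchLabels:
      app: my-app       # 대상 파드 레이블(예시)
EOF&lt;/code&gt;&lt;/pre&gt;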
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;마무리&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;금번 포스트에서는 EKS의 오토스케일링 옵션을 살펴보고 AKS와 비교해 보았습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;EKS는 기본 구성이 최소화되어 있고, 이는 한편으로 사용자에게 자율성을 주는 것으로도 이해할 수 있습니다. 오토스케일링 또한 사용자가 직접 컴포넌트를 구성해야 하는데, 이 과정에서 사용자가 설치를 제대로 하지 못하거나 정확한 기능을 이해하지 못하는 경우 오토스케일링이 제대로 동작하지 않을 수 있습니다. 또한 해당 컴포넌트의 업그레이드도 사용자의 몫입니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;반면 관련 컴포넌트를 사용자 데이터 플레인에 위치시키므로 해당 컴포넌트의 동작을 이해하고, 이슈를 직접 트러블 슈팅할 수 있습니다. 또한 오픈소스 컴포넌트를 그대로 사용하기 때문에 다양한 옵션을 활용할 수 있습니다. 이러한 측면에서 EKS 환경은 가볍지만 상당 부분을 고객이 직접 구성하므로 고급 사용자에게 적합한 환경이 아닌가 하는 생각도 듭니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;AKS는 오토스케일링 옵션을 Managed Service의 일부로 제공합니다. 클러스터에서 VPA, CAS, KEDA를 활성화 하는 옵션을 제공하고 있으며, 최근 Karpenter를 NAP(Node Auto Provisioning)라는 이름으로 Preview로 제공하고 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이로써 해당 기능의 개념을 이해하는 일반 사용자도 애드온으로 쉽게 기능을 사용할 수 있으며, 애드온으로 제공되면 해당 컴포넌트의 라이프사이클을 AKS가 직접 관리해주기 때문에 관리 편의성이 높습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;다만 검증된 옵션만 제공되기 때문에 오픈 소스의 모든 옵션을 사용하지 못할 수 있으므로 Limitation을 확인하셔야 합니다. 또한 컴포넌트들이 컨트롤 플레인 영역에 배치되어 직접 트러블 슈팅하는 데 제한이 있을 수 있습니다. 한편 Managed Service로 기능이 제공되기 때문에 옵션 추가 등에서 오픈 소스의 기능을 빠르게 따라가지 못하는 점도 아쉬운 점으로 남을 수 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;다른 측면에서 한 가지 언급드리면, EKS를 기능적으로 지원하는 컴포넌트들은 상당 부분 커스터마이즈(옵션 변경, 삭제 등)가 가능합니다. 반대로 AKS의 시스템 컴포넌트는 addon manager에 의해 관리되며, 이러한 컴포넌트나 ConfigMap을 살펴보면 &lt;code&gt;addonmanager.kubernetes.io/mode=Reconcile&lt;/code&gt; 레이블이 지정되어 있습니다. 이는 addon manager에 의해 정기적으로 조정(reconcile)되는 리소스이기 때문에 사용자가 임의로 변경해도 다시 원복됩니다. 즉, AKS에서는 허용된 방식으로만 시스템 컴포넌트를 제어할 수 있으며, 일반적으로 매니지드 영역에 대한 수정은 권장되지 않습니다.&lt;/p&gt;
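&lt;p data-ke-size=&quot;size16&quot;&gt;예를 들어 AKS 클러스터에서는 아래와 같이 addon manager가 조정하는 리소스를 조회해 볼 수 있습니다.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# addonmanager가 Reconcile 모드로 관리하는 ConfigMap 조회
kubectl get configmap -n kube-system -l addonmanager.kubernetes.io/mode=Reconcile&lt;/code&gt;&lt;/pre&gt;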
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;blockquote data-ke-style=&quot;style3&quot;&gt;[Note]&lt;br /&gt;Addon manager에 대해서 아래의 문서를 참고 부탁드립니다.&lt;br /&gt;&lt;a href=&quot;https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/addon-manager/README.md#addon-manager&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/addon-manager/README.md#addon-manager&lt;/a&gt;&lt;/blockquote&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;그럼 이번 포스트를 마무리 하도록 하겠습니다.&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;다음 포스트에서는 EKS의 보안에 대해서 학습한 내용을 작성해 보겠습니다.&lt;/p&gt;</description>
      <category>EKS</category>
      <category>AKS</category>
      <category>AutoScaling</category>
      <category>ca</category>
      <category>EKS</category>
      <category>HPA</category>
      <category>Karpenter</category>
      <category>KEDA</category>
      <category>VPA</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/39</guid>
      <comments>https://a-person.tistory.com/39#entry39comment</comments>
      <pubDate>Fri, 7 Mar 2025 01:24:16 +0900</pubDate>
    </item>
    <item>
      <title>[5-1] EKS의 오토스케일링 Part1</title>
      <link>https://a-person.tistory.com/38</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;이번 포스트에서는 EKS의 오토스케일링(Autoscaling) 옵션을 살펴보겠습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;기본적인 쿠버네티스 환경의 스케일링 옵션을 전반적으로 살펴보겠습니다. 이후 EKS의 오토스케일링 옵션을 살펴보고, 이를 실습을 통해 확인 해보도록 하겠습니다. 마지막으로 AKS의 오토스케일링 옵션을 EKS와 비교해 보겠습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;글을 작성하는 과정에서 분량이 너무 길어져, Part1에서는 HPA, KEDA, VPA까지의 내용을 다루고 Part2에서 Cluster Autoscaler 부터 이어서 설명하도록 하겠습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;목차&lt;/h2&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;EKS의 오토스케일링 Part1&lt;/h4&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;&lt;b&gt;쿠버네티스 환경의 스케일링&lt;/b&gt;&lt;/li&gt;
&lt;li&gt;&lt;b&gt;EKS의 오토스케일링 개요&lt;/b&gt;&lt;/li&gt;
&lt;li&gt;&lt;b&gt;실습 환경 생성&lt;/b&gt;&lt;/li&gt;
&lt;li&gt;&lt;b&gt;HPA(Horizontal Pod Autoscaler)&lt;/b&gt;&lt;/li&gt;
&lt;li&gt;&lt;b&gt;KEDA(Kubernetes Event-driven Autoscaler)&lt;/b&gt;&lt;/li&gt;
&lt;li&gt;&lt;b&gt;VPA(Vertical Pod Autoscaler)&lt;/b&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;EKS의 오토스케일링 Part2 (&lt;a href=&quot;https://a-person.tistory.com/39&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://a-person.tistory.com/39&lt;/a&gt;)&lt;/h4&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;CA(Cluster Autoscaler)&lt;/li&gt;
&lt;li&gt;Karpenter&lt;/li&gt;
&lt;li&gt;AKS의 오토스케일링&lt;/li&gt;
&lt;li&gt;오토스케일링에 대한 주의사항&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;1. 쿠버네티스 환경의 스케일링&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;쿠버네티스 패턴(책만, 2020)에서는 쿠버네티스 환경의 애플리케이션의 스케일링 레벨을 아래와 같이 설명하고 있습니다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;708&quot; data-origin-height=&quot;574&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bZ6mQj/btsMDH0ioQv/Nk60EwJcViiIaGIYlJnnGk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bZ6mQj/btsMDH0ioQv/Nk60EwJcViiIaGIYlJnnGk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bZ6mQj/btsMDH0ioQv/Nk60EwJcViiIaGIYlJnnGk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbZ6mQj%2FbtsMDH0ioQv%2FNk60EwJcViiIaGIYlJnnGk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;708&quot; height=&quot;574&quot; data-origin-width=&quot;708&quot; data-origin-height=&quot;574&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;출처: 빌긴 이브리암/롤란트 후스 지음, 안승규/서한배 옮김, 《쿠버네티스 패턴》, 책만, 2020, p. 272.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;먼저 애플리케이션 튜닝이 필요합니다. 쿠버네티스 환경이라고 할지라도 애플리케이션 자체가 최대한의 성능을 사용하도록 동작해야 합니다. 애플리케이션에 할당된 리소스를 증가시키거나 복제본을 증가시키는 것은 부차적인 일입니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;두번째는 수직 파드 오토스케일러(VPA, Vertical Pod Autoscaler)입니다. 파드의 리소스가 부족할 때 설정된 리소스(CPU/MEM) 자체를 증가시키는 방식입니다. 쿠버네티스 환경에서는 request와 limit을 지정할 수 있고, 이 값이 변경됩니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;세번째는 수평 파드 오토스케일러(HPA, Horizontal Pod Autoscaler)입니다. 파드의 리소스가 임계치 이상 사용되면, 동일한 파드의 복제본을 증가시키는 방식입니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;네번째는 클러스터 오토스케일러(CA, Cluster Autoscaler)입니다. VPA나 HPA로 애플리케이션의 용량이나 개수가 증가하여 노드의 할당 가능한 자원(Allocatable resource)을 모두 소진하면, 파드가 스케줄링 불가능(Unschedulable Pod)한 상황이 발생할 수 있습니다. 이때는 노드 자체를 증가시켜야 하며, 이를 CA가 지원합니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;쿠버네티스 환경은 애플리케이션의 스케일링을 하기 위해서 위와 같은 기법을 사용할 수 있으며, 이후 EKS에서는 이들을 어떤식으로 활용할 수 있는지 살펴보겠습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;일반적으로 애플리케이션 튜닝은 쿠버네티스에 국한되지 않은 별개의 영역이므로 설명에 제외하도록 하겠습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;2. EKS의 오토스케일링 개요&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;EKS에서는 설명한 HPA, VPA, CA를 모두 지원하고 있으며, Karpenter라는 노드 오토스케일링 방식을 추가로 제공하고 있습니다. 또한 그림에는 없지만 KEDA를 통해 이벤트 기반으로 HPA를 확장하는 방식도 있습니다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1388&quot; data-origin-height=&quot;698&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bDCK0L/btsMCA8HqnN/x6PKXLvmJPkboCsvw1lSa0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bDCK0L/btsMCA8HqnN/x6PKXLvmJPkboCsvw1lSa0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bDCK0L/btsMCA8HqnN/x6PKXLvmJPkboCsvw1lSa0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbDCK0L%2FbtsMCA8HqnN%2Fx6PKXLvmJPkboCsvw1lSa0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1388&quot; height=&quot;698&quot; data-origin-width=&quot;1388&quot; data-origin-height=&quot;698&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;출처: &lt;a href=&quot;https://www.youtube.com/watch?v=jLuVZX6WQsw&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://www.youtube.com/watch?v=jLuVZX6WQsw&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;HPA는 쿠버네티스 환경에서 기본으로 제공되기 때문에 별도의 설치 과정 없이 hpa 오브젝트를 생성하여 사용할 수 있습니다.&lt;/p&gt;
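&lt;p data-ke-size=&quot;size16&quot;&gt;예를 들어 아래와 같이 hpa 오브젝트를 생성할 수 있습니다. 대상 디플로이먼트 이름(&lt;code&gt;my-app&lt;/code&gt;)과 임계치는 예시 값입니다.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# CPU 사용률 50%를 기준으로 1~10개 사이에서 스케일링
kubectl autoscale deployment my-app --cpu-percent=50 --min=1 --max=10

# 생성된 hpa 오브젝트 확인
kubectl get hpa&lt;/code&gt;&lt;/pre&gt;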
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;EKS에서는 KEDA, VPA, CA, Karpenter는 helm이나 yaml을 배포하는 방식으로 사용자가 설치 및 구성해야 합니다. 애드온과 같은 방식이 아니라 오픈 소스를 배포하므로 직접 라이프사이클을 관리하며, 제공되는 모든 옵션을 활용할 수 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;컨트롤러에 해당하는 컴포넌트가 데이터 플레인에 직접 배포되기 때문에 동작 과정을 이해할 수 있습니다. 다만 이 또한 데이터 플레인의 리소스를 사용한다는 점과 사용자 책임의 관리가 필요한 점을 유의해야 합니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;참고로 AKS의 KEDA, VPA, CA, Karpenter는 애드온이나 기능으로 제공되기 때문에 Managed의 영역으로 이해할 수 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;3. 실습 환경 생성&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;실습 환경은 아래와 같습니다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;2089&quot; data-origin-height=&quot;667&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/TtStf/btsMEepNfgK/ORNn0VNY2GTWJ0fUzZoKpK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/TtStf/btsMEepNfgK/ORNn0VNY2GTWJ0fUzZoKpK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/TtStf/btsMEepNfgK/ORNn0VNY2GTWJ0fUzZoKpK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FTtStf%2FbtsMEepNfgK%2FORNn0VNY2GTWJ0fUzZoKpK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2089&quot; height=&quot;667&quot; data-origin-width=&quot;2089&quot; data-origin-height=&quot;667&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;CloudFormation을 바탕으로 실습 환경을 구성하도록 하겠습니다.&lt;/p&gt;
&lt;pre class=&quot;jboss-cli&quot;&gt;&lt;code&gt;# YAML 파일 다운로드
curl -O https://s3.ap-northeast-2.amazonaws.com/cloudformation.cloudneta.net/K8S/myeks-5week.yaml

# 변수 지정
CLUSTER_NAME=myeks
SSHKEYNAME=&amp;lt;SSH 키 페어 이름&amp;gt;
MYACCESSKEY=&amp;lt;IAM User 액세스 키&amp;gt;
MYSECRETKEY=&amp;lt;IAM User 시크릿 키&amp;gt;

# CloudFormation 스택 배포
aws cloudformation deploy --template-file myeks-5week.yaml --stack-name $CLUSTER_NAME --parameter-overrides KeyName=$SSHKEYNAME SgIngressSshCidr=$(curl -s ipinfo.io/ip)/32  MyIamUserAccessKeyID=$MYACCESSKEY MyIamUserSecretAccessKey=$MYSECRETKEY ClusterBaseName=$CLUSTER_NAME --region ap-northeast-2

# CloudFormation 스택 배포 완료 후 작업용 EC2 IP 출력
aws cloudformation describe-stacks --stack-name myeks --query 'Stacks[*].Outputs[0].OutputValue' --output text

# EC2 접속
ssh -i ~/.ssh/ekskey.pem ec2-user@$(aws cloudformation describe-stacks --stack-name myeks --query 'Stacks[*].Outputs[0].OutputValue' --output text)&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;CloudFormation에서 운영 서버를 배포한 뒤 EKS까지 배포하도록 구성되어 있으며, 대략 15~20분가량 소요됩니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;그리고 생성된 EKS를 확인하고 kubeconfig을 받습니다.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# 변수 지정
CLUSTER_NAME=myeks

# 클러스터 확인
eksctl get cluster

# kubeconfig 생성
aws sts get-caller-identity --query Arn
aws eks update-kubeconfig --name $CLUSTER_NAME --user-alias &amp;lt;위 출력된 자격증명 사용자&amp;gt;

# 클러스터 기본 확인
kubectl get node --label-columns=node.kubernetes.io/instance-type,eks.amazonaws.com/capacityType,topology.kubernetes.io/zone
NAME                                               STATUS   ROLES    AGE     VERSION               INSTANCE-TYPE   CAPACITYTYPE   ZONE
ip-192-168-1-87.ap-northeast-2.compute.internal    Ready    &amp;lt;none&amp;gt;   3m12s   v1.31.5-eks-5d632ec   t3.medium       ON_DEMAND      ap-northeast-2a
ip-192-168-2-195.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   3m9s    v1.31.5-eks-5d632ec   t3.medium       ON_DEMAND      ap-northeast-2b
ip-192-168-3-136.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   3m8s    v1.31.5-eks-5d632ec   t3.medium       ON_DEMAND      ap-northeast-2c

kubectl get pod -A
NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
kube-system   aws-node-b7vsh                        2/2     Running   0          3m16s
kube-system   aws-node-fnrj8                        2/2     Running   0          3m13s
kube-system   aws-node-ltkxn                        2/2     Running   0          3m12s
kube-system   coredns-86f5954566-96j8r              1/1     Running   0          9m10s
kube-system   coredns-86f5954566-j6dvp              1/1     Running   0          9m10s
kube-system   ebs-csi-controller-549bf6879f-6h7jg   6/6     Running   0          49s
kube-system   ebs-csi-controller-549bf6879f-p2gml   6/6     Running   0          49s
kube-system   ebs-csi-node-bm94h                    3/3     Running   0          49s
kube-system   ebs-csi-node-h5ntq                    3/3     Running   0          49s
kube-system   ebs-csi-node-n7s2s                    3/3     Running   0          49s
kube-system   kube-proxy-brf4v                      1/1     Running   0          3m16s
kube-system   kube-proxy-x7sbw                      1/1     Running   0          3m13s
kube-system   kube-proxy-zm6ht                      1/1     Running   0          3m12s
kube-system   metrics-server-6bf5998d9c-bzxn6       1/1     Running   0          9m9s
kube-system   metrics-server-6bf5998d9c-zs5mg       1/1     Running   0          9m10s&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;그리고 이후 실습에 활용하기 위한 일부 컴포넌트들을 설치합니다.&lt;/p&gt;
&lt;pre class=&quot;dsconfig&quot;&gt;&lt;code&gt;# 환경 변수
CERT_ARN=$(aws acm list-certificates --query 'CertificateSummaryList[].CertificateArn[]' --output text)
MyDomain=aperson.link # 각자 자신의 도메인 이름 입력
MyDnzHostedZoneId=$(aws route53 list-hosted-zones-by-name --dns-name &quot;$MyDomain.&quot; --query &quot;HostedZones[0].Id&quot; --output text)

# AWS LoadBalancerController
helm repo add eks https://aws.github.io/eks-charts
helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=$CLUSTER_NAME \
  --set serviceAccount.create=false --set serviceAccount.name=aws-load-balancer-controller

# ExternalDNS
echo $MyDomain
curl -s https://raw.githubusercontent.com/gasida/PKOS/main/aews/externaldns.yaml | MyDomain=$MyDomain MyDnzHostedZoneId=$MyDnzHostedZoneId envsubst | kubectl apply -f -

# kube-ops-view
helm repo add geek-cookbook https://geek-cookbook.github.io/charts/
helm install kube-ops-view geek-cookbook/kube-ops-view --version 1.2.2 --set service.main.type=ClusterIP  --set env.TZ=&quot;Asia/Seoul&quot; --namespace kube-system

# kubeopsview용 Ingress 설정 : group 설정으로 1대의 ALB를 여러 개의 Ingress에서 공용 사용
echo $CERT_ARN
cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/certificate-arn: $CERT_ARN
    alb.ingress.kubernetes.io/group.name: study
    alb.ingress.kubernetes.io/listen-ports: '[{&quot;HTTPS&quot;:443}, {&quot;HTTP&quot;:80}]'
    alb.ingress.kubernetes.io/load-balancer-name: $CLUSTER_NAME-ingress-alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/ssl-redirect: &quot;443&quot;
    alb.ingress.kubernetes.io/success-codes: 200-399
    alb.ingress.kubernetes.io/target-type: ip
  labels:
    app.kubernetes.io/name: kubeopsview
  name: kubeopsview
  namespace: kube-system
spec:
  ingressClassName: alb
  rules:
  - host: kubeopsview.$MyDomain
    http:
      paths:
      - backend:
          service:
            name: kube-ops-view
            port:
              number: 8080  # name: http
        path: /
        pathType: Prefix
EOF&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Prometheus와 Grafana도 설치를 진행합니다.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# repo 추가
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

# 파라미터 파일 생성 : PV/PVC(AWS EBS)는 삭제가 불편하므로 PV/PVC를 사용하지 않도록 수정
cat &amp;lt;&amp;lt;EOT &amp;gt; monitor-values.yaml
prometheus:
  prometheusSpec:
    scrapeInterval: &quot;15s&quot;
    evaluationInterval: &quot;15s&quot;
    podMonitorSelectorNilUsesHelmValues: false
    serviceMonitorSelectorNilUsesHelmValues: false
    retention: 5d
    retentionSize: &quot;10GiB&quot;

  # Enable vertical pod autoscaler support for prometheus-operator
  verticalPodAutoscaler:
    enabled: true

  ingress:
    enabled: true
    ingressClassName: alb
    hosts: 
      - prometheus.$MyDomain
    paths: 
      - /*
    annotations:
      alb.ingress.kubernetes.io/scheme: internet-facing
      alb.ingress.kubernetes.io/target-type: ip
      alb.ingress.kubernetes.io/listen-ports: '[{&quot;HTTPS&quot;:443}, {&quot;HTTP&quot;:80}]'
      alb.ingress.kubernetes.io/certificate-arn: $CERT_ARN
      alb.ingress.kubernetes.io/success-codes: 200-399
      alb.ingress.kubernetes.io/load-balancer-name: myeks-ingress-alb
      alb.ingress.kubernetes.io/group.name: study
      alb.ingress.kubernetes.io/ssl-redirect: '443'

grafana:
  defaultDashboardsTimezone: Asia/Seoul
  adminPassword: xxx # Grafana 패스워드
  defaultDashboardsEnabled: false

  ingress:
    enabled: true
    ingressClassName: alb
    hosts: 
      - grafana.$MyDomain
    paths: 
      - /*
    annotations:
      alb.ingress.kubernetes.io/scheme: internet-facing
      alb.ingress.kubernetes.io/target-type: ip
      alb.ingress.kubernetes.io/listen-ports: '[{&quot;HTTPS&quot;:443}, {&quot;HTTP&quot;:80}]'
      alb.ingress.kubernetes.io/certificate-arn: $CERT_ARN
      alb.ingress.kubernetes.io/success-codes: 200-399
      alb.ingress.kubernetes.io/load-balancer-name: myeks-ingress-alb
      alb.ingress.kubernetes.io/group.name: study
      alb.ingress.kubernetes.io/ssl-redirect: '443'

kube-state-metrics:
  rbac:
    extraRules:
      - apiGroups: [&quot;autoscaling.k8s.io&quot;]
        resources: [&quot;verticalpodautoscalers&quot;]
        verbs: [&quot;list&quot;, &quot;watch&quot;]
  customResourceState:
    enabled: true
    config:
      kind: CustomResourceStateMetrics
      spec:
        resources:
          - groupVersionKind:
              group: autoscaling.k8s.io
              kind: &quot;VerticalPodAutoscaler&quot;
              version: &quot;v1&quot;
            labelsFromPath:
              verticalpodautoscaler: [metadata, name]
              namespace: [metadata, namespace]
              target_api_version: [apiVersion]
              target_kind: [spec, targetRef, kind]
              target_name: [spec, targetRef, name]
            metrics:
              - name: &quot;vpa_containerrecommendations_target&quot;
                help: &quot;VPA container recommendations for memory.&quot;
                each:
                  type: Gauge
                  gauge:
                    path: [status, recommendation, containerRecommendations]
                    valueFrom: [target, memory]
                    labelsFromPath:
                      container: [containerName]
                commonLabels:
                  resource: &quot;memory&quot;
                  unit: &quot;byte&quot;
              - name: &quot;vpa_containerrecommendations_target&quot;
                help: &quot;VPA container recommendations for cpu.&quot;
                each:
                  type: Gauge
                  gauge:
                    path: [status, recommendation, containerRecommendations]
                    valueFrom: [target, cpu]
                    labelsFromPath:
                      container: [containerName]
                commonLabels:
                  resource: &quot;cpu&quot;
                  unit: &quot;core&quot;
  selfMonitor:
    enabled: true

alertmanager:
  enabled: false
defaultRules:
  create: false
kubeControllerManager:
  enabled: false
kubeEtcd:
  enabled: false
kubeScheduler:
  enabled: false
prometheus-windows-exporter:
  prometheus:
    monitor:
      enabled: false
EOT
cat monitor-values.yaml

# helm 배포
helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack --version 69.3.1 \
-f monitor-values.yaml --create-namespace --namespace monitoring

# helm 확인
helm get values -n monitoring kube-prometheus-stack

# PV 사용하지 않음
kubectl get pv,pvc -A

# 프로메테우스 웹 접속
echo -e &quot;https://prometheus.$MyDomain&quot;

# 그라파나 웹 접속
echo -e &quot;https://grafana.$MyDomain&quot;

# TargetGroup binding 확인
kubectl get targetgroupbindings.elbv2.k8s.aws -A
NAMESPACE     NAME                               SERVICE-NAME                       SERVICE-PORT   TARGET-TYPE   AGE
kube-system   k8s-kubesyst-kubeopsv-b2ecfd420f   kube-ops-view                      8080           ip            2m54s
monitoring    k8s-monitori-kubeprom-40399c957e   kube-prometheus-stack-grafana      80             ip            45s
monitoring    k8s-monitori-kubeprom-826f25cbb8   kube-prometheus-stack-prometheus   9090           ip            45s&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;4. HPA(Horizontal Pod Autoscaler)&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;HPA는 지정된 워크로드의 특정 메트릭이 임계치를 초과하면 복제본(Replicas)을 증가시키고, 임계치 아래로 내려가면 다시 감소시킵니다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1324&quot; data-origin-height=&quot;716&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/AmCSA/btsMB9cDPNH/0PmQOh0CDGrF27bK9SiYN1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/AmCSA/btsMB9cDPNH/0PmQOh0CDGrF27bK9SiYN1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/AmCSA/btsMB9cDPNH/0PmQOh0CDGrF27bK9SiYN1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FAmCSA%2FbtsMB9cDPNH%2F0PmQOh0CDGrF27bK9SiYN1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1324&quot; height=&quot;716&quot; data-origin-width=&quot;1324&quot; data-origin-height=&quot;716&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;출처: &lt;a href=&quot;https://www.youtube.com/watch?v=jLuVZX6WQsw&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://www.youtube.com/watch?v=jLuVZX6WQsw&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;HPA는 쿠버네티스에서 기본으로 제공되기 때문에 어떤 환경의 쿠버네티스에서도 사용할 수 있습니다. 다만 HPA는 metrics-server가 제공하는 core system metrics를 기반으로 스케일링을 판단하므로, metrics-server가 정상적이지 않으면 HPA도 동작하지 않는다는 점은 기억해야 합니다.&lt;/p&gt;
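&lt;p data-ke-size=&quot;size16&quot;&gt;HPA가 복제본 수를 결정하는 계산식은 desiredReplicas = ceil(currentReplicas × 현재 메트릭 / 목표 메트릭)입니다. 아래는 이 계산식을 셸 정수 산술로 흉내 낸 최소 스케치이며, 함수 이름 hpa_desired는 설명을 위해 임의로 붙인 것입니다.&lt;/p&gt;

```shell
# desiredReplicas = ceil(currentReplicas * currentMetricValue / desiredMetricValue)
# 올림(ceil)은 정수 나눗셈 (a + b - 1) / b 로 구현합니다.
hpa_desired() {
  local current_replicas=$1 current_metric=$2 target_metric=$3
  echo $(( (current_replicas * current_metric + target_metric - 1) / target_metric ))
}

hpa_desired 3 200 100   # ceil(3 * 200 / 100) = 6
hpa_desired 4 40 50     # ceil(4 * 40 / 50) = 4
```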
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;HPA는 EKS에 특화된 기술이 아니기 때문에 EKS 환경에서의 별도 실습은 진행하지 않으며, 다음 절에서 KEDA를 통해 HPA를 사용하는 사례로 살펴보겠습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;HPA 실습&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;HPA에 대한 실습은 쿠버네티스 공식 문서를 참고하실 수 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;1) 샘플 애플리케이션&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-apache
spec:
  selector:
    matchLabels:
      run: php-apache
  template:
    metadata:
      labels:
        run: php-apache
    spec:
      containers:
      - name: php-apache
        image: registry.k8s.io/hpa-example
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: 500m
          requests:
            cpu: 200m
---
apiVersion: v1
kind: Service
metadata:
  name: php-apache
  labels:
    run: php-apache
spec:
  ports:
  - port: 80
  selector:
    run: php-apache&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;2) HPA 정의&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10&lt;/code&gt;&lt;/pre&gt;
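&lt;p data-ke-size=&quot;size16&quot;&gt;위 명령형 방식은 아래와 같은 autoscaling/v2 HPA 매니페스트와 동등한 선언적 형태로도 작성할 수 있습니다. 선언적 방식은 Git 등으로 버전 관리가 가능하다는 장점이 있습니다.&lt;/p&gt;

```yaml
# kubectl autoscale 명령과 동등한 HPA 매니페스트
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
```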
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;3) Load 발생&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# Run this in a separate terminal
# so that the load generation continues and you can carry on with the rest of the steps
kubectl run -i --tty load-generator --rm --image=busybox:1.28 --restart=Never -- /bin/sh -c &quot;while sleep 0.01; do wget -q -O- http://php-apache; done&quot;&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이후 &lt;code&gt;kubectl get hpa php-apache --watch&lt;/code&gt;를 통해 메트릭의 증가와 복제본의 증가를 확인하실 수 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;HPA의 메트릭 확장&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;HPA는 Metrics API를 통해 값을 수집하는데, 이 API는 쿠버네티스에 기본 내장된 것이 아니라 이를 노출해 주는 metrics-server가 필요합니다. 또한 HPA는 미리 정의된 Resource Metric(CPU, Memory)을 기준으로 스케일링을 판단하며, 이 메트릭은 metrics-server가 기본적으로 제공합니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이러한 Metrics API를 확장하기 위해 Custom Metric, External Metric을 사용할 수 있습니다. 보통 Prometheus로 추가 메트릭을 수집하고, Prometheus Adapter가 Custom Metric API Server 역할을 하여 Prometheus에 수집된 메트릭을 기반으로 Custom Metric과 External Metric을 노출합니다.&lt;/p&gt;
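&lt;p data-ke-size=&quot;size16&quot;&gt;참고로 Prometheus Adapter는 아래와 같은 rule 설정으로 Prometheus 쿼리를 Custom Metric으로 변환해 노출합니다. http_requests_total은 설명을 위한 가상의 메트릭 이름이며, 실제 values 키 구조는 사용하는 차트 버전에 따라 다를 수 있다는 가정하의 스케치입니다.&lt;/p&gt;

```yaml
# prometheus-adapter 헬름 차트 values의 rules 설정 스케치
rules:
  custom:
  - seriesQuery: http_requests_total
    resources:
      overrides:
        namespace: {resource: namespace}
        pod: {resource: pod}
    name:
      matches: ^(.*)_total$
      as: ${1}_per_second
    metricsQuery: sum(rate(&lt;&lt;.Series&gt;&gt;{&lt;&lt;.LabelMatchers&gt;&gt;}[2m])) by (&lt;&lt;.GroupBy&gt;&gt;)
```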
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1070&quot; data-origin-height=&quot;440&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bSkzQM/btsMDPRkBFu/d1qjWeNtJgUasZo7d1P0H1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bSkzQM/btsMDPRkBFu/d1qjWeNtJgUasZo7d1P0H1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bSkzQM/btsMDPRkBFu/d1qjWeNtJgUasZo7d1P0H1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbSkzQM%2FbtsMDPRkBFu%2Fd1qjWeNtJgUasZo7d1P0H1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1070&quot; height=&quot;440&quot; data-origin-width=&quot;1070&quot; data-origin-height=&quot;440&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;출처: &lt;a href=&quot;https://itnext.io/autoscaling-apps-on-kubernetes-with-the-horizontal-pod-autoscaler-798750ab7847&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://itnext.io/autoscaling-apps-on-kubernetes-with-the-horizontal-pod-autoscaler-798750ab7847&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이후 살펴볼 KEDA 또한 자체 Metrics API Server를 가지고 External Metrics를 노출하게 됩니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;5. KEDA(Kubernetes Event-driven Autoscaler)&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;HPA는 metrics-server에 의해 수집된 CPU, Memory와 같은 메트릭을 기반으로 스케일링을 결정합니다. 이러한 리소스 메트릭이 아닌 다른 메트릭을 참조하여 HPA가 동작하도록 도와주는 컴포넌트가 KEDA(Kubernetes Event-driven Autoscaler)입니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;KEDA는 다양한 Event Source로부터 발생하는 이벤트를 기반으로 스케일링 여부를 결정할 수 있습니다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1403&quot; data-origin-height=&quot;694&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/nNnr0/btsMClRE8W6/hCSjpzH2ng01dufKGJuRlk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/nNnr0/btsMClRE8W6/hCSjpzH2ng01dufKGJuRlk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/nNnr0/btsMClRE8W6/hCSjpzH2ng01dufKGJuRlk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FnNnr0%2FbtsMClRE8W6%2FhCSjpzH2ng01dufKGJuRlk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1403&quot; height=&quot;694&quot; data-origin-width=&quot;1403&quot; data-origin-height=&quot;694&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;출처: &lt;a href=&quot;https://www.youtube.com/watch?v=jLuVZX6WQsw&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://www.youtube.com/watch?v=jLuVZX6WQsw&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;EKS에서는 helm을 통해서 KEDA를 설치할 수 있습니다. (해당 실습에서 prometheus를 통해 모니터링을 하는 부분이 포함되어 있어, 사전 Prometheus가 설치되어 있어야 합니다)&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# 설치 전 기존 metrics-server가 제공하는 Metrics API 확인
kubectl get --raw &quot;/apis/metrics.k8s.io&quot; | jq
{
  &quot;kind&quot;: &quot;APIGroup&quot;,
  &quot;apiVersion&quot;: &quot;v1&quot;,
  &quot;name&quot;: &quot;metrics.k8s.io&quot;,
  &quot;versions&quot;: [
    {
      &quot;groupVersion&quot;: &quot;metrics.k8s.io/v1beta1&quot;,
      &quot;version&quot;: &quot;v1beta1&quot;
    }
  ],
  &quot;preferredVersion&quot;: {
    &quot;groupVersion&quot;: &quot;metrics.k8s.io/v1beta1&quot;,
    &quot;version&quot;: &quot;v1beta1&quot;
  }
}

# external metrics는 없음
kubectl get --raw &quot;/apis/external.metrics.k8s.io/v1beta1&quot; | jq
Error from server (NotFound): the server could not find the requested resource


# KEDA 설치
cat &amp;lt;&amp;lt;EOT &amp;gt; keda-values.yaml
metricsServer:
  useHostNetwork: true

prometheus:
  metricServer:
    enabled: true
    port: 9022
    portName: metrics
    path: /metrics
    serviceMonitor:
      # Enables ServiceMonitor creation for the Prometheus Operator
      enabled: true
    podMonitor:
      # Enables PodMonitor creation for the Prometheus Operator
      enabled: true
  operator:
    enabled: true
    port: 8080
    serviceMonitor:
      # Enables ServiceMonitor creation for the Prometheus Operator
      enabled: true
    podMonitor:
      # Enables PodMonitor creation for the Prometheus Operator
      enabled: true
  webhooks:
    enabled: true
    port: 8020
    serviceMonitor:
      # Enables ServiceMonitor creation for the Prometheus webhooks
      enabled: true
EOT

helm repo add kedacore https://kedacore.github.io/charts
helm repo update
helm install keda kedacore/keda --version 2.16.0 --namespace keda --create-namespace -f keda-values.yaml

# apiservice가 생성된 것을 알 수 있습니다.
kubectl get apiservice v1beta1.external.metrics.k8s.io -o yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  annotations:
    meta.helm.sh/release-name: keda
    meta.helm.sh/release-namespace: keda
  creationTimestamp: &quot;2025-03-06T13:23:54Z&quot;
  labels:
    app.kubernetes.io/component: operator
    app.kubernetes.io/instance: keda
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: v1beta1.external.metrics.k8s.io
    app.kubernetes.io/part-of: keda-operator
    app.kubernetes.io/version: 2.16.0
    helm.sh/chart: keda-2.16.0
  name: v1beta1.external.metrics.k8s.io
  resourceVersion: &quot;7353&quot;
  uid: 26d3a3b1-7487-4086-84f9-1fd3105aa89d
spec:
  caBundle: &amp;lt;생략&amp;gt;
  group: external.metrics.k8s.io
  groupPriorityMinimum: 100
  service:
    name: keda-operator-metrics-apiserver
    namespace: keda
    port: 443
  version: v1beta1
  versionPriority: 100
status:
  conditions:
  - lastTransitionTime: &quot;2025-03-06T13:24:25Z&quot;
    message: all checks passed
    reason: Passed
    status: &quot;True&quot;
    type: Available

# 설치 후 KEDA Metrics Server에 의해 노출된 External Metrics를 확인합니다.
kubectl get --raw &quot;/apis/external.metrics.k8s.io/v1beta1&quot; | jq
{
  &quot;kind&quot;: &quot;APIResourceList&quot;,
  &quot;apiVersion&quot;: &quot;v1&quot;,
  &quot;groupVersion&quot;: &quot;external.metrics.k8s.io/v1beta1&quot;,
  &quot;resources&quot;: [
    {
      &quot;name&quot;: &quot;externalmetrics&quot;,
      &quot;singularName&quot;: &quot;&quot;,
      &quot;namespaced&quot;: true,
      &quot;kind&quot;: &quot;ExternalMetricValueList&quot;,
      &quot;verbs&quot;: [
        &quot;get&quot;
      ]
    }
  ]
}&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;KEDA를 설치한 이후 생성된 파드를 살펴보면 KEDA의 구성 요소를 알 수 있는데, 각각 Agent, Metrics, Admission Webhooks 역할을 담당합니다.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;$ kubectl get pod -n keda
NAME                                                   READY   STATUS    RESTARTS     AGE
pod/keda-admission-webhooks-86cffccbf5-nq7kw           1/1     Running   0            4m11s
pod/keda-operator-6bdffdc78-zrhmg                      1/1     Running   1 (4m ago)   4m11s
pod/keda-operator-metrics-apiserver-74d844d769-2rbqk   1/1     Running   0            4m11s&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;공식 문서를 보면 각 컴포넌트에 해당하는 역할을 확인하실 수 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://keda.sh/docs/2.10/concepts/#how-keda-works&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://keda.sh/docs/2.10/concepts/#how-keda-works&lt;/a&gt;&lt;/p&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;&lt;b&gt;Agent&lt;/b&gt; &amp;mdash; KEDA activates and deactivates Kubernetes &lt;a href=&quot;https://kubernetes.io/docs/concepts/workloads/controllers/deployment&quot;&gt;Deployments&lt;/a&gt; to scale to and from zero on no events. This is one of the primary roles of the &lt;code&gt;keda-operator&lt;/code&gt; container that runs when you install KEDA.&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Metrics&lt;/b&gt; &amp;mdash; KEDA acts as a &lt;a href=&quot;https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-custom-metrics&quot;&gt;Kubernetes metrics server&lt;/a&gt; that exposes rich event data like queue length or stream lag to the Horizontal Pod Autoscaler to drive scale out. It is up to the Deployment to consume the events directly from the source. This preserves rich event integration and enables gestures like completing or abandoning queue messages to work out of the box. The metric serving is the primary role of the &lt;code&gt;keda-operator-metrics-apiserver&lt;/code&gt; container that runs when you install KEDA.&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Admission Webhooks&lt;/b&gt; - Automatically validate resource changes to prevent misconfiguration and enforce best practices by using an &lt;a href=&quot;https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/&quot;&gt;admission controller&lt;/a&gt;. As an example, it will prevent multiple ScaledObjects to target the same scale target.&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;그리고 생성된 CRD를 확인해보면 아래와 같은 CRD가 생성된 것을 알 수 있습니다.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;$ kubectl get crd | grep keda
cloudeventsources.eventing.keda.sh           2025-03-06T13:23:51Z
clustercloudeventsources.eventing.keda.sh    2025-03-06T13:23:51Z
clustertriggerauthentications.keda.sh        2025-03-06T13:23:51Z
scaledjobs.keda.sh                           2025-03-06T13:23:53Z
scaledobjects.keda.sh                        2025-03-06T13:23:51Z
triggerauthentications.keda.sh               2025-03-06T13:23:51Z&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이 중 &lt;code&gt;ScaledObject&lt;/code&gt;가 중요한 역할을 합니다. 이는 Event Source(예를 들어, RabbitMQ)와 쿠버네티스 리소스(예를 들어, Deployment) 간의 의도하는 맵핑(desired mapping)을 나타냅니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;아래 실습에 사용되는 명세를 살펴보면, cron을 바탕으로 트리거&lt;code&gt;triggers&lt;/code&gt;되며, php-apache라는 Deployment를 대상 지정&lt;code&gt;scaleTargetRef&lt;/code&gt;하고, &lt;code&gt;spec&lt;/code&gt;에 HPA 오브젝트에 필요한 값이나 스케일링 속성을 지정합니다.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: php-apache-cron-scaled
spec:
  minReplicaCount: 0
  maxReplicaCount: 2
  pollingInterval: 30
  cooldownPeriod: 300
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  triggers:
  - type: cron
    metadata:
      timezone: Asia/Seoul
      start: 00,15,30,45 * * * *
      end: 05,20,35,50 * * * *
      desiredReplicas: &quot;1&quot;&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;참고로 ScaledObject를 생성하면 HPA도 함께 생성됩니다. 아래에서 다시 살펴보겠습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이제 샘플 애플리케이션과 ScaledObject를 생성합니다.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# keda 네임스페이스에 디플로이먼트 생성
cat &amp;lt;&amp;lt; EOF &amp;gt; php-apache.yaml
apiVersion: apps/v1
kind: Deployment
metadata: 
  name: php-apache
spec: 
  selector: 
    matchLabels: 
      run: php-apache
  template: 
    metadata: 
      labels: 
        run: php-apache
    spec: 
      containers: 
      - name: php-apache
        image: registry.k8s.io/hpa-example
        ports: 
        - containerPort: 80
        resources: 
          limits: 
            cpu: 500m
          requests: 
            cpu: 200m
---
apiVersion: v1
kind: Service
metadata: 
  name: php-apache
  labels: 
    run: php-apache
spec: 
  ports: 
  - port: 80
  selector: 
    run: php-apache
EOF

kubectl apply -f php-apache.yaml -n keda
kubectl get pod -n keda
...
php-apache-d87b7ff46-bbp8c                         0/1     ContainerCreating   0               3s

# ScaledObject 정책 생성 : cron
cat &amp;lt;&amp;lt;EOT &amp;gt; keda-cron.yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: php-apache-cron-scaled
spec:
  minReplicaCount: 0
  maxReplicaCount: 2  # Specifies the maximum number of replicas to scale up to (defaults to 100).
  pollingInterval: 30  # Specifies how often KEDA should check for scaling events
  cooldownPeriod: 300  # Specifies the cool-down period in seconds after a scaling event
  scaleTargetRef:  # Identifies the Kubernetes deployment or other resource that should be scaled.
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  triggers:  # Defines the specific configuration for your chosen scaler, including any required parameters or settings
  - type: cron
    metadata:
      timezone: Asia/Seoul
      start: 00,15,30,45 * * * *
      end: 05,20,35,50 * * * *
      desiredReplicas: &quot;1&quot;
EOT
kubectl apply -f keda-cron.yaml -n keda

# 모니터링
kubectl get ScaledObject,hpa,pod -n keda
kubectl get ScaledObject -n keda -w

# HPA 확인 -&amp;gt; external 유형으로 생성되어 있음
kubectl get hpa -o jsonpath=&quot;{.items[0].spec}&quot; -n keda | jq
{
  &quot;maxReplicas&quot;: 2,
  &quot;metrics&quot;: [
    {
      &quot;external&quot;: {
        &quot;metric&quot;: {
          &quot;name&quot;: &quot;s0-cron-Asia-Seoul-00,15,30,45xxxx-05,20,35,50xxxx&quot;,
          &quot;selector&quot;: {
            &quot;matchLabels&quot;: {
              &quot;scaledobject.keda.sh/name&quot;: &quot;php-apache-cron-scaled&quot;
            }
          }
        },
        &quot;target&quot;: {
          &quot;averageValue&quot;: &quot;1&quot;,
          &quot;type&quot;: &quot;AverageValue&quot;
        }
      },
      &quot;type&quot;: &quot;External&quot;
    }
  ],
  &quot;minReplicas&quot;: 1,
  &quot;scaleTargetRef&quot;: {
    &quot;apiVersion&quot;: &quot;apps/v1&quot;,
    &quot;kind&quot;: &quot;Deployment&quot;,
    &quot;name&quot;: &quot;php-apache&quot;
  }
}&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이 예시는 cron을 통해 00, 15, 30, 45분에 &lt;code&gt;desiredReplicas&lt;/code&gt;에 지정한 1개로 파드가 증가하고, 이후 05, 20, 35, 50분에 &lt;code&gt;minReplicaCount&lt;/code&gt;에 지정된 0개로 파드가 줄어듭니다. (이 예제에서 &lt;code&gt;maxReplicaCount&lt;/code&gt;는 큰 의미가 없지만, 함께 생성되는 HPA 오브젝트에서 사용되기 때문에 지정이 필요합니다)&lt;/p&gt;
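&lt;p data-ke-size=&quot;size16&quot;&gt;이 cron 스케줄은 15분 주기로 반복되므로, 파드가 떠 있는 활성 구간(start~end)은 &quot;분 mod 15가 5 미만&quot;으로 단순화할 수 있습니다. 아래는 이를 확인하는 셸 스케치이며, 함수 이름 cron_active는 설명을 위해 임의로 붙인 것입니다.&lt;/p&gt;

```shell
# start(00,15,30,45)와 end(05,20,35,50) 사이의 활성 구간인지 판정하는 스케치
# 스케줄이 15분 주기이므로 '분 mod 15'가 5 미만이면 활성 구간입니다.
cron_active() {
  local minute=$1
  if [ $(( minute % 15 )) -lt 5 ]; then
    echo active
  else
    echo inactive
  fi
}

cron_active 46   # 45분 start 직후 → active
cron_active 52   # 50분 end 이후 → inactive
```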
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;결과를 아래와 같이 확인할 수 있습니다. 다만 테스트해 보면 cron에 정의한 45분(start), 50분(end) 정각에 정확히 트리거되지는 않는 것으로 확인되므로, 스케줄 정확도에 대한 기대치는 다소 낮출 필요가 있습니다.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# 이전 시점
Thu Mar  6 22:44:53 KST 2025
NAME                                          SCALETARGETKIND      SCALETARGETNAME   MIN   MAX   READY   ACTIVE   FALLBACK   PAUSED    TRIGGERS   AUTHENTICATIONS   AGE
scaledobject.keda.sh/php-apache-cron-scaled   apps/v1.Deployment   php-apache        0     2     True    False    False      Unknown                                12m

NAME                                                                  REFERENCE               TARGETS             MINPODS   MAXPODS   REPLICAS   AGE
horizontalpodautoscaler.autoscaling/keda-hpa-php-apache-cron-scaled   Deployment/php-apache   &amp;lt;unknown&amp;gt;/1 (avg)   1         2         0          12m

NAME                                                   READY   STATUS    RESTARTS      AGE
pod/keda-admission-webhooks-86cffccbf5-nq7kw           1/1     Running   0             21m
pod/keda-operator-6bdffdc78-zrhmg                      1/1     Running   1 (20m ago)   21m
pod/keda-operator-metrics-apiserver-74d844d769-2rbqk   1/1     Running   0             21m

# 45분 이후
Thu Mar  6 22:45:29 KST 2025
NAME                                          SCALETARGETKIND      SCALETARGETNAME   MIN   MAX   READY   ACTIVE   FALLBACK   PAUSED    TRIGGERS   AUTHENTICATIONS   AGE
scaledobject.keda.sh/php-apache-cron-scaled   apps/v1.Deployment   php-apache        0     2     True    True     False      Unknown                                12m

NAME                                                                  REFERENCE               TARGETS             MINPODS   MAXPODS   REPLICAS   AGE
horizontalpodautoscaler.autoscaling/keda-hpa-php-apache-cron-scaled   Deployment/php-apache   &amp;lt;unknown&amp;gt;/1 (avg)   1         2         0          12m

NAME                                                   READY   STATUS    RESTARTS      AGE
pod/keda-admission-webhooks-86cffccbf5-nq7kw           1/1     Running   0             21m
pod/keda-operator-6bdffdc78-zrhmg                      1/1     Running   1 (21m ago)   21m
pod/keda-operator-metrics-apiserver-74d844d769-2rbqk   1/1     Running   0             21m
pod/php-apache-d87b7ff46-gblgb                         1/1     Running   0             9s


# ScaledObject 의 ACTIVE 상태가 45분 시점 True로 변경됨
kubectl get ScaledObject -n keda -w
NAME                     SCALETARGETKIND      SCALETARGETNAME   MIN   MAX   READY   ACTIVE   FALLBACK   PAUSED    TRIGGERS   AUTHENTICATIONS   AGE
php-apache-cron-scaled   apps/v1.Deployment   php-apache        0     2     True    False    False      Unknown                                2m30s
php-apache-cron-scaled   apps/v1.Deployment   php-apache        0     2     True    False    False      Unknown                                7m
php-apache-cron-scaled   apps/v1.Deployment   php-apache        0     2     True    False    False      Unknown                                12m
php-apache-cron-scaled   apps/v1.Deployment   php-apache        0     2     True    True     False      Unknown                                12m
php-apache-cron-scaled   apps/v1.Deployment   php-apache        0     2     True    True     False      Unknown                                13m
...

# 40분 쯤에 ScaleTarget을 minReplicaCount로 변경 함
kubectl logs -f -n keda keda-operator-6bdffdc78-zrhmg
...
2025-03-06T13:39:52Z    INFO    scaleexecutor   Successfully set ScaleTarget replicas count to ScaledObject minReplicaCount     {&quot;scaledobject.Name&quot;: &quot;php-apache-cron-scaled&quot;, &quot;scaledObject.Namespace&quot;: &quot;keda&quot;, &quot;scaleTarget.Name&quot;: &quot;php-apache&quot;, &quot;Original Replicas Count&quot;: 1, &quot;New Replicas Count&quot;: 0}

# 45분 쯤에 keda-operator에서 ScaleTarget을 업데이트 함
kubectl logs -f -n keda keda-operator-6bdffdc78-zrhmg
...
2025-03-06T13:45:22Z    INFO    scaleexecutor   Successfully updated ScaleTarget        {&quot;scaledobject.Name&quot;: &quot;php-apache-cron-scaled&quot;, &quot;scaledObject.Namespace&quot;: &quot;keda&quot;, &quot;scaleTarget.Name&quot;: &quot;php-apache&quot;, &quot;Original Replicas Count&quot;: 0, &quot;New Replicas Count&quot;: 1}

# 54분에 ScaleTarget을 minReplicaCount로 변경 함
2025-03-06T13:54:52Z    INFO    scaleexecutor   Successfully set ScaleTarget replicas count to ScaledObject minReplicaCount     {&quot;scaledobject.Name&quot;: &quot;php-apache-cron-scaled&quot;, &quot;scaledObject.Namespace&quot;: &quot;keda&quot;, &quot;scaleTarget.Name&quot;: &quot;php-apache&quot;, &quot;Original Replicas Count&quot;: 1, &quot;New Replicas Count&quot;: 0}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
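&lt;p data-ke-size=&quot;size16&quot;&gt;For reference, the scale-out/scale-in behavior in the logs above would come from a cron-trigger ScaledObject along the following lines. This is only a sketch: the schedule, timezone, and replica values are illustrative, not necessarily the exact ones used in this run.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: php-apache-cron-scaled
  namespace: keda
spec:
  scaleTargetRef:
    name: php-apache       # the Deployment to scale
  minReplicaCount: 0       # scale to zero outside the cron window
  triggers:
  - type: cron
    metadata:
      timezone: Asia/Seoul    # illustrative
      start: 45 * * * *       # scale out at :45 every hour (illustrative)
      end: 54 * * * *         # scale back in around :54 (illustrative)
      desiredReplicas: &quot;1&quot;&lt;/code&gt;&lt;/pre&gt;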
&lt;p data-ke-size=&quot;size16&quot;&gt;For reference, you can build a Grafana dashboard from the JSON below to monitor KEDA.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://github.com/kedacore/keda/blob/main/config/grafana/keda-dashboard.json&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://github.com/kedacore/keda/blob/main/config/grafana/keda-dashboard.json&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1684&quot; data-origin-height=&quot;627&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/doOPdw/btsMCBGwfGn/OeeANkrRA7p8DfKpkmyah0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/doOPdw/btsMCBGwfGn/OeeANkrRA7p8DfKpkmyah0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/doOPdw/btsMCBGwfGn/OeeANkrRA7p8DfKpkmyah0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FdoOPdw%2FbtsMCBGwfGn%2FOeeANkrRA7p8DfKpkmyah0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1684&quot; height=&quot;627&quot; data-origin-width=&quot;1684&quot; data-origin-height=&quot;627&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To wrap up the exercise, delete the resources that were created.&lt;/p&gt;
&lt;pre class=&quot;sas&quot;&gt;&lt;code&gt;# Delete KEDA, the deployment, and related resources
kubectl delete ScaledObject -n keda php-apache-cron-scaled &amp;amp;&amp;amp; kubectl delete deploy php-apache -n keda &amp;amp;&amp;amp; helm uninstall keda -n keda
kubectl delete namespace keda&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;6. VPA(Vertical Pod Autoscaler)&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;VPA is an autoscaler that changes the resource requests and limits in a target's container spec, based on the target's historical resource usage.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The figure below illustrates how VPA works. Note that it assumes Update Mode: Auto, so it includes the step where pods are recreated.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1413&quot; data-origin-height=&quot;723&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/SGiky/btsMDizGzbo/JKMOKlNwgVQAAKQW6kXOT0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/SGiky/btsMDizGzbo/JKMOKlNwgVQAAKQW6kXOT0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/SGiky/btsMDizGzbo/JKMOKlNwgVQAAKQW6kXOT0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FSGiky%2FbtsMDizGzbo%2FJKMOKlNwgVQAAKQW6kXOT0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1413&quot; height=&quot;723&quot; data-origin-width=&quot;1413&quot; data-origin-height=&quot;723&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://www.youtube.com/watch?v=jLuVZX6WQsw&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://www.youtube.com/watch?v=jLuVZX6WQsw&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As shown in the figure, VPA has the following main components.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/docs/components.md&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/docs/components.md&lt;/a&gt;&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/docs/components.md#recommender&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;Recommender&lt;/a&gt; - monitors the current and past resource consumption and, based on it, provides recommended values for the containers' cpu and memory requests.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/docs/components.md#updater&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;Updater&lt;/a&gt; - checks which of the managed pods have correct resources set and, if not, kills them so that they can be recreated by their controllers with the updated requests.&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/docs/components.md#admission-controller&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;Admission Controller&lt;/a&gt; - sets the correct resource requests on new pods (either just created or recreated by their controller due to Updater's activity).&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;VPA computes its recommendations by deriving a base value from historical usage and adding a margin on top. See the link below for details.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://devocean.sk.com/blog/techBoardDetail.do?ID=164786&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://devocean.sk.com/blog/techBoardDetail.do?ID=164786&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;VPA also has an updateMode setting with four values: Auto (the default), Recreate, Initial, and Off.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As I understand it, Auto and Recreate currently behave the same: they set resources on newly created pods and, when needed, evict running pods so they restart with the recommended values. Once in-place resource resize (alpha in Kubernetes 1.27) becomes available, Auto may update a running pod's resources without a restart.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Initial applies the recommendation only when a pod is created, while Off only exposes the recommendation through the VPA object without applying it.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;See the document below for a detailed explanation.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://learn.microsoft.com/en-us/azure/aks/vertical-pod-autoscaler#vpa-object-operation-modes&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://learn.microsoft.com/en-us/azure/aks/vertical-pod-autoscaler#vpa-object-operation-modes&lt;/a&gt;&lt;/p&gt;
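&lt;p data-ke-size=&quot;size16&quot;&gt;As a minimal sketch, a VPA object with an explicit updateMode looks like the following. The workload names here are illustrative, not from this exercise.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa          # illustrative name
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app            # illustrative target workload
  updatePolicy:
    updateMode: &quot;Off&quot;       # Auto | Recreate | Initial | Off (quote Off so YAML keeps it a string)&lt;/code&gt;&lt;/pre&gt;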
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now let's install VPA.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/docs/installation.md#install-command&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/docs/installation.md#install-command&lt;/a&gt;&lt;/p&gt;
&lt;pre class=&quot;sql&quot;&gt;&lt;code&gt;# Download the VPA source code
git clone https://github.com/kubernetes/autoscaler.git


# If openssl is older than 1.1.1, it must be upgraded to 1.1.1 or later
openssl version
OpenSSL 1.0.2k-fips  26 Jan 2017

# (If needed) remove openssl 1.0
yum remove openssl -y

# (If needed) install openssl 1.1.1 or later
yum install openssl11 -y

# Replace openssl with openssl11 in the script (the change must be committed to take effect)
cd autoscaler/vertical-pod-autoscaler/
sed -i 's/openssl/openssl11/g' pkg/admission-controller/gencerts.sh
git status
git config --global user.email &quot;you@example.com&quot;
git config --global user.name &quot;Your Name&quot;
git add .
git commit -m &quot;openssl version modify&quot;

# Deploy the Vertical Pod Autoscaler to your cluster with the following command.
watch -d kubectl get pod -n kube-system
./hack/vpa-up.sh

# (If needed) rerun if an openssl-related error occurs
sed -i 's/openssl/openssl11/g' pkg/admission-controller/gencerts.sh
./hack/vpa-up.sh

# Verify after installation
kubectl get pod -n kube-system |grep vpa
vpa-admission-controller-659c978dcd-zwn24     1/1     Running   0          106s
vpa-recommender-9bb6d98b7-gjqc7               1/1     Running   0          112s
vpa-updater-68db47986b-jqnnh                  1/1     Running   0          116s

# A mutating webhook adjusts pod settings at creation time
kubectl get mutatingwebhookconfigurations vpa-webhook-config

NAME                 WEBHOOKS   AGE
vpa-webhook-config   1          101s

kubectl get mutatingwebhookconfigurations vpa-webhook-config -o json | jq
{
  &quot;apiVersion&quot;: &quot;admissionregistration.k8s.io/v1&quot;,
  &quot;kind&quot;: &quot;MutatingWebhookConfiguration&quot;,
  &quot;metadata&quot;: {
    &quot;creationTimestamp&quot;: &quot;2025-03-06T14:14:31Z&quot;,
    &quot;generation&quot;: 1,
    &quot;name&quot;: &quot;vpa-webhook-config&quot;,
    &quot;resourceVersion&quot;: &quot;22754&quot;,
    &quot;uid&quot;: &quot;03b88fcf-c1ff-4079-b33c-38c998829d50&quot;
  },
  &quot;webhooks&quot;: [
    {
      &quot;admissionReviewVersions&quot;: [
        &quot;v1&quot;
      ],
      &quot;clientConfig&quot;: {
        &quot;caBundle&quot;: &quot;&amp;lt;생략&amp;gt;&quot;,
        &quot;service&quot;: {
          &quot;name&quot;: &quot;vpa-webhook&quot;,
          &quot;namespace&quot;: &quot;kube-system&quot;,
          &quot;port&quot;: 443
        }
      },
      &quot;failurePolicy&quot;: &quot;Ignore&quot;,
      &quot;matchPolicy&quot;: &quot;Equivalent&quot;,
      &quot;name&quot;: &quot;vpa.k8s.io&quot;,
      &quot;namespaceSelector&quot;: {
        &quot;matchExpressions&quot;: [
          {
            &quot;key&quot;: &quot;kubernetes.io/metadata.name&quot;,
            &quot;operator&quot;: &quot;NotIn&quot;,
            &quot;values&quot;: [
              &quot;&quot;
            ]
          }
        ]
      },
      &quot;objectSelector&quot;: {},
      &quot;reinvocationPolicy&quot;: &quot;Never&quot;,
      &quot;rules&quot;: [
        {
          &quot;apiGroups&quot;: [
            &quot;&quot;
          ],
          &quot;apiVersions&quot;: [
            &quot;v1&quot;
          ],
          &quot;operations&quot;: [
            &quot;CREATE&quot;
          ],
          &quot;resources&quot;: [
            &quot;pods&quot;
          ],
          &quot;scope&quot;: &quot;*&quot;
        },
        {
          &quot;apiGroups&quot;: [
            &quot;autoscaling.k8s.io&quot;
          ],
          &quot;apiVersions&quot;: [
            &quot;*&quot;
          ],
          &quot;operations&quot;: [
            &quot;CREATE&quot;,
            &quot;UPDATE&quot;
          ],
          &quot;resources&quot;: [
            &quot;verticalpodautoscalers&quot;
          ],
          &quot;scope&quot;: &quot;*&quot;
        }
      ],
      &quot;sideEffects&quot;: &quot;None&quot;,
      &quot;timeoutSeconds&quot;: 30
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Once the installation is complete, you can see the VPA pods and CRDs.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;kubectl get crd | grep autoscaling
verticalpodautoscalercheckpoints.autoscaling.k8s.io   2025-03-06T14:13:55Z
verticalpodautoscalers.autoscaling.k8s.io             2025-03-06T14:13:55Z&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's deploy the sample below to test VPA.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# Deploy the official example (run from the autoscaler/vertical-pod-autoscaler directory)
kubectl apply -f examples/hamster.yaml &amp;amp;&amp;amp; kubectl get vpa -w
verticalpodautoscaler.autoscaling.k8s.io/hamster-vpa created
deployment.apps/hamster created
NAME          MODE   CPU   MEM   PROVIDED   AGE
hamster-vpa   Auto                          3s
hamster-vpa   Auto   511m   262144k   True       44s # VPA's recommended values

# Check the pods' resource requests
kubectl describe pod | grep Requests: -A2
    Requests:
      cpu:        100m
      memory:     50Mi
--
    Requests:
      cpu:        100m
      memory:     50Mi      


# VPA evicts the existing pod and a new pod is created
kubectl get events --sort-by=&quot;.metadata.creationTimestamp&quot; | grep VPA
19s         Normal   EvictedByVPA        pod/hamster-598b78f579-8gjfh        Pod was evicted by VPA Updater to apply resource recommendation.
19s         Normal   EvictedPod          verticalpodautoscaler/hamster-vpa   VPA Updater evicted Pod hamster-598b78f579-8gjfh to apply resource recommendation.

# The pod's resource requests have changed
kubectl describe pod | grep Requests: -A2
    Requests:
      cpu:        100m
      memory:     50Mi
--
    Requests:
      cpu:        100m
      memory:     50Mi
--
    Requests:
      cpu:        511m
      memory:     262144k&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To wrap up the exercise, delete the resources as follows.&lt;/p&gt;
&lt;pre class=&quot;awk&quot;&gt;&lt;code&gt;kubectl delete -f examples/hamster.yaml &amp;amp;&amp;amp; ./hack/vpa-down.sh&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Wrap-up&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This post grew longer than expected, so it covers only the autoscaling overview, HPA, KEDA, and VPA. CA, Karpenter, AKS autoscaling, and scaling caveats will follow in the next post.&lt;/p&gt;</description>
      <category>EKS</category>
      <category>AKS</category>
      <category>AutoScaling</category>
      <category>ca</category>
      <category>HPA</category>
      <category>Karpenter</category>
      <category>KEDA</category>
      <category>VPA</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/38</guid>
      <comments>https://a-person.tistory.com/38#entry38comment</comments>
      <pubDate>Fri, 7 Mar 2025 00:35:01 +0900</pubDate>
    </item>
    <item>
      <title>curl의 다양한 옵션</title>
      <link>https://a-person.tistory.com/37</link>
<description>&lt;p data-ke-size=&quot;size16&quot;&gt;These are the curl options I use most often when testing services.&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;timeout&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Without any options, curl waits quite a long time before timing out.&lt;br /&gt;Since a curl test usually only needs to confirm success or failure, set a timeout so the request fails once that time has passed.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;$ curl -m 3 192.168.0.1
curl: (28) Connection timed out after 3002 milliseconds&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Specifying DNS resolution explicitly&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;An ingress often serves multiple hosts.&lt;br /&gt;For an IP that has no DNS record yet, you can specify the DNS resolution explicitly.&lt;/p&gt;
&lt;pre class=&quot;bash&quot; data-ke-language=&quot;bash&quot;&gt;&lt;code&gt;$ curl --resolve test.agic.contoso.com:80:20.20.20.20 http://test.agic.contoso.com
&amp;lt;html&amp;gt;
&amp;lt;head&amp;gt;&amp;lt;title&amp;gt;502 Bad Gateway&amp;lt;/title&amp;gt;&amp;lt;/head&amp;gt;
&amp;lt;body&amp;gt;
&amp;lt;center&amp;gt;&amp;lt;h1&amp;gt;502 Bad Gateway&amp;lt;/h1&amp;gt;&amp;lt;/center&amp;gt;
&amp;lt;hr&amp;gt;&amp;lt;center&amp;gt;Microsoft-Azure-Application-Gateway/v2&amp;lt;/center&amp;gt;
&amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Testing a specific method&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Sometimes you need to check how a specific HTTP method responds.&lt;br /&gt;Use the &lt;code&gt;-X&lt;/code&gt; option for this.&lt;/p&gt;
&lt;pre class=&quot;sas&quot;&gt;&lt;code&gt;curl -X OPTIONS &quot;https://url.com/default.css&quot;&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Checking the status code&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Sometimes you need the status code itself rather than the response body.&lt;br /&gt;For example, a network device health probe treats any response outside 2xx~3xx as a failure.&lt;br /&gt;Often, once the endpoint responds at all, the next step is to check the actual status code returned.&lt;/p&gt;
&lt;pre class=&quot;perl&quot;&gt;&lt;code&gt;curl -w &quot; - status code: %{http_code}&quot; &quot;http://url.com/&quot;&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
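&lt;p data-ke-size=&quot;size16&quot;&gt;A common variant of this, shown here as a sketch with a placeholder URL, discards the body so that only the status code is printed:&lt;/p&gt;
&lt;pre class=&quot;perl&quot;&gt;&lt;code&gt;# -s: no progress meter, -o /dev/null: discard the body, -w: print only the code
curl -s -o /dev/null -w &quot;%{http_code}&quot; &quot;http://url.com/&quot;&lt;/code&gt;&lt;/pre&gt;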
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Specifying a bearer token&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Calls to services that require authentication often need a token passed in a header.&lt;br /&gt;You can specify it as shown below.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;TOKEN=xxx
$ curl 'https://url.com' -H &quot;Authorization: Bearer $TOKEN&quot;&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Viewing the request process in detail&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Sometimes you want to see the detailed progress of a curl call rather than just the response.&lt;br /&gt;This is useful for TLS-related issues.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Use the &lt;code&gt;-v&lt;/code&gt; option for this.&lt;/p&gt;
&lt;pre class=&quot;gams&quot;&gt;&lt;code&gt;$ curl -v https://url.com&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Following a redirected page&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;If the page redirects, plain curl only shows the 301 response.&lt;br /&gt;In that case, use the &lt;code&gt;-L&lt;/code&gt; option as shown below.&lt;/p&gt;
&lt;pre class=&quot;xml&quot;&gt;&lt;code&gt;$ curl naver.com
&amp;lt;html&amp;gt;
&amp;lt;head&amp;gt;&amp;lt;title&amp;gt;301 Moved Permanently&amp;lt;/title&amp;gt;&amp;lt;/head&amp;gt;
&amp;lt;body&amp;gt;
&amp;lt;center&amp;gt;&amp;lt;h1&amp;gt;301 Moved Permanently&amp;lt;/h1&amp;gt;&amp;lt;/center&amp;gt;
&amp;lt;hr&amp;gt;&amp;lt;center&amp;gt;nginx&amp;lt;/center&amp;gt;
&amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;

$ curl -L naver.com 
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   162  100   162    0     0   4205      0 --:--:-- --:--:-- --:--:--  4263
100   138    0   138    0     0   2251      0 --:--:-- --:--:-- --:--:--  2251
   &amp;lt;!doctype html&amp;gt; &amp;lt;html lang=&quot;ko&quot; class=&quot;fzoom&quot;&amp;gt; &amp;lt;head&amp;gt; &amp;lt;meta charset=&quot;utf-8&quot;&amp;gt; &amp;lt;meta name=&quot;Referrer&quot; content=&quot;origin&quot;&amp;gt; &amp;lt;meta http-equiv=&quot;X-UA-Compa
tible&quot; content=&quot;IE=edge&quot;&amp;gt; &amp;lt;meta name=&quot;viewport&quot; content=&quot;width=1190&quot;&amp;gt; &amp;lt;title&amp;gt;NAVER&amp;lt;/title&amp;gt; 
...&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Downloading a file with curl&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Sometimes you need to download an actual file with curl (wget works as well).&lt;br /&gt;Use the &lt;code&gt;-o&lt;/code&gt; or &lt;code&gt;-O&lt;/code&gt; option for this.&lt;/p&gt;
&lt;pre class=&quot;groovy&quot;&gt;&lt;code&gt;curl -o a.css http://url.com/default.css # save under the given filename
curl -O http://url.com/default.css # save under the remote filename&lt;/code&gt;&lt;/pre&gt;</description>
      <category>Misc</category>
      <category>304</category>
      <category>curl</category>
      <category>option</category>
      <category>Timeout</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/37</guid>
      <comments>https://a-person.tistory.com/37#entry37comment</comments>
      <pubDate>Thu, 6 Mar 2025 21:37:54 +0900</pubDate>
    </item>
    <item>
      <title>[4] EKS의 모니터링과 로깅</title>
      <link>https://a-person.tistory.com/36</link>
<description>&lt;p data-ke-size=&quot;size16&quot;&gt;This post looks at monitoring and logging in EKS.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;First, we'll look at the difference between monitoring and observability and at the related terms: metrics, logs, and tracing. We'll then describe monitoring from a Kubernetes perspective, examine how EKS provides cluster monitoring, and finally compare it with monitoring in AKS (Azure Kubernetes Service).&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This post focuses mainly on the metric, event, and log-level monitoring that CSPs provide.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;목차&lt;/h2&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;Monitoring and Observability&lt;/li&gt;
&lt;li&gt;Monitoring in a Kubernetes Environment&lt;/li&gt;
&lt;li&gt;Setting Up the Lab Environment&lt;/li&gt;
&lt;li&gt;Monitoring and Logging in EKS&lt;/li&gt;
&lt;li&gt;Monitoring and Logging in AKS&lt;/li&gt;
&lt;li&gt;Cleaning Up Resources&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;1. Monitoring and Observability&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;While the term monitoring has long been familiar, the more recent term observability can feel somewhat unfamiliar. In this section we'll draw the distinction between the two.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;We'll also go over the terms used in both: metrics, logs, and tracing.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Monitoring vs. Observability&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;code&gt;Monitoring&lt;/code&gt; is the activity of understanding and watching overall system health: collecting performance indicators against defined baselines to detect unexpected problems early.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Monitoring has long been used to measure IT systems and prevent failures, but as systems have become more diverse and distributed, the monitoring metrics of individual components are no longer enough to understand a whole service.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;That said, growing system complexity does not make monitoring unnecessary; drilling down into a problem still requires finding useful information in individual systems' metrics and logs.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;code&gt;Observability&lt;/code&gt; aims to provide insight into each event in distributed, cloud-native applications by combining tags and logs across microservices to build context.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A problem in a microservice environment can be hard to explain from a single system's analysis and may need to be traced across the connections between services. Metrics and events still matter for detection, but observability is playing an increasingly large role in debugging such issues.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Metrics, Logs, and Tracing&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;code&gt;Metrics&lt;/code&gt;: quantified values about a target. They express a system's performance and state as numbers, for example CPU usage, memory usage, request latency, and error rate.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Metrics give an at-a-glance view of overall system health and help detect anomalies. Typically you watch the trend over time and flag values that cross a threshold or deviate from the expected pattern.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;code&gt;Logs&lt;/code&gt;: text or structured records of events in a system. Logs usually carry a timestamp, a status code, and a detail message, so you can find the precise reason for an anomaly or error around the time it occurred, which makes them useful for debugging. Examples include application logs and system event logs.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;code&gt;Tracing&lt;/code&gt;: following which components a request passes through in a distributed system. Tracing visualizes request flow and helps diagnose performance problems and bottlenecks, especially when an issue spans multiple services. For example, when a user opens a page, a trace shows which internal services the request traversed.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;2. Monitoring in a Kubernetes Environment&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Because observability requires a different approach and tooling than monitoring, the rest of this post looks at Kubernetes from a monitoring perspective.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Rather than monitoring Kubernetes itself, the goal is to describe monitoring at each layer for the stability of the applications running on Kubernetes.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The explanation below is based on the following diagram provided by Azure.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;2328&quot; data-origin-height=&quot;690&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/y8n5R/btsMAHFxC2N/yS8S4royV4mWtWoZLSf9m1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/y8n5R/btsMAHFxC2N/yS8S4royV4mWtWoZLSf9m1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/y8n5R/btsMAHFxC2N/yS8S4royV4mWtWoZLSf9m1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fy8n5R%2FbtsMAHFxC2N%2FyS8S4royV4mWtWoZLSf9m1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2328&quot; height=&quot;690&quot; data-origin-width=&quot;2328&quot; data-origin-height=&quot;690&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://learn.microsoft.com/ko-kr/azure/aks/monitor-aks&quot;&gt;https://learn.microsoft.com/ko-kr/azure/aks/monitor-aks&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Level 1 is monitoring at the level of the network where the cluster sits, and Level 2 covers cluster-level components, namely the nodes (virtual machine sets).&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Level 3 is monitoring the Kubernetes components themselves, such as the API server, the cloud controller, and the kubelet.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Level 4 is monitoring Kubernetes objects and workloads, including container metrics and occurrences such as pod restarts. Kubernetes Events are also important here.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Level 5 is application-level monitoring. It overlaps somewhat with Level 4, but covers application metrics as well as application logs.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The diagram also notes that monitoring and visualization tools are needed for all of this.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;From the perspective of a cloud managed Kubernetes service, CRUD operations on resources may also need monitoring: for example, events such as someone creating a new resource, changing a configuration, or deleting something by mistake.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Each CSP (Cloud Service Provider) typically offers general-purpose monitoring for its products, such as AWS CloudWatch or Azure Monitor.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;They also offer logging services for ingesting log data and querying it, namely AWS CloudWatch Logs and Azure Log Analytics workspaces.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Next, we'll look at how EKS provides monitoring, from both the monitoring and the logging perspective.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;3. Setting Up the Lab Environment&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To explore the monitoring and logging that EKS provides, we'll build the lab environment below.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;2089&quot; data-origin-height=&quot;667&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/eot0x3/btsMAkjvBCn/G4LuebB5e3qQ8mslFZLxK0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/eot0x3/btsMAkjvBCn/G4LuebB5e3qQ8mslFZLxK0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/eot0x3/btsMAkjvBCn/G4LuebB5e3qQ8mslFZLxK0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Feot0x3%2FbtsMAkjvBCn%2FG4LuebB5e3qQ8mslFZLxK0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2089&quot; height=&quot;667&quot; data-origin-width=&quot;2089&quot; data-origin-height=&quot;667&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;blockquote data-ke-style=&quot;style3&quot;&gt;Note: This lab environment was provided as part of AEWS (AWS EKS Workshop Study) season 3.&lt;/blockquote&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Deploy the lab environment with CloudFormation as follows.&lt;/p&gt;
&lt;pre class=&quot;jboss-cli&quot;&gt;&lt;code&gt;# Download the YAML file
curl -O https://s3.ap-northeast-2.amazonaws.com/cloudformation.cloudneta.net/K8S/myeks-4week.yaml

# Set variables
CLUSTER_NAME=myeks
SSHKEYNAME=ekskey
MYACCESSKEY=&amp;lt;IAM user access key&amp;gt;
MYSECRETKEY=&amp;lt;IAM user secret key&amp;gt;
WorkerNodeInstanceType=t3.xlarge # worker node instance type (changeable)

# Deploy the CloudFormation stack
aws cloudformation deploy --template-file myeks-4week.yaml --stack-name $CLUSTER_NAME --parameter-overrides KeyName=$SSHKEYNAME SgIngressSshCidr=$(curl -s ipinfo.io/ip)/32  MyIamUserAccessKeyID=$MYACCESSKEY MyIamUserSecretAccessKey=$MYSECRETKEY ClusterBaseName=$CLUSTER_NAME WorkerNodeInstanceType=$WorkerNodeInstanceType --region ap-northeast-2

# After the stack finishes deploying, print the working EC2 instance's IP
aws cloudformation describe-stacks --stack-name myeks --query 'Stacks[*].Outputs[0].OutputValue' --output text

# Connect to the EC2 instance
ssh -i ~/.ssh/&amp;lt;key&amp;gt;.pem ec2-user@$(aws cloudformation describe-stacks --stack-name myeks --query 'Stacks[*].Outputs[0].OutputValue' --output text)&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The CloudFormation stack also covers the EKS deployment itself, performed through the operations server it provisions.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Wait about 20 minutes for the EKS deployment to finish, then check the installed cluster.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# Set variables
CLUSTER_NAME=myeks
SSHKEYNAME=ekskey

# Verify the cluster installation
eksctl get cluster
NAME    REGION          EKSCTL CREATED
myeks   ap-northeast-2  True

eksctl get nodegroup --cluster $CLUSTER_NAME
CLUSTER NODEGROUP       STATUS  CREATED                 MIN SIZE        MAX SIZE        DESIRED CAPACITY        INSTANCE TYPE   IMAGE ID                ASG NAME                                   TYPE
myeks   ng1             ACTIVE  2025-02-28T15:05:57Z    3               3               3                       t3.xlarge       AL2023_x86_64_STANDARD  eks-ng1-b4caa68b-dac3-4a9c-a489-7d63a0d70934       managed

eksctl get addon --cluster $CLUSTER_NAME
2025-03-01 00:26:31 [ℹ]  Kubernetes version &quot;1.31&quot; in use by cluster &quot;myeks&quot;
2025-03-01 00:26:31 [ℹ]  getting all addons
2025-03-01 00:26:33 [ℹ]  to see issues for an addon run `eksctl get addon --name &amp;lt;addon-name&amp;gt; --cluster &amp;lt;cluster-name&amp;gt;`
NAME                    VERSION                 STATUS  ISSUES  IAMROLE                                                                                 UPDATE AVAILABLE   CONFIGURATION VALUES            POD IDENTITY ASSOCIATION ROLES
aws-ebs-csi-driver      v1.40.0-eksbuild.1      ACTIVE  0       arn:aws:iam::430118812536:role/eksctl-myeks-addon-aws-ebs-csi-driver-Role1-Ks7b8mzq4vmu
coredns                 v1.11.4-eksbuild.2      ACTIVE  0
kube-proxy              v1.31.3-eksbuild.2      ACTIVE  0
metrics-server          v0.7.2-eksbuild.2       ACTIVE  0
vpc-cni                 v1.19.3-eksbuild.1      ACTIVE  0       arn:aws:iam::430118812536:role/eksctl-myeks-addon-vpc-cni-Role1-He4lLHyBeE62              enableNetworkPolicy: &quot;true&quot;

eksctl get iamserviceaccount --cluster $CLUSTER_NAME

NAMESPACE       NAME                            ROLE ARN
kube-system     aws-load-balancer-controller    arn:aws:iam::430118812536:role/eksctl-myeks-addon-iamserviceaccount-kube-sys-Role1-1GFoeNZ7Z43o

# Generate kubeconfig
aws sts get-caller-identity --query Arn
aws eks update-kubeconfig --name myeks --user-alias &amp;lt;identity user printed above&amp;gt;

# Check basic cluster configuration
kubectl cluster-info
Kubernetes control plane is running at https://7984C504F1BE86380015EB205905A2C5.gr7.ap-northeast-2.eks.amazonaws.com
CoreDNS is running at https://7984C504F1BE86380015EB205905A2C5.gr7.ap-northeast-2.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

kubectl get node
NAME                                               STATUS   ROLES    AGE   VERSION
ip-192-168-1-115.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   21m   v1.31.5-eks-5d632ec
ip-192-168-2-178.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   21m   v1.31.5-eks-5d632ec
ip-192-168-3-168.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   21m   v1.31.5-eks-5d632ec


kubectl get node --label-columns=node.kubernetes.io/instance-type,eks.amazonaws.com/capacityType,topology.kubernetes.io/zone
NAME                                               STATUS   ROLES    AGE   VERSION               INSTANCE-TYPE   CAPACITYTYPE   ZONE
ip-192-168-1-115.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   21m   v1.31.5-eks-5d632ec   t3.xlarge       ON_DEMAND      ap-northeast-2a
ip-192-168-2-178.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   21m   v1.31.5-eks-5d632ec   t3.xlarge       ON_DEMAND      ap-northeast-2b
ip-192-168-3-168.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   21m   v1.31.5-eks-5d632ec   t3.xlarge       ON_DEMAND      ap-northeast-2c


kubectl get pod -A
NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
kube-system   aws-node-4jsvr                        2/2     Running   0          21m
kube-system   aws-node-bkp8n                        2/2     Running   0          21m
kube-system   aws-node-v5rhv                        2/2     Running   0          21m
kube-system   coredns-86f5954566-4j74x              1/1     Running   0          27m
kube-system   coredns-86f5954566-mcw5d              1/1     Running   0          27m
kube-system   ebs-csi-controller-549bf6879f-26wqx   6/6     Running   0          17m
kube-system   ebs-csi-controller-549bf6879f-qgqtz   6/6     Running   0          17m
kube-system   ebs-csi-node-8zr72                    3/3     Running   0          17m
kube-system   ebs-csi-node-sc6tt                    3/3     Running   0          17m
kube-system   ebs-csi-node-v48kr                    3/3     Running   0          17m
kube-system   kube-proxy-6wkjg                      1/1     Running   0          21m
kube-system   kube-proxy-v8228                      1/1     Running   0          21m
kube-system   kube-proxy-xw8hc                      1/1     Running   0          21m
kube-system   metrics-server-6bf5998d9c-2gngg       1/1     Running   0          27m
kube-system   metrics-server-6bf5998d9c-wv68w       1/1     Running   0          27m&lt;/code&gt;&lt;/pre&gt;
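&lt;p data-ke-size=&quot;size16&quot;&gt;As a quick sanity check, node listings like the one above can also be parsed in the shell. The sketch below runs against a captured sample of the output (the node names and versions are sample data), not a live cluster:&lt;/p&gt;

```shell
# Count Ready nodes from a captured `kubectl get node` listing.
# The node names/versions below are sample data, not a live cluster.
nodes='ip-192-168-1-115.ap-northeast-2.compute.internal   Ready    <none>   21m   v1.31.5-eks-5d632ec
ip-192-168-2-178.ap-northeast-2.compute.internal   Ready    <none>   21m   v1.31.5-eks-5d632ec
ip-192-168-3-168.ap-northeast-2.compute.internal   Ready    <none>   21m   v1.31.5-eks-5d632ec'

# Column 2 is STATUS; count rows where it equals "Ready".
ready=$(printf '%s\n' "$nodes" | awk '$2 == "Ready"' | wc -l)
echo "Ready nodes: $ready"
```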
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Next, let's deploy some components that will be used in the rest of the exercise.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;blockquote data-ke-style=&quot;style3&quot;&gt;[Note] &lt;br /&gt;A domain was created in Route 53 before this exercise. Without a domain you would have to switch the services to a LoadBalancer or similar, which limits parts of the exercise.&lt;br /&gt;- Purchasing a domain via Route 53: https://www.youtube.com/watch?v=4HBFozkJUeU&lt;br /&gt;&lt;br /&gt;You also need to issue a certificate for the domain through AWS Certificate Manager.&lt;br /&gt;- Issuing a certificate: https://www.youtube.com/watch?v=mMpPlaUj-vI&lt;/blockquote&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's continue.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Environment variables
MyDomain=aperson.link # enter your own domain name
MyDnzHostedZoneId=$(aws route53 list-hosted-zones-by-name --dns-name &quot;$MyDomain.&quot; --query &quot;HostedZones[0].Id&quot; --output text)
CERT_ARN=$(aws acm list-certificates --query 'CertificateSummaryList[].CertificateArn[]' --output text) # check the certificate ARN in your region

# kube-ops-view
helm repo add geek-cookbook https://geek-cookbook.github.io/charts/
helm install kube-ops-view geek-cookbook/kube-ops-view --version 1.2.2 --set service.main.type=ClusterIP  --set env.TZ=&quot;Asia/Seoul&quot; --namespace kube-system

# Create the gp3 storage class
cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp3
  annotations:
    storageclass.kubernetes.io/is-default-class: &quot;true&quot;
allowVolumeExpansion: true
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp3
  allowAutoIOPSPerGBIncrease: 'true'
  encrypted: 'true'
  fsType: xfs # the default is ext4
EOF
kubectl get sc

# ExternalDNS
curl -s https://raw.githubusercontent.com/gasida/PKOS/main/aews/externaldns.yaml | MyDomain=$MyDomain MyDnzHostedZoneId=$MyDnzHostedZoneId envsubst | kubectl apply -f -

# AWS LoadBalancerController
helm repo add eks https://aws.github.io/eks-charts
helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=$CLUSTER_NAME \
  --set serviceAccount.create=false --set serviceAccount.name=aws-load-balancer-controller

# Ingress for kube-ops-view: the group setting lets multiple Ingresses share a single ALB
cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/certificate-arn: $CERT_ARN
    alb.ingress.kubernetes.io/group.name: study
    alb.ingress.kubernetes.io/listen-ports: '[{&quot;HTTPS&quot;:443}, {&quot;HTTP&quot;:80}]'
    alb.ingress.kubernetes.io/load-balancer-name: $CLUSTER_NAME-ingress-alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/ssl-redirect: &quot;443&quot;
    alb.ingress.kubernetes.io/success-codes: 200-399
    alb.ingress.kubernetes.io/target-type: ip
  labels:
    app.kubernetes.io/name: kubeopsview
  name: kubeopsview
  namespace: kube-system
spec:
  ingressClassName: alb
  rules:
  - host: kubeopsview.$MyDomain
    http:
      paths:
      - backend:
          service:
            name: kube-ops-view
            port:
              number: 8080
        path: /
        pathType: Prefix
EOF&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To accumulate monitoring data for the exercise, let's also deploy a sample application.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Deploy the Bookinfo application
kubectl apply -f https://raw.githubusercontent.com/istio/istio/refs/heads/master/samples/bookinfo/platform/kube/bookinfo.yaml

# Verify
kubectl get all,sa

# Check access to the productpage web UI
kubectl exec &quot;$(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}')&quot; -c ratings -- curl -sS productpage:9080/productpage | grep -o &quot;&amp;lt;title&amp;gt;.*&amp;lt;/title&amp;gt;&quot;

# Logs
kubectl logs -l app=productpage -f


# Deploy the Ingress
cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/certificate-arn: $CERT_ARN
    alb.ingress.kubernetes.io/group.name: study
    alb.ingress.kubernetes.io/listen-ports: '[{&quot;HTTPS&quot;:443}, {&quot;HTTP&quot;:80}]'
    alb.ingress.kubernetes.io/load-balancer-name: $CLUSTER_NAME-ingress-alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/ssl-redirect: &quot;443&quot;
    alb.ingress.kubernetes.io/success-codes: 200-399
    alb.ingress.kubernetes.io/target-type: ip
  labels:
    app.kubernetes.io/name: bookinfo
  name: bookinfo
spec:
  ingressClassName: alb
  rules:
  - host: bookinfo.$MyDomain
    http:
      paths:
      - backend:
          service:
            name: productpage
            port:
              number: 9080
        path: /
        pathType: Prefix
EOF
kubectl get ingress

# Check the bookinfo access URL
echo -e &quot;bookinfo URL = https://bookinfo.$MyDomain/productpage&quot;
open &quot;https://bookinfo.$MyDomain/productpage&quot; # macOS
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's check the deployed resources. DNS records are registered via ExternalDNS, but propagation may take some time.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;kubectl get ingress -n kube-system
NAME          CLASS   HOSTS                      ADDRESS                                                        PORTS   AGE
kubeopsview   alb     kubeopsview.aperson.link   myeks-ingress-alb-665851389.ap-northeast-2.elb.amazonaws.com   80      5m

kubectl get ingress
NAME       CLASS   HOSTS                   ADDRESS                                                        PORTS   AGE
bookinfo   alb     bookinfo.aperson.link   myeks-ingress-alb-665851389.ap-northeast-2.elb.amazonaws.com   80      7s&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's verify access.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;709&quot; data-origin-height=&quot;304&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/d7DsJd/btsMyut4Jhh/jLtsRupkkkhD3h2KGrzFG0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/d7DsJd/btsMyut4Jhh/jLtsRupkkkhD3h2KGrzFG0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/d7DsJd/btsMyut4Jhh/jLtsRupkkkhD3h2KGrzFG0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fd7DsJd%2FbtsMyut4Jhh%2FjLtsRupkkkhD3h2KGrzFG0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;709&quot; height=&quot;304&quot; data-origin-width=&quot;709&quot; data-origin-height=&quot;304&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The sample application is running correctly as well.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1947&quot; data-origin-height=&quot;1134&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/n5ET5/btsMAWvGjMt/9ZLz9uOCkQoKMXaZArwk8k/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/n5ET5/btsMAWvGjMt/9ZLz9uOCkQoKMXaZArwk8k/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/n5ET5/btsMAWvGjMt/9ZLz9uOCkQoKMXaZArwk8k/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fn5ET5%2FbtsMAWvGjMt%2F9ZLz9uOCkQoKMXaZArwk8k%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1947&quot; height=&quot;1134&quot; data-origin-width=&quot;1947&quot; data-origin-height=&quot;1134&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To generate log traffic, you can hit the page repeatedly as follows.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;curl -s -k https://bookinfo.$MyDomain/productpage | grep -o &quot;&amp;lt;title&amp;gt;.*&amp;lt;/title&amp;gt;&quot;
while true; do curl -s -k https://bookinfo.$MyDomain/productpage | grep -o &quot;&amp;lt;title&amp;gt;.*&amp;lt;/title&amp;gt;&quot; ; echo &quot;--------------&quot; ; sleep 1; done
for i in {1..100};  do curl -s -k https://bookinfo.$MyDomain/productpage | grep -o &quot;&amp;lt;title&amp;gt;.*&amp;lt;/title&amp;gt;&quot; ; done&lt;/code&gt;&lt;/pre&gt;
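&lt;p data-ke-size=&quot;size16&quot;&gt;A bounded variant of the loop above can also count successes instead of running forever. In this sketch curl is replaced by a local stub (hypothetical), so it runs without the cluster:&lt;/p&gt;

```shell
# Bounded load loop: count how many fetches returned a <title> tag.
# fetch() is a local stand-in for `curl -s -k https://bookinfo.$MyDomain/productpage`.
fetch() { printf '<html><head><title>Simple Bookstore App</title></head></html>'; }

ok=0
for i in 1 2 3 4 5; do
  title=$(fetch | grep -o "<title>.*</title>")
  [ -n "$title" ] && ok=$((ok + 1))
done
echo "successful fetches: $ok/5"
```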
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;4. EKS Monitoring and Logging&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;An overview of EKS monitoring and logging can be found in the AWS documentation below.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://docs.aws.amazon.com/prescriptive-guidance/latest/implementing-logging-monitoring-cloudwatch/amazon-eks-logging-monitoring.html&quot;&gt;https://docs.aws.amazon.com/prescriptive-guidance/latest/implementing-logging-monitoring-cloudwatch/amazon-eks-logging-monitoring.html&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;EKS integrates with CloudWatch Logs, letting you view control plane logs. It also supports collecting node and container logs by deploying the CloudWatch agent to the EKS nodes; here Fluent Bit or Fluentd collects the container logs and ships them to CloudWatch Logs.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;CloudWatch Container Insights provides monitoring at the EKS cluster, node, pod, and service level. It can also collect a wide range of metrics via Prometheus.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The diagram below summarizes the monitoring solutions available for EKS.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;879&quot; data-origin-height=&quot;480&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/D7jnI/btsMx8dGsB6/Krb9M0szvyghQcIJiSNysK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/D7jnI/btsMx8dGsB6/Krb9M0szvyghQcIJiSNysK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/D7jnI/btsMx8dGsB6/Krb9M0szvyghQcIJiSNysK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FD7jnI%2FbtsMx8dGsB6%2FKrb9M0szvyghQcIJiSNysK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;879&quot; height=&quot;480&quot; data-origin-width=&quot;879&quot; data-origin-height=&quot;480&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://www.youtube.com/watch?v=349ywnrrROg&quot;&gt;https://www.youtube.com/watch?v=349ywnrrROg&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In this post, we will walk through EKS monitoring and logging using CloudWatch Logs and CloudWatch Container Insights.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;EKS Logging&lt;/h3&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Control Plane Logging&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's start with control plane logging.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Control plane logging offers the following log types: API server, Audit, Authenticator, Controller manager, and Scheduler. See the documentation below for details.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html&quot;&gt;https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As examples of when control plane logging is useful: inspecting Audit logs to trace objects created at a specific point in time, checking API server logs when the cluster behaves abnormally, or examining Controller manager logs when AWS-level resources are involved.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the web console, open the cluster's Observability tab and scroll down to find Control plane logs. For a cluster installed with eksctl, all options are off by default.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1714&quot; data-origin-height=&quot;1253&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/wm85a/btsMy4BGk12/BmeVghusNAp2g0Dda1vhzK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/wm85a/btsMy4BGk12/BmeVghusNAp2g0Dda1vhzK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/wm85a/btsMy4BGk12/BmeVghusNAp2g0Dda1vhzK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fwm85a%2FbtsMy4BGk12%2FBmeVghusNAp2g0Dda1vhzK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1714&quot; height=&quot;1253&quot; data-origin-width=&quot;1714&quot; data-origin-height=&quot;1253&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's enable control plane logging and examine the logs.&lt;/p&gt;
&lt;pre class=&quot;jboss-cli&quot;&gt;&lt;code&gt;# Enable all log types
aws eks update-cluster-config --region ap-northeast-2 --name $CLUSTER_NAME \
    --logging '{&quot;clusterLogging&quot;:[{&quot;types&quot;:[&quot;api&quot;,&quot;audit&quot;,&quot;authenticator&quot;,&quot;controllerManager&quot;,&quot;scheduler&quot;],&quot;enabled&quot;:true}]}'&lt;/code&gt;&lt;/pre&gt;
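&lt;p data-ke-size=&quot;size16&quot;&gt;The --logging payload above is plain JSON, so it can be assembled and eyeballed in the shell before calling the API. A small sketch (no AWS call is made here):&lt;/p&gt;

```shell
# Build the clusterLogging payload passed to `aws eks update-cluster-config`.
# No AWS API is invoked; this only constructs and prints the JSON string.
TYPES='"api","audit","authenticator","controllerManager","scheduler"'
LOGGING_JSON="{\"clusterLogging\":[{\"types\":[${TYPES}],\"enabled\":true}]}"

echo "$LOGGING_JSON"
```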
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Checking the web console, logging is now enabled.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1707&quot; data-origin-height=&quot;260&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bpbQIa/btsMAGs8UNd/8UQWZdaRwVgZTKgcNzGgik/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bpbQIa/btsMAGs8UNd/8UQWZdaRwVgZTKgcNzGgik/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bpbQIa/btsMAGs8UNd/8UQWZdaRwVgZTKgcNzGgik/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbpbQIa%2FbtsMAGs8UNd%2F8UQWZdaRwVgZTKgcNzGgik%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1707&quot; height=&quot;260&quot; data-origin-width=&quot;1707&quot; data-origin-height=&quot;260&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In CloudWatch, you can see that a new log group has been created.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1752&quot; data-origin-height=&quot;276&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bGTxMI/btsMA6ydXa0/t7MvlXccASxM2p7KK9EKnK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bGTxMI/btsMA6ydXa0/t7MvlXccASxM2p7KK9EKnK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bGTxMI/btsMA6ydXa0/t7MvlXccASxM2p7KK9EKnK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbGTxMI%2FbtsMA6ydXa0%2Ft7MvlXccASxM2p7KK9EKnK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1752&quot; height=&quot;276&quot; data-origin-width=&quot;1752&quot; data-origin-height=&quot;276&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Inside the log group, a log stream has been created for each log type, as shown below.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1697&quot; data-origin-height=&quot;725&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/Omfu0/btsMyvNlb26/CR6F6lUTZvq1cYuU7tBku0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/Omfu0/btsMyvNlb26/CR6F6lUTZvq1cYuU7tBku0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/Omfu0/btsMyvNlb26/CR6F6lUTZvq1cYuU7tBku0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FOmfu0%2FbtsMyvNlb26%2FCR6F6lUTZvq1cYuU7tBku0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1697&quot; height=&quot;725&quot; data-origin-width=&quot;1697&quot; data-origin-height=&quot;725&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Select one of the log streams to see the actual log entries.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1711&quot; data-origin-height=&quot;234&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bS2fqG/btsMyIloFY1/5sSMoEGxXQtswgEHUEdts1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bS2fqG/btsMyIloFY1/5sSMoEGxXQtswgEHUEdts1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bS2fqG/btsMyIloFY1/5sSMoEGxXQtswgEHUEdts1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbS2fqG%2FbtsMyIloFY1%2F5sSMoEGxXQtswgEHUEdts1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1711&quot; height=&quot;234&quot; data-origin-width=&quot;1711&quot; data-origin-height=&quot;234&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;You can also query the logs with CloudWatch Logs Insights.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# Search for logs where an EC2 instance is in NodeNotReady state
fields @timestamp, @message
| filter @message like /NodeNotReady/
| sort @timestamp desc

# Count userAgent occurrences in the kube-apiserver-audit logs, sorted by count
fields userAgent, requestURI, @timestamp, @message
| filter @logStream ~= &quot;kube-apiserver-audit&quot;
| stats count(userAgent) as count by userAgent
| sort count desc

# Check kube-scheduler logs
fields @timestamp, @message
| filter @logStream ~= &quot;kube-scheduler&quot;
| sort @timestamp desc

# Check authenticator logs
fields @timestamp, @message
| filter @logStream ~= &quot;authenticator&quot;
| sort @timestamp desc

# Check kube-controller-manager logs
fields @timestamp, @message
| filter @logStream ~= &quot;kube-controller-manager&quot;
| sort @timestamp desc&lt;/code&gt;&lt;/pre&gt;
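&lt;p data-ke-size=&quot;size16&quot;&gt;The userAgent query above is essentially a group-by-count. The same aggregation can be sketched locally with sort and uniq over a few sample userAgent values (made up for illustration):&lt;/p&gt;

```shell
# Group-by-count over sample userAgent values, mirroring the
# `stats count(userAgent) as count by userAgent | sort count desc` query.
agents='kubectl/v1.31.0
vpc-resource-controller
kubectl/v1.31.0
eks:node-manager
kubectl/v1.31.0'

printf '%s\n' "$agents" | sort | uniq -c | sort -rn
```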
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Here is an example of counting the userAgents that accessed the cluster, using the kube-apiserver-audit logs.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1709&quot; data-origin-height=&quot;1033&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bd86A4/btsMxGaLvnS/fvkPD6eonYJTl9p02FYtKK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bd86A4/btsMxGaLvnS/fvkPD6eonYJTl9p02FYtKK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bd86A4/btsMxGaLvnS/fvkPD6eonYJTl9p02FYtKK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fbd86A4%2FbtsMxGaLvnS%2FfvkPD6eonYJTl9p02FYtKK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1709&quot; height=&quot;1033&quot; data-origin-width=&quot;1709&quot; data-origin-height=&quot;1033&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;You can also inspect the logs with the &lt;code&gt;aws logs&lt;/code&gt; command.&lt;/p&gt;
&lt;pre class=&quot;jboss-cli&quot;&gt;&lt;code&gt;# List log groups
aws logs describe-log-groups | jq

# Tail logs (see: aws logs tail help)
aws logs tail /aws/eks/$CLUSTER_NAME/cluster | more

# Stream new logs as they arrive
aws logs tail /aws/eks/$CLUSTER_NAME/cluster --follow

# Filter pattern
aws logs tail /aws/eks/$CLUSTER_NAME/cluster --filter-pattern &amp;lt;필터 패턴&amp;gt;

# Log stream name
aws logs tail /aws/eks/$CLUSTER_NAME/cluster --log-stream-name-prefix &amp;lt;로그 스트림 prefix&amp;gt; --follow
aws logs tail /aws/eks/$CLUSTER_NAME/cluster --log-stream-name-prefix kube-apiserver --follow
aws logs tail /aws/eks/$CLUSTER_NAME/cluster --log-stream-name-prefix kube-apiserver-audit --follow
aws logs tail /aws/eks/$CLUSTER_NAME/cluster --log-stream-name-prefix kube-scheduler --follow
aws logs tail /aws/eks/$CLUSTER_NAME/cluster --log-stream-name-prefix authenticator --follow
aws logs tail /aws/eks/$CLUSTER_NAME/cluster --log-stream-name-prefix kube-controller-manager --follow
aws logs tail /aws/eks/$CLUSTER_NAME/cluster --log-stream-name-prefix cloud-controller-manager --follow

# Scale coredns up and down to generate some controller-manager activity
kubectl scale deployment -n kube-system coredns --replicas=1
kubectl scale deployment -n kube-system coredns --replicas=2

# Time window units: seconds(s), minutes(m), hours(h), days(d), weeks(w)
aws logs tail /aws/eks/$CLUSTER_NAME/cluster --since 1h30m

# Shorter output format
aws logs tail /aws/eks/$CLUSTER_NAME/cluster --since 1h30m --format short&lt;/code&gt;&lt;/pre&gt;
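&lt;p data-ke-size=&quot;size16&quot;&gt;--since accepts compound durations such as 1h30m. A small helper (hypothetical, not part of the AWS CLI) that converts such a string to seconds makes the requested window explicit:&lt;/p&gt;

```shell
# Convert an `aws logs tail --since` style duration (e.g. 1h30m) to seconds.
# Illustrative helper only; the AWS CLI parses the string itself.
duration_to_seconds() {
  local expr
  # Rewrite each unit suffix into an arithmetic term, then evaluate.
  expr=$(printf '%s' "$1" | sed -e 's/w/*604800+/g' -e 's/d/*86400+/g' \
    -e 's/h/*3600+/g' -e 's/m/*60+/g' -e 's/s/+/g' -e 's/+$//')
  echo $(( expr ))
}

echo "1h30m = $(duration_to_seconds 1h30m) seconds"
```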
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A notable strength of CloudWatch Logs is how easily logs can be inspected with commands like aws logs tail.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To finish this part of the exercise, let's disable control plane logging.&lt;/p&gt;
&lt;pre class=&quot;dsconfig&quot;&gt;&lt;code&gt;# Disable EKS control plane logging (CloudWatch Logs)
eksctl utils update-cluster-logging --cluster $CLUSTER_NAME --region ap-northeast-2 --disable-types all --approve

# Delete the log group
aws logs delete-log-group --log-group-name /aws/eks/$CLUSTER_NAME/cluster&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Node and Application Logging&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;EKS uses the CloudWatch agent and Fluent Bit for node and container monitoring. Both run as DaemonSets, in the layout shown below.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;751&quot; data-origin-height=&quot;570&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/LnlYp/btsMzsbhHj9/UbBWwjAMVV7drvRkSbp1t0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/LnlYp/btsMzsbhHj9/UbBWwjAMVV7drvRkSbp1t0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/LnlYp/btsMzsbhHj9/UbBWwjAMVV7drvRkSbp1t0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FLnlYp%2FbtsMzsbhHj9%2FUbBWwjAMVV7drvRkSbp1t0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;751&quot; height=&quot;570&quot; data-origin-width=&quot;751&quot; data-origin-height=&quot;570&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://aws.amazon.com/ko/blogs/containers/fluent-bit-integration-in-cloudwatch-container-insights-for-eks/&quot;&gt;https://aws.amazon.com/ko/blogs/containers/fluent-bit-integration-in-cloudwatch-container-insights-for-eks/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;These components are provided as the CloudWatch Observability add-on, so let's install it as follows.&lt;/p&gt;
&lt;pre class=&quot;sql&quot;&gt;&lt;code&gt;# Configure IRSA
eksctl create iamserviceaccount \
  --name cloudwatch-agent \
  --namespace amazon-cloudwatch --cluster $CLUSTER_NAME \
  --role-name $CLUSTER_NAME-cloudwatch-agent-role \
  --attach-policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy \
  --role-only \
  --approve

# Deploy the addon (run on the EC2 instance where the environment variables are already defined)
aws eks create-addon --addon-name amazon-cloudwatch-observability --cluster-name $CLUSTER_NAME --service-account-role-arn arn:aws:iam::$ACCOUNT_ID:role/$CLUSTER_NAME-cloudwatch-agent-role

# Check addons
aws eks list-addons --cluster-name myeks --output table
---------------------------------------
|             ListAddons              |
+-------------------------------------+
||              addons               ||
|+-----------------------------------+|
||  amazon-cloudwatch-observability  ||
||  aws-ebs-csi-driver               ||
||  coredns                          ||
||  kube-proxy                       ||
||  metrics-server                   ||
||  vpc-cni                          ||
|+-----------------------------------+|

# Verify installation
kubectl get crd | grep -i cloudwatch

amazoncloudwatchagents.cloudwatch.aws.amazon.com   2025-02-28T16:27:24Z
dcgmexporters.cloudwatch.aws.amazon.com            2025-02-28T16:27:24Z
instrumentations.cloudwatch.aws.amazon.com         2025-02-28T16:27:25Z
neuronmonitors.cloudwatch.aws.amazon.com           2025-02-28T16:27:25Z

kubectl get all -n amazon-cloudwatch

NAME                                                                  READY   STATUS    RESTARTS   AGE
pod/amazon-cloudwatch-observability-controller-manager-6f76854w9rvx   1/1     Running   0          69s
pod/cloudwatch-agent-dcfqq                                            1/1     Running   0          64s
pod/cloudwatch-agent-jcvk5                                            1/1     Running   0          65s
pod/cloudwatch-agent-r8tcw                                            1/1     Running   0          64s
pod/fluent-bit-6zbmk                                                  1/1     Running   0          69s
pod/fluent-bit-j9hl8                                                  1/1     Running   0          69s
pod/fluent-bit-zrw4v                                                  1/1     Running   0          69s

..


# Check the cloudwatch-agent configuration
kubectl describe cm cloudwatch-agent -n amazon-cloudwatch
kubectl get cm cloudwatch-agent -n amazon-cloudwatch -o jsonpath=&quot;{.data.cwagentconfig\.json}&quot; | jq
{
  &quot;agent&quot;: {
    &quot;region&quot;: &quot;ap-northeast-2&quot;
  },
  &quot;logs&quot;: {
    &quot;metrics_collected&quot;: {
      &quot;application_signals&quot;: {
        &quot;hosted_in&quot;: &quot;myeks&quot;
      },
      &quot;kubernetes&quot;: {
        &quot;cluster_name&quot;: &quot;myeks&quot;,
        &quot;enhanced_container_insights&quot;: true
      }
    }
  },
  &quot;traces&quot;: {
    &quot;traces_collected&quot;: {
      &quot;application_signals&quot;: {}
    }
  }
}

# How the Fluent Bit pods collect logs: HostPath volumes give access to the node logs and container logs
kubectl describe -n amazon-cloudwatch ds cloudwatch-agent
...
  Volumes:
   ...
   rootfs:
    Type:          HostPath (bare host directory volume)
    Path:          /
    HostPathType:  


# Check the Fluent Bit log INPUT/FILTER/OUTPUT settings
## Config sections: application-log.conf , dataplane-log.conf , fluent-bit.conf , host-log.conf , parsers.conf
kubectl describe cm fluent-bit-config -n amazon-cloudwatch
...
application-log.conf:
----
[INPUT]
    Name                tail
    Tag                 application.*
    Exclude_Path        /var/log/containers/cloudwatch-agent*, /var/log/containers/fluent-bit*, /var/log/containers/aws-node*, /var/log/containers/kube-proxy*
    Path                /var/log/containers/*.log
    multiline.parser    docker, cri
    DB                  /var/fluent-bit/state/flb_container.db
    Mem_Buf_Limit       50MB
    Skip_Long_Lines     On
    Refresh_Interval    10
    Rotate_Wait         30
    storage.type        filesystem
    Read_from_Head      ${READ_FROM_HEAD}
...

[FILTER]
    Name                kubernetes
    Match               application.*
    Kube_URL            https://kubernetes.default.svc:443
    Kube_Tag_Prefix     application.var.log.containers.
    Merge_Log           On
    Merge_Log_Key       log_processed
    K8S-Logging.Parser  On
    K8S-Logging.Exclude Off
    Labels              Off
    Annotations         Off
    Use_Kubelet         On
    Kubelet_Port        10250
    Buffer_Size         0

[OUTPUT]
    Name                cloudwatch_logs
    Match               application.*
    region              ${AWS_REGION}
    log_group_name      /aws/containerinsights/${CLUSTER_NAME}/application
    log_stream_prefix   ${HOST_NAME}-
    auto_create_group   true
    extra_user_agent    container-insights
...&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The log groups created by the addon, and the logs that map to each, are shown below.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1118&quot; data-origin-height=&quot;367&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/7kM7P/btsMAL17EYz/ukhv8DxZcix63KsfxBGFHK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/7kM7P/btsMAL17EYz/ukhv8DxZcix63KsfxBGFHK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/7kM7P/btsMAL17EYz/ukhv8DxZcix63KsfxBGFHK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2F7kM7P%2FbtsMAL17EYz%2Fukhv8DxZcix63KsfxBGFHK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1118&quot; height=&quot;367&quot; data-origin-width=&quot;1118&quot; data-origin-height=&quot;367&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://docs.aws.amazon.com/prescriptive-guidance/latest/implementing-logging-monitoring-cloudwatch/kubernetes-eks-logging.html#eks-node-application-logging&quot;&gt;https://docs.aws.amazon.com/prescriptive-guidance/latest/implementing-logging-monitoring-cloudwatch/kubernetes-eks-logging.html#eks-node-application-logging&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In CloudWatch Logs you can see that the following log groups have been created.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1465&quot; data-origin-height=&quot;192&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/v2sSg/btsMAiMMUAY/ns4OFzElyQSJQ7S10KHEx0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/v2sSg/btsMAiMMUAY/ns4OFzElyQSJQ7S10KHEx0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/v2sSg/btsMAiMMUAY/ns4OFzElyQSJQ7S10KHEx0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fv2sSg%2FbtsMAiMMUAY%2Fns4OFzElyQSJQ7S10KHEx0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1465&quot; height=&quot;192&quot; data-origin-width=&quot;1465&quot; data-origin-height=&quot;192&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;One puzzling point: the ConfigMap clearly sets a log_group_name ending in /host, yet that group was never created, while an extra log group named /performance appears instead.&lt;/p&gt;
&lt;pre class=&quot;gradle&quot;&gt;&lt;code&gt;kubectl describe cm fluent-bit-config -n amazon-cloudwatch |grep log_group_name
  log_group_name      /aws/containerinsights/${CLUSTER_NAME}/application
  log_group_name      /aws/containerinsights/${CLUSTER_NAME}/dataplane
  log_group_name      /aws/containerinsights/${CLUSTER_NAME}/host&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;On the fluent-bit side, log group creation also appears to have failed.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;kubectl logs -f -n amazon-cloudwatch fluent-bit-zrw4v
AWS for Fluent Bit Container Image Version 2.32.5
Fluent Bit v1.9.10
* Copyright (C) 2015-2022 The Fluent Bit Authors
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2025/02/28 16:27:34] [error] [filter:kubernetes:kubernetes.1] [kubernetes] no upstream connections available to cloudwatch-agent.amazon-cloudwatch:4311
[2025/02/28 16:27:39] [error] [output:cloudwatch_logs:cloudwatch_logs.0] CreateLogGroup API responded with error='OperationAbortedException', message='A conflicting operation is currently in progress against this resource. Please try again.'
[2025/02/28 16:27:39] [error] [output:cloudwatch_logs:cloudwatch_logs.0] Failed to create log group
[2025/02/28 16:27:39] [error] [output:cloudwatch_logs:cloudwatch_logs.0] Failed to send events&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This looks like a bug, but since this area is managed by the addon it is hard to investigate further.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_CreateLogGroup.html&quot;&gt;https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_CreateLogGroup.html&lt;/a&gt;&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;b&gt;OperationAbortedException&lt;/b&gt;&lt;/li&gt;
&lt;li&gt;Multiple concurrent requests to update the same resource were in conflict. HTTP Status Code: 400&lt;/li&gt;
&lt;/ul&gt;
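&lt;p data-ke-size=&quot;size16&quot;&gt;If the /host group simply never got created, one possible workaround (an untested sketch; the addon may also reconcile this on its own) is to create the log group manually so that Fluent Bit's retries can succeed:&lt;/p&gt;
&lt;pre class=&quot;dsconfig&quot;&gt;&lt;code&gt;# Create the missing /host log group by hand (CLUSTER_NAME as set earlier)
aws logs create-log-group --log-group-name /aws/containerinsights/$CLUSTER_NAME/host

# Optionally cap retention so the group does not accrue cost indefinitely
aws logs put-retention-policy --log-group-name /aws/containerinsights/$CLUSTER_NAME/host --retention-in-days 3&lt;/code&gt;&lt;/pre&gt;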
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The /performance log group appears to be where the CloudWatch Agent writes the performance data behind Container Insights.&lt;/p&gt;
&lt;pre class=&quot;dts&quot;&gt;&lt;code&gt;kubectl logs -f -n amazon-cloudwatch   cloudwatch-agent-dcfqq |grep log_group_name
        log_group_name: /aws/application-signals/data
        log_group_name: /aws/containerinsights/{ClusterName}/performance&lt;/code&gt;&lt;/pre&gt;
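&lt;p data-ke-size=&quot;size16&quot;&gt;To check at a glance which of the expected log groups actually exist, they can be listed by prefix (assuming the AWS CLI is configured for the cluster's region):&lt;/p&gt;
&lt;pre class=&quot;dsconfig&quot;&gt;&lt;code&gt;# List the Container Insights log groups for the cluster
aws logs describe-log-groups \
  --log-group-name-prefix /aws/containerinsights/$CLUSTER_NAME \
  --query 'logGroups[].logGroupName' --output text&lt;/code&gt;&lt;/pre&gt;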
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;EKS Metric-Based Monitoring&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The CloudWatch Observability addon installed earlier also collects the metrics that back Container Insights.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's take a look in the web console.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;You can reach it via CloudWatch &amp;rarr; Insights &amp;rarr; Container Insights.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;2035&quot; data-origin-height=&quot;1757&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bjCrsc/btsMyDYIctu/5ku8TSiUbTwnt4UEOaqaf1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bjCrsc/btsMyDYIctu/5ku8TSiUbTwnt4UEOaqaf1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bjCrsc/btsMyDYIctu/5ku8TSiUbTwnt4UEOaqaf1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbjCrsc%2FbtsMyDYIctu%2F5ku8TSiUbTwnt4UEOaqaf1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2035&quot; height=&quot;1757&quot; data-origin-width=&quot;2035&quot; data-origin-height=&quot;1757&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Clicking &lt;code&gt;View performance dashboard&lt;/code&gt; in the upper right opens several views with a variety of metrics and graphs.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;2068&quot; data-origin-height=&quot;1123&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/oOEb7/btsMzghLh7T/wlp8Gjk5EkTw9GXevi2bnK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/oOEb7/btsMzghLh7T/wlp8Gjk5EkTw9GXevi2bnK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/oOEb7/btsMzghLh7T/wlp8Gjk5EkTw9GXevi2bnK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FoOEb7%2FbtsMzghLh7T%2Fwlp8Gjk5EkTw9GXevi2bnK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2068&quot; height=&quot;1123&quot; data-origin-width=&quot;2068&quot; data-origin-height=&quot;1123&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Selecting a specific resource (chosen by namespace and workload) shows the various metrics for that resource.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;2043&quot; data-origin-height=&quot;1510&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/KhLQo/btsMytIK1uB/tdKtbn9ebKnXrLKIq8120K/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/KhLQo/btsMytIK1uB/tdKtbn9ebKnXrLKIq8120K/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/KhLQo/btsMytIK1uB/tdKtbn9ebKnXrLKIq8120K/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FKhLQo%2FbtsMytIK1uB%2FtdKtbn9ebKnXrLKIq8120K%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2043&quot; height=&quot;1510&quot; data-origin-width=&quot;2043&quot; data-origin-height=&quot;1510&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Overall, the information surfaced through Container Insights is well organized, each section is nicely visualized, and every view responded quickly.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To wrap up this exercise, let's delete the addon and the log groups it created.&lt;/p&gt;
&lt;pre class=&quot;dsconfig&quot;&gt;&lt;code&gt;aws eks delete-addon --cluster-name $CLUSTER_NAME --addon-name amazon-cloudwatch-observability

aws logs delete-log-group --log-group-name /aws/containerinsights/$CLUSTER_NAME/application
aws logs delete-log-group --log-group-name /aws/containerinsights/$CLUSTER_NAME/dataplane
aws logs delete-log-group --log-group-name /aws/containerinsights/$CLUSTER_NAME/host
aws logs delete-log-group --log-group-name /aws/containerinsights/$CLUSTER_NAME/performance&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Checking EKS Resource Events&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;EKS is integrated with AWS CloudTrail, a service that records activity against your resources. CloudTrail captures every API request made to EKS as an event, including Amazon EKS API operations issued from the web console or from code.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;If you create a trail, these logs can be retained long-term in an Amazon S3 bucket; even without any special setup, CloudTrail's Event history shows the activity that occurred on your resources.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A typical use case is determining whether a change or event on a resource was caused by EKS itself or by a user.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;You can check it as shown below.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1844&quot; data-origin-height=&quot;801&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bQOmXs/btsMA7jBpMT/tNgFPTEKb20Ox7y8JrqVAk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bQOmXs/btsMA7jBpMT/tNgFPTEKb20Ox7y8JrqVAk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bQOmXs/btsMA7jBpMT/tNgFPTEKb20Ox7y8JrqVAk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbQOmXs%2FbtsMA7jBpMT%2FtNgFPTEKb20Ox7y8JrqVAk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1844&quot; height=&quot;801&quot; data-origin-width=&quot;1844&quot; data-origin-height=&quot;801&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Clicking an entry reveals details such as userIdentity, sourceIPAddress, and userAgent.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Notably, AWS CloudTrail records not only resource changes but also the requests that triggered reads.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The document below covers CloudTrail for EKS.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://docs.aws.amazon.com/eks/latest/userguide/logging-using-cloudtrail.html&quot;&gt;https://docs.aws.amazon.com/eks/latest/userguide/logging-using-cloudtrail.html&lt;/a&gt;&lt;/p&gt;
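&lt;p data-ke-size=&quot;size16&quot;&gt;The same lookup can also be done from the CLI. The query below is a sketch (the exact events returned depend on recent activity); it filters Event history down to EKS API calls:&lt;/p&gt;
&lt;pre class=&quot;dsconfig&quot;&gt;&lt;code&gt;# Show recent CloudTrail events whose event source is the EKS API
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventSource,AttributeValue=eks.amazonaws.com \
  --max-results 5 \
  --query 'Events[].{Time:EventTime,Name:EventName,User:Username}'&lt;/code&gt;&lt;/pre&gt;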
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;That concludes monitoring and logging for EKS. For richer metrics and visualization, AWS also offers the Amazon Managed Prometheus and Amazon Managed Grafana services.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;5. Monitoring and Logging in AKS&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Azure's monitoring and logging solutions are Azure Monitor and Log Analytics workspaces.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;They correspond to AWS's CloudWatch and CloudWatch Logs, respectively. Azure Monitor lets you inspect data through built-in views or new blades you create, while a Log Analytics workspace stores data in table form, so you can query it with KQL (Kusto Query Language).&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Azure also offers Container Insights to provide metrics and logging specialized for AKS environments.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The overall AKS monitoring options are described in the document below.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://learn.microsoft.com/en-us/azure/aks/monitor-aks?tabs=cilium&quot;&gt;https://learn.microsoft.com/en-us/azure/aks/monitor-aks?tabs=cilium&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1067&quot; data-origin-height=&quot;572&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/8iSb6/btsMz8Q2deT/tDvwnhpEKj3td2le8KC9N0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/8iSb6/btsMz8Q2deT/tDvwnhpEKj3td2le8KC9N0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/8iSb6/btsMz8Q2deT/tDvwnhpEKj3td2le8KC9N0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2F8iSb6%2FbtsMz8Q2deT%2FtDvwnhpEKj3td2le8KC9N0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1067&quot; height=&quot;572&quot; data-origin-width=&quot;1067&quot; data-origin-height=&quot;572&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Following the same order as with EKS, let's look at the logging and metrics pieces.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;AKS Logging&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Control plane logs are covered under Resource logs in the table above. In Azure, each service exposes &lt;code&gt;diagnostic settings&lt;/code&gt;, and in AKS you use them to selectively collect control plane logs.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The items offered in diagnostic settings are shown below. Unlike EKS, pods such as the cluster autoscaler (CA) and the CSI controllers run in the control plane, so logs for those components can also be selected here.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;307&quot; data-origin-height=&quot;277&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bAwU9a/btsMzfC6M9J/857T47Kff6Sn7iXlcW0Lz1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bAwU9a/btsMzfC6M9J/857T47Kff6Sn7iXlcW0Lz1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bAwU9a/btsMzfC6M9J/857T47Kff6Sn7iXlcW0Lz1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbAwU9a%2FbtsMzfC6M9J%2F857T47Kff6Sn7iXlcW0Lz1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;307&quot; height=&quot;277&quot; data-origin-width=&quot;307&quot; data-origin-height=&quot;277&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
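&lt;p data-ke-size=&quot;size16&quot;&gt;The same diagnostic settings can be configured from the Azure CLI. The snippet below is a sketch with placeholder resource IDs; the category names (kube-apiserver, kube-audit-admin) are among those offered for AKS:&lt;/p&gt;
&lt;pre class=&quot;dsconfig&quot;&gt;&lt;code&gt;# Route selected control plane log categories to a Log Analytics workspace
az monitor diagnostic-settings create \
  --name aks-control-plane-logs \
  --resource &lt;AKS-resource-ID&gt; \
  --workspace &lt;Log-Analytics-workspace-ID&gt; \
  --logs '[{&quot;category&quot;:&quot;kube-apiserver&quot;,&quot;enabled&quot;:true},{&quot;category&quot;:&quot;kube-audit-admin&quot;,&quot;enabled&quot;:true}]'&lt;/code&gt;&lt;/pre&gt;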
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Next, you can enable &lt;code&gt;Container Insights&lt;/code&gt; to monitor nodes and applications. Its logs are stored in a Log Analytics workspace, and for cost control you can choose from the predefined collection presets shown below.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;980&quot; data-origin-height=&quot;403&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bRObGS/btsMz6yW7fi/3cnI6cPhrXQkBIynj2HFP1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bRObGS/btsMz6yW7fi/3cnI6cPhrXQkBIynj2HFP1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bRObGS/btsMz6yW7fi/3cnI6cPhrXQkBIynj2HFP1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbRObGS%2FbtsMz6yW7fi%2F3cnI6cPhrXQkBIynj2HFP1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;980&quot; height=&quot;403&quot; data-origin-width=&quot;980&quot; data-origin-height=&quot;403&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Clicking edit on the collection settings then shows which log and metric types are collected.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1026&quot; data-origin-height=&quot;387&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bYbL6L/btsMAjkzNLp/Mg6HGrrTr54GRYauPi3GDK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bYbL6L/btsMAjkzNLp/Mg6HGrrTr54GRYauPi3GDK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bYbL6L/btsMAjkzNLp/Mg6HGrrTr54GRYauPi3GDK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbYbL6L%2FbtsMAjkzNLp%2FMg6HGrrTr54GRYauPi3GDK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1026&quot; height=&quot;387&quot; data-origin-width=&quot;1026&quot; data-origin-height=&quot;387&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;You can select the items you want to collect: performance indicators, container logs, the state of each object, Kubernetes events, and so on.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;These items are stored as individual tables in the Log Analytics workspace, which you can query either from the cluster's Monitoring&amp;gt;Logs blade or by going directly to the workspace.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;See the sample queries below.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://docs.azure.cn/en-us/azure-monitor/reference/queries/containerlog&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;https://docs.azure.cn/en-us/azure-monitor/reference/queries/containerlog&lt;/a&gt;&lt;/p&gt;
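&lt;p data-ke-size=&quot;size16&quot;&gt;As an illustration, a simple KQL query against the ContainerLogV2 table (a sketch; which table is populated depends on the collection preset chosen above) could look like this:&lt;/p&gt;
&lt;pre class=&quot;sql&quot;&gt;&lt;code&gt;// Count container log lines containing &quot;error&quot; per pod over the last hour
ContainerLogV2
| where TimeGenerated &gt; ago(1h)
| where tostring(LogMessage) has &quot;error&quot;
| summarize ErrorCount = count() by PodName
| order by ErrorCount desc&lt;/code&gt;&lt;/pre&gt;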
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Container Insights monitoring also offers Performance and Metrics views, but the recent direction appears to be a shift toward Prometheus metrics.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;AKS Metric-Based Monitoring&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Next, let's look at metrics.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Azure provides platform metrics, built-in per-resource metrics offered at no charge. Even without enabling any advanced monitoring features, you can see some of these values under AKS&amp;gt;Monitoring&amp;gt;Metrics.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1589&quot; data-origin-height=&quot;1397&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bsu20f/btsMy2cMM0f/WTbLR6TcMXeg96Z7giwZ6K/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bsu20f/btsMy2cMM0f/WTbLR6TcMXeg96Z7giwZ6K/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bsu20f/btsMy2cMM0f/WTbLR6TcMXeg96Z7giwZ6K/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fbsu20f%2FbtsMy2cMM0f%2FWTbLR6TcMXeg96Z7giwZ6K%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1589&quot; height=&quot;1397&quot; data-origin-width=&quot;1589&quot; data-origin-height=&quot;1397&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For example, node status, pod status, and node resource metrics are selectable, and AKS has also recently made control plane metrics available in Preview.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The full description of the platform metrics is available below.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://learn.microsoft.com/en-us/azure/aks/monitor-aks-reference#metrics&quot;&gt;https://learn.microsoft.com/en-us/azure/aks/monitor-aks-reference#metrics&lt;/a&gt;&lt;/p&gt;
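&lt;p data-ke-size=&quot;size16&quot;&gt;For instance, a platform metric can be pulled from the CLI without any extra setup. The command below is a sketch with a placeholder resource ID; node_cpu_usage_percentage is one of the AKS platform metrics:&lt;/p&gt;
&lt;pre class=&quot;dsconfig&quot;&gt;&lt;code&gt;# Query node CPU usage for the cluster at 5-minute granularity
az monitor metrics list \
  --resource &lt;AKS-resource-ID&gt; \
  --metric node_cpu_usage_percentage \
  --interval PT5M&lt;/code&gt;&lt;/pre&gt;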
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;When Prometheus metrics and logging are enabled in Container Insights, the new AKS monitoring experience is greatly improved; it is currently in Preview.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://techcommunity.microsoft.com/blog/azureobservabilityblog/public-preview-the-new-aks-monitoring-experience/4297181&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;https://techcommunity.microsoft.com/blog/azureobservabilityblog/public-preview-the-new-aks-monitoring-experience/4297181&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1280&quot; data-origin-height=&quot;603&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/dG9Rc8/btsMx2Y0IIn/CawNN6sQkp5cUupGBPZyJ1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/dG9Rc8/btsMx2Y0IIn/CawNN6sQkp5cUupGBPZyJ1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/dG9Rc8/btsMx2Y0IIn/CawNN6sQkp5cUupGBPZyJ1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FdG9Rc8%2FbtsMx2Y0IIn%2FCawNN6sQkp5cUupGBPZyJ1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1280&quot; height=&quot;603&quot; data-origin-width=&quot;1280&quot; data-origin-height=&quot;603&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size23&quot;&gt;Checking AKS Resource Events&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Finally, the &lt;code&gt;Activity Log&lt;/code&gt; shows the events for the resource.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1606&quot; data-origin-height=&quot;585&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/YGEjx/btsMynaLvMz/1v6AKPEZRfB8BLwjZiEfEK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/YGEjx/btsMynaLvMz/1v6AKPEZRfB8BLwjZiEfEK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/YGEjx/btsMynaLvMz/1v6AKPEZRfB8BLwjZiEfEK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FYGEjx%2FbtsMynaLvMz%2F1v6AKPEZRfB8BLwjZiEfEK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1606&quot; height=&quot;585&quot; data-origin-width=&quot;1606&quot; data-origin-height=&quot;585&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Like EKS, AKS provides Managed Prometheus and Managed Grafana as a way to consolidate monitoring.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Note that Container Insights, previously reached via AKS&amp;gt;Monitoring&amp;gt;Insights, has moved to the AKS Monitor blade. The screen below shows Monitor Settings, where besides &lt;code&gt;Container Logs&lt;/code&gt; you can also select &lt;code&gt;Managed Prometheus&lt;/code&gt; and &lt;code&gt;Managed Grafana&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1612&quot; data-origin-height=&quot;706&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cah5rT/btsMyCFCBtL/xmY7nHSpMt6EijoKrtmMSk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cah5rT/btsMyCFCBtL/xmY7nHSpMt6EijoKrtmMSk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cah5rT/btsMyCFCBtL/xmY7nHSpMt6EijoKrtmMSk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fcah5rT%2FbtsMyCFCBtL%2FxmY7nHSpMt6EijoKrtmMSk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1612&quot; height=&quot;706&quot; data-origin-width=&quot;1612&quot; data-origin-height=&quot;706&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Compared with EKS, though, CloudWatch Container Insights seems a bit more polished in both organization and visualization.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;6. Cleaning Up&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's tear down the environment used in this exercise as follows.&lt;/p&gt;
&lt;pre class=&quot;nginx&quot;&gt;&lt;code&gt;nohup sh -c &quot;eksctl delete cluster --name $CLUSTER_NAME &amp;amp;&amp;amp; aws cloudformation delete-stack --stack-name $CLUSTER_NAME&quot; &amp;gt; /root/delete.log 2&amp;gt;&amp;amp;1 &amp;amp;&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;CloudWatch Logs is known to be costly, so be sure to confirm that every log group has been deleted.&lt;/p&gt;
&lt;pre class=&quot;dsconfig&quot;&gt;&lt;code&gt;# Delete the log group: control plane
aws logs delete-log-group --log-group-name /aws/eks/$CLUSTER_NAME/cluster

# Delete the log groups: data plane
aws logs delete-log-group --log-group-name /aws/containerinsights/$CLUSTER_NAME/application
aws logs delete-log-group --log-group-name /aws/containerinsights/$CLUSTER_NAME/dataplane
aws logs delete-log-group --log-group-name /aws/containerinsights/$CLUSTER_NAME/host
aws logs delete-log-group --log-group-name /aws/containerinsights/$CLUSTER_NAME/performance&lt;/code&gt;&lt;/pre&gt;
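&lt;p data-ke-size=&quot;size16&quot;&gt;As a final check (a quick sketch; it should print nothing once cleanup is complete), any log groups still referencing the cluster can be listed:&lt;/p&gt;
&lt;pre class=&quot;dsconfig&quot;&gt;&lt;code&gt;# List any remaining log groups that mention the cluster name
aws logs describe-log-groups \
  --query &quot;logGroups[?contains(logGroupName, '$CLUSTER_NAME')].logGroupName&quot; --output text&lt;/code&gt;&lt;/pre&gt;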
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Finally, check EC2 for any leftover volumes (such as the PVs used by Prometheus) and delete them all.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;마무리&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In this post we looked at monitoring in a Kubernetes environment, saw how to enable it on EKS and inspect the metrics, and also examined the level of monitoring AKS provides and compared the two.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This walkthrough mostly covered the options offered by the CSPs. Some users, on the other hand, build their own monitoring with open-source tools such as Prometheus and Grafana, while others use SaaS services that specialize in monitoring. Whether to use a CSP's monitoring solution, open source, or another form of monitoring is up to the user.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;From what we have seen, the CSPs are clearly evolving their monitoring toward full observability. Still, there are questions about whether CSP monitoring solutions are cost effective, and features such as dashboards (blades) and alerting have limited customization. In a multi-cloud environment you also end up managing a different monitoring stack per provider.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;That is where open-source monitoring solutions come in. The downsides are the duplication and cost of deploying the Prometheus stack components in every cluster, plus the added burden of operating the monitoring solution itself. The upsides are customizable visualization and metrics, and the ability to deploy the same monitoring stack across diverse environments for a consistent setup.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Using a SaaS service that specializes in monitoring is another option. These solutions are generally excellent, but there may be data-transfer costs and security concerns, and the price of the solution itself can be a burden.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Every monitoring solution has trade-offs, so the choice comes down to the user's preference and technical judgment.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;That wraps up this post.&lt;/p&gt;
      <category>EKS</category>
      <category>AKS</category>
      <category>Container Insight</category>
      <category>EKS</category>
      <category>logging</category>
      <category>Monitoring</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/36</guid>
      <comments>https://a-person.tistory.com/36#entry36comment</comments>
      <pubDate>Sat, 1 Mar 2025 02:54:33 +0900</pubDate>
    </item>
    <item>
<title>KCNA, KCSA Exam Review</title>
      <link>https://a-person.tistory.com/35</link>
<description>&lt;p data-ke-size=&quot;size16&quot;&gt;While renewing my recently expired Kubernetes certification, I noticed the newer KCNA and KCSA certifications and earned those as well.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In this post I will briefly introduce the two certifications and share a short review of the exams.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Common Notes&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Both exams are taken through PSI, and compared with other exams the proctors are extremely thorough with the pre-check over your laptop camera.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Dual monitors are not supported, and they inspect books, paper, writing materials, and anything else that catches the eye.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Sitting the exam with nothing but a laptop and a mouse is the way to avoid wasting time.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;KCNA and KCSA are both 90-minute, 60-question, multiple-choice (four-option) exams, and the certification is valid for two years.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Also, unlike cloud-provider certifications, the scenarios are not complicated: the questions and answer choices are generally concise.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For example, you might identify the component or product that performs a given role, or pick the issue that could occur in a given situation.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The exams are not a big burden, but since English is the only language option, you may feel a bit tired afterward.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Both exams also include questions testing a basic understanding of the Kubernetes control-plane components and their roles, and of when each resource is needed.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;KCNA(Kubernetes and Cloud Native Associate)&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;KCNA is the Kubernetes and Cloud Native Associate certification. Since Associate is not an Expert-level credential, anyone who works with Kubernetes and has a basic understanding of it can register and take the exam without much preparation.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Exam Domains&lt;/h4&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;915&quot; data-origin-height=&quot;292&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/peQnC/btsMxjSzIsS/IU6Z7d7BTyhKfpJSckRQj0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/peQnC/btsMxjSzIsS/IU6Z7d7BTyhKfpJSckRQj0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/peQnC/btsMxjSzIsS/IU6Z7d7BTyhKfpJSckRQj0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FpeQnC%2FbtsMxjSzIsS%2FIU6Z7d7BTyhKfpJSckRQj0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;915&quot; height=&quot;292&quot; data-origin-width=&quot;915&quot; data-origin-height=&quot;292&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://training.linuxfoundation.org/certification/kubernetes-cloud-native-associate/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;https://training.linuxfoundation.org/certification/kubernetes-cloud-native-associate/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;You can find the detailed topics at the Linux Foundation link above; skim past the parts you roughly know and study only the ones you are unsure about. The scope is Kubernetes plus cloud native, so related terminology and development processes are included as well.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;That said, it really is not a difficult exam.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Here are some reference materials.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://github.com/moabukar/Kubernetes-and-Cloud-Native-Associate-KCNA/blob/main/docs/kcna/questions.md&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;https://github.com/moabukar/Kubernetes-and-Cloud-Native-Associate-KCNA/blob/main/docs/kcna/questions.md&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://www.itexams.com/exam/KCNA&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;https://www.itexams.com/exam/KCNA&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;(The second link lets you view about the first 20 questions for free.)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;KCSA(Kubernetes and Cloud Native Security Associate)&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;KCSA is the security associate certification for Kubernetes and cloud native. As with KCNA, it is not much of a burden if you are familiar with Kubernetes, but this exam does require extra study on security topics.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Exam Domains&lt;/h4&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;908&quot; data-origin-height=&quot;346&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/D2zoH/btsMvdTHvRt/8cCDdlumZhd3iXHlbsUZQ0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/D2zoH/btsMvdTHvRt/8cCDdlumZhd3iXHlbsUZQ0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/D2zoH/btsMvdTHvRt/8cCDdlumZhd3iXHlbsUZQ0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FD2zoH%2FbtsMvdTHvRt%2F8cCDdlumZhd3iXHlbsUZQ0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;908&quot; height=&quot;346&quot; data-origin-width=&quot;908&quot; data-origin-height=&quot;346&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://training.linuxfoundation.org/certification/kubernetes-and-cloud-native-security-associate-kcsa/&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;https://training.linuxfoundation.org/certification/kubernetes-and-cloud-native-security-associate-kcsa/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This exam also has questions on the Kubernetes control plane, and for roughly 10+ questions you can simply pick the answer that intuitively looks most secure.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;However, around 20 to 30 questions genuinely require understanding of the topic, so it is worth studying for about 2 to 3 hours.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;You need to know Kubernetes security topics such as service accounts, network policies, RBAC, authorization, and admission controllers, and you should also understand the role of the components and products that harden a Kubernetes environment (seccomp, AppArmor, gVisor, Falco, Firecracker, and so on). &lt;span style=&quot;letter-spacing: 0px;&quot;&gt;Finally, it helps to become familiar with security regulations and terminology.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Expand on the exam domains above and look up any unfamiliar terms in each category.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Unfortunately, well-organized study material for KCSA is hard to find.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Personally, I read through a few reviews and studied mainly from the links they referenced.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;I hope this was helpful; that is all for this post.&lt;/p&gt;
<category>Etc.</category>
      <category>kcna</category>
      <category>kcsa</category>
      <category>kubernetes</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/35</guid>
      <comments>https://a-person.tistory.com/35#entry35comment</comments>
      <pubDate>Tue, 25 Feb 2025 23:36:23 +0900</pubDate>
    </item>
    <item>
<title>[3-2] EKS Node Groups</title>
      <link>https://a-person.tistory.com/34</link>
<description>&lt;p data-ke-size=&quot;size16&quot;&gt;EKS exposes Kubernetes nodes as node groups. This post looks at the node group types EKS offers and, in more detail, the node types you can create with managed node groups.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Table of Contents&lt;/h2&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;Kubernetes node resources offered by CSPs&lt;/li&gt;
&lt;li&gt;EKS node group types&lt;/li&gt;
&lt;li&gt;EKS node groups: AL2 -&amp;gt; AL2023&lt;/li&gt;
&lt;li&gt;Trying out various node groups&lt;/li&gt;
&lt;li&gt;Cleaning up resources&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;1. Kubernetes Node Resources Offered by CSPs&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A Kubernetes node is a resource for running workloads; it can be a virtual or a physical machine. Each node runs the components needed to run pods: the kubelet and a container runtime for running pods, plus kube-proxy, which implements Services.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;633&quot; data-origin-height=&quot;217&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/lOrSA/btsMtIdP0dY/yxiA5UzyBl8IlYPvi5SwkK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/lOrSA/btsMtIdP0dY/yxiA5UzyBl8IlYPvi5SwkK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/lOrSA/btsMtIdP0dY/yxiA5UzyBl8IlYPvi5SwkK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FlOrSA%2FbtsMtIdP0dY%2FyxiA5UzyBl8IlYPvi5SwkK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;633&quot; height=&quot;217&quot; data-origin-width=&quot;633&quot; data-origin-height=&quot;217&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://learn.microsoft.com/ko-kr/azure/architecture/aws-professional/eks-to-aks/node-pools&quot;&gt;https://learn.microsoft.com/ko-kr/azure/architecture/aws-professional/eks-to-aks/node-pools&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In a managed Kubernetes service from a CSP (Cloud Service Provider), the control plane is managed internally, so users typically manage only the nodes, usually called the data plane or worker nodes.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;CSPs generally offer a service that scales stateless applications in and out by imaging a set of identical virtual machines and scaling that set. Since Kubernetes nodes do not need to hold state, nodes are provided on top of such VM sets.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Accordingly, EKS auto-scales Kubernetes nodes as node groups built on ASGs (Auto Scaling Groups), while AKS provides the same capability as node pools built on VMSS (Virtual Machine Scale Sets).&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;So in a managed Kubernetes service, growing or shrinking the set of nodes, and by extension the behavior of the Cluster Autoscaler, is implemented by calling the APIs of the underlying VM-set service.&lt;/p&gt;
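&lt;p data-ke-size=&quot;size16&quot;&gt;On EKS, for example, you can see the ASG behind a managed node group. A hedged sketch follows: the node group name ng1 matches this post's labs, and the AWS CLI calls are commented out because they require live credentials.&lt;/p&gt;

```shell
# The managed node group name below (ng1) follows this post's labs; the AWS
# CLI calls are commented out because they require live credentials.
NODEGROUP=ng1
QUERY="nodegroup.resources.autoScalingGroups[].name"

# Look up the Auto Scaling Group that backs the node group:
# ASG_NAME=$(aws eks describe-nodegroup --cluster-name "$CLUSTER_NAME" \
#   --nodegroup-name "$NODEGROUP" --query "$QUERY" --output text)
#
# Scaling the node group ultimately adjusts that ASG's desired capacity:
# aws autoscaling set-desired-capacity \
#   --auto-scaling-group-name "$ASG_NAME" --desired-capacity 4
echo "nodegroup $NODEGROUP scales via its backing ASG"
```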
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;2. EKS Node Group Types&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Going by the documentation categories, EKS provides the following node group types.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://docs.aws.amazon.com/eks/latest/userguide/eks-compute.html&quot;&gt;https://docs.aws.amazon.com/eks/latest/userguide/eks-compute.html&lt;/a&gt;&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Managed node groups&lt;/li&gt;
&lt;li&gt;Self-managed nodes&lt;/li&gt;
&lt;li&gt;AWS Fargate&lt;/li&gt;
&lt;li&gt;Hybrid nodes&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;There is also a category called pre-built optimized AMIs. As I understand it, this is not a node group type per se; it means you can choose an AMI that AWS has pre-optimized for an OS such as Amazon Linux, Windows, Bottlerocket, or Ubuntu.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Managed node groups also have a capacity type: On-Demand or Spot. On-Demand is the default when none is specified; Spot uses spare instances at a steep discount, and AWS can reclaim those resources when it needs them.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Looking at EKS and AKS so far, a few differences in philosophy stand out. First, EKS feels lightweight: as we saw earlier, the node components are kept minimal, and so are the CNI and the network/storage implementations.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The second difference is that EKS lets you customize parts of the managed service. The way nodes are provided also differs greatly: because AKS aims to be a fully managed service, it offers only managed node pools, and node configuration is either exposed through a defined interface (Custom Node Configuration for AKS node pools, &lt;a href=&quot;https://learn.microsoft.com/en-us/azure/aks/custom-node-configuration?tabs=linux-node-pools&quot;&gt;https://learn.microsoft.com/en-us/azure/aks/custom-node-configuration?tabs=linux-node-pools&lt;/a&gt;) or strictly restricted.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;On the node-customization side, even managed node groups allow customization by passing extra commands to the node, for example via &lt;code&gt;preBootstrapCommands&lt;/code&gt;.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Below is an example that installs extra packages with dnf (the package manager) on an Amazon Linux 2023 node group.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;managedNodeGroups:
- amiFamily: AmazonLinux2023
  desiredCapacity: 3
  iam:
    withAddonPolicies:
      certManager: true # Enable cert-manager
      externalDNS: true # Enable ExternalDNS
  instanceType: t3.medium
  preBootstrapCommands:
    # install additional packages
    - &quot;dnf install nvme-cli links tree tcpdump sysstat ipvsadm ipset bind-utils htop -y&quot;
  labels:
    alpha.eksctl.io/cluster-name: myeks
    alpha.eksctl.io/nodegroup-name: ng1
  maxPodsPerNode: 100
  maxSize: 3
  minSize: 3
  name: ng1
  ssh:
    allow: true
    publicKeyName: $SSHKEYNAME
  tags:
    alpha.eksctl.io/nodegroup-name: ng1
    alpha.eksctl.io/nodegroup-type: managed
  volumeIOPS: 3000
  volumeSize: 120
  volumeThroughput: 125
  volumeType: gp3&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Beyond that, the eksctl Config File Schema (&lt;a href=&quot;https://eksctl.io/usage/schema/&quot;&gt;https://eksctl.io/usage/schema/&lt;/a&gt;) includes fields such as &lt;code&gt;kubeletExtraConfig&lt;/code&gt; for custom kubelet configuration, and, as shown below, you can see what changes are possible by passing user data through a launch template.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1150&quot; data-origin-height=&quot;478&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/d1DrOU/btsMrEqx3Sg/4EltLS7bRTikUFD6uOnKc1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/d1DrOU/btsMrEqx3Sg/4EltLS7bRTikUFD6uOnKc1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/d1DrOU/btsMrEqx3Sg/4EltLS7bRTikUFD6uOnKc1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fd1DrOU%2FbtsMrEqx3Sg%2F4EltLS7bRTikUFD6uOnKc1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1150&quot; height=&quot;478&quot; data-origin-width=&quot;1150&quot; data-origin-height=&quot;478&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://www.youtube.com/watch?v=lHYiew91iHY&quot;&gt;https://www.youtube.com/watch?v=lHYiew91iHY&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;From the content, my understanding is that the user data is base64-encoded and passed as LaunchTemplateData.UserData.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1187&quot; data-origin-height=&quot;696&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/94pZy/btsMrF33LDs/tJElYGiiKFseG5Z5JYvF70/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/94pZy/btsMrF33LDs/tJElYGiiKFseG5Z5JYvF70/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/94pZy/btsMrF33LDs/tJElYGiiKFseG5Z5JYvF70/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2F94pZy%2FbtsMrF33LDs%2FtJElYGiiKFseG5Z5JYvF70%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1187&quot; height=&quot;696&quot; data-origin-width=&quot;1187&quot; data-origin-height=&quot;696&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://youtu.be/lHYiew91iHY?t=821&quot;&gt;https://youtu.be/lHYiew91iHY?t=821&lt;/a&gt;&lt;/p&gt;
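&lt;p data-ke-size=&quot;size16&quot;&gt;That flow can be sketched roughly as follows. The file name and launch-template name here are hypothetical; the one fixed requirement is that LaunchTemplateData.UserData must be base64-encoded, so the actual AWS CLI call is shown commented out.&lt;/p&gt;

```shell
# Example user data (hypothetical content).
printf '%s\n' '#!/bin/bash' 'echo extra node setup' | tee userdata.sh

# LaunchTemplateData.UserData must be base64-encoded.
ENCODED=$(base64 -w0 userdata.sh)
echo "$ENCODED"

# The encoded value can then be passed to a new launch template version, e.g.:
# aws ec2 create-launch-template-version \
#   --launch-template-name my-nodegroup-lt --source-version 1 \
#   --launch-template-data "{\"UserData\":\"$ENCODED\"}"
```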
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Finally, this is turned into a launch template and used to create a new node group.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1189&quot; data-origin-height=&quot;286&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/GCYdf/btsMuy9C9PG/sRAs7Rsl5MaPOhdizT1081/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/GCYdf/btsMuy9C9PG/sRAs7Rsl5MaPOhdizT1081/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/GCYdf/btsMuy9C9PG/sRAs7Rsl5MaPOhdizT1081/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FGCYdf%2FbtsMuy9C9PG%2FsRAs7Rsl5MaPOhdizT1081%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1189&quot; height=&quot;286&quot; data-origin-width=&quot;1189&quot; data-origin-height=&quot;286&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the end, if you have special goals or requirements, you can use a self-managed node group with a custom AMI. AKS does not allow node pools based on your own images.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Another difference is that an EKS node group can specify multiple instance types (the &lt;code&gt;--instance-types&lt;/code&gt; option below).&lt;/p&gt;
&lt;pre class=&quot;dsconfig&quot;&gt;&lt;code&gt;aws eks create-nodegroup \
  --cluster-name $CLUSTER_NAME \
  --nodegroup-name managed-spot \
  --subnets $PubSubnet1 $PubSubnet2 $PubSubnet3 \
  --node-role $NODEROLEARN \
  --instance-types c5.large c5d.large c5a.large \
  --capacity-type SPOT \
  --scaling-config minSize=2,maxSize=3,desiredSize=2 \
  --disk-size 20&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Recently AKS announced a new node pool type, Virtual Machines node pools, in preview; these node pools can mix multiple VM sizes.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://learn.microsoft.com/en-us/azure/aks/virtual-machines-node-pools&quot;&gt;https://learn.microsoft.com/en-us/azure/aks/virtual-machines-node-pools&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A node concept unique to AKS is the separation of system node pools and user node pools. This distinction exists to keep user workloads from affecting system components (coredns, metrics-server, and so on). A dedicated system node pool can be configured with taints; see the document below for details.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://learn.microsoft.com/en-us/azure/aks/use-system-pools?tabs=azure-cli&quot;&gt;https://learn.microsoft.com/en-us/azure/aks/use-system-pools?tabs=azure-cli&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;From what we have seen, EKS grants users a great deal of autonomy, while AKS takes the position of keeping as much as possible within the managed domain (exposing only interfaces validated in the product).&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Apart from a few fundamental differences, the node offerings of EKS and AKS are similar. For EKS node groups versus AKS node pools, see the document below.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://learn.microsoft.com/ko-kr/azure/architecture/aws-professional/eks-to-aks/node-pools&quot;&gt;https://learn.microsoft.com/ko-kr/azure/architecture/aws-professional/eks-to-aks/node-pools&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;3. EKS Node Groups: AL2 -&amp;gt; AL2023&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;If you followed the earlier hands-on, you will have a node group based on Amazon Linux 2023.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;If you have not deployed the lab environment yet, please see [3-1] EKS storage options (&lt;a href=&quot;https://a-person.tistory.com/33&quot;&gt;https://a-person.tistory.com/33&lt;/a&gt;).&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Amazon Linux 2 initialized nodes with &lt;code&gt;/etc/eks/bootstrap.sh&lt;/code&gt;, whereas Amazon Linux 2023 switched to &lt;code&gt;nodeadm&lt;/code&gt;, which configures nodes declaratively using a YAML configuration schema.&lt;/p&gt;
&lt;pre class=&quot;dos&quot;&gt;&lt;code&gt;[root@ip-192-168-1-6 /]# cat  /etc/eks/bootstrap.sh
#!/usr/bin/env bash

echo &amp;gt;&amp;amp;2 '
!!!!!!!!!!
!!!!!!!!!! ERROR: bootstrap.sh has been removed from AL2023-based EKS AMIs.
!!!!!!!!!!
!!!!!!!!!! EKS nodes are now initialized by nodeadm.
!!!!!!!!!!
!!!!!!!!!! To migrate your user data, see:
!!!!!!!!!!
!!!!!!!!!!     https://awslabs.github.io/amazon-eks-ami/nodeadm/
!!!!!!!!!!
'

exit 1&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For example, when you change the max pod count, it creates /etc/kubernetes/kubelet/config.json.d/00-nodeadm.conf, which overrides the base configuration file /etc/kubernetes/kubelet/config.json.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;[root@ip-192-168-1-6 /]# cat /etc/kubernetes/kubelet/config.json | grep maxPods
    &quot;maxPods&quot;: 17,
[root@ip-192-168-1-6 /]# cat /etc/kubernetes/kubelet/config.json.d/00-nodeadm.conf | grep maxPods
    &quot;maxPods&quot;: 100&lt;/code&gt;&lt;/pre&gt;
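&lt;p data-ke-size=&quot;size16&quot;&gt;For reference, that drop-in is generated from a nodeadm NodeConfig. Below is a minimal sketch of what such a config looks like, based on my reading of the nodeadm docs; it is written to a local file purely for illustration (on a real node it is supplied as user data), and maxPods mirrors the value above.&lt;/p&gt;

```shell
# A minimal nodeadm NodeConfig that raises maxPods (sketch, not a full
# config; written to a local file only for illustration).
printf '%s\n' \
  'apiVersion: node.eks.aws/v1alpha1' \
  'kind: NodeConfig' \
  'spec:' \
  '  kubelet:' \
  '    config:' \
  '      maxPods: 100' | tee nodeconfig.yaml
```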
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;There are also major changes such as the move from &lt;code&gt;cgroupv1&lt;/code&gt; to &lt;code&gt;cgroupv2&lt;/code&gt;; see below for details.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://docs.aws.amazon.com/ko_kr/eks/latest/userguide/al2023.html&quot;&gt;https://docs.aws.amazon.com/ko_kr/eks/latest/userguide/al2023.html&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;4. Trying Out Various Node Groups&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's create some of the various node groups EKS offers.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Graviton Instance Node Group&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Graviton instances are Amazon's ARM-based instances. You can test them as follows.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# Check node architectures
kubectl get nodes -L kubernetes.io/arch

NAME                                               STATUS   ROLES    AGE    VERSION               ARCH
ip-192-168-1-6.ap-northeast-2.compute.internal     Ready    &amp;lt;none&amp;gt;   156m   v1.31.5-eks-5d632ec   amd64
ip-192-168-2-172.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   157m   v1.31.5-eks-5d632ec   amd64
ip-192-168-3-246.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   156m   v1.31.5-eks-5d632ec   amd64

# Create a new node group (eksctl create nodegroup --help)
eksctl create nodegroup -c $CLUSTER_NAME -r ap-northeast-2 --subnet-ids &quot;$PubSubnet1&quot;,&quot;$PubSubnet2&quot;,&quot;$PubSubnet3&quot; \
  -n ng3 -t t4g.medium -N 1 -m 1 -M 1 --node-volume-size=30 --node-labels family=graviton --dry-run &amp;gt; myng3.yaml
eksctl create nodegroup -f myng3.yaml

# Verify (arm64)
kubectl get nodes -L kubernetes.io/arch
NAME                                               STATUS   ROLES    AGE     VERSION               ARCH
ip-192-168-1-6.ap-northeast-2.compute.internal     Ready    &amp;lt;none&amp;gt;   161m    v1.31.5-eks-5d632ec   amd64
ip-192-168-2-172.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   161m    v1.31.5-eks-5d632ec   amd64
ip-192-168-3-188.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   2m12s   v1.31.5-eks-5d632ec   arm64
ip-192-168-3-246.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   161m    v1.31.5-eks-5d632ec   amd64

 kubectl get nodes --label-columns eks.amazonaws.com/nodegroup,kubernetes.io/arch,eks.amazonaws.com/capacityType
NAME                                               STATUS   ROLES    AGE     VERSION               NODEGROUP   ARCH    CAPACITYTYPE
ip-192-168-1-6.ap-northeast-2.compute.internal     Ready    &amp;lt;none&amp;gt;   162m    v1.31.5-eks-5d632ec   ng1         amd64   ON_DEMAND
ip-192-168-2-172.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   162m    v1.31.5-eks-5d632ec   ng1         amd64   ON_DEMAND
ip-192-168-3-188.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   2m53s   v1.31.5-eks-5d632ec   ng3         arm64   ON_DEMAND
ip-192-168-3-246.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   162m    v1.31.5-eks-5d632ec   ng1         amd64   ON_DEMAND&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Since this environment now has nodes of multiple platforms, pod scheduling needs care. You can use taints to prevent mis-scheduling.&lt;/p&gt;
&lt;pre class=&quot;jboss-cli&quot;&gt;&lt;code&gt;# Check taint info
aws eks describe-nodegroup --cluster-name $CLUSTER_NAME --nodegroup-name ng3 | jq .nodegroup.taints

# Apply taints (not applied immediately)
aws eks update-nodegroup-config --cluster-name $CLUSTER_NAME --nodegroup-name ng3 --taints &quot;addOrUpdateTaints=[{key=arm64, value=true, effect=NO_EXECUTE}]&quot;

# Verify
kubectl describe nodes --selector family=graviton | grep Taints
Taints:             arm64=true:NoExecute

aws eks describe-nodegroup --cluster-name $CLUSTER_NAME --nodegroup-name ng3 | jq .nodegroup.taints
[
  {
    &quot;key&quot;: &quot;arm64&quot;,
    &quot;value&quot;: &quot;true&quot;,
    &quot;effect&quot;: &quot;NO_EXECUTE&quot;
  }
]&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The workload itself matters too: because ARM's CPU architecture differs from AMD64, the images you run must be built for that other architecture at build time. For multi-platform (cross-platform) builds, see documentation such as docker buildx.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://docs.docker.com/build/building/multi-platform/&quot;&gt;https://docs.docker.com/build/building/multi-platform/&lt;/a&gt;&lt;/p&gt;
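&lt;p data-ke-size=&quot;size16&quot;&gt;For reference, a typical buildx invocation looks like the sketch below. The image name is hypothetical, and the build command itself is commented out since it needs a Docker daemon with the buildx plugin and a registry to push to.&lt;/p&gt;

```shell
# Platform list for a multi-arch image; uname -m shows the build host's arch.
PLATFORMS="linux/amd64,linux/arm64"
uname -m

# Typical multi-platform build and push (hypothetical image name; needs a
# docker daemon with the buildx plugin, so it is commented out here):
# docker buildx build --platform "$PLATFORMS" -t myrepo/myapp:latest --push .
echo "building for: $PLATFORMS"
```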
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;busybox supports many platforms, so the sample below uses busybox.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Create a pod
cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  terminationGracePeriodSeconds: 3
  containers:
  - name: busybox
    image: busybox
    command:
    - &quot;/bin/sh&quot;
    - &quot;-c&quot;
    - &quot;while true; do date &amp;gt;&amp;gt; /home/pod-out.txt; cd /home; sync; sync; sleep 10; done&quot;
  tolerations:
    - effect: NoExecute
      key: arm64
      operator: Exists
  nodeSelector:
    family: graviton
EOF

# Check which node the pod was scheduled on
kubectl get pod -owide
NAME             READY   STATUS    RESTARTS   AGE   IP             NODE                                               NOMINATED NODE   READINESS GATES
busybox          1/1     Running   0          12s   192.168.3.97   ip-192-168-3-188.ap-northeast-2.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;

kubectl exec -it busybox -- arch
aarch64

# Delete
kubectl delete pod busybox&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's finish this exercise and delete the node group.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;eksctl delete nodegroup -c $CLUSTER_NAME -n ng3&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Spot node group&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As mentioned earlier, a node group can specify a capacity type. Below, we create an additional Spot node group.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# Check the capacity type of the nodes
kubectl get nodes -l eks.amazonaws.com/capacityType=ON_DEMAND
kubectl get nodes -L eks.amazonaws.com/capacityType
NAME                                               STATUS   ROLES    AGE    VERSION               CAPACITYTYPE
ip-192-168-1-6.ap-northeast-2.compute.internal     Ready    &amp;lt;none&amp;gt;   170m   v1.31.5-eks-5d632ec   ON_DEMAND
ip-192-168-2-172.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   170m   v1.31.5-eks-5d632ec   ON_DEMAND
ip-192-168-3-246.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   170m   v1.31.5-eks-5d632ec   ON_DEMAND

# Create the node group
NODEROLEARN=$(aws iam list-roles --query &quot;Roles[?contains(RoleName, 'nodegroup-ng1')].Arn&quot; --output text)
echo $NODEROLEARN

aws eks create-nodegroup \
  --cluster-name $CLUSTER_NAME \
  --nodegroup-name managed-spot \
  --subnets $PubSubnet1 $PubSubnet2 $PubSubnet3 \
  --node-role $NODEROLEARN \
  --instance-types c5.large c5d.large c5a.large \
  --capacity-type SPOT \
  --scaling-config minSize=2,maxSize=3,desiredSize=2 \
  --disk-size 20


# The command returns to the prompt immediately, so wait until the spot node group is fully Ready.
aws eks wait nodegroup-active --cluster-name $CLUSTER_NAME --nodegroup-name managed-spot

# Verify
kubectl get nodes -L eks.amazonaws.com/capacityType,eks.amazonaws.com/nodegroup
NAME                                               STATUS   ROLES    AGE    VERSION               CAPACITYTYPE   NODEGROUP
ip-192-168-1-167.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   68s    v1.31.5-eks-5d632ec   SPOT           managed-spot
ip-192-168-1-6.ap-northeast-2.compute.internal     Ready    &amp;lt;none&amp;gt;   173m   v1.31.5-eks-5d632ec   ON_DEMAND      ng1
ip-192-168-2-15.ap-northeast-2.compute.internal    Ready    &amp;lt;none&amp;gt;   68s    v1.31.5-eks-5d632ec   SPOT           managed-spot
ip-192-168-2-172.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   173m   v1.31.5-eks-5d632ec   ON_DEMAND      ng1
ip-192-168-3-246.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   173m   v1.31.5-eks-5d632ec   ON_DEMAND      ng1&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Two Spot nodes have been added. Let's create a pod that runs on the Spot nodes.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  terminationGracePeriodSeconds: 3
  containers:
  - name: busybox
    image: busybox
    command:
    - &quot;/bin/sh&quot;
    - &quot;-c&quot;
    - &quot;while true; do date &amp;gt;&amp;gt; /home/pod-out.txt; cd /home; sync; sync; sleep 10; done&quot;
  nodeSelector:
    eks.amazonaws.com/capacityType: SPOT
EOF

# Check the node where the pod was scheduled
kubectl get pod -owide
NAME             READY   STATUS    RESTARTS   AGE   IP             NODE                                              NOMINATED NODE   READINESS GATES
busybox          1/1     Running   0          8s    192.168.2.98   ip-192-168-2-15.ap-northeast-2.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;

# Delete
kubectl delete pod busybox&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The pod was scheduled on one of the Spot nodes. However, since Spot nodes can be preempted, it is recommended to run only workloads that are transient or tolerant of disruption on them.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As with the ARM node group, it is a good idea to apply a taint to the Spot node group so that ordinary workloads are not scheduled onto it.&lt;/p&gt;
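Following the same `update-nodegroup-config` pattern used earlier for the ARM node group, such a taint could be applied roughly like this (the `spot=true` key/value is an arbitrary choice for illustration):

```bash
# Sketch: taint the managed-spot node group. NO_SCHEDULE blocks new pods
# without a matching toleration while leaving already-running pods in place
# (unlike NO_EXECUTE, which evicts them).
aws eks update-nodegroup-config \
  --cluster-name $CLUSTER_NAME \
  --nodegroup-name managed-spot \
  --taints "addOrUpdateTaints=[{key=spot, value=true, effect=NO_SCHEDULE}]"
```

Workloads intended for the Spot nodes would then add a corresponding toleration, as in the busybox examples above.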
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Spot node pools can be created in AKS as well.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://learn.microsoft.com/en-us/azure/aks/spot-node-pool&quot;&gt;https://learn.microsoft.com/en-us/azure/aks/spot-node-pool&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;AKS applies the &lt;code&gt;kubernetes.azure.com/scalesetpriority=spot:NoSchedule&lt;/code&gt; taint to spot node pools by default. This prevents customer workloads from being scheduled there unintentionally, so note that any workload intended to run on a spot node pool needs a matching toleration.&lt;/p&gt;
&lt;pre class=&quot;less&quot;&gt;&lt;code&gt;spec:
  containers:
  - name: spot-example
  tolerations:
  - key: &quot;kubernetes.azure.com/scalesetpriority&quot;
    operator: &quot;Equal&quot;
    value: &quot;spot&quot;
    effect: &quot;NoSchedule&quot;
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: &quot;kubernetes.azure.com/scalesetpriority&quot;
            operator: In
            values:
            - &quot;spot&quot;
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: another-node-label-key
            operator: In
            values:
            - another-node-label-value&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Finally, delete the node group used in this exercise.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;eksctl delete nodegroup -c $CLUSTER_NAME -n managed-spot&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;5. Resource cleanup&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Once you have finished the exercises, delete the EKS cluster as shown below, confirm the deletion, and then delete the lab environment created with CloudFormation.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# Delete the EKS cluster
eksctl delete cluster --name $CLUSTER_NAME

# Delete the lab environment
aws cloudformation delete-stack --stack-name myeks&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Wrap-up&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;We have looked at the storage options and node groups of EKS.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Since I mainly work in Azure, I was able to compare some of the resources created in EKS with their AKS counterparts.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;It took longer than expected, and some EKS concepts are still not entirely clear to me, so I plan to study further and supplement this post later.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Next time, we will look at EKS observability and organize the findings.&lt;/p&gt;
      <category>EKS</category>
      <category>AKS</category>
      <category>aws</category>
      <category>Azure</category>
      <category>EKS</category>
      <category>node group</category>
      <category>node pool</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/34</guid>
      <comments>https://a-person.tistory.com/34#entry34comment</comments>
      <pubDate>Sun, 23 Feb 2025 02:05:24 +0900</pubDate>
    </item>
    <item>
<title>[3-1] EKS Storage Options</title>
      <link>https://a-person.tistory.com/33</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;이번 포스트에서는 Kubernetes의 Persistent Volume을 지원하기 위해서 EKS에서 사용 가능한 옵션을 살펴보겠습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Table of Contents&lt;/h2&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;Kubernetes storage options&lt;/li&gt;
&lt;li&gt;EKS storage options&lt;/li&gt;
&lt;li&gt;AKS storage options&lt;/li&gt;
&lt;li&gt;Lab environment and preliminary checks&lt;/li&gt;
&lt;li&gt;Using the Amazon EBS CSI Driver&lt;/li&gt;
&lt;li&gt;Using the Amazon EFS CSI Driver&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;1. Kubernetes storage options&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A pod (container) running in Kubernetes can be understood as a container image executed as a special kind of process (isolated, with constrained resources). The container may need additional storage beyond what exists in the image, and Kubernetes can provide this through volumes.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A Kubernetes volume is a broader concept than storage in the usual sense: it covers ConfigMaps, Secrets, ephemeral scratch space, and persistent storage.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://kubernetes.io/docs/concepts/storage/volumes/&quot;&gt;https://kubernetes.io/docs/concepts/storage/volumes/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;910&quot; data-origin-height=&quot;577&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/d1ryQg/btsMr7MLMnu/9YkHirKBd5ypggYbYRBpX0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/d1ryQg/btsMr7MLMnu/9YkHirKBd5ypggYbYRBpX0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/d1ryQg/btsMr7MLMnu/9YkHirKBd5ypggYbYRBpX0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fd1ryQg%2FbtsMr7MLMnu%2F9YkHirKBd5ypggYbYRBpX0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;910&quot; height=&quot;577&quot; data-origin-width=&quot;910&quot; data-origin-height=&quot;577&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Kubernetes volumes can be divided into ephemeral volumes and persistent volumes.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;An ephemeral volume shares the pod's lifecycle, while a persistent volume can exist independently of it. That is, when a pod is deleted, Kubernetes removes its ephemeral volumes but not its persistent volumes. Also, since any volume lives at least as long as the pod, the data in a volume survives a restart of the &quot;container&quot;.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Another important point is that volumes are defined in the pod spec, so they can be shared between the containers within a pod.&lt;/p&gt;
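A minimal sketch of this sharing (container names and paths here are illustrative): two containers in the same pod mount one emptyDir, so what the first writes, the second can read.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-demo
spec:
  volumes:
  - name: shared          # one emptyDir, mounted by both containers
    emptyDir: {}
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "while true; do date >> /shared/log.txt; sleep 5; done"]
    volumeMounts:
    - name: shared
      mountPath: /shared
  - name: reader
    image: busybox
    command: ["sh", "-c", "sleep 5; tail -f /shared/log.txt"]
    volumeMounts:
    - name: shared
      mountPath: /shared
```

`kubectl logs shared-volume-demo -c reader` would then show the timestamps written by the `writer` container.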
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The kinds of ephemeral volumes are listed below.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/&quot;&gt;https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;906&quot; data-origin-height=&quot;381&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/nu25R/btsMs6GeOCF/DgCKAl3dPCFtm33PEKUbk1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/nu25R/btsMs6GeOCF/DgCKAl3dPCFtm33PEKUbk1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/nu25R/btsMs6GeOCF/DgCKAl3dPCFtm33PEKUbk1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fnu25R%2FbtsMs6GeOCF%2FDgCKAl3dPCFtm33PEKUbk1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;906&quot; height=&quot;381&quot; data-origin-width=&quot;906&quot; data-origin-height=&quot;381&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;If you are new to Kubernetes, the semantics of emptyDir can be confusing, so let's explore it hands-on.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;First, for comparison, we run a pod with no volume defined and check whether data survives a container restart.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/&quot;&gt;https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/&lt;/a&gt;&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Create a redis pod
$ cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  terminationGracePeriodSeconds: 0
  containers:
  - name: redis
    image: redis
EOF

# Write a file inside the redis pod
$ kubectl exec -it redis -- pwd
/data
$ kubectl exec -it redis -- sh -c &quot;echo hello &amp;gt; /data/hello.txt&quot;
$ kubectl exec -it redis -- cat /data/hello.txt
hello

# Install ps (to check the PID of the container process)
$ kubectl exec -it redis -- sh -c &quot;apt update &amp;amp;&amp;amp; apt install procps -y&quot;
&amp;lt;omitted&amp;gt;
$ kubectl exec -it redis -- ps aux
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
redis          1  0.2  0.2 133636 15600 ?        Ssl  14:35   0:00 redis-server
root         233  0.0  0.0   8088  3916 pts/0    Rs+  14:37   0:00 ps aux

# Kill the redis process (PID 1)
$ kubectl exec -it redis -- kill 1

# The container is restarted
$ kubectl get pod
NAME    READY   STATUS    RESTARTS      AGE
redis   1/1     Running   1 (45s ago)   2m52s

# Check the file inside the redis pod
$ kubectl exec -it redis -- cat /data/hello.txt
cat: /data/hello.txt: No such file or directory
$ kubectl exec -it redis -- ls -l /data
total 0

# Delete the pod
$ kubectl delete pod redis&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Without a volume, writing a file in a running container merely writes to the writable layer of the container runtime, which lasts only as long as that container.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Below is the same test using emptyDir. To see the result more clearly, this time we deploy with a Deployment.&lt;/p&gt;
&lt;pre class=&quot;bash&quot; data-ke-language=&quot;bash&quot;&gt;&lt;code&gt;# Create a redis pod with emptyDir
$ cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: redis
        image: redis
        volumeMounts:
        - name: redis-storage
          mountPath: /data/redis
      volumes:
      - name: redis-storage
        emptyDir: {}
EOF

# Write a file inside the redis pod
$ kubectl get po
NAME                     READY   STATUS    RESTARTS   AGE
redis-78fdb689f4-pbmz7   1/1     Running   0          3s
$ kubectl exec -it redis-78fdb689f4-pbmz7 -- pwd
/data
$ kubectl exec -it redis-78fdb689f4-pbmz7 -- sh -c &quot;echo hello &amp;gt; /data/redis/hello.txt&quot;
$ kubectl exec -it redis-78fdb689f4-pbmz7 -- cat /data/redis/hello.txt
hello

# Install ps
$ kubectl exec -it redis-78fdb689f4-pbmz7 -- sh -c &quot;apt update &amp;amp;&amp;amp; apt install procps -y&quot;
&amp;lt;omitted&amp;gt;
$ kubectl exec -it redis-78fdb689f4-pbmz7 -- ps aux

# Kill the redis process
$ kubectl exec -it redis-78fdb689f4-pbmz7 -- kill 1

# The container is restarted
$ kubectl get pod
NAME                     READY   STATUS    RESTARTS     AGE
redis-78fdb689f4-pbmz7   1/1     Running   1 (4s ago)   88s

# Unlike with container storage, the file has survived inside the pod.
$ kubectl exec -it redis-78fdb689f4-pbmz7 -- cat /data/redis/hello.txt
hello
$ kubectl exec -it redis-78fdb689f4-pbmz7 -- ls -l /data/redis
total 4
-rw-r--r-- 1 redis root 6 Feb 19 14:46 hello.txt

# Delete the pod and check the file
$ kubectl delete pod redis-78fdb689f4-pbmz7
pod &quot;redis-78fdb689f4-pbmz7&quot; deleted

# The Deployment has created a new pod.
$ kubectl get po
NAME                     READY   STATUS    RESTARTS   AGE
redis-78fdb689f4-8thct   1/1     Running   0          4s

# Checking the file in the new redis pod shows it disappeared when the old pod was terminated.
$ kubectl exec -it redis-78fdb689f4-8thct  -- cat /data/redis/hello.txt
cat: /data/redis/hello.txt: No such file or directory
command terminated with exit code 1
$ kubectl exec -it redis-78fdb689f4-8thct  -- ls -l /data/redis
total 0

# Clean up (delete the Deployment, otherwise it will just recreate the pod)
kubectl delete deployment redis&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The key takeaway: emptyDir is a kind of ephemeral volume, so it persists only for the lifetime of the pod.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Unlike an ephemeral volume, a persistent volume can persist independently of the pod's lifecycle.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The document below covers persistent volumes; terms like PersistentVolume and PersistentVolumeClaim are probably already familiar.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://kubernetes.io/docs/concepts/storage/persistent-volumes/&quot;&gt;https://kubernetes.io/docs/concepts/storage/persistent-volumes/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The EKS storage options covered next are the AWS storage services that back such persistent volumes, and how they are implemented.&lt;/p&gt;
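As a reminder of the shape of the objects involved before diving into the AWS specifics (names are illustrative; `gp2` is the built-in EKS storage class shown later in this post), a claim plus a pod consuming it looks roughly like:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 4Gi
  storageClassName: gp2   # dynamic provisioning via this class's CSI/in-tree provisioner
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "date >> /data/out.txt; sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:   # bind the pod's volume to the claim above
      claimName: demo-claim
```

Deleting `demo-pod` leaves `demo-claim` and its underlying volume intact, which is exactly the lifecycle difference from emptyDir.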
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;2. EKS storage options&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;AWS offers the storage options shown below.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1114&quot; data-origin-height=&quot;790&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bfzoNK/btsMuyu19H2/uQwleH5POD932akS99MCgK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bfzoNK/btsMuyu19H2/uQwleH5POD932akS99MCgK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bfzoNK/btsMuyu19H2/uQwleH5POD932akS99MCgK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbfzoNK%2FbtsMuyu19H2%2FuQwleH5POD932akS99MCgK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1114&quot; height=&quot;790&quot; data-origin-width=&quot;1114&quot; data-origin-height=&quot;790&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://aws.amazon.com/ko/getting-started/decision-guides/storage-on-aws-how-to-choose/&quot;&gt;https://aws.amazon.com/ko/getting-started/decision-guides/storage-on-aws-how-to-choose/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;If we classify storage in the usual way into block, file, and object storage, AWS additionally offers cache as a storage option.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Amazon EBS corresponds to block storage, Amazon EFS to file storage, and Amazon S3 to object storage.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Of these, we will focus on Amazon EBS and Amazon EFS, the options provided by EKS.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The full list of storage options is available below. Although not covered in this post, you can see that Amazon FSx (for Windows) and even Amazon S3 are also offered via CSI drivers.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://docs.aws.amazon.com/ko_kr/eks/latest/userguide/storage.html&quot;&gt;https://docs.aws.amazon.com/ko_kr/eks/latest/userguide/storage.html&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1061&quot; data-origin-height=&quot;785&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/dHYscb/btsMsOlMGeZ/lB7GbhcDJkuSts8pYLcjr1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/dHYscb/btsMsOlMGeZ/lB7GbhcDJkuSts8pYLcjr1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/dHYscb/btsMsOlMGeZ/lB7GbhcDJkuSts8pYLcjr1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FdHYscb%2FbtsMsOlMGeZ%2FlB7GbhcDJkuSts8pYLcjr1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1061&quot; height=&quot;785&quot; data-origin-width=&quot;1061&quot; data-origin-height=&quot;785&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;3. AKS storage options&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Azure also has storage options corresponding to block, file, and object storage.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;However, Azure does not offer its File, Object, Queue, and Table storage as separate products; instead, it places them under an umbrella concept called a storage account, with each storage capability provided underneath it.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This yields a hierarchical structure: global settings such as security/networking and data management live at the storage account level, while each storage service exposes only its own service-specific settings.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In Azure, block storage is provided as Azure Disk, file storage as Azure Files, and object storage as Azure Blob Storage.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;AKS provides CSI drivers for these storage options. When a cluster is created, the Azure Disk and Azure Files CSI drivers are enabled by default, but the Azure Blob Storage CSI driver is disabled and must be enabled separately if needed.&lt;/p&gt;
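For reference, the Blob Storage CSI driver can be enabled on an existing cluster with the Azure CLI (the cluster and resource group names below are placeholders):

```bash
# Enable the Azure Blob Storage CSI driver on an existing AKS cluster
az aks update --enable-blob-driver \
  --name myAKSCluster \
  --resource-group myResourceGroup
```

It can likewise be enabled at creation time with `az aks create --enable-blob-driver`.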
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;You can refer to the document below on CSI driver support in AKS.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://learn.microsoft.com/ko-kr/azure/aks/csi-storage-drivers&quot;&gt;https://learn.microsoft.com/ko-kr/azure/aks/csi-storage-drivers&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;You can also refer to the document below, which maps EKS storage options to their AKS counterparts.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://learn.microsoft.com/ko-kr/azure/architecture/aws-professional/eks-to-aks/storage&quot;&gt;https://learn.microsoft.com/ko-kr/azure/architecture/aws-professional/eks-to-aks/storage&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;4. Lab environment and preliminary checks&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;We will set up the lab environment as shown below.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1705&quot; data-origin-height=&quot;546&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/LZWvN/btsMtPjKv0j/Fey5z1jFiXdQ3qeWqTUUq1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/LZWvN/btsMtPjKv0j/Fey5z1jFiXdQ3qeWqTUUq1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/LZWvN/btsMtPjKv0j/Fey5z1jFiXdQ3qeWqTUUq1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FLZWvN%2FbtsMtPjKv0j%2FFey5z1jFiXdQ3qeWqTUUq1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1705&quot; height=&quot;546&quot; data-origin-width=&quot;1705&quot; data-origin-height=&quot;546&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The difference from the previous labs is that an ENI is added in each public subnet so that EFS can be used.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Deploy with CloudFormation as shown below.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the CloudFormation deploy command, change KeyName to match your environment.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# Download the yaml file
curl -O https://s3.ap-northeast-2.amazonaws.com/cloudformation.cloudneta.net/K8S/myeks-3week.yaml

# Deploy
aws cloudformation deploy --template-file ./myeks-3week.yaml \
--stack-name myeks --parameter-overrides KeyName=ekskey SgIngressSshCidr=$(curl -s ipinfo.io/ip)/32 --region ap-northeast-2

# After the CloudFormation stack completes, print the operations EC2 instance IP
aws cloudformation describe-stacks --stack-name myeks --query 'Stacks[*].Outputs[*].OutputValue' --output text

# SSH into the operations EC2 instance
ssh -i &amp;lt;ssh key file&amp;gt; ec2-user@$(aws cloudformation describe-stacks --stack-name myeks --query 'Stacks[*].Outputs[0].OutputValue' --output text)&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Checking the web console, you can see that an EFS network interface has been created in each subnet.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1754&quot; data-origin-height=&quot;296&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/btuBa3/btsMsKKrxsd/TerykkC0RDf4O2ghla6bl0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/btuBa3/btsMsKKrxsd/TerykkC0RDf4O2ghla6bl0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/btuBa3/btsMsKKrxsd/TerykkC0RDf4O2ghla6bl0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbtuBa3%2FbtsMsKKrxsd%2FTerykkC0RDf4O2ghla6bl0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1754&quot; height=&quot;296&quot; data-origin-width=&quot;1754&quot; data-origin-height=&quot;296&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Once the base lab environment is deployed, deploy EKS as follows.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# Declare environment variables
export CLUSTER_NAME=myeks

# Look up the myeks VPC/subnet info and set variables
export VPCID=$(aws ec2 describe-vpcs --filters &quot;Name=tag:Name,Values=$CLUSTER_NAME-VPC&quot; --query 'Vpcs[*].VpcId' --output text)
echo $VPCID

export PubSubnet1=$(aws ec2 describe-subnets --filters Name=tag:Name,Values=&quot;$CLUSTER_NAME-Vpc1PublicSubnet1&quot; --query &quot;Subnets[0].[SubnetId]&quot; --output text)
export PubSubnet2=$(aws ec2 describe-subnets --filters Name=tag:Name,Values=&quot;$CLUSTER_NAME-Vpc1PublicSubnet2&quot; --query &quot;Subnets[0].[SubnetId]&quot; --output text)
export PubSubnet3=$(aws ec2 describe-subnets --filters Name=tag:Name,Values=&quot;$CLUSTER_NAME-Vpc1PublicSubnet3&quot; --query &quot;Subnets[0].[SubnetId]&quot; --output text)
echo $PubSubnet1 $PubSubnet2 $PubSubnet3

SSHKEYNAME=ekskey # change to your own key pair name&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Using the environment variables, generate the ClusterConfig.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;cat &amp;lt;&amp;lt; EOF &amp;gt; myeks.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: myeks
  region: ap-northeast-2
  version: &quot;1.31&quot;

iam:
  withOIDC: true # enables the IAM OIDC provider as well as IRSA for the Amazon CNI plugin

  serviceAccounts: # service accounts to create in the cluster. See IAM Service Accounts
  - metadata:
      name: aws-load-balancer-controller
      namespace: kube-system
    wellKnownPolicies:
      awsLoadBalancerController: true

vpc:
  cidr: 192.168.0.0/16
  clusterEndpoints:
    privateAccess: true # if you only want to allow private access to the cluster
    publicAccess: true # if you want to allow public access to the cluster
  id: $VPCID
  subnets:
    public:
      ap-northeast-2a:
        az: ap-northeast-2a
        cidr: 192.168.1.0/24
        id: $PubSubnet1
      ap-northeast-2b:
        az: ap-northeast-2b
        cidr: 192.168.2.0/24
        id: $PubSubnet2
      ap-northeast-2c:
        az: ap-northeast-2c
        cidr: 192.168.3.0/24
        id: $PubSubnet3

addons:
  - name: vpc-cni # no version is specified so it deploys the default version
    version: latest # auto discovers the latest available
    attachPolicyARNs: # attach IAM policies to the add-on's service account
      - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
    configurationValues: |-
      enableNetworkPolicy: &quot;true&quot;

  - name: kube-proxy
    version: latest

  - name: coredns
    version: latest

  - name: metrics-server
    version: latest

managedNodeGroups:
- amiFamily: AmazonLinux2023
  desiredCapacity: 3
  iam:
    withAddonPolicies:
      certManager: true # Enable cert-manager
      externalDNS: true # Enable ExternalDNS
  instanceType: t3.medium
  preBootstrapCommands:
    # install additional packages
    - &quot;dnf install nvme-cli links tree tcpdump sysstat ipvsadm ipset bind-utils htop -y&quot;
  labels:
    alpha.eksctl.io/cluster-name: myeks
    alpha.eksctl.io/nodegroup-name: ng1
  maxPodsPerNode: 100
  maxSize: 3
  minSize: 3
  name: ng1
  ssh:
    allow: true
    publicKeyName: $SSHKEYNAME
  tags:
    alpha.eksctl.io/nodegroup-name: ng1
    alpha.eksctl.io/nodegroup-type: managed
  volumeIOPS: 3000
  volumeSize: 120
  volumeThroughput: 125
  volumeType: gp3
EOF&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now create the EKS cluster.&lt;/p&gt;
&lt;pre class=&quot;pgsql&quot;&gt;&lt;code&gt;eksctl create cluster -f myeks.yaml --verbose 4&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's check the default storage-related configuration of the EKS cluster.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As we saw when creating the cluster, no CSI driver is installed in a default EKS deployment, so in EKS the driver must be installed as an add-on.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;$ kubectl get po -A
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
default       nsenter-58d5bd                    1/1     Running   0          42s
kube-system   aws-node-9rz7w                    2/2     Running   0          7m23s
kube-system   aws-node-nfxqk                    2/2     Running   0          7m26s
kube-system   aws-node-x4sbw                    2/2     Running   0          7m22s
kube-system   coredns-86f5954566-bjxj2          1/1     Running   0          13m
kube-system   coredns-86f5954566-qf5b6          1/1     Running   0          13m
kube-system   kube-proxy-8jpdb                  1/1     Running   0          7m22s
kube-system   kube-proxy-bmrk9                  1/1     Running   0          7m23s
kube-system   kube-proxy-jmqzl                  1/1     Running   0          7m26s
kube-system   metrics-server-6bf5998d9c-4lg9h   1/1     Running   0          13m
kube-system   metrics-server-6bf5998d9c-8xszh   1/1     Running   0          13m

$ kubectl get ds -A
NAMESPACE     NAME         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
kube-system   aws-node     3         3         3       3            3           &amp;lt;none&amp;gt;          14m
kube-system   kube-proxy   3         3         3       3            3           &amp;lt;none&amp;gt;          14m&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Checking the built-in StorageClass right after cluster creation shows that gp2 is present.&lt;/p&gt;
&lt;pre class=&quot;dts&quot;&gt;&lt;code&gt;kubectl get storageclass
NAME   PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2    kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer   false                  17m

kubectl describe storageclass gp2
Name:            gp2
IsDefaultClass:  No
Annotations:     kubectl.kubernetes.io/last-applied-configuration={&quot;apiVersion&quot;:&quot;storage.k8s.io/v1&quot;,&quot;kind&quot;:&quot;StorageClass&quot;,&quot;metadata&quot;:{&quot;annotations&quot;:{},&quot;name&quot;:&quot;gp2&quot;},&quot;parameters&quot;:{&quot;fsType&quot;:&quot;ext4&quot;,&quot;type&quot;:&quot;gp2&quot;},&quot;provisioner&quot;:&quot;kubernetes.io/aws-ebs&quot;,&quot;volumeBindingMode&quot;:&quot;WaitForFirstConsumer&quot;}

Provisioner:           kubernetes.io/aws-ebs
Parameters:            fsType=ext4,type=gp2
AllowVolumeExpansion:  &amp;lt;unset&amp;gt;
MountOptions:          &amp;lt;none&amp;gt;
ReclaimPolicy:         Delete
VolumeBindingMode:     WaitForFirstConsumer
Events:                &amp;lt;none&amp;gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
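&lt;p data-ke-size=&quot;size16&quot;&gt;For reference, the describe output above corresponds to a StorageClass manifest roughly like the following; this is a sketch reconstructed from the fields shown, not pulled from the cluster:&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2
provisioner: kubernetes.io/aws-ebs  # in-tree provisioner name
parameters:
  fsType: ext4
  type: gp2
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;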
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's create a PVC and a pod backed by gp2.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gp2-ebs-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: gp2
EOF


# Create the pod
cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  terminationGracePeriodSeconds: 3
  containers:
  - name: app
    image: centos
    command: [&quot;/bin/sh&quot;]
    args: [&quot;-c&quot;, &quot;while true; do echo \$(date -u) &amp;gt;&amp;gt; /data/out.txt; sleep 5; done&quot;]
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: gp2-ebs-claim
EOF&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;At first I wondered whether the in-tree provisioner was handling this, but the Provisioner annotations show the request is served by ebs.csi.aws.com.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;(Most likely this is CSI migration: the in-tree volume plugins are deprecated, so all in-tree volume types are now routed to the corresponding CSI driver.)&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;$ kubectl get po,pvc,pv
NAME                 READY   STATUS    RESTARTS   AGE
pod/app              0/1     Pending   0          8s
pod/nsenter-58d5bd   1/1     Running   0          6m6s

NAME                                  STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
persistentvolumeclaim/gp2-ebs-claim   Pending                                      gp2            &amp;lt;unset&amp;gt;                 25s
$ kubectl describe pvc gp2-ebs-claim
Name:          gp2-ebs-claim
Namespace:     default
StorageClass:  gp2
Status:        Pending
Volume:
Labels:        &amp;lt;none&amp;gt;
Annotations:   volume.beta.kubernetes.io/storage-provisioner: ebs.csi.aws.com
               volume.kubernetes.io/selected-node: ip-192-168-1-6.ap-northeast-2.compute.internal
               volume.kubernetes.io/storage-provisioner: ebs.csi.aws.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       app
Events:
  Type    Reason                Age                From                         Message
  ----    ------                ----               ----                         -------
  Normal  WaitForFirstConsumer  26s (x3 over 42s)  persistentvolume-controller  waiting for first consumer to be created before binding
  Normal  ExternalProvisioning  11s (x3 over 25s)  persistentvolume-controller  Waiting for a volume to be created either by the external provisioner 'ebs.csi.aws.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A CSINode object exists for each node.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;According to the CSINode documentation, when a CSI driver is installed on a node, the driver's information is registered in that node's CSINode object.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://kubernetes-csi.github.io/docs/csi-node-object.html&quot;&gt;https://kubernetes-csi.github.io/docs/csi-node-object.html&lt;/a&gt;&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;$ kubectl get csinodes
NAME                                               DRIVERS   AGE
ip-192-168-1-6.ap-northeast-2.compute.internal     0         20m
ip-192-168-2-172.ap-northeast-2.compute.internal   0         20m
ip-192-168-3-246.ap-northeast-2.compute.internal   0         20m

$ kubectl get csinodes ip-192-168-1-6.ap-northeast-2.compute.internal -oyaml
apiVersion: storage.k8s.io/v1
kind: CSINode
metadata:
  annotations:
    storage.alpha.kubernetes.io/migrated-plugins: kubernetes.io/aws-ebs,kubernetes.io/azure-disk,kubernetes.io/azure-file,kubernetes.io/cinder,kubernetes.io/gce-pd,kubernetes.io/portworx-volume,kubernetes.io/vsphere-volume
  creationTimestamp: &quot;2025-02-22T13:43:45Z&quot;
  name: ip-192-168-1-6.ap-northeast-2.compute.internal
  ownerReferences:
  - apiVersion: v1
    kind: Node
    name: ip-192-168-1-6.ap-northeast-2.compute.internal
    uid: d5f55b6f-d90c-4e5e-b984-6dedfa396116
  resourceVersion: &quot;2233&quot;
  uid: da7b254b-9dff-451d-9abf-02edc9c31eac
spec:
  drivers: null&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;I'll keep this output for comparison after installing the CSI driver. Note that spec.drivers is null above.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;By contrast, AKS installs two CSI drivers at cluster creation time, and they appear under drivers as shown below.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;$ kubectl get csinodes
NAME                                DRIVERS   AGE
aks-nodepool1-76251328-vmss000008   2         49s
aks-nodepool1-76251328-vmss000009   2         47s

$ kubectl get csinodes aks-nodepool1-76251328-vmss000008 -oyaml
apiVersion: storage.k8s.io/v1
kind: CSINode
metadata:
  annotations:
    storage.alpha.kubernetes.io/migrated-plugins: kubernetes.io/aws-ebs,kubernetes.io/azure-disk,kubernetes.io/azure-file,kubernetes.io/cinder,kubernetes.io/gce-pd,kubernetes.io/portworx-volume,kubernetes.io/vsphere-volume
  creationTimestamp: &quot;2025-02-22T14:10:54Z&quot;
  name: aks-nodepool1-76251328-vmss000008
  ownerReferences:
  - apiVersion: v1
    kind: Node
    name: aks-nodepool1-76251328-vmss000008
    uid: 1704269d-1aca-40fc-bb47-70675ec1cbcf
  resourceVersion: &quot;768823&quot;
  uid: 6d5e60da-5cc3-473f-b8c0-ffa880be9f3a
spec:
  drivers:
  - name: file.csi.azure.com
    nodeID: aks-nodepool1-76251328-vmss000008
    topologyKeys: null
  - allocatable:
      count: 8
    name: disk.csi.azure.com
    nodeID: aks-nodepool1-76251328-vmss000008
    topologyKeys:
    - topology.disk.csi.azure.com/zone&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Finally, let's delete the resources whose provisioning failed and move on to the next exercise: the EBS CSI driver.&lt;/p&gt;
&lt;pre class=&quot;vim&quot;&gt;&lt;code&gt;$ kubectl delete po app
pod &quot;app&quot; deleted
$ kubectl delete pvc gp2-ebs-claim
persistentvolumeclaim &quot;gp2-ebs-claim&quot; deleted&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;4. Using the Amazon EBS CSI Driver&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To use the EBS CSI Driver, install the add-on. I also set up IRSA with the AWS managed policy AmazonEBSCSIDriverPolicy to grant the permissions the ebs-csi-controller needs.&lt;/p&gt;
&lt;pre class=&quot;elixir&quot;&gt;&lt;code&gt;# List every aws-ebs-csi-driver version and which one is the default (True)
$ aws eks describe-addon-versions \
    --addon-name aws-ebs-csi-driver \
    --kubernetes-version 1.31 \
    --query &quot;addons[].addonVersions[].[addonVersion, compatibilities[].defaultVersion]&quot; \
    --output text

# IRSA setup: use the AWS managed policy AmazonEBSCSIDriverPolicy
$ eksctl create iamserviceaccount \
  --name ebs-csi-controller-sa \
  --namespace kube-system \
  --cluster ${CLUSTER_NAME} \
  --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
  --approve \
  --role-only \
  --role-name AmazonEKS_EBS_CSI_DriverRole

# Verify the IRSA
$ eksctl get iamserviceaccount --cluster ${CLUSTER_NAME}
NAMESPACE       NAME                            ROLE ARN
kube-system     aws-load-balancer-controller    arn:aws:iam::430118812536:role/eksctl-myeks-addon-iamserviceaccount-kube-sys-Role1-KfYBX6UfNuOM
kube-system     ebs-csi-controller-sa           arn:aws:iam::430118812536:role/AmazonEKS_EBS_CSI_DriverRole

# Install the Amazon EBS CSI driver add-on
$ export ACCOUNT_ID=$(aws sts get-caller-identity --query 'Account' --output text)
$ eksctl create addon --name aws-ebs-csi-driver --cluster ${CLUSTER_NAME} --service-account-role-arn arn:aws:iam::${ACCOUNT_ID}:role/AmazonEKS_EBS_CSI_DriverRole --force
2025-02-22 23:17:25 [ℹ]  Kubernetes version &quot;1.31&quot; in use by cluster &quot;myeks&quot;
2025-02-22 23:17:26 [ℹ]  IRSA is set for &quot;aws-ebs-csi-driver&quot; addon; will use this to configure IAM permissions
2025-02-22 23:17:26 [!]  the recommended way to provide IAM permissions for &quot;aws-ebs-csi-driver&quot; addon is via pod identity associations; after addon creation is completed, run `eksctl utils migrate-to-pod-identity`
2025-02-22 23:17:26 [ℹ]  using provided ServiceAccountRoleARN &quot;arn:aws:iam::430118812536:role/AmazonEKS_EBS_CSI_DriverRole&quot;
2025-02-22 23:17:26 [ℹ]  creating addon&lt;/code&gt;&lt;/pre&gt;
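&lt;p data-ke-size=&quot;size16&quot;&gt;Under the hood, IRSA works by annotating the add-on's ServiceAccount with the role ARN so that pods using it can assume the role via the cluster's OIDC provider. The resulting object should look roughly like this sketch (the account ID is the one from the output above):&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;apiVersion: v1
kind: ServiceAccount
metadata:
  name: ebs-csi-controller-sa
  namespace: kube-system
  annotations:
    # added during addon creation; lets the controller assume the role via OIDC
    eks.amazonaws.com/role-arn: arn:aws:iam::430118812536:role/AmazonEKS_EBS_CSI_DriverRole&lt;/code&gt;&lt;/pre&gt;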
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Once the add-on installation completes, let's inspect it.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# Verify
$ eksctl get addon --cluster ${CLUSTER_NAME}
2025-02-22 23:18:10 [ℹ]  Kubernetes version &quot;1.31&quot; in use by cluster &quot;myeks&quot;
2025-02-22 23:18:10 [ℹ]  getting all addons
2025-02-22 23:18:11 [ℹ]  to see issues for an addon run `eksctl get addon --name &amp;lt;addon-name&amp;gt; --cluster &amp;lt;cluster-name&amp;gt;`
NAME                    VERSION                 STATUS          ISSUES  IAMROLE                                                                  UPDATE AVAILABLE CONFIGURATION VALUES            POD IDENTITY ASSOCIATION ROLES
aws-ebs-csi-driver      v1.39.0-eksbuild.1      CREATING        0       arn:aws:iam::430118812536:role/AmazonEKS_EBS_CSI_DriverRole
coredns                 v1.11.4-eksbuild.2      ACTIVE          0
kube-proxy              v1.31.3-eksbuild.2      ACTIVE          0
metrics-server          v0.7.2-eksbuild.2       ACTIVE          0
vpc-cni                 v1.19.2-eksbuild.5      ACTIVE          0       arn:aws:iam::430118812536:role/eksctl-myeks-addon-vpc-cni-Role1-Y1231NkEEKPX                              enableNetworkPolicy: &quot;true&quot;

$ kubectl get deploy,ds -l=app.kubernetes.io/name=aws-ebs-csi-driver -n kube-system
NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ebs-csi-controller   2/2     2            2           2m15s

NAME                                  DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR              AGE
daemonset.apps/ebs-csi-node           3         3         3       3            3           kubernetes.io/os=linux     2m16s
daemonset.apps/ebs-csi-node-windows   0         0         0       0            0           kubernetes.io/os=windows   2m16s


# The ebs-csi-controller pod runs six containers
kubectl get pod -n kube-system -l app=ebs-csi-controller -o jsonpath='{.items[0].spec.containers[*].name}' ; echo
ebs-plugin csi-provisioner csi-attacher csi-snapshotter csi-resizer liveness-probe&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The Deployment and DaemonSet show that the ebs-csi-controller itself runs on the user node group. That is why it is granted its own IAM permissions (via IRSA), so the controller can manage storage directly.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In AKS, by contrast, the CSI controller lives on the control plane and only the node-driver component runs as a DaemonSet on the worker nodes. The control-plane CSI controller calls ARM (Azure Resource Manager) using the AKS cluster's identity.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;$ kubectl get deploy,ds -n kube-system
NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns              2/2     2            2           8d
deployment.apps/coredns-autoscaler   1/1     1            1           8d
deployment.apps/konnectivity-agent   2/2     2            2           8d
deployment.apps/metrics-server       2/2     2            2           8d

NAME                                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/azure-ip-masq-agent          2         2         2       2            2           &amp;lt;none&amp;gt;          8d
daemonset.apps/cloud-node-manager           2         2         2       2            2           &amp;lt;none&amp;gt;          8d
daemonset.apps/cloud-node-manager-windows   0         0         0       0            0           &amp;lt;none&amp;gt;          8d
daemonset.apps/csi-azuredisk-node           2         2         2       2            2           &amp;lt;none&amp;gt;          8d
daemonset.apps/csi-azuredisk-node-win       0         0         0       0            0           &amp;lt;none&amp;gt;          8d
daemonset.apps/csi-azurefile-node           2         2         2       2            2           &amp;lt;none&amp;gt;          8d
daemonset.apps/csi-azurefile-node-win       0         0         0       0            0           &amp;lt;none&amp;gt;          8d
daemonset.apps/kube-proxy                   2         2         2       2            2           &amp;lt;none&amp;gt;          8d

$ kubectl get po -n kube-system csi-azuredisk-node-4fjbg -o jsonpath='{.spec.containers[*].name}'; echo
liveness-probe node-driver-registrar azuredisk&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Checking the csinodes from earlier again, DRIVERS is now 1, and the CSINode spec shows the driver along with allocatable (the number of volumes that can be attached).&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# Check csinodes
$ kubectl get csinodes
NAME                                               DRIVERS   AGE
ip-192-168-1-6.ap-northeast-2.compute.internal     1         37m
ip-192-168-2-172.ap-northeast-2.compute.internal   1         37m
ip-192-168-3-246.ap-northeast-2.compute.internal   1         37m

$ kubectl get csinodes ip-192-168-1-6.ap-northeast-2.compute.internal -oyaml
apiVersion: storage.k8s.io/v1
kind: CSINode
metadata:
  annotations:
    storage.alpha.kubernetes.io/migrated-plugins: kubernetes.io/aws-ebs,kubernetes.io/azure-disk,kubernetes.io/azure-file,kubernetes.io/cinder,kubernetes.io/gce-pd,kubernetes.io/portworx-volume,kubernetes.io/vsphere-volume
  creationTimestamp: &quot;2025-02-22T13:43:45Z&quot;
  name: ip-192-168-1-6.ap-northeast-2.compute.internal
  ownerReferences:
  - apiVersion: v1
    kind: Node
    name: ip-192-168-1-6.ap-northeast-2.compute.internal
    uid: d5f55b6f-d90c-4e5e-b984-6dedfa396116
  resourceVersion: &quot;9341&quot;
  uid: da7b254b-9dff-451d-9abf-02edc9c31eac
spec:
  drivers:
  - allocatable:
      count: 26
    name: ebs.csi.aws.com
    nodeID: i-0c573120ce2302472
    topologyKeys:
    - kubernetes.io/os
    - topology.ebs.csi.aws.com/zone
    - topology.kubernetes.io/zone

$  kubectl get csidrivers
NAME              ATTACHREQUIRED   PODINFOONMOUNT   STORAGECAPACITY   TOKENREQUESTS   REQUIRESREPUBLISH   MODES        AGE
ebs.csi.aws.com   true             false            false             &amp;lt;unset&amp;gt;         false               Persistent   5m17s
efs.csi.aws.com   false            false            false             &amp;lt;unset&amp;gt;         false               Persistent   48m

$ kubectl describe csidrivers ebs.csi.aws.com
Name:         ebs.csi.aws.com
Namespace:
Labels:       app.kubernetes.io/component=csi-driver
              app.kubernetes.io/managed-by=EKS
              app.kubernetes.io/name=aws-ebs-csi-driver
              app.kubernetes.io/version=1.39.0
Annotations:  &amp;lt;none&amp;gt;
API Version:  storage.k8s.io/v1
Kind:         CSIDriver
Metadata:
  Creation Timestamp:  2025-02-22T14:17:33Z
  Resource Version:    9277
  UID:                 62f100ee-d3ad-463c-8674-23e4af00b280
Spec:
  Attach Required:     true
  Fs Group Policy:     ReadWriteOnceWithFSType
  Pod Info On Mount:   false
  Requires Republish:  false
  Se Linux Mount:      false
  Storage Capacity:    false
  Volume Lifecycle Modes:
    Persistent
Events:  &amp;lt;none&amp;gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For reference, the maximum number of attachable EBS volumes can be changed as follows.&lt;/p&gt;
&lt;pre class=&quot;stata&quot;&gt;&lt;code&gt;# Change the maximum number of EBS attachments per node
$ cat &amp;lt;&amp;lt; EOF &amp;gt; node-attachments.yaml
&quot;node&quot;:
  &quot;volumeAttachLimit&quot;: 31
  &quot;enableMetrics&quot;: true
EOF

$ aws eks update-addon --cluster-name ${CLUSTER_NAME} --addon-name aws-ebs-csi-driver \
  --addon-version v1.39.0-eksbuild.1 --configuration-values 'file://node-attachments.yaml'
{
    &quot;update&quot;: {
        &quot;id&quot;: &quot;ab0de1c6-be8a-306f-8504-a6bbe332cb28&quot;,
        &quot;status&quot;: &quot;InProgress&quot;,
        &quot;type&quot;: &quot;AddonUpdate&quot;,
        &quot;params&quot;: [
            {
                &quot;type&quot;: &quot;AddonVersion&quot;,
                &quot;value&quot;: &quot;v1.39.0-eksbuild.1&quot;
            },
            {
                &quot;type&quot;: &quot;ConfigurationValues&quot;,
                &quot;value&quot;: &quot;\&quot;node\&quot;:\n  \&quot;volumeAttachLimit\&quot;: 31\n  \&quot;enableMetrics\&quot;: true&quot;
            }
        ],
        &quot;createdAt&quot;: &quot;2025-02-22T23:31:53.878000+09:00&quot;,
        &quot;errors&quot;: []
    }
}


## Verify
$ kubectl get csinodes ip-192-168-1-6.ap-northeast-2.compute.internal -oyaml
apiVersion: storage.k8s.io/v1
kind: CSINode
metadata:
  annotations:
    storage.alpha.kubernetes.io/migrated-plugins: kubernetes.io/aws-ebs,kubernetes.io/azure-disk,kubernetes.io/azure-file,kubernetes.io/cinder,kubernetes.io/gce-pd,kubernetes.io/portworx-volume,kubernetes.io/vsphere-volume
  creationTimestamp: &quot;2025-02-22T13:43:45Z&quot;
  name: ip-192-168-1-6.ap-northeast-2.compute.internal
  ownerReferences:
  - apiVersion: v1
    kind: Node
    name: ip-192-168-1-6.ap-northeast-2.compute.internal
    uid: d5f55b6f-d90c-4e5e-b984-6dedfa396116
  resourceVersion: &quot;13130&quot;
  uid: da7b254b-9dff-451d-9abf-02edc9c31eac
spec:
  drivers:
  - allocatable:
      count: 31
    name: ebs.csi.aws.com
    nodeID: i-0c573120ce2302472
    topologyKeys:
    - kubernetes.io/os
    - topology.ebs.csi.aws.com/zone
    - topology.kubernetes.io/zone&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now let's create a storage class and run a sample pod.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Create a gp3 storage class
cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp3
  annotations:
    storageclass.kubernetes.io/is-default-class: &quot;true&quot;
allowVolumeExpansion: true
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp3
  #iops: &quot;5000&quot;
  #throughput: &quot;250&quot;
  allowAutoIOPSPerGBIncrease: 'true'
  encrypted: 'true'
  fsType: xfs # the default is ext4
EOF

$ kubectl get storageclass
NAME            PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2             kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer   false                  71m
gp3 (default)   ebs.csi.aws.com         Delete          WaitForFirstConsumer   true                   16s

$ kubectl describe sc gp3 | grep Parameters
Parameters:            allowAutoIOPSPerGBIncrease=true,encrypted=true,fsType=xfs,type=gp3&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For reference, the parameters available in the storage class are documented here:&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/docs/parameters.md&quot;&gt;https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/docs/parameters.md&lt;/a&gt;&lt;/p&gt;
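&lt;p data-ke-size=&quot;size16&quot;&gt;Since the class sets allowVolumeExpansion: true, a bound volume can later be grown in place simply by re-applying its PVC with a larger request (shrinking is not supported). A sketch, where the claim name and sizes are illustrative only:&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim   # hypothetical claim already bound via gp3
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi      # raised from the original request; triggers expansion
  storageClassName: gp3&lt;/code&gt;&lt;/pre&gt;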
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now let's deploy a PVC and a pod and verify that everything works.&lt;/p&gt;
&lt;pre class=&quot;bash&quot; data-ke-language=&quot;bash&quot;&gt;&lt;code&gt;# Check the worker nodes' EBS volumes, filtered by tag (key/value)
aws ec2 describe-volumes --filters Name=tag:Name,Values=$CLUSTER_NAME-ng1-Node --query &quot;Volumes[].{VolumeId: VolumeId, VolumeType: VolumeType, InstanceId: Attachments[0].InstanceId, State: Attachments[0].State}&quot; | jq

[
  {
    &quot;VolumeId&quot;: &quot;vol-0d330ee4d601b3f61&quot;,
    &quot;VolumeType&quot;: &quot;gp3&quot;,
    &quot;InstanceId&quot;: &quot;i-09fae9da74f42fff6&quot;,
    &quot;State&quot;: &quot;attached&quot;
  },
  {
    &quot;VolumeId&quot;: &quot;vol-0be8852c50d581f9c&quot;,
    &quot;VolumeType&quot;: &quot;gp3&quot;,
    &quot;InstanceId&quot;: &quot;i-09378c64bd018dfbb&quot;,
    &quot;State&quot;: &quot;attached&quot;
  },
  {
    &quot;VolumeId&quot;: &quot;vol-0612ef77c2ebebb49&quot;,
    &quot;VolumeType&quot;: &quot;gp3&quot;,
    &quot;InstanceId&quot;: &quot;i-0c573120ce2302472&quot;,
    &quot;State&quot;: &quot;attached&quot;
  }
]

# Check EBS volumes that were attached for pods
aws ec2 describe-volumes --filters Name=tag:ebs.csi.aws.com/cluster,Values=true --query &quot;Volumes[].{VolumeId: VolumeId, VolumeType: VolumeType, InstanceId: Attachments[0].InstanceId, State: Attachments[0].State}&quot; | jq

[]

# Create the PVC
cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
  storageClassName: gp3
EOF

# With WaitForFirstConsumer, the PVC stays Pending until a pod consumes it.
kubectl get pvc,pv
NAME                              STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
persistentvolumeclaim/ebs-claim   Pending                                      gp3            &amp;lt;unset&amp;gt;                 2s

# Create the pod
cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  terminationGracePeriodSeconds: 3
  containers:
  - name: app
    image: centos
    command: [&quot;/bin/sh&quot;]
    args: [&quot;-c&quot;, &quot;while true; do echo \$(date -u) &amp;gt;&amp;gt; /data/out.txt; sleep 5; done&quot;]
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: ebs-claim
EOF

# Monitor EBS volumes attached for pods
while true; do aws ec2 describe-volumes --filters Name=tag:ebs.csi.aws.com/cluster,Values=true --query &quot;Volumes[].{VolumeId: VolumeId, VolumeType: VolumeType, InstanceId: Attachments[0].InstanceId, State: Attachments[0].State}&quot; --output text; date; sleep 1; done

# When the pod is created
Sat Feb 22 23:50:11 KST 2025
None    None    vol-0dfe08ec2df70ab74   gp3
Sat Feb 22 23:50:14 KST 2025
i-0c573120ce2302472     attaching       vol-0dfe08ec2df70ab74   gp3
Sat Feb 22 23:50:17 KST 2025
i-0c573120ce2302472     attached        vol-0dfe08ec2df70ab74   gp3
Sat Feb 22 23:50:20 KST 2025
i-0c573120ce2302472     attached        vol-0dfe08ec2df70ab74   gp3
...
# When the PVC is deleted
Sun Feb 23 00:07:36 KST 2025
i-0c573120ce2302472     attached        vol-0dfe08ec2df70ab74   gp3
Sun Feb 23 00:07:39 KST 2025
i-0c573120ce2302472     detaching       vol-0dfe08ec2df70ab74   gp3
Sun Feb 23 00:07:42 KST 2025
None    None    vol-0dfe08ec2df70ab74   gp3
Sun Feb 23 00:07:45 KST 2025


# Check the PVC and pod
$ kubectl get pod,pvc,pv
NAME                 READY   STATUS    RESTARTS   AGE
pod/app              1/1     Running   0          54s
pod/nsenter-58d5bd   1/1     Running   0          60m

NAME                              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
persistentvolumeclaim/ebs-claim   Bound    pvc-9d54f6eb-4458-40a3-9855-df4b48daeeef   4Gi        RWO            gp3            &amp;lt;unset&amp;gt;
     112s

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
persistentvolume/pvc-9d54f6eb-4458-40a3-9855-df4b48daeeef   4Gi        RWO            Delete           Bound    default/ebs-claim   gp3
 &amp;lt;unset&amp;gt;                          52s

$ kubectl get VolumeAttachment
NAME                                                                   ATTACHER          PV                                         NODE
                                   ATTACHED   AGE
csi-d4e4fc85a15fcc5c44af222599968b28287cb967ee7d363e11aa070b74840ac0   ebs.csi.aws.com   pvc-9d54f6eb-4458-40a3-9855-df4b48daeeef   ip-192-168-1-6.ap-northeast-2.compute.internal   true       56s


# Confirm entries keep being appended to the file
kubectl exec app -- tail -f /data/out.txt
Sat Feb 22 14:51:17 UTC 2025
Sat Feb 22 14:51:22 UTC 2025
Sat Feb 22 14:51:27 UTC 2025
Sat Feb 22 14:51:32 UTC 2025
Sat Feb 22 14:51:37 UTC 2025
Sat Feb 22 14:51:42 UTC 2025
Sat Feb 22 14:51:47 UTC 2025
Sat Feb 22 14:51:52 UTC 2025
Sat Feb 22 14:51:57 UTC 2025
Sat Feb 22 14:52:02 UTC 2025&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;You can read the volumeHandle of the provisioned EBS volume and look it up in the web console.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# Check details of the provisioned EBS volume
$ kubectl get pv -o jsonpath=&quot;{.items[0].spec.csi.volumeHandle}&quot;
vol-0dfe08ec2df70ab74&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The web console shows the following.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1768&quot; data-origin-height=&quot;875&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/NPoKr/btsMsTN0xAJ/eDIYVXd3Vc4Zund23MTxuk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/NPoKr/btsMsTN0xAJ/eDIYVXd3Vc4Zund23MTxuk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/NPoKr/btsMsTN0xAJ/eDIYVXd3Vc4Zund23MTxuk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FNPoKr%2FbtsMsTN0xAJ%2FeDIYVXd3Vc4Zund23MTxuk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1768&quot; height=&quot;875&quot; data-origin-width=&quot;1768&quot; data-origin-height=&quot;875&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Looking at the PV, nodeAffinity is set and pinned to &lt;code&gt;ap-northeast-2a&lt;/code&gt;. The pod was scheduled to &lt;code&gt;ip-192-168-1-6.ap-northeast-2.compute.internal&lt;/code&gt;, whose zone is also &lt;code&gt;ap-northeast-2a&lt;/code&gt;, so the two match.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Inspect the PV
$ kubectl get pv -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: PersistentVolume
  metadata:
    annotations:
      pv.kubernetes.io/provisioned-by: ebs.csi.aws.com
      volume.kubernetes.io/provisioner-deletion-secret-name: &quot;&quot;
      volume.kubernetes.io/provisioner-deletion-secret-namespace: &quot;&quot;
    creationTimestamp: &quot;2025-02-22T14:50:12Z&quot;
    finalizers:
    - external-provisioner.volume.kubernetes.io/finalizer
    - kubernetes.io/pv-protection
    - external-attacher/ebs-csi-aws-com
    name: pvc-9d54f6eb-4458-40a3-9855-df4b48daeeef
    resourceVersion: &quot;17654&quot;
    uid: 657630c7-f924-4b00-af0b-38af7bbddc00
  spec:
    accessModes:
    - ReadWriteOnce
    capacity:
      storage: 4Gi
    claimRef:
      apiVersion: v1
      kind: PersistentVolumeClaim
      name: ebs-claim
      namespace: default
      resourceVersion: &quot;17628&quot;
      uid: 9d54f6eb-4458-40a3-9855-df4b48daeeef
    csi:
      driver: ebs.csi.aws.com
      fsType: xfs
      volumeAttributes:
        storage.kubernetes.io/csiProvisionerIdentity: 1740233857923-9409-ebs.csi.aws.com
      volumeHandle: vol-0dfe08ec2df70ab74
    nodeAffinity:
      required:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values:
            - ap-northeast-2a
    persistentVolumeReclaimPolicy: Delete
    storageClassName: gp3
    volumeMode: Filesystem
  status:
    lastPhaseTransitionTime: &quot;2025-02-22T14:50:12Z&quot;
    phase: Bound
kind: List
metadata:
  resourceVersion: &quot;&quot;

$ kubectl get po -owide
NAME             READY   STATUS    RESTARTS   AGE    IP              NODE                                             NOMINATED NODE   READINESS GATES
app              1/1     Running   0          4m4s   192.168.1.252   ip-192-168-1-6.ap-northeast-2.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
nsenter-58d5bd   1/1     Running   0          63m    192.168.1.6     ip-192-168-1-6.ap-northeast-2.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;

$ kubectl get node --label-columns=topology.ebs.csi.aws.com/zone,topology.k8s.aws/zone-id
NAME                                               STATUS   ROLES    AGE   VERSION               ZONE              ZONE-ID
ip-192-168-1-6.ap-northeast-2.compute.internal     Ready    &amp;lt;none&amp;gt;   68m   v1.31.5-eks-5d632ec   ap-northeast-2a   apne2-az1
ip-192-168-2-172.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   68m   v1.31.5-eks-5d632ec   ap-northeast-2b   apne2-az2
ip-192-168-3-246.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   68m   v1.31.5-eks-5d632ec   ap-northeast-2c   apne2-az3&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To recap the sequence of events: for a PVC bound with WaitForFirstConsumer, the PV is created only after the pod has been scheduled, and through nodeAffinity the PV follows the topology of the node the pod landed on, so it is provisioned in the same topology.&lt;/p&gt;
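For reference, this binding behavior is configured per StorageClass via `volumeBindingMode`. A minimal sketch of such a StorageClass, assuming the EBS CSI driver (the name `gp3-wffc` is illustrative, not from this post):

```yaml
# Illustrative StorageClass: with WaitForFirstConsumer, PV provisioning is
# deferred until a pod using the PVC is scheduled, so the EBS volume is
# created in that pod's availability zone.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp3-wffc               # hypothetical name
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer   # vs. the default Immediate
parameters:
  type: gp3
```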
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's delete the created resources and wrap up this exercise.&lt;/p&gt;
&lt;pre class=&quot;actionscript&quot;&gt;&lt;code&gt;kubectl delete pod app &amp;amp; kubectl delete pvc ebs-claim&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;5. Using the Amazon EFS CSI Driver&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To use the Amazon EFS CSI Driver, let's install the addon.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Check the EFS information
aws efs describe-file-systems --query &quot;FileSystems[*].FileSystemId&quot; --output text

# List all aws-efs-csi-driver versions and the default install version (True)
aws eks describe-addon-versions \
    --addon-name aws-efs-csi-driver \
    --kubernetes-version 1.31 \
    --query &quot;addons[].addonVersions[].[addonVersion, compatibilities[].defaultVersion]&quot; \
    --output text

# Create the IAM policy
curl -s -O https://raw.githubusercontent.com/kubernetes-sigs/aws-efs-csi-driver/master/docs/iam-policy-example.json
aws iam create-policy --policy-name AmazonEKS_EFS_CSI_Driver_Policy --policy-document file://iam-policy-example.json

# Configure IRSA (note: the command below attaches the AWS managed policy AmazonEFSCSIDriverPolicy)
eksctl create iamserviceaccount \
  --name efs-csi-controller-sa \
  --namespace kube-system \
  --cluster ${CLUSTER_NAME} \
  --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEFSCSIDriverPolicy \
  --approve \
  --role-only \
  --role-name AmazonEKS_EFS_CSI_DriverRole

# Verify the IRSA
eksctl get iamserviceaccount --cluster ${CLUSTER_NAME}

NAMESPACE       NAME                            ROLE ARN
kube-system     aws-load-balancer-controller    arn:aws:iam::430118812536:role/eksctl-myeks-addon-iamserviceaccount-kube-sys-Role1-KfYBX6UfNuOM
kube-system     ebs-csi-controller-sa           arn:aws:iam::430118812536:role/AmazonEKS_EBS_CSI_DriverRole
kube-system     efs-csi-controller-sa           arn:aws:iam::430118812536:role/AmazonEKS_EFS_CSI_DriverRole

# Deploy (install) the Amazon EFS CSI driver addon
export ACCOUNT_ID=$(aws sts get-caller-identity --query 'Account' --output text)
eksctl create addon --name aws-efs-csi-driver --cluster ${CLUSTER_NAME} --service-account-role-arn arn:aws:iam::${ACCOUNT_ID}:role/AmazonEKS_EFS_CSI_DriverRole --force

2025-02-23 00:15:55 [ℹ]  Kubernetes version &quot;1.31&quot; in use by cluster &quot;myeks&quot;
2025-02-23 00:15:56 [ℹ]  IRSA is set for &quot;aws-efs-csi-driver&quot; addon; will use this to configure IAM permissions
2025-02-23 00:15:56 [!]  the recommended way to provide IAM permissions for &quot;aws-efs-csi-driver&quot; addon is via pod identity associations; after addon creation is completed, run `eksctl utils migrate-to-pod-identity`
2025-02-23 00:15:56 [ℹ]  using provided ServiceAccountRoleARN &quot;arn:aws:iam::430118812536:role/AmazonEKS_EFS_CSI_DriverRole&quot;
2025-02-23 00:15:56 [ℹ]  creating addon


# Verify
eksctl get addon --cluster ${CLUSTER_NAME}

2025-02-23 00:16:20 [ℹ]  Kubernetes version &quot;1.31&quot; in use by cluster &quot;myeks&quot;
2025-02-23 00:16:20 [ℹ]  getting all addons
2025-02-23 00:16:22 [ℹ]  to see issues for an addon run `eksctl get addon --name &amp;lt;addon-name&amp;gt; --cluster &amp;lt;cluster-name&amp;gt;`
NAME                    VERSION                 STATUS          ISSUES  IAMROLE                                                                  UPDATE AVAILABLE CONFIGURATION VALUES                                            POD IDENTITY ASSOCIATION ROLES
aws-ebs-csi-driver      v1.39.0-eksbuild.1      ACTIVE          0                                                                                &quot;node&quot;:
  &quot;volumeAttachLimit&quot;: 31
  &quot;enableMetrics&quot;: true
aws-efs-csi-driver      v2.1.4-eksbuild.1       CREATING        0       arn:aws:iam::430118812536:role/AmazonEKS_EFS_CSI_DriverRole
coredns                 v1.11.4-eksbuild.2      ACTIVE          0
kube-proxy              v1.31.3-eksbuild.2      ACTIVE          0
metrics-server          v0.7.2-eksbuild.2       ACTIVE          0
vpc-cni                 v1.19.2-eksbuild.5      ACTIVE          0       arn:aws:iam::430118812536:role/eksctl-myeks-addon-vpc-cni-Role1-Y1231NkEEKPX                              enableNetworkPolicy: &quot;true&quot;

kubectl get pod -n kube-system -l &quot;app.kubernetes.io/name=aws-efs-csi-driver,app.kubernetes.io/instance=aws-efs-csi-driver&quot;

NAME                                  READY   STATUS    RESTARTS   AGE
efs-csi-controller-64fc4bc65d-64wjt   3/3     Running   0          38s
efs-csi-controller-64fc4bc65d-bnfvp   3/3     Running   0          38s
efs-csi-node-5sp5l                    3/3     Running   0          39s
efs-csi-node-62kwv                    3/3     Running   0          39s
efs-csi-node-7mw5g                    3/3     Running   0          39s


kubectl get pod -n kube-system -l app=efs-csi-controller -o jsonpath='{.items[0].spec.containers[*].name}' ; echo

efs-plugin csi-provisioner liveness-probe

kubectl get csidrivers efs.csi.aws.com -o yaml

apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {&quot;apiVersion&quot;:&quot;storage.k8s.io/v1&quot;,&quot;kind&quot;:&quot;CSIDriver&quot;,&quot;metadata&quot;:{&quot;annotations&quot;:{},&quot;name&quot;:&quot;efs.csi.aws.com&quot;},&quot;spec&quot;:{&quot;attachRequired&quot;:false}}
  creationTimestamp: &quot;2025-02-22T13:34:14Z&quot;
  name: efs.csi.aws.com
  resourceVersion: &quot;24198&quot;
  uid: 09e4a6b6-5b58-44a0-8c46-0da5716364d1
spec:
  attachRequired: false
  fsGroupPolicy: ReadWriteOnceWithFSType
  podInfoOnMount: false
  requiresRepublish: false
  seLinuxMount: false
  storageCapacity: false
  volumeLifecycleModes:
  - Persistent&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Since this is similar to the EBS CSI Driver results above, let's deploy a sample pod that uses EFS without further commentary.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Monitoring
watch 'kubectl get sc efs-sc; echo; kubectl get pod,pvc,pv'

# Clone the example code for this exercise
git clone https://github.com/kubernetes-sigs/aws-efs-csi-driver.git /root/efs-csi
cd /root/efs-csi/examples/kubernetes/multiple_pods/specs &amp;amp;&amp;amp; tree
.
├── claim.yaml
├── pod1.yaml
├── pod2.yaml
├── pv.yaml
└── storageclass.yaml

0 directories, 5 files

# Create and verify the EFS StorageClass
cat storageclass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com

kubectl apply -f storageclass.yaml
kubectl get sc efs-sc
NAME     PROVISIONER       RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
efs-sc   efs.csi.aws.com   Delete          Immediate           false                  4s

# Create and verify the PV : replace volumeHandle with your own EFS file system ID
EfsFsId=$(aws efs describe-file-systems --query &quot;FileSystems[*].FileSystemId&quot; --output text)
sed -i &quot;s/fs-4af69aab/$EfsFsId/g&quot; pv.yaml
cat pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-0395692e3bac346bc # should now contain your file system ID

kubectl apply -f pv.yaml
kubectl get pv; kubectl describe pv

# Create and verify the PVC
cat claim.yaml
kubectl apply -f claim.yaml
kubectl get pvc

# Create the pods and attach the volume : /data inside each pod uses EFS
cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: app1
  labels:
    app: my-app
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - my-app
          topologyKey: &quot;kubernetes.io/hostname&quot;
  containers:
  - name: app1
    image: busybox
    command: [&quot;/bin/sh&quot;]
    args: [&quot;-c&quot;, &quot;while true; do echo $(date -u) &amp;gt;&amp;gt; /data/out1.txt; sleep 5; done&quot;]
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: efs-claim
---
apiVersion: v1
kind: Pod
metadata:
  name: app2
  labels:
    app: my-app
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - my-app
          topologyKey: &quot;kubernetes.io/hostname&quot;
  containers:
  - name: app2
    image: busybox
    command: [&quot;/bin/sh&quot;]
    args: [&quot;-c&quot;, &quot;while true; do echo $(date -u) &amp;gt;&amp;gt; /data/out2.txt; sleep 5; done&quot;]
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: efs-claim
EOF


# The pods are running on different nodes.
kubectl get po -owide
NAME             READY   STATUS    RESTARTS   AGE     IP              NODE                                               NOMINATED NODE   READINESS GATES
app1             1/1     Running   0          62s     192.168.1.11    ip-192-168-1-6.ap-northeast-2.compute.internal     &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
app2             1/1     Running   0          62s     192.168.2.19    ip-192-168-2-172.ap-northeast-2.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;

# Verify writes to the shared storage
kubectl exec -ti app1 -- ls -l /data
total 8
-rw-r--r--    1 root     root           812 Feb 22 15:41 out1.txt
-rw-r--r--    1 root     root           522 Feb 22 15:41 out2.txt&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Running dig on each node against the EFS DNS name returns a private IP, as shown below.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# EFS information
aws efs describe-file-systems --query &quot;FileSystems[*].FileSystemId&quot; --output text
fs-0395692e3bac346bc

# Mount target NIC information
aws efs describe-mount-targets --file-system-id $(aws efs describe-file-systems --query &quot;FileSystems[*].FileSystemId&quot; --output text) --query &quot;MountTargets[*].IpAddress&quot; --output text
192.168.3.14    192.168.1.59    192.168.2.142

# Check on the nodes
[root@ip-192-168-1-6 /]# dig +short fs-0395692e3bac346bc.efs.ap-northeast-2.amazonaws.com
192.168.1.59

[root@ip-192-168-1-6 /]# findmnt -t nfs4
TARGET                                                                                            SOURCE      FSTYPE OPTIONS
/var/lib/kubelet/pods/0239b41c-81fa-42bc-86f8-8c98de0fd54d/volumes/kubernetes.io~csi/efs-pv/mount 127.0.0.1:/ nfs4   rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namle
[root@ip-192-168-1-6 /]#


[root@ip-192-168-2-172 /]# dig +short fs-0395692e3bac346bc.efs.ap-northeast-2.amazonaws.com
192.168.2.142

[root@ip-192-168-2-172 /]# findmnt -t nfs4
TARGET                                                                                            SOURCE      FSTYPE OPTIONS
/var/lib/kubelet/pods/35a9db83-e58b-4a9f-a367-a2399c4e810b/volumes/kubernetes.io~csi/efs-pv/mount 127.0.0.1:/ nfs4   rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namle&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The DNS query on each node resolves the EFS name to the private IP attached to that node's subnet, but the node is not connected to that IP directly: the NFS mount is actually established to 127.0.0.1.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A process called &lt;code&gt;efs-proxy&lt;/code&gt; is running on the node, which suggests it plays some role in this connection, but I could not find the exact details in the official documentation.&amp;nbsp;My guess is that, to guarantee availability toward the EFS mount-target ENI in each subnet, the connection is abstracted behind a local proxy, which then communicates with the ENI.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;[root@ip-192-168-1-6 /]# ss -tlpn
State           Recv-Q          Send-Q                   Local Address:Port                    Peer Address:Port         Process
LISTEN          0               4096                         127.0.0.1:61679                        0.0.0.0:*             users:((&quot;aws-k8s-agent&quot;,pid=2984,fd=12))
LISTEN          0               4096                         127.0.0.1:10248                        0.0.0.0:*             users:((&quot;kubelet&quot;,pid=2497,fd=14))
LISTEN          0               1024                         127.0.0.1:20828                        0.0.0.0:*             users:((&quot;efs-proxy&quot;,pid=42230,fd=9))
LISTEN          0               4096                         127.0.0.1:39495             ..
[root@ip-192-168-1-6 /]# ps -ef |grep 42230
root       42230   34116  0 15:39 ?        00:00:00 /sbin/efs-proxy /var/run/efs/stunnel-config.fs-0395692e3bac346bc.var.lib.kubelet.pods.0239b41c-81fa-42bc-86f8-8c98de0fd54d.volumes.kubernetes.io~csi.efs-pv.mount.20828 --tls
root       48005   42982  0 15:53 ?        00:00:00 grep --color=auto 42230&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Finally, let's delete the resources.&lt;/p&gt;
&lt;pre class=&quot;maxima&quot;&gt;&lt;code&gt;# Delete the Kubernetes resources
kubectl delete pod app1 app2
kubectl delete pvc efs-claim &amp;amp;&amp;amp; kubectl delete pv efs-pv &amp;amp;&amp;amp; kubectl delete sc efs-sc&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Wrap-up&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This concludes our tour of the storage options in EKS. There are of course other options not covered in this hands-on, so please refer to the official documentation.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For reference, the resources created by EKS and CloudFormation will be deleted in [3-2] EKS node groups (&lt;a href=&quot;https://a-person.tistory.com/34&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://a-person.tistory.com/34&lt;/a&gt;).&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;</description>
      <category>EKS</category>
      <category>AKS</category>
      <category>aws</category>
      <category>Azure</category>
      <category>EBS</category>
      <category>efs</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/33</guid>
      <comments>https://a-person.tistory.com/33#entry33comment</comments>
      <pubDate>Sun, 23 Feb 2025 01:56:03 +0900</pubDate>
    </item>
    <item>
<title>What is CNI (Container Network Interface)?</title>
      <link>https://a-person.tistory.com/32</link>
<description>&lt;h3 style=&quot;background-color: #ffffff; color: #000000; text-align: start;&quot; data-ke-size=&quot;size23&quot;&gt;What is CNI?&lt;/h3&gt;
&lt;p style=&quot;background-color: #ffffff; color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;CNI (Container Network Interface) is a CNCF (Cloud Native Computing Foundation) project consisting of a &lt;b&gt;specification&lt;/b&gt; and &lt;b&gt;libraries for writing plugins&lt;/b&gt; that configure network interfaces in Linux containers.&lt;/p&gt;
&lt;p style=&quot;background-color: #ffffff; color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;CNI focuses on a container's network connectivity and on removing allocated resources when the container is deleted.&lt;/p&gt;
&lt;p style=&quot;background-color: #ffffff; color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;Reference:&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;a href=&quot;https://github.com/containernetworking/cni&quot;&gt;https://github.com/containernetworking/cni&lt;/a&gt;&lt;/p&gt;
&lt;p style=&quot;background-color: #ffffff; color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;background-color: #ffffff; color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;Asking &quot;which CNI does your Kubernetes cluster use&quot; usually gets the meaning across, but strictly speaking calico, cilium, flannel, and the like are CNI plugins.&lt;/p&gt;
&lt;p style=&quot;background-color: #ffffff; color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;In Kubernetes, a CNI plugin is invoked roughly as follows.&lt;/p&gt;
&lt;ol style=&quot;list-style-type: decimal; background-color: #ffffff; color: #333333; text-align: start;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li style=&quot;list-style-type: decimal;&quot;&gt;The kubelet asks the container runtime to create a container&lt;/li&gt;
&lt;li style=&quot;list-style-type: decimal;&quot;&gt;The container runtime creates the container's network namespace&lt;/li&gt;
&lt;li style=&quot;list-style-type: decimal;&quot;&gt;The container runtime invokes the CNI plugin, passing the CNI configuration on standard input and parameters as environment variables&lt;/li&gt;
&lt;li style=&quot;list-style-type: decimal;&quot;&gt;The CNI plugin configures the container's network interface, allocates an IP, and creates a veth pair to the host network&lt;/li&gt;
&lt;li style=&quot;list-style-type: decimal;&quot;&gt;The CNI plugin configures routing in the host and container network namespaces&lt;/li&gt;
&lt;/ol&gt;
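Steps 3 through 5 above can be sketched with a stand-in plugin. The script below (`/tmp/fake-cni` is a made-up stand-in, not a real CNI plugin) only mimics the calling convention: the runtime passes parameters as `CNI_*` environment variables and the network configuration on standard input, and the plugin replies with JSON on standard output:

```shell
# Create a stand-in "plugin": like a real CNI binary, it reads the network
# config from stdin and receives its parameters as CNI_* environment variables.
cat > /tmp/fake-cni <<'EOF'
#!/bin/sh
conf=$(cat)   # the CNI network configuration arrives on stdin (a real plugin parses it)
printf '{"cniVersion":"1.0.0","cmd":"%s","ifname":"%s"}\n' "$CNI_COMMAND" "$CNI_IFNAME"
EOF
chmod +x /tmp/fake-cni

# Invoke it the way a container runtime would: env vars + config on stdin.
echo '{"cniVersion":"1.0.0","name":"demonet","type":"fake"}' |
  CNI_COMMAND=ADD CNI_CONTAINERID=1234 CNI_NETNS=/var/run/netns/demons \
  CNI_IFNAME=eth0 CNI_PATH=/tmp /tmp/fake-cni
```

The last command prints `{"cniVersion":"1.0.0","cmd":"ADD","ifname":"eth0"}`; a real plugin such as ptp would additionally create the interface and return the IPAM result, as the hands-on below shows.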
&lt;p style=&quot;background-color: #ffffff; color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;background-color: #ffffff; color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;There is a video, somewhat old but still good, that explains CNI and CNI plugins well.&lt;/p&gt;
&lt;p style=&quot;background-color: #ffffff; color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=4E_l-B988Ek&amp;amp;t=1341s&quot;&gt;https://www.youtube.com/watch?v=4E_l-B988Ek&amp;amp;t=1341s&lt;/a&gt;&lt;/p&gt;
&lt;p style=&quot;background-color: #ffffff; color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;background-color: #ffffff; color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 style=&quot;background-color: #ffffff; color: #000000; text-align: start;&quot; data-ke-size=&quot;size23&quot;&gt;Hands-on&lt;/h3&gt;
&lt;p style=&quot;background-color: #ffffff; color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;Let's briefly follow the exercise from the video.&lt;/p&gt;
&lt;p style=&quot;background-color: #ffffff; color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;First, on a machine with Go installed, clone the sample CNI plugins and build them as shown below. When the build finishes, the binaries are placed in the bin directory.&lt;/p&gt;
&lt;pre class=&quot;elixir&quot; style=&quot;background-color: #ffffff; color: #333333; text-align: start;&quot;&gt;&lt;code&gt;root@jumpVM:~# git clone https://github.com/containernetworking/plugins.git
Cloning into 'plugins'...
remote: Enumerating objects: 19825, done.
remote: Counting objects: 100% (268/268), done.
remote: Compressing objects: 100% (180/180), done.
remote: Total 19825 (delta 151), reused 86 (delta 86), pack-reused 19557 (from 2)
Receiving objects: 100% (19825/19825), 16.41 MiB | 19.77 MiB/s, done.
Resolving deltas: 100% (11214/11214), done.
root@jumpVM:~# cd plugins/
root@jumpVM:~/plugins# ls
CONTRIBUTING.md  README.md         go.mod       plugins
DCO              RELEASING.md      go.sum       test_linux.sh
LICENSE          build_linux.sh    integration  test_windows.sh
OWNERS.md        build_windows.sh  pkg          vendor
root@jumpVM:~/plugins# ./build_linux.sh
Building plugins
  bandwidth
  firewall
  portmap
  sbr
  tuning
  vrf
  bridge
  dummy
  host-device
  ipvlan
  loopback
  macvlan
  ptp
  tap
  vlan
  dhcp
  host-local
  static
root@jumpVM:~/plugins# ls
CONTRIBUTING.md  README.md       build_windows.sh  pkg              vendor
DCO              RELEASING.md    go.mod            plugins
LICENSE          bin             go.sum            test_linux.sh
OWNERS.md        build_linux.sh  integration       test_windows.sh
root@jumpVM:~/plugins# cd bin
root@jumpVM:~/plugins/bin# ls
bandwidth  dummy        host-local  macvlan  sbr     tuning
bridge     firewall     ipvlan      portmap  static  vlan
dhcp       host-device  loopback    ptp      tap     vrf&lt;/code&gt;&lt;/pre&gt;
&lt;p style=&quot;background-color: #ffffff; color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;background-color: #ffffff; color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;Next, open three sessions and run the following commands.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot; style=&quot;background-color: #ffffff; color: #333333; text-align: start;&quot;&gt;&lt;code&gt;## Left session (create the demons namespace)
$ sudo ip netns add demons
## Upper-right session (watch the ip/route info of the host ns)
$ watch -d -n 1 'ip a; echo &quot;&quot;; ip route'
## Lower-right session (watch the ip/route info of demons)
$ watch -d -n 1 'ip netns exec demons ip a; echo &quot;&quot;; ip netns exec demons ip route;'&lt;/code&gt;&lt;/pre&gt;
&lt;p style=&quot;background-color: #ffffff; color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;The initial state looks like this.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1280&quot; data-origin-height=&quot;747&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/dcjhls/btsMo6MEoHg/5cImdznLRT9FHbCBWr5Tik/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/dcjhls/btsMo6MEoHg/5cImdznLRT9FHbCBWr5Tik/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/dcjhls/btsMo6MEoHg/5cImdznLRT9FHbCBWr5Tik/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fdcjhls%2FbtsMo6MEoHg%2F5cImdznLRT9FHbCBWr5Tik%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2255&quot; height=&quot;1317&quot; data-origin-width=&quot;1280&quot; data-origin-height=&quot;747&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p style=&quot;background-color: #ffffff; color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;background-color: #ffffff; color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;We will use the ptp plugin, which creates a veth device pair between the container and the host.&lt;/p&gt;
&lt;p style=&quot;background-color: #ffffff; color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://www.cni.dev/plugins/current/main/ptp/&quot;&gt;https://www.cni.dev/plugins/current/main/ptp/&lt;/a&gt;&lt;/p&gt;
&lt;p style=&quot;background-color: #ffffff; color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;background-color: #ffffff; color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;Running the ptp binary by itself reports that the version is unknown. CNI simply passes the required parameters as environment variables and the spec on standard input. Set CNI_COMMAND to VERSION and run the binary again, and it prints the versions it supports.&lt;/p&gt;
&lt;pre class=&quot;elixir&quot; style=&quot;background-color: #ffffff; color: #333333; text-align: start;&quot;&gt;&lt;code&gt;root@jumpVM:~/plugins/bin# ./ptp
CNI ptp plugin version unknown
CNI protocol versions supported: 0.1.0, 0.2.0, 0.3.0, 0.3.1, 0.4.0, 1.0.0, 1.1.0
root@jumpVM:~/plugins/bin# CNI_COMMAND=VERSION ./ptp
{&quot;cniVersion&quot;:&quot;1.1.0&quot;,&quot;supportedVersions&quot;:[&quot;0.1.0&quot;,&quot;0.2.0&quot;,&quot;0.3.0&quot;,&quot;0.3.1&quot;,&quot;0.4.0&quot;,&quot;1.0.0&quot;,&quot;1.1.0&quot;]}&lt;/code&gt;&lt;/pre&gt;
&lt;p style=&quot;background-color: #ffffff; color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;background-color: #ffffff; color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;Also prepare the CNI configuration to pass on standard input, as below. CNI plugins fall into those that handle IPAM and those that configure the container network; here we use ptp, with host-local as the IPAM.&lt;/p&gt;
&lt;pre class=&quot;clean&quot; style=&quot;background-color: #ffffff; color: #333333; text-align: start;&quot;&gt;&lt;code&gt;{
    &quot;cniVersion&quot;: &quot;0.3.1&quot;,
    &quot;name&quot;: &quot;demonet&quot;,
    &quot;type&quot;: &quot;ptp&quot;, ## CNI binary
    &quot;ipam&quot;: {
        &quot;type&quot;: &quot;host-local&quot;, ## CNI binary for IPAM
        &quot;subnet&quot;: &quot;192.168.0.0/24&quot;
    }
}&lt;/code&gt;&lt;/pre&gt;
&lt;p style=&quot;background-color: #ffffff; color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;background-color: #ffffff; color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;Now let's actually send the ADD command. It fails, however, because several variables have not been set.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot; style=&quot;background-color: #ffffff; color: #333333; text-align: start;&quot;&gt;&lt;code&gt;root@jumpVM:~/plugins/bin# CNI_COMMAND=ADD ./ptp &amp;lt; config
{
    &quot;code&quot;: 4,
    &quot;msg&quot;: &quot;required env variables [CNI_CONTAINERID,CNI_NETNS,CNI_IFNAME,CNI_PATH] missing&quot;
}&lt;/code&gt;&lt;/pre&gt;
&lt;p style=&quot;background-color: #ffffff; color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;background-color: #ffffff; color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;Set those environment variables as well and run the command again.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot; style=&quot;background-color: #ffffff; color: #333333; text-align: start;&quot;&gt;&lt;code&gt;$ CNI_COMMAND=ADD CNI_CONTAINERID=1234 CNI_NETNS=/var/run/netns/demons CNI_IFNAME=domoeth0 CNI_PATH=/root/plugins/bin ./ptp &amp;lt; config&lt;/code&gt;&lt;/pre&gt;
&lt;p style=&quot;background-color: #ffffff; color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;background-color: #ffffff; color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;The output shows the created interface and the IP handed out by IPAM, and the right-hand sessions show that a new veth pair and routes have been added.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1280&quot; data-origin-height=&quot;749&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/RrJ8i/btsMpmogWGe/KJ4W0bSEXxHeXlW8PnZpC0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/RrJ8i/btsMpmogWGe/KJ4W0bSEXxHeXlW8PnZpC0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/RrJ8i/btsMpmogWGe/KJ4W0bSEXxHeXlW8PnZpC0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FRrJ8i%2FbtsMpmogWGe%2FKJ4W0bSEXxHeXlW8PnZpC0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2255&quot; height=&quot;1321&quot; data-origin-width=&quot;1280&quot; data-origin-height=&quot;749&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p style=&quot;background-color: #ffffff; color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;background-color: #ffffff; color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;Finally, remove the test setup with the DEL command.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot; style=&quot;background-color: #ffffff; color: #333333; text-align: start;&quot;&gt;&lt;code&gt;CNI_COMMAND=DEL CNI_CONTAINERID=1234 CNI_NETNS=/var/run/netns/demons CNI_IFNAME=domoeth0 CNI_PATH=/root/plugins/bin ./ptp &amp;lt; config  &lt;/code&gt;&lt;/pre&gt;
&lt;p style=&quot;background-color: #ffffff; color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;By running a sample CNI plugin like this, we can understand the process described earlier and see how the container runtime invokes a CNI plugin: it literally issues requests such as ADD and DEL.&lt;/p&gt;</description>
      <category>Kubernetes</category>
      <category>cni</category>
      <category>kubernetes</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/32</guid>
      <comments>https://a-person.tistory.com/32#entry32comment</comments>
      <pubDate>Wed, 19 Feb 2025 23:23:44 +0900</pubDate>
    </item>
    <item>
      <title>[2-2] EKS Networking Part2 - LoadBalancer와 Ingress</title>
      <link>https://a-person.tistory.com/31</link>
<description>&lt;p data-ke-size=&quot;size16&quot;&gt;Let's look at how AWS implements Kubernetes networking and ingress traffic in Amazon EKS.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The previous post, EKS Networking Part 1, covered a VPC CNI overview, the node environment, and the pod communication path. This post, EKS Networking Part 2, looks at how LoadBalancer-type Services and Ingress are implemented.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;We will also explain how these implementations differ in AKS (Azure Kubernetes Service).&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Table of Contents&lt;/h2&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;Lab environment&lt;/li&gt;
&lt;li&gt;Kubernetes networking overview&lt;/li&gt;
&lt;li&gt;AWS load balancer types&lt;/li&gt;
&lt;li&gt;LoadBalancer implementation in EKS&lt;/li&gt;
&lt;li&gt;LoadBalancer hands-on in EKS&lt;/li&gt;
&lt;li&gt;LoadBalancer implementation in AKS&lt;/li&gt;
&lt;li&gt;Ingress implementation and hands-on in EKS&lt;/li&gt;
&lt;li&gt;Ingress implementation in AKS&lt;/li&gt;
&lt;li&gt;Resource cleanup&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;1. Lab Environment&lt;/h2&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Lab Environment&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;We will reuse the EKS cluster from EKS Networking Part 1. The configuration is shown below.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;2278&quot; data-origin-height=&quot;730&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/dYzfPy/btsMkZ7BdgY/Hvv1KYgDoIXBgIjJPYKF71/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/dYzfPy/btsMkZ7BdgY/Hvv1KYgDoIXBgIjJPYKF71/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/dYzfPy/btsMkZ7BdgY/Hvv1KYgDoIXBgIjJPYKF71/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FdYzfPy%2FbtsMkZ7BdgY%2FHvv1KYgDoIXBgIjJPYKF71%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2278&quot; height=&quot;730&quot; data-origin-width=&quot;2278&quot; data-origin-height=&quot;730&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;If you would like to run the same tests, please refer to the &lt;code&gt;3. 실습 환경 배포&lt;/code&gt; (lab environment deployment) section of the previous post, EKS Networking Part 1 (&lt;a href=&quot;https://a-person.tistory.com/30&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://a-person.tistory.com/30&lt;/a&gt;).&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Cluster details are shown below. The cluster runs EKS version 1.31.5.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;$ kubectl cluster-info
Kubernetes control plane is running at https://04559E327E31C75E5F6A835B3E0B6AED.gr7.ap-northeast-2.eks.amazonaws.com
CoreDNS is running at https://04559E327E31C75E5F6A835B3E0B6AED.gr7.ap-northeast-2.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

$ eksctl get cluster
NAME    REGION          EKSCTL CREATED
myeks   ap-northeast-2  True

$ kubectl get node --label-columns=node.kubernetes.io/instance-type,eks.amazonaws.com/capacityType,topology.kubernetes.io/zone
NAME                                               STATUS   ROLES    AGE    VERSION               INSTANCE-TYPE   CAPACITYTYPE   ZONE
ip-192-168-1-228.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   116m   v1.31.5-eks-5d632ec   t3.medium       ON_DEMAND      ap-northeast-2a
ip-192-168-2-91.ap-northeast-2.compute.internal    Ready    &amp;lt;none&amp;gt;   116m   v1.31.5-eks-5d632ec   t3.medium       ON_DEMAND      ap-northeast-2b
ip-192-168-3-155.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   116m   v1.31.5-eks-5d632ec   t3.medium       ON_DEMAND      ap-northeast-2c&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;2. Overview of Kubernetes Networking&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Kubernetes networking involves several key components and objects, such as CNI, Service, and Ingress. What does each of them do? Together they build out the cluster network and, at the same time, expose workloads as services.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;First, a CNI plugin sets up container networking, making Pods reachable on the network.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For users to reach a workload running in a Pod, a Service is required. Services come in ClusterIP, NodePort, and LoadBalancer types. In short, calls between services inside the cluster use ClusterIP, while external traffic reaches the cluster through a NodePort or LoadBalancer Service.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A LoadBalancer Service, however, must expose an IP outside the cluster, so a separate implementation (a controller) performs the load balancing and reports the load balancer's endpoint IP back to the LoadBalancer Service in the cluster. Cloud providers usually implement this with a cloud controller; on-premises, solutions such as LoxiLB can be used.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;At the node level, the mapping between a Service and its Endpoints must be implemented with iptables, IPVS, or eBPF; using iptables via kube-proxy is the most common approach.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A Service provides communication at L4, so for L7 communication you can use an Ingress resource.&lt;/p&gt;
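&lt;p data-ke-size=&quot;size16&quot;&gt;As a minimal sketch of L7 routing (not part of this lab; the backend Service names here are hypothetical), a path-based Ingress might look like this:&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Sketch: L7 path-based routing with an Ingress (hypothetical backend Services)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - http:
      paths:
      - path: /api        # requests under /api go to api-svc
        pathType: Prefix
        backend:
          service:
            name: api-svc
            port:
              number: 80
      - path: /           # everything else goes to web-svc
        pathType: Prefix
        backend:
          service:
            name: web-svc
            port:
              number: 80&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Which controller actually serves this routing depends on the IngressClass; in EKS that role is played by the ALB, as discussed below.&lt;/p&gt;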
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This post skips the basics of ClusterIP and NodePort and instead looks at how EKS implements LoadBalancer Services and Ingress.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;3. AWS Load Balancer Types&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;EKS implements LoadBalancer Services and Ingress with different types of AWS load balancers. Before the EKS hands-on section, let's briefly review the AWS load balancer offerings.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://docs.aws.amazon.com/ko_kr/elasticloadbalancing/latest/userguide/how-elastic-load-balancing-works.html&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://docs.aws.amazon.com/ko_kr/elasticloadbalancing/latest/userguide/how-elastic-load-balancing-works.html&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Elastic Load Balancing supports the following load balancer types.&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Application Load Balancer (ALB)&lt;/li&gt;
&lt;li&gt;Network Load Balancer (NLB)&lt;/li&gt;
&lt;li&gt;Gateway Load Balancer (GWLB)&lt;/li&gt;
&lt;li&gt;Classic Load Balancer (CLB)&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;CLB is the oldest form of ELB. It can load-balance at L4/L7, but it has several limitations, such as allowing only one target group per address. CLB is now considered legacy and is rarely used.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;ALB operates at L7. Because it load-balances at L7, it can route based on HTTP header information and also supports path-based routing.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;NLB is an L4 load balancer. It is well suited to high-performance load balancing, and it can pin a static public IP with an Elastic IP. A DNS name is also provided for access.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;GWLB operates at L3. It can be used to deploy and scale virtual appliances such as firewalls, intrusion detection systems, and packet inspection. Its role is somewhat different from the load balancers described above.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;EKS uses an NLB or ALB to serve inbound traffic to workloads.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;4. LoadBalancer Implementation in EKS&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;When a LoadBalancer Service is created in EKS, an AWS load balancer is provisioned.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To understand how this is implemented, I summarized the document below.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://docs.aws.amazon.com/eks/latest/best-practices/load-balancing.html&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://docs.aws.amazon.com/eks/latest/best-practices/load-balancing.html&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;There are two ways to provision load balancers in EKS.&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;AWS cloud provider load balancer controller (legacy)&lt;/li&gt;
&lt;li&gt;AWS Load Balancer Controller (recommended)&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;By default, Kubernetes LoadBalancer Services are reconciled by the service controller built into the cloud provider component of kube-controller-manager or cloud-controller-manager (the in-tree controller).&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The AWS cloud provider load balancer controller is legacy and now receives only critical bug fixes. It creates a CLB by default, and the appropriate annotation makes it create an NLB instead.&lt;/p&gt;
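&lt;p data-ke-size=&quot;size16&quot;&gt;As a sketch (not used in this lab; the Service name is hypothetical), requesting an NLB from the legacy in-tree controller looks like this:&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Sketch: legacy in-tree controller, requesting an NLB instead of the default CLB
apiVersion: v1
kind: Service
metadata:
  name: example-svc        # hypothetical name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  selector:
    app: example
  ports:
  - port: 80&lt;/code&gt;&lt;/pre&gt;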
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To provide extended functionality, AWS offers the AWS Load Balancer Controller, which you install into the EKS cluster. In this case you must explicitly offload the Service to the Load Balancer Controller, via the &lt;code&gt;service.beta.kubernetes.io/aws-load-balancer-type&lt;/code&gt; annotation or the Service's &lt;code&gt;loadBalancerClass&lt;/code&gt; field.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;blockquote data-ke-style=&quot;style3&quot;&gt;Note: there is no standalone loadBalancerClass resource; loadBalancerClass is a field in the Service spec. The LoadBalancer Services used later in this post set loadBalancerClass: service.k8s.aws/nlb to hand the Service over to the AWS Load Balancer Controller.&lt;/blockquote&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;background-color: #fcfcfc; color: #000000; text-align: left;&quot;&gt; Now let's look at the differences between load balancer target types. &lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;On a LoadBalancer Service in EKS, the &lt;code&gt;service.beta.kubernetes.io/aws-load-balancer-nlb-target-type&lt;/code&gt; annotation sets the target type to either instance or ip.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;First, when the target type is set to instance, traffic is wired as follows.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;2268&quot; data-origin-height=&quot;1306&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/O1Z7l/btsMlnNXvX4/Cu4nCnGsIw5kyDYbF0IKEK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/O1Z7l/btsMlnNXvX4/Cu4nCnGsIw5kyDYbF0IKEK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/O1Z7l/btsMlnNXvX4/Cu4nCnGsIw5kyDYbF0IKEK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FO1Z7l%2FbtsMlnNXvX4%2FCu4nCnGsIw5kyDYbF0IKEK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2268&quot; height=&quot;1306&quot; data-origin-width=&quot;2268&quot; data-origin-height=&quot;1306&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://docs.aws.amazon.com/eks/latest/best-practices/load-balancing.html&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://docs.aws.amazon.com/eks/latest/best-practices/load-balancing.html&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In this configuration, targets are registered through NodePorts. Packets are therefore delivered to a node first and then forwarded to a Pod via iptables, which adds an extra hop.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For this reason, setting the target type to &lt;code&gt;ip&lt;/code&gt; is recommended, which results in the following configuration.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;2256&quot; data-origin-height=&quot;1268&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/AfT5V/btsMk3PGpYv/uqijDKYiJfNX28ykZyn1mk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/AfT5V/btsMk3PGpYv/uqijDKYiJfNX28ykZyn1mk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/AfT5V/btsMk3PGpYv/uqijDKYiJfNX28ykZyn1mk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FAfT5V%2FbtsMk3PGpYv%2FuqijDKYiJfNX28ykZyn1mk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2256&quot; height=&quot;1268&quot; data-origin-width=&quot;2256&quot; data-origin-height=&quot;1268&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://docs.aws.amazon.com/eks/latest/best-practices/load-balancing.html&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://docs.aws.amazon.com/eks/latest/best-practices/load-balancing.html&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;With this approach, Pod IPs are registered directly as load balancer targets, which simplifies the network path and removes the iptables overhead.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;5. LoadBalancer Hands-On in EKS&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;First, let's create a LoadBalancer Service with no configuration at all.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Terminal 1 (monitoring)
watch -d 'kubectl get pod,svc'

# Deploy the Super Mario deployment
cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mario
  labels:
    app: mario
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mario
  template:
    metadata:
      labels:
        app: mario
    spec:
      containers:
      - name: mario
        image: pengbai/docker-supermario
---
apiVersion: v1
kind: Service
metadata:
   name: mario
spec:
  selector:
    app: mario
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  type: LoadBalancer
EOF


# Verify deployment: confirm the CLB was created
kubectl get deploy,svc,ep mario

# Access the Mario game via the CLB address
kubectl get svc mario -o jsonpath={.status.loadBalancer.ingress[0].hostname} | awk '{ print &quot;Mario URL = http://&quot;$1 }'&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The Deployment is running and the Service has been assigned an EXTERNAL-IP.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;NAME                         READY   STATUS    RESTARTS   AGE
pod/mario-6d8c76fd8d-kqmft   1/1     Running   0          22s

NAME                 TYPE           CLUSTER-IP     EXTERNAL-IP                                                                    PORT(S)        AGE
service/kubernetes   ClusterIP      10.100.0.1     &amp;lt;none&amp;gt;                                                                         443/TCP        50m
service/mario        LoadBalancer   10.100.40.60   ac12f1c2b92b84516b339ffeea5afd12-1085404426.ap-northeast-2.elb.amazonaws.com   80:31207/TCP   22s&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now let's check the provisioned resources in the web console.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1918&quot; data-origin-height=&quot;207&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/ympXL/btsMksbqJYj/OiVwWjpiFhaweeSR3KYe0K/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/ympXL/btsMksbqJYj/OiVwWjpiFhaweeSR3KYe0K/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/ympXL/btsMksbqJYj/OiVwWjpiFhaweeSR3KYe0K/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FympXL%2FbtsMksbqJYj%2FOiVwWjpiFhaweeSR3KYe0K%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1918&quot; height=&quot;207&quot; data-origin-width=&quot;1918&quot; data-origin-height=&quot;207&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Since the Service object has no annotations, you can see that a CLB was provisioned.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Deploying the AWS Load Balancer Controller&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now let's deploy the AWS Load Balancer Controller using Helm (&lt;a href=&quot;https://helm.sh/docs/intro/install/&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://helm.sh/docs/intro/install/&lt;/a&gt;).&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;If such a key component is not provided as an add-on, later upgrades and management become harder, so I wonder why this component is not an add-on. (Helm does not handle lifecycle management.)&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# Check CRDs before installation
$ kubectl get crd
NAME                                         CREATED AT
cninodes.vpcresources.k8s.aws                2025-02-15T14:39:16Z
eniconfigs.crd.k8s.amazonaws.com             2025-02-15T14:41:52Z
policyendpoints.networking.k8s.aws           2025-02-15T14:39:16Z
securitygrouppolicies.vpcresources.k8s.aws   2025-02-15T14:39:16Z

$ kubectl get ingressclass
No resources found

# Install the Helm chart
helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=$CLUSTER_NAME&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Once the deployment completes, you can see that two new CRDs and an IngressClass were created.&lt;/p&gt;
&lt;pre class=&quot;properties&quot;&gt;&lt;code&gt;## Verify installation
$ kubectl get crd
NAME                                         CREATED AT
cninodes.vpcresources.k8s.aws                2025-02-15T14:39:16Z
eniconfigs.crd.k8s.amazonaws.com             2025-02-15T14:41:52Z
ingressclassparams.elbv2.k8s.aws             2025-02-15T15:35:37Z
policyendpoints.networking.k8s.aws           2025-02-15T14:39:16Z
securitygrouppolicies.vpcresources.k8s.aws   2025-02-15T14:39:16Z
targetgroupbindings.elbv2.k8s.aws            2025-02-15T15:35:37Z

$ kubectl get ingressclass
NAME   CONTROLLER            PARAMETERS   AGE
alb    ingress.k8s.aws/alb   &amp;lt;none&amp;gt;       104s

# Inspect the new CRDs
$ kubectl explain ingressclassparams.elbv2.k8s.aws
GROUP:      elbv2.k8s.aws
KIND:       IngressClassParams
VERSION:    v1beta1

DESCRIPTION:
    IngressClassParams is the Schema for the IngressClassParams API

FIELDS:
  apiVersion    &amp;lt;string&amp;gt;
    APIVersion defines the versioned schema of this representation of an object.
    Servers should convert recognized schemas to the latest internal value, and
    may reject unrecognized values. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

  kind  &amp;lt;string&amp;gt;
    Kind is a string value representing the REST resource this object
    represents. Servers may infer this from the endpoint the client submits
    requests to. Cannot be updated. In CamelCase. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

  metadata      &amp;lt;ObjectMeta&amp;gt;
    Standard object's metadata. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

  spec  &amp;lt;Object&amp;gt;
    IngressClassParamsSpec defines the desired state of IngressClassParams


$ kubectl explain targetgroupbindings.elbv2.k8s.aws
GROUP:      elbv2.k8s.aws
KIND:       TargetGroupBinding
VERSION:    v1beta1

DESCRIPTION:
    TargetGroupBinding is the Schema for the TargetGroupBinding API

FIELDS:
  apiVersion    &amp;lt;string&amp;gt;
    APIVersion defines the versioned schema of this representation of an object.
    Servers should convert recognized schemas to the latest internal value, and
    may reject unrecognized values. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

  kind  &amp;lt;string&amp;gt;
    Kind is a string value representing the REST resource this object
    represents. Servers may infer this from the endpoint the client submits
    requests to. Cannot be updated. In CamelCase. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

  metadata      &amp;lt;ObjectMeta&amp;gt;
    Standard object's metadata. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

  spec  &amp;lt;Object&amp;gt;
    TargetGroupBindingSpec defines the desired state of TargetGroupBinding

  status        &amp;lt;Object&amp;gt;
    TargetGroupBindingStatus defines the observed state of TargetGroupBinding&lt;/code&gt;&lt;/pre&gt;
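&lt;p data-ke-size=&quot;size16&quot;&gt;For reference, a TargetGroupBinding of the kind this controller manages might look roughly like the sketch below (the name is hypothetical and the ARN is a placeholder):&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Sketch: binds a Service's endpoints to an ELB target group
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: example-tgb              # hypothetical name
spec:
  serviceRef:
    name: example-svc            # Service whose endpoints are registered
    port: 80
  targetGroupARN: arn:aws:elasticloadbalancing:...   # placeholder ARN
  targetType: ip&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In practice the controller creates these objects itself when reconciling a LoadBalancer Service or Ingress.&lt;/p&gt;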
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's also inspect the Deployment that was created.&lt;/p&gt;
&lt;pre class=&quot;sql&quot;&gt;&lt;code&gt;$ kubectl get deployment -n kube-system aws-load-balancer-controller
NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
aws-load-balancer-controller   2/2     2            2           5m59s
$ kubectl describe deploy -n kube-system aws-load-balancer-controller
Name:                   aws-load-balancer-controller
Namespace:              kube-system
CreationTimestamp:      Sun, 16 Feb 2025 00:35:40 +0900
Labels:                 app.kubernetes.io/instance=aws-load-balancer-controller
                        app.kubernetes.io/managed-by=Helm
                        app.kubernetes.io/name=aws-load-balancer-controller
                        app.kubernetes.io/version=v2.11.0
                        helm.sh/chart=aws-load-balancer-controller-1.11.0
Annotations:            deployment.kubernetes.io/revision: 1
                        meta.helm.sh/release-name: aws-load-balancer-controller
                        meta.helm.sh/release-namespace: kube-system
Selector:               app.kubernetes.io/instance=aws-load-balancer-controller,app.kubernetes.io/name=aws-load-balancer-controller
Replicas:               2 desired | 2 updated | 2 total | 2 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           app.kubernetes.io/instance=aws-load-balancer-controller
                    app.kubernetes.io/name=aws-load-balancer-controller
  Annotations:      prometheus.io/port: 8080
                    prometheus.io/scrape: true
  Service Account:  aws-load-balancer-controller
  Containers:
   aws-load-balancer-controller:
    Image:       public.ecr.aws/eks/aws-load-balancer-controller:v2.11.0
    Ports:       9443/TCP, 8080/TCP
    Host Ports:  0/TCP, 0/TCP
    Args:
      --cluster-name=myeks
      --ingress-class=alb
    Liveness:     http-get http://:61779/healthz delay=30s timeout=10s period=10s #success=1 #failure=2
    Readiness:    http-get http://:61779/readyz delay=10s timeout=10s period=10s #success=1 #failure=2
    Environment:  &amp;lt;none&amp;gt;
    Mounts:
      /tmp/k8s-webhook-server/serving-certs from cert (ro)
  Volumes:
   cert:
    Type:               Secret (a volume populated by a Secret)
    SecretName:         aws-load-balancer-tls
    Optional:           false
  Priority Class Name:  system-cluster-critical
  Node-Selectors:       &amp;lt;none&amp;gt;
  Tolerations:          &amp;lt;none&amp;gt;
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  &amp;lt;none&amp;gt;
NewReplicaSet:   aws-load-balancer-controller-554fbd9d (2/2 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  8m35s  deployment-controller  Scaled up replica set aws-load-balancer-controller-554fbd9d to 2

$ kubectl describe deploy -n kube-system aws-load-balancer-controller | grep 'Service Account'
  Service Account:  aws-load-balancer-controller

# Check the ClusterRoleBinding and ClusterRole
$ kubectl describe clusterrolebindings.rbac.authorization.k8s.io aws-load-balancer-controller-rolebinding
Name:         aws-load-balancer-controller-rolebinding
Labels:       app.kubernetes.io/instance=aws-load-balancer-controller
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=aws-load-balancer-controller
              app.kubernetes.io/version=v2.11.0
              helm.sh/chart=aws-load-balancer-controller-1.11.0
Annotations:  meta.helm.sh/release-name: aws-load-balancer-controller
              meta.helm.sh/release-namespace: kube-system
Role:
  Kind:  ClusterRole
  Name:  aws-load-balancer-controller-role
Subjects:
  Kind            Name                          Namespace
  ----            ----                          ---------
  ServiceAccount  aws-load-balancer-controller  kube-system

$ kubectl describe clusterroles.rbac.authorization.k8s.io aws-load-balancer-controller-role
...
PolicyRule:
  Resources                                     Non-Resource URLs  Resource Names  Verbs
  ---------                                     -----------------  --------------  -----
  targetgroupbindings.elbv2.k8s.aws             []                 []              [create delete get list patch update watch]
  events                                        []                 []              [create patch]
  ingresses                                     []                 []              [get list patch update watch]
  services                                      []                 []              [get list patch update watch]
  ingresses.extensions                          []                 []              [get list patch update watch]
  services.extensions                           []                 []              [get list patch update watch]
  ingresses.networking.k8s.io                   []                 []              [get list patch update watch]
  services.networking.k8s.io                    []                 []              [get list patch update watch]
  endpoints                                     []                 []              [get list watch]
  namespaces                                    []                 []              [get list watch]
  nodes                                         []                 []              [get list watch]
  pods                                          []                 []              [get list watch]
  endpointslices.discovery.k8s.io               []                 []              [get list watch]
  ingressclassparams.elbv2.k8s.aws              []                 []              [get list watch]
  ingressclasses.networking.k8s.io              []                 []              [get list watch]
  ingresses/status                              []                 []              [update patch]
  pods/status                                   []                 []              [update patch]
  services/status                               []                 []              [update patch]
  targetgroupbindings/status                    []                 []              [update patch]
  ingresses.elbv2.k8s.aws/status                []                 []              [update patch]
  pods.elbv2.k8s.aws/status                     []                 []              [update patch]
  services.elbv2.k8s.aws/status                 []                 []              [update patch]
  targetgroupbindings.elbv2.k8s.aws/status      []                 []              [update patch]
  ingresses.extensions/status                   []                 []              [update patch]
  pods.extensions/status                        []                 []              [update patch]
  services.extensions/status                    []                 []              [update patch]
  targetgroupbindings.extensions/status         []                 []              [update patch]
  ingresses.networking.k8s.io/status            []                 []              [update patch]
  pods.networking.k8s.io/status                 []                 []              [update patch]
  services.networking.k8s.io/status             []                 []              [update patch]
  targetgroupbindings.networking.k8s.io/status  []                 []              [update patch]&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This Deployment uses a ServiceAccount named &lt;code&gt;aws-load-balancer-controller&lt;/code&gt;, which has permissions on Services/Ingresses and their related resources.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Deploying Resources per Target Type&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Once the AWS Load Balancer Controller is deployed, the annotations added to Service or Ingress manifests differ from those used with the legacy controller.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;blockquote data-ke-style=&quot;style3&quot;&gt;[Note] The configuration of the provisioned load balancer is controlled by annotations that are added to the manifest for the Service or Ingress object and are different when using the AWS Load Balancer Controller than they are when using the AWS cloud provider load balancer controller.&lt;br /&gt;&lt;a href=&quot;https://docs.aws.amazon.com/eks/latest/best-practices/load-balancing.html&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://docs.aws.amazon.com/eks/latest/best-practices/load-balancing.html&lt;/a&gt;&lt;/blockquote&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To see how the target types differ, let's create LoadBalancer Services with the instance type and the ip type and compare them.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Deployment &amp;amp; Service for the instance target type
cat &amp;lt;&amp;lt; EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-echo1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: deploy-websrv1
  template:
    metadata:
      labels:
        app: deploy-websrv1
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: aews-websrv
        image: k8s.gcr.io/echoserver:1.5
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: svc-nlb-instance-type
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: &quot;8080&quot;
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: &quot;true&quot;
spec:
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  type: LoadBalancer
  loadBalancerClass: service.k8s.aws/nlb
  selector:
    app: deploy-websrv1
EOF&lt;/code&gt;&lt;/pre&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;&amp;nbsp;&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Next, let's deploy with the target type set to ip.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Deployment &amp;amp; Service for the ip target type
cat &amp;lt;&amp;lt; EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-echo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: deploy-websrv
  template:
    metadata:
      labels:
        app: deploy-websrv
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: aews-websrv
        image: k8s.gcr.io/echoserver:1.5
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: svc-nlb-ip-type
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: &quot;8080&quot;
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: &quot;true&quot;
spec:
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  type: LoadBalancer
  loadBalancerClass: service.k8s.aws/nlb
  selector:
    app: deploy-websrv
EOF&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As shown below, we now have the mario Service backed by a CLB, and the instance-type and ip-type Services each backed by an NLB.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;$ kubectl get svc
NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP                                                                         PORT(S)        AGE
kubernetes              ClusterIP      10.100.0.1       &amp;lt;none&amp;gt;                                                                              443/TCP        74m
mario                   LoadBalancer   10.100.40.60     ac12f1c2b92b84516b339ffeea5afd12-1085404426.ap-northeast-2.elb.amazonaws.com        80:31207/TCP   24m
svc-nlb-instance-type   LoadBalancer   10.100.73.240    k8s-default-svcnlbin-9674736b71-4aae57171be6e3b8.elb.ap-northeast-2.amazonaws.com   80:32332/TCP   14s
svc-nlb-ip-type         LoadBalancer   10.100.124.220   k8s-default-svcnlbip-d503e33dc2-2ecb99f47d0bc5bf.elb.ap-northeast-2.amazonaws.com   80:30264/TCP   47s&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;웹 콘솔에서 두 대상 유형이 다른 NLB 구성이 어떻게 차이가 나는지 확인해보겠습니다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1904&quot; data-origin-height=&quot;301&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/FPHub/btsMlh7634p/I10QOAUDcfWzpEGEazqyC1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/FPHub/btsMlh7634p/I10QOAUDcfWzpEGEazqyC1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/FPHub/btsMlh7634p/I10QOAUDcfWzpEGEazqyC1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FFPHub%2FbtsMlh7634p%2FI10QOAUDcfWzpEGEazqyC1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1904&quot; height=&quot;301&quot; data-origin-width=&quot;1904&quot; data-origin-height=&quot;301&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;먼저 instance로 생성한 NLB의 resource map을 살펴보면 Target이 NodePort로 등록되어 있는 것을 알 수 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이상한 점은 분명 서비스 엔드포인트로 호출이 되고 NodePort도 정상인데, 대상들이 모두 &lt;code&gt;Unhealthy: Health checks failed&lt;/code&gt;로 표시된다는 점입니다. 이 동작이 의도된 것(by design)인지는 정확하지 않습니다.&lt;/p&gt;
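&lt;p data-ke-size=&quot;size16&quot;&gt;추정해 보면, instance 유형에서는 health check가 노드의 NodePort로 향하는데, 위 매니페스트처럼 healthcheck-port를 8080으로 고정하면 NodePort가 아닌 노드의 8080 포트로 검사를 시도해 실패할 수 있습니다. 아래는 이를 점검해 볼 수 있는 설정의 스케치이며, 실제 동작은 환경에 따라 다를 수 있습니다.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# instance 대상 유형에서 health check에 영향을 주는 설정 예시
apiVersion: v1
kind: Service
metadata:
  name: svc-nlb-instance-type
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    # 기본값인 traffic-port를 사용하면 NodePort로 health check가 수행됩니다
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: traffic-port
spec:
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  type: LoadBalancer
  loadBalancerClass: service.k8s.aws/nlb
  # Local로 설정하면 대상 파드가 없는 노드는 의도적으로 Unhealthy로 표시됩니다
  externalTrafficPolicy: Cluster
  selector:
    app: deploy-websrv1&lt;/code&gt;&lt;/pre&gt;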
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1445&quot; data-origin-height=&quot;865&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/yklVY/btsMlkjqbdV/jvoY2MOlqdg6bNcqVhvWj1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/yklVY/btsMlkjqbdV/jvoY2MOlqdg6bNcqVhvWj1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/yklVY/btsMlkjqbdV/jvoY2MOlqdg6bNcqVhvWj1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FyklVY%2FbtsMlkjqbdV%2FjvoY2MOlqdg6bNcqVhvWj1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1445&quot; height=&quot;865&quot; data-origin-width=&quot;1445&quot; data-origin-height=&quot;865&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;아래는 ip로 생성한 NLB의 resource map입니다. 이 경우에는 파드 IP가 직접 등록된 것을 확인할 수 있습니다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1414&quot; data-origin-height=&quot;731&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cQjD0L/btsMktBofA3/C9cG0hhmkZkdK8kXB3MdN1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cQjD0L/btsMktBofA3/C9cG0hhmkZkdK8kXB3MdN1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cQjD0L/btsMktBofA3/C9cG0hhmkZkdK8kXB3MdN1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcQjD0L%2FbtsMktBofA3%2FC9cG0hhmkZkdK8kXB3MdN1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1414&quot; height=&quot;731&quot; data-origin-width=&quot;1414&quot; data-origin-height=&quot;731&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;아래와 같이 endpoint 정보를 확인하면 NLB에 등록된 대상과 동일한 것을 알 수 있습니다.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;$ kubectl get svc
NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP                                                                         PORT(S)        AGE
kubernetes              ClusterIP      10.100.0.1       &amp;lt;none&amp;gt;                                                                              443/TCP        92m
mario                   LoadBalancer   10.100.40.60     ac12f1c2b92b84516b339ffeea5afd12-1085404426.ap-northeast-2.elb.amazonaws.com        80:31207/TCP   42m
svc-nlb-instance-type   LoadBalancer   10.100.73.240    k8s-default-svcnlbin-9674736b71-4aae57171be6e3b8.elb.ap-northeast-2.amazonaws.com   80:32332/TCP   18m
svc-nlb-ip-type         LoadBalancer   10.100.124.220   k8s-default-svcnlbip-d503e33dc2-2ecb99f47d0bc5bf.elb.ap-northeast-2.amazonaws.com   80:30264/TCP   18m
$ kubectl get ep
NAME                    ENDPOINTS                              AGE
kubernetes              192.168.2.190:443,192.168.3.91:443     92m
mario                   192.168.1.136:8080                     42m
svc-nlb-instance-type   192.168.2.180:8080,192.168.3.56:8080   18m
svc-nlb-ip-type         192.168.1.137:8080,192.168.3.45:8080   18m&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;6. AKS의 LoadBalancer 구현&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Azure의 LoadBalancer 유형의 서비스는 Azure Load Balancer를 통해 구현됩니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;AWS의 ELB들이 실제 서브넷에 위치해 파드 IP로 접근할 수 있는 것과 다르게, Azure Load Balancer는 서브넷에 위치하지 않습니다. 이러한 이유로 Azure Load Balancer는 파드 IP로 직접 접근하지 못합니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;blockquote data-ke-style=&quot;style3&quot;&gt;[Note] 파드 IP가 가상 네트워크에서 유효한 IP를 가지고 있다고 하더라도, 파드에서 다른 리소스로 접근하는 것과 다른 리소스에서 파드 IP로 접근하는 것은 다릅니다. 파드에서 다른 리소스로 접근하는 것은 SNAT로 가능하지만, 보통 반대 방향의 통신은 불가합니다.&lt;/blockquote&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;즉 가상 네트워크에 실제로 존재하는 대상이 아니면, 다른 리소스는 파드 IP로 접근할 수 없습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이러한 이유로 AKS의 Azure Load Balancer는 노드 IP를 통한 백엔드 구성을 합니다. 이 경우 EKS의 &lt;code&gt;service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance&lt;/code&gt;와 동일하다고 생각할 수 있지만, Azure Load Balancer는 &lt;a href=&quot;https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-floating-ip&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;Floating IP&lt;/a&gt;라는 방식을 통해 Load Balancer에서 SNAT/DNAT 없이 VIP 그대로 서버로 패킷을 전달하고, 노드의 iptables에서 DSR(Direct Server Return)이 가능하도록 구현되어 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;code&gt;service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance&lt;/code&gt;와 동일한 구성도 가능합니다. AKS에서 LoadBalancer 서비스에 &lt;code&gt;service.beta.kubernetes.io/azure-disable-load-balancer-floating-ip: &quot;true&quot;&lt;/code&gt; 어노테이션을 추가해 Floating IP를 비활성화하면 DIP로 패킷이 전달되고, 이때는 백엔드가 NodePort 방식으로 구성됩니다.&lt;/p&gt;
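&lt;p data-ke-size=&quot;size16&quot;&gt;아래는 Floating IP를 비활성화하는 구성의 간단한 스케치입니다. 서비스 이름과 셀렉터는 설명을 위한 가상의 값입니다.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# Floating IP를 비활성화한 AKS LoadBalancer 서비스 예시
apiVersion: v1
kind: Service
metadata:
  name: svc-lb-no-floating-ip  # 가상의 이름
  annotations:
    service.beta.kubernetes.io/azure-disable-load-balancer-floating-ip: &quot;true&quot;
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  selector:
    app: sample-app  # 가상의 셀렉터&lt;/code&gt;&lt;/pre&gt;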
&lt;p data-ke-size=&quot;size16&quot;&gt;참고: &lt;a href=&quot;https://cloud-provider-azure.sigs.k8s.io/topics/loadbalancer/#loadbalancer-annotations&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://cloud-provider-azure.sigs.k8s.io/topics/loadbalancer/#loadbalancer-annotations&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;참고로 Azure의 Application Gateway는 서브넷에 위치하는 리소스입니다. AKS의 Ingress 구현체 중 Application Gateway Ingress Controller를 사용할 수 있는데, Application Gateway의 백엔드 대상으로는 파드 IP가 직접 등록됩니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;7. EKS의 Ingress 구현 및 실습&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;EKS에서 Ingress를 생성하면 ALB가 프로비저닝됩니다. 또한 아래 문서의 사전 조건을 살펴보면 이를 위해 AWS Load Balancer Controller가 있어야 하는 것을 알 수 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://docs.aws.amazon.com/ko_kr/eks/latest/userguide/alb-ingress.html&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://docs.aws.amazon.com/ko_kr/eks/latest/userguide/alb-ingress.html&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;아래와 같이 실습을 진행해 보겠습니다.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# 게임 파드와 Service, Ingress 배포
cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: game-2048
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: game-2048
  name: deployment-2048
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: app-2048
  replicas: 2
  template:
    metadata:
      labels:
        app.kubernetes.io/name: app-2048
    spec:
      containers:
      - image: public.ecr.aws/l6m2t8p7/docker-2048:latest
        imagePullPolicy: Always
        name: app-2048
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  namespace: game-2048
  name: service-2048
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: NodePort
  selector:
    app.kubernetes.io/name: app-2048
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: game-2048
  name: ingress-2048
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: service-2048
              port:
                number: 80
EOF&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Ingress 매니페스트를 확인해보면 앞서 AWS Load Balancer Controller에서 생성된 IngressClass인 &lt;code&gt;alb&lt;/code&gt;를 지정한 것을 알 수 있습니다.&lt;/p&gt;
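&lt;p data-ke-size=&quot;size16&quot;&gt;참고로 AWS Load Balancer Controller가 제공하는 IngressClass는 대략 아래와 같은 형태입니다. 실제 리소스에는 환경에 따라 필드가 더 있을 수 있습니다.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# AWS Load Balancer Controller의 IngressClass 스케치
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: alb
spec:
  # 이 컨트롤러 값을 가진 IngressClass를 AWS Load Balancer Controller가 처리합니다
  controller: ingress.k8s.aws/alb&lt;/code&gt;&lt;/pre&gt;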
&lt;p data-ke-size=&quot;size16&quot;&gt;ingress가 배포되었습니다.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;$ kubectl get ing -n game-2048
NAME           CLASS   HOSTS   ADDRESS                                                                        PORTS   AGE
ingress-2048   alb     *       k8s-game2048-ingress2-70d50ce3fd-1587098903.ap-northeast-2.elb.amazonaws.com   80      3m9s

$ kubectl get svc -n game-2048
NAME           TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service-2048   NodePort   10.100.251.246   &amp;lt;none&amp;gt;        80:30521/TCP   5m31s

$ kubectl get ep -n game-2048
NAME           ENDPOINTS                           AGE
service-2048   192.168.1.239:80,192.168.3.219:80   5m23s
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;다시 웹 콘솔에서 로드밸런서 정보를 확인해보면 ALB가 생성된 것을 볼 수 있습니다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1888&quot; data-origin-height=&quot;342&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/tKBEO/btsMj8RMjdC/sk3nr2PHo3Rsk7tC0kkp1k/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/tKBEO/btsMj8RMjdC/sk3nr2PHo3Rsk7tC0kkp1k/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/tKBEO/btsMj8RMjdC/sk3nr2PHo3Rsk7tC0kkp1k/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FtKBEO%2FbtsMj8RMjdC%2Fsk3nr2PHo3Rsk7tC0kkp1k%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1888&quot; height=&quot;342&quot; data-origin-width=&quot;1888&quot; data-origin-height=&quot;342&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;ALB의 대상 그룹을 확인해보면 ALB에서 파드 IP로 직접 전달하는 것을 알 수 있습니다. 또한 Ingress이므로 Rules가 추가되어 시각화되어 있는 것을 볼 수 있습니다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1885&quot; data-origin-height=&quot;766&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bBrep4/btsMk0MeGcg/kixykdb4J8zd2UP6sTklwK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bBrep4/btsMk0MeGcg/kixykdb4J8zd2UP6sTklwK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bBrep4/btsMk0MeGcg/kixykdb4J8zd2UP6sTklwK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbBrep4%2FbtsMk0MeGcg%2Fkixykdb4J8zd2UP6sTklwK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1885&quot; height=&quot;766&quot; data-origin-width=&quot;1885&quot; data-origin-height=&quot;766&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;EKS의 Ingress에는 여러 Ingress가 Ingress Group을 지정해 하나의 ALB를 공유하면서, 서로 다른 주체가 각자의 Ingress를 관리할 수 있도록 하는 개념이 있습니다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;879&quot; data-origin-height=&quot;460&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/WqnFm/btsMk1RSVmV/O8Jn9k9co5Ux8i2Y8vEXXK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/WqnFm/btsMk1RSVmV/O8Jn9k9co5Ux8i2Y8vEXXK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/WqnFm/btsMk1RSVmV/O8Jn9k9co5Ux8i2Y8vEXXK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FWqnFm%2FbtsMk1RSVmV%2FO8Jn9k9co5Ux8i2Y8vEXXK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;879&quot; height=&quot;460&quot; data-origin-width=&quot;879&quot; data-origin-height=&quot;460&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;출처: &lt;a href=&quot;https://aws.amazon.com/ko/blogs/tech/a-deeper-look-at-ingress-sharing-and-target-group-binding-in-aws-load-balancer-controller/&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://aws.amazon.com/ko/blogs/tech/a-deeper-look-at-ingress-sharing-and-target-group-binding-in-aws-load-balancer-controller/&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;아래와 같이 서로 다른 Ingress를 사용하면서 하나의 ingress group으로 묶어서 배포하는 방식을 사용할 수 있다는 점이 독특한 것 같습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://github.com/aws-samples/containers-blog-maelstrom/blob/main/aws-lb-controller-blog/ingress-grouping/orange-purple-ingress.yaml&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://github.com/aws-samples/containers-blog-maelstrom/blob/main/aws-lb-controller-blog/ingress-grouping/orange-purple-ingress.yaml&lt;/a&gt;&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: orange-purple-ingress
  namespace: orange-purple-ns
  labels:
    app: color-2
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/group.name: app-color-lb
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /orange
            pathType: Prefix
            backend:
              service:
                name: orange-service
                port:
                  number: 80                        
          - path: /purple
            pathType: Prefix
            backend:
              service:
                name: purple-service
                port:
                  number: 80&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://github.com/aws-samples/containers-blog-maelstrom/blob/main/aws-lb-controller-blog/ingress-grouping/blue-green-ingress.yaml&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://github.com/aws-samples/containers-blog-maelstrom/blob/main/aws-lb-controller-blog/ingress-grouping/blue-green-ingress.yaml&lt;/a&gt;&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: blue-green-ingress
  namespace: blue-green-ns
  labels:
    app: color-1
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/group.name: app-color-lb
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /blue
            pathType: Prefix
            backend:
              service:
                name: blue-service
                port:
                  number: 80                        
          - path: /green
            pathType: Prefix
            backend:
              service:
                name: green-service
                port:
                  number: 80&lt;/code&gt;&lt;/pre&gt;
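&lt;p data-ke-size=&quot;size16&quot;&gt;같은 group.name을 공유하는 Ingress들 사이에서 규칙 평가 순서가 중요하다면, &lt;code&gt;alb.ingress.kubernetes.io/group.order&lt;/code&gt; 어노테이션으로 순서를 지정할 수 있습니다. 아래는 메타데이터 부분만 보여주는 스케치입니다.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# group.order로 같은 그룹 내 규칙 평가 순서 지정 (일부 발췌)
metadata:
  annotations:
    alb.ingress.kubernetes.io/group.name: app-color-lb
    # 숫자가 낮을수록 먼저 평가됩니다 (기본값 0)
    alb.ingress.kubernetes.io/group.order: &quot;10&quot;&lt;/code&gt;&lt;/pre&gt;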
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;여기까지 살펴본 EKS의 LoadBalancer 유형 서비스와 Ingress가 사용하는 NLB, ALB의 전체적인 구성은 아래 그림과 같습니다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1570&quot; data-origin-height=&quot;757&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/u0N0w/btsMkKv6Lko/yBfEOAK4k6SFz0NQEoYbmk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/u0N0w/btsMkKv6Lko/yBfEOAK4k6SFz0NQEoYbmk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/u0N0w/btsMkKv6Lko/yBfEOAK4k6SFz0NQEoYbmk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fu0N0w%2FbtsMkKv6Lko%2FyBfEOAK4k6SFz0NQEoYbmk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1570&quot; height=&quot;757&quot; data-origin-width=&quot;1570&quot; data-origin-height=&quot;757&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;출처: &lt;a href=&quot;https://www.youtube.com/watch?v=E49Q3y9wsUo&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://www.youtube.com/watch?v=E49Q3y9wsUo&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;8. AKS의 Ingress 구현&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;AKS에서 Ingress를 구현하는 방식에는 크게 3가지가 있습니다. Application Gateway Ingress Controller, Application Gateway for Containers, Application Routing addon입니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;blockquote data-ke-style=&quot;style3&quot;&gt;[Note] 아래 문서를 보면 Istio-based service mesh 또한 Ingress 옵션으로 이야기하고 있지만, 이는 Istio Ingress API이므로 제외하고 설명하겠습니다.&lt;br /&gt;&lt;a href=&quot;https://learn.microsoft.com/en-us/azure/aks/concepts-network-ingress#compare-ingress-options&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://learn.microsoft.com/en-us/azure/aks/concepts-network-ingress#compare-ingress-options&lt;/a&gt;&lt;/blockquote&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Application Gateway Ingress Controller는 AGIC 파드가 클러스터 내부에서 API 서버를 모니터링하다가 Azure Resource Manager로 설정을 변경하는 아키텍처를 가지고 있습니다. Azure Load Balancer와 다르게 Application Gateway는 서브넷에 위치하는 리소스로, 파드 IP를 직접 백엔드로 구성합니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;다만 Ingress의 다양한 설정을 구현하기에는 기존 Application Gateway라는 제품이 가진 기능 범위 안에서만 가능하다는 점에서, 확장성이 부족한 솔루션으로 알려져 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;참고: &lt;a href=&quot;https://learn.microsoft.com/en-us/azure/application-gateway/ingress-controller-overview&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://learn.microsoft.com/en-us/azure/application-gateway/ingress-controller-overview&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이로 인해 비교적 최근 Application Gateway for Containers(이하 AGC)라는 AKS를 위한 별도의 상품이 추가되었습니다. AGC는 AGIC에 비해 상대적으로 다양한 고급 라우팅 기능이 추가되었으며, Gateway API도 지원합니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;참고: &lt;a href=&quot;https://learn.microsoft.com/en-us/azure/application-gateway/for-containers/overview&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://learn.microsoft.com/en-us/azure/application-gateway/for-containers/overview&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;다만 이러한 솔루션을 사용하지 않고 사용자가 직접 Nginx Ingress Controller를 배포하여 사용하는 경우가 있습니다. 이를 위해 AKS에서는 Application Routing addon으로 managed Nginx ingress를 제공하고 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Managed Nginx ingress는 기존 Nginx Ingress Controller를 addon 형태로 제공할 뿐 구성되는 아키텍처는 동일합니다. 즉 Ingress controller가 파드로 실행되고, LoadBalancer 유형의 서비스로 엔드포인트를 구성합니다. Addon으로 제공하므로 설치 및 업그레이드와 같은 부분에서 managed 서비스의 이점이 있습니다. 다만 managed 서비스이기 때문에 오픈 소스 Nginx Ingress의 모든 설정을 지원하지는 않으므로, 사용 전에 평가가 필요합니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;참고: &lt;a href=&quot;https://learn.microsoft.com/en-us/azure/aks/app-routing&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://learn.microsoft.com/en-us/azure/aks/app-routing&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;9. 리소스 정리&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;아래 절차로 생성한 EKS 클러스터와 CloudFormation 스택을 삭제합니다.&lt;/p&gt;
&lt;pre class=&quot;dsconfig&quot;&gt;&lt;code&gt;eksctl delete cluster --name $CLUSTER_NAME

# EKS 삭제 완료 후 실행
aws cloudformation delete-stack --stack-name myeks&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;마무리&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;두 개의 포스트에 걸쳐 EKS Networking을 알아보고, 그 과정에서 EKS와 AKS의 차이를 살펴봤습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;기본적으로 AWS의 VPC와 Azure의 Virtual Network에 차이가 있기 때문에 구현 방식이 달라지는 부분이 있었고, CNI를 구현하는 방식은 유사한 듯하면서도 서로 일부 제약사항이 있었습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;전반적으로 EKS가 단일 CNI와 ELB 상품을 통한 Service/Ingress를 구현한 반면, AKS는 CNI나 Ingress 쪽에서 구현체가 다양한 모습을 확인할 수 있었습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;다음 포스트에서는 EKS의 노드 그룹과 스토리지 옵션을 살펴보도록 하겠습니다.&lt;/p&gt;</description>
      <category>EKS</category>
      <category>AKS</category>
      <category>aws</category>
      <category>Azure</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/31</guid>
      <comments>https://a-person.tistory.com/31#entry31comment</comments>
      <pubDate>Sun, 16 Feb 2025 02:14:38 +0900</pubDate>
    </item>
    <item>
      <title>[2-1] EKS Networking Part1 - VPC CNI와 파드 통신</title>
      <link>https://a-person.tistory.com/30</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;이번 포스트에서는 EKS Networking이라는 주제로 이어가 보겠습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;내용이 길어질 것으로 보아 파트를 나눠서 작성할 예정이며, EKS Networking Part1에서는 EKS의 VPC CNI와 기본 환경을 검토해보고, 파드와 노드 통신을 확인해보겠습니다. 이후 EKS Networking Part2에서 Loadbalancer 유형의 서비스의 구성과 Ingress에 대해서 확인해 보겠습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;목차&lt;/h2&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;AWS VPC CNI 개요&lt;/li&gt;
&lt;li&gt;파드 생성 개수 제한&lt;/li&gt;
&lt;li&gt;실습 환경 배포&lt;/li&gt;
&lt;li&gt;노드 정보 확인&lt;/li&gt;
&lt;li&gt;파드 통신 확인&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;1. AWS VPC CNI 개요&lt;/h2&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;CNI란?&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;CNI(Container Network Interface)는 CNCF(Cloud Native Computing Foundation)의 프로젝트로 &lt;b&gt;Specification&lt;/b&gt;과 리눅스 컨테이너의 네트워크 인터페이스를 구성하기 위한 &lt;b&gt;plugin을 작성하기 위한 라이브러리&lt;/b&gt;로 구성됩니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;CNI는 컨테이너의 네트워크 연결성과 컨테이너가 삭제되었을 때 할당된 리소스를 제거하는 역할에 집중합니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;참고: &lt;a href=&quot;https://github.com/containernetworking/cni&quot;&gt;https://github.com/containernetworking/cni&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;보통 Kubernetes에 어떤 CNI를 쓰느냐라고 얘기를 하면 의미가 통하기는 하지만, 실제로 calico, cilium, flannel 등은 CNI plugin이라고 할 수 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Kubernetes 에서 CNI Plugin의 동작은 간략히 아래와 같이 이뤄집니다.&lt;/p&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;Kubelet이 Container Runtime에 컨테이너 생성을 요청&lt;/li&gt;
&lt;li&gt;Container Runtime이 컨테이너의 Network Namespace를 생성&lt;/li&gt;
&lt;li&gt;Container Runtime이 CNI 설정과 환경변수를 표준 입력으로 CNI Plugin 호출&lt;/li&gt;
&lt;li&gt;CNI Plugin이 컨테이너의 네트워크 인터페이스를 구성하고, IP를 할당하고, 호스트 네트워크 간의 veth pair를 생성&lt;/li&gt;
&lt;li&gt;CNI Plugin이 호스트 네트워크 네임스페이스와 컨테이너 네트워크 네임스페이스에 라우팅을 구성&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;관련하여 CNI plugin이 실제로 어떤 방식으로 호출되고 동작하는지에 대해서 재미난 실습이 있어 별도의 포스트를 작성하였으니 아래를 확인해보시기 바랍니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://a-person.tistory.com/32&quot; target=&quot;_blank&quot; rel=&quot;noopener&amp;nbsp;noreferrer&quot;&gt;https://a-person.tistory.com/32&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;AWS VPC CNI 소개&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;오픈소스 CNI plugin들은 파드 네트워크를 VxLAN이나 IPIP와 같은 프로토콜을 이용해 overlay 네트워크로 구성하는 경우가 많습니다. 이것은 노드가 위치한 네트워크가 물리적으로 구성되어 있거나 하는 이유로 CNI가 직접 IP를 관리하기 어려운 이유도 있을 수 있고, 한편으로는 어떤 네트워크 환경에서든 유연하게 사용 가능하게 하기 위한 목적일 수도 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;반면 AWS VPC CNI는 AWS 환경에 최적화되어 VPC 네트워크와 동일한 대역의 IP 주소를 파드에 할당합니다. 또 AWS VPC CNI는 eBPF 기반의 Network Policy를 기본으로 제공하고 있습니다. (앞서 노드에 배포된 파드를 확인하면서 aws-node의 컨테이너가 각각 VPC CNI, Network Policy Agent인 것을 확인했습니다)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;AWS VPC CNI의 컴포넌트는 CNI 바이너리와 ipamd로 이뤄져 있습니다.&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;CNI binary: 파드 네트워크를 구성&lt;/li&gt;
&lt;li&gt;ipamd: IP Address Management(IPAM) 데몬으로, ENI를 관리하고 사용가능한 IP 혹은 prefix를 warm-pool로 관리합니다.&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;앞서 실습을 통해 EKS를 생성한 이후 노드에 ENI가 2개 생성된 것을 확인했는데, 이것은 VPC CNI의 동작과도 관련이 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;먼저 인스턴스가 생성되면 EC2는 첫 번째 ENI를 생성해 연결하고, VPC CNI가 hostNetwork 모드로 Primary IP를 사용해 실행됩니다. 노드가 프로비저닝되면 CNI 플러그인이 Primary ENI에서 IP 혹은 prefix의 slot을 가져와 IP Pool에 할당합니다. 이러한 IP Pool을 warm pool이라고 하며, warm pool의 사이즈는 노드의 인스턴스 타입에 따라 결정됩니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;각 ENI는 인스턴스 타입에 따라 제한된 slot 개수를 지원하기 때문에, 추가 slot이 필요하면 CNI는 인스턴스에 ENI를 추가로 할당하도록 요청합니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;요약하면, 인스턴스 타입에 따라 IP Pool의 크기가 정해져 있고, slot 단위로 IP를 warm pool에 저장해 두었다가 IP가 필요하면 warm pool에서 파드에 할당합니다. 인스턴스 타입에 따라 ENI가 가질 수 있는 IP 개수도 제한되기 때문에, warm pool이 증가할 때 추가 ENI가 필요할 수 있습니다.&lt;/p&gt;
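&lt;p data-ke-size=&quot;size16&quot;&gt;warm pool의 크기는 aws-node DaemonSet(VPC CNI)의 환경 변수로 조정할 수 있습니다. 아래는 설정 위치를 보여주기 위한 스케치로, 값은 예시입니다.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# aws-node DaemonSet의 컨테이너 env 일부 (예시 값)
env:
  - name: WARM_ENI_TARGET    # 여유분으로 유지할 ENI 개수
    value: &quot;1&quot;
  - name: WARM_IP_TARGET     # 여유분으로 유지할 IP 개수 (설정 시 WARM_ENI_TARGET보다 우선)
    value: &quot;5&quot;
  - name: MINIMUM_IP_TARGET  # 노드당 최소로 확보할 IP 개수
    value: &quot;10&quot;&lt;/code&gt;&lt;/pre&gt;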
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;아래 VPC CNI 문서의 그림을 살펴보면 flow를 확인할 수 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;새 slot이 필요할 때, Primary ENI에 여유 slot이 있으면 파드에 이 slot을 할당합니다. Primary ENI에 여유 slot이 없으면, Secondary ENI가 있는지 확인하고, 다시 Secondary ENI에서 여유 slot을 확인해 파드에 slot을 할당합니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Secondary ENI가 없는 경우, 인스턴스에 새로운 ENI 가용을 확인하여, 가능하다면 새로운 ENI를 할당합니다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;4084&quot; data-origin-height=&quot;1528&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/x85vd/btsMjTmZsRR/RQKgOwc2PhUKSxPRWWwWF1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/x85vd/btsMjTmZsRR/RQKgOwc2PhUKSxPRWWwWF1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/x85vd/btsMjTmZsRR/RQKgOwc2PhUKSxPRWWwWF1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fx85vd%2FbtsMjTmZsRR%2FRQKgOwc2PhUKSxPRWWwWF1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;4084&quot; height=&quot;1528&quot; data-origin-width=&quot;4084&quot; data-origin-height=&quot;1528&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;참고: &lt;a href=&quot;https://docs.aws.amazon.com/eks/latest/best-practices/vpc-cni.html&quot;&gt;https://docs.aws.amazon.com/eks/latest/best-practices/vpc-cni.html&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이러한 절차에서 복잡한 느낌을 지울 수 없습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Similar to AWS VPC CNI, Azure CNI on AKS takes pod IPs from the Virtual Network (the equivalent of the VPC in EKS). However, there is no limit on the number of IPs a single NIC can hold, so Azure CNI simply assigns as many secondary IPs as the configured max-pods value.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Of course, because VPC CNI allocates slots dynamically, it has the advantage of not statically claiming IPs it does not need.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Note that AKS offers a variety of CNI plugins.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;First, similar to VPC CNI, there is Azure CNI, which uses IPs from the Virtual Network, and Azure CNI with dynamic IP allocation, which assigns pod IPs dynamically. There are also kubenet and Azure CNI Overlay, which use an overlay network.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;With Azure CNI Overlay and Azure CNI with dynamic IP allocation, you can additionally use Azure CNI Powered by Cilium, which offloads the dataplane to Cilium.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: &lt;a href=&quot;https://learn.microsoft.com/en-us/azure/aks/concepts-network-cni-overview#choosing-a-cni&quot;&gt;https://learn.microsoft.com/en-us/azure/aks/concepts-network-cni-overview#choosing-a-cni&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The diagram below shows how VPC CNI assigns an IP to a pod.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;3244&quot; data-origin-height=&quot;2336&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bpjP61/btsMkzhgqV8/wFdJKAJOFd6nkD2Wg1k4fk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bpjP61/btsMkzhgqV8/wFdJKAJOFd6nkD2Wg1k4fk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bpjP61/btsMkzhgqV8/wFdJKAJOFd6nkD2Wg1k4fk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbpjP61%2FbtsMkzhgqV8%2FwFdJKAJOFd6nkD2Wg1k4fk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;3244&quot; height=&quot;2336&quot; data-origin-width=&quot;3244&quot; data-origin-height=&quot;2336&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://docs.aws.amazon.com/eks/latest/best-practices/vpc-cni.html&quot;&gt;https://docs.aws.amazon.com/eks/latest/best-practices/vpc-cni.html&lt;/a&gt;&lt;/p&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;A pod is scheduled.&lt;/li&gt;
&lt;li&gt;kubelet sends an ADD request to the VPC CNI.&lt;/li&gt;
&lt;li&gt;The VPC CNI requests a pod IP from L-IPAM.&lt;/li&gt;
&lt;li&gt;L-IPAM returns a pod IP.&lt;/li&gt;
&lt;li&gt;The VPC CNI sets up the network namespace.&lt;/li&gt;
&lt;li&gt;The VPC CNI returns the pod IP to kubelet.&lt;/li&gt;
&lt;li&gt;kubelet assigns the IP to the pod.&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This flow is actually described a little differently from how CNI plugins generally work. kubelet assigning the IP? Perhaps it was simplified for readability, but it is not strictly accurate.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;It is best to take this diagram simply as a way to understand the components of VPC CNI and the overall flow.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The document additionally notes that L-IPAM updates iptables NAT rules. Since L-IPAM knows the VPC IP ranges, I understand this as updating SNAT policies for traffic destined outside the VPC.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;2. Limits on Pod Count&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;AWS defines, per instance type, a maximum number of ENIs and a maximum number of IPs per ENI.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;You can check these values for the t3 family with the following command.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;$ aws ec2 describe-instance-types --filters Name=instance-type,Values=t3.\* \
 --query &quot;InstanceTypes[].{Type: InstanceType, MaxENI: NetworkInfo.MaximumNetworkInterfaces, IPv4addr: NetworkInfo.Ipv4AddressesPerInterface}&quot; \
 --output table
 --------------------------------------
|        DescribeInstanceTypes       |
+----------+----------+--------------+
| IPv4addr | MaxENI   |    Type      |
+----------+----------+--------------+
|  15      |  4       |  t3.2xlarge  |
|  6       |  3       |  t3.medium   |
|  12      |  3       |  t3.large    |
|  15      |  4       |  t3.xlarge   |
|  2       |  2       |  t3.nano     |
|  2       |  2       |  t3.micro    |
|  4       |  3       |  t3.small    |
+----------+----------+--------------+&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For example, a t3.medium can attach up to 3 ENIs, and each ENI can hold 6 IPs.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;That gives 3*6=18 IPs in total, but the primary IP assigned to each ENI cannot be used by pods. Also, since the VPC CNI and kube-proxy pods use hostNetwork, they do not need pod IPs.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In conclusion, the maximum number of pods on an EKS node is calculated from the instance type with the following formula.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;(Number of network interfaces for the instance type * (the number of IP addresses per network interface - 1)) + 2&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;That is, a t3.medium can run 3*(6-1)+2 = 17 pods.&lt;/p&gt;
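&lt;p data-ke-size=&quot;size16&quot;&gt;As a quick check, the formula can be written as a few lines of Python, using the ENI/IP values from the table above:&lt;/p&gt;

```python
# Max pods per EKS node:
#   ENIs * (IPs per ENI - 1 for each ENI's primary IP) + 2
# The +2 covers aws-node and kube-proxy, which use hostNetwork and need no pod IP.
def max_pods(max_eni: int, ipv4_per_eni: int) -> int:
    return max_eni * (ipv4_per_eni - 1) + 2

print(max_pods(3, 6))   # t3.medium  -> 17
print(max_pods(4, 15))  # t3.2xlarge -> 58
```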
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;On top of this per-type calculation there is also a recommended ceiling: at most 110 pods on EC2 instance types with fewer than 30 vCPUs, and at most 250 on types with 30 or more vCPUs.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The ENI and pod-count limits discussed in this VPC CNI section will be tested later when we inspect the node information.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For reference, in AKS the per-node pod limit is 250 regardless of network interfaces, and it can be set with the --max-pods option. When --max-pods is not specified, the default differs somewhat per CNI, so check the documentation.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://learn.microsoft.com/en-us/azure/aks/quotas-skus-regions#service-quotas-and-limits&quot;&gt;https://learn.microsoft.com/en-us/azure/aks/quotas-skus-regions#service-quotas-and-limits&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;EKS has the peculiar problem that the maximum pod count is constrained by the instance type and its ENI limits. To mitigate this, it provides mechanisms such as Prefix Delegation (&lt;a href=&quot;https://www.eksworkshop.com/docs/networking/vpc-cni/prefix/&quot;&gt;https://www.eksworkshop.com/docs/networking/vpc-cni/prefix/&lt;/a&gt;), which assigns an IP prefix to an ENI.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Looking at the concept behind AWS VPC CNI, it shares similar problems with Azure CNI, along with similar approaches to solving them.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;The first problem&lt;/b&gt; is exhaustion of the VPC's own IP space when many pods/nodes are created.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Fundamentally, the cluster must be created with a larger VPC. Both EKS and AKS likewise support adding address space to the VPC or Virtual Network.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A difference is that AKS additionally lets you sidestep this constraint by choosing a CNI that uses an overlay network (kubenet or Azure CNI Overlay).&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;The second problem&lt;/b&gt; is waste from IPs that are pre-allocated but unused. If such warm pools exist on many nodes, a significant number of IPs can be wasted.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In AWS VPC CNI you can minimize the number of allocated IPs by changing the warm pool settings, adjusting values such as WARM_IP_TARGET and MINIMUM_IP_TARGET.&lt;/p&gt;
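&lt;p data-ke-size=&quot;size16&quot;&gt;Roughly speaking, WARM_IP_TARGET keeps a fixed number of spare IPs on top of what pods are using, while MINIMUM_IP_TARGET sets a floor for the pool. The sketch below models only this target arithmetic as I understand it from the docs; the real ipamd also allocates in per-ENI batches.&lt;/p&gt;

```python
# Simplified model of how WARM_IP_TARGET / MINIMUM_IP_TARGET shape the IP pool.
# Real ipamd also batches allocations per ENI; only the target math is shown here.
def desired_pool_size(in_use: int, warm_ip_target: int, minimum_ip_target: int) -> int:
    # keep warm_ip_target spare IPs, but never shrink the pool below minimum_ip_target
    return max(in_use + warm_ip_target, minimum_ip_target)

print(desired_pool_size(in_use=3, warm_ip_target=2, minimum_ip_target=10))   # 10
print(desired_pool_size(in_use=12, warm_ip_target=2, minimum_ip_target=10))  # 14
```

&lt;p data-ke-size=&quot;size16&quot;&gt;With a low WARM_IP_TARGET, a lightly loaded node holds far fewer unused IPs than the default behavior of warming an entire ENI's worth of slots.&lt;/p&gt;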
&lt;p data-ke-size=&quot;size16&quot;&gt;In AKS, Azure CNI with dynamic pod IP allocation lets you designate the subnet pods use. Unlike plain Azure CNI, it allocates a batch of IPs at a time and allocates another batch once usage exceeds a configured percentage. It feels like a mix of VPC CNI's warm pool and Custom Networking (&lt;a href=&quot;https://www.eksworkshop.com/docs/networking/vpc-cni/custom-networking/&quot;&gt;https://www.eksworkshop.com/docs/networking/vpc-cni/custom-networking/&lt;/a&gt;).&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;3. Deploying the Lab Environment&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;First, build the basic lab environment by following the steps below.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The CloudFormation deployment below takes a key pair as a parameter, so check in advance that you have one and create it if not (EC2 &amp;gt; Key pairs &amp;gt; Create key pair, .pem format).&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# Download the YAML file
curl -O https://s3.ap-northeast-2.amazonaws.com/cloudformation.cloudneta.net/K8S/myeks-2week.yaml

# Deploy
# aws cloudformation deploy --template-file myeks-2week.yaml --stack-name myeks --parameter-overrides KeyName=&amp;lt;My SSH Keyname&amp;gt; SgIngressSshCidr=&amp;lt;My Home Public IP Address&amp;gt;/32 --region &amp;lt;region&amp;gt;
Example) aws cloudformation deploy --template-file ./myeks-2week.yaml \
     --stack-name myeks --parameter-overrides KeyName=ekskey SgIngressSshCidr=$(curl -s ipinfo.io/ip)/32 --region ap-northeast-2

# After the CloudFormation stack completes, get the operator EC2 IP
ec2ip=$(aws cloudformation describe-stacks --stack-name myeks --query 'Stacks[*].Outputs[*].OutputValue' --output text)

# SSH into the operator EC2
ssh -i &amp;lt;ssh key file&amp;gt; ec2-user@$ec2ip&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The lab environment deployed by CloudFormation looks like this.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;2278&quot; data-origin-height=&quot;730&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bGFNjp/btsMjgwsa81/jhrkcR9LmhrIFo5BW6tn2K/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bGFNjp/btsMjgwsa81/jhrkcR9LmhrIFo5BW6tn2K/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bGFNjp/btsMjgwsa81/jhrkcR9LmhrIFo5BW6tn2K/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbGFNjp%2FbtsMjgwsa81%2FjhrkcR9LmhrIFo5BW6tn2K%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2278&quot; height=&quot;730&quot; data-origin-width=&quot;2278&quot; data-origin-height=&quot;730&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;It creates myeks-vpc and operator-vpc and connects them with VPC peering.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In operator-vpc, CloudFormation has already deployed a server named operator-host.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In myeks-vpc, a public and a private subnet were created in each availability zone, and EKS will be deployed into the public subnets.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now let's deploy EKS.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# Set environment variables
export CLUSTER_NAME=myeks

# Look up the myeks VPC/subnet IDs and set variables
export VPCID=$(aws ec2 describe-vpcs --filters &quot;Name=tag:Name,Values=$CLUSTER_NAME-VPC&quot; --query 'Vpcs[*].VpcId' --output text)
echo $VPCID

export PubSubnet1=$(aws ec2 describe-subnets --filters Name=tag:Name,Values=&quot;$CLUSTER_NAME-Vpc1PublicSubnet1&quot; --query &quot;Subnets[0].[SubnetId]&quot; --output text)
export PubSubnet2=$(aws ec2 describe-subnets --filters Name=tag:Name,Values=&quot;$CLUSTER_NAME-Vpc1PublicSubnet2&quot; --query &quot;Subnets[0].[SubnetId]&quot; --output text)
export PubSubnet3=$(aws ec2 describe-subnets --filters Name=tag:Name,Values=&quot;$CLUSTER_NAME-Vpc1PublicSubnet3&quot; --query &quot;Subnets[0].[SubnetId]&quot; --output text)
echo $PubSubnet1 $PubSubnet2 $PubSubnet3

# Generate a dry-run manifest; referring to the output above, fix the vpc/subnet IDs and SSH key in the YAML
eksctl create cluster --name $CLUSTER_NAME --region=ap-northeast-2 --nodegroup-name=ng1 --node-type=t3.medium --nodes 3 --node-volume-size=30 --vpc-public-subnets &quot;$PubSubnet1&quot;,&quot;$PubSubnet2&quot;,&quot;$PubSubnet3&quot; --version 1.31 --with-oidc --external-dns-access --full-ecr-access --alb-ingress-access --node-ami-family AmazonLinux2023 --ssh-access --dry-run &amp;gt; myeks.yaml&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Next, open the generated &lt;code&gt;myeks.yaml&lt;/code&gt;, check the lines commented as needing your own environment values, and update any that differ.&lt;/p&gt;
&lt;pre class=&quot;bash&quot; data-ke-language=&quot;bash&quot;&gt;&lt;code&gt;apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: myeks
  region: ap-northeast-2
  version: &quot;1.31&quot;

kubernetesNetworkConfig:
  ipFamily: IPv4

iam:
  vpcResourceControllerPolicy: true
  withOIDC: true

accessConfig:
  authenticationMode: API_AND_CONFIG_MAP

vpc:
  autoAllocateIPv6: false
  cidr: 192.168.0.0/16
  clusterEndpoints:
    privateAccess: true # if you only want to allow private access to the cluster
    publicAccess: true # if you want to allow public access to the cluster
  id: vpc-0ab40d2acbda845d8  # update with your own value
  manageSharedNodeSecurityGroupRules: true # if you want to manage the rules of the shared node security group
  nat:
    gateway: Disable
  subnets:
    public:
      ap-northeast-2a:
        az: ap-northeast-2a
        cidr: 192.168.1.0/24
        id: subnet-014dc12ab7042f604  # update with your own value
      ap-northeast-2b:
        az: ap-northeast-2b
        cidr: 192.168.2.0/24
        id: subnet-01ba554d3b16a15a7  # update with your own value
      ap-northeast-2c:
        az: ap-northeast-2c
        cidr: 192.168.3.0/24
        id: subnet-0868f7093cbb17c34  # update with your own value

addons:
  - name: vpc-cni # no version is specified so it deploys the default version
    version: latest # auto discovers the latest available
    attachPolicyARNs: # attach IAM policies to the add-on's service account
      - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
    configurationValues: |-
      enableNetworkPolicy: &quot;true&quot;

  - name: kube-proxy
    version: latest

  - name: coredns
    version: latest

  - name: metrics-server
    version: latest

privateCluster:
  enabled: false
  skipEndpointCreation: false

managedNodeGroups:
- amiFamily: AmazonLinux2023
  desiredCapacity: 3
  disableIMDSv1: true
  disablePodIMDS: false
  iam:
    withAddonPolicies:
      albIngress: false # Disable ALB Ingress Controller
      appMesh: false
      appMeshPreview: false
      autoScaler: false
      awsLoadBalancerController: true # Enable AWS Load Balancer Controller
      certManager: true # Enable cert-manager
      cloudWatch: false
      ebs: false
      efs: false
      externalDNS: true # Enable ExternalDNS
      fsx: false
      imageBuilder: true
      xRay: false
  instanceSelector: {}
  instanceType: t3.medium
  preBootstrapCommands:
    # install additional packages
    - &quot;dnf install nvme-cli links tree tcpdump sysstat ipvsadm ipset bind-utils htop -y&quot;
    # disable hyperthreading
    - &quot;for n in $(cat /sys/devices/system/cpu/cpu*/topology/thread_siblings_list | cut -s -d, -f2- | tr ',' '\n' | sort -un); do echo 0 &amp;gt; /sys/devices/system/cpu/cpu${n}/online; done&quot;
  labels:
    alpha.eksctl.io/cluster-name: myeks
    alpha.eksctl.io/nodegroup-name: ng1
  maxSize: 3
  minSize: 3
  name: ng1
  privateNetworking: false
  releaseVersion: &quot;&quot;
  securityGroups:
    withLocal: null
    withShared: null
  ssh:
    allow: true
    publicKeyName: mykeyname  # update with your own value
  tags:
    alpha.eksctl.io/nodegroup-name: ng1
    alpha.eksctl.io/nodegroup-type: managed
  volumeIOPS: 3000
  volumeSize: 30
  volumeThroughput: 125
  volumeType: gp3&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Finally, deploy EKS using the generated &lt;code&gt;myeks.yaml&lt;/code&gt;.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# Set the kubeconfig file path
export KUBECONFIG=$HOME/.kube/config

# Deploy
eksctl create cluster -f myeks.yaml --verbose 4&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Once creation completes, check the cluster information.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# Check the cluster
$ kubectl cluster-info
Kubernetes control plane is running at https://F141CFF9E7E8776AF6826A7D1341FBEA.yl4.ap-northeast-2.eks.amazonaws.com
CoreDNS is running at https://F141CFF9E7E8776AF6826A7D1341FBEA.yl4.ap-northeast-2.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
$ eksctl get cluster
NAME    REGION          EKSCTL CREATED
myeks   ap-northeast-2  True

# Check the nodes
$ kubectl get node --label-columns=node.kubernetes.io/instance-type,eks.amazonaws.com/capacityType,topology.kubernetes.io/zone
NAME                                               STATUS   ROLES    AGE     VERSION               INSTANCE-TYPE   CAPACITYTYPE   ZONE
ip-192-168-1-203.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   2m37s   v1.31.4-eks-aeac579   t3.medium       ON_DEMAND      ap-northeast-2a
ip-192-168-2-77.ap-northeast-2.compute.internal    Ready    &amp;lt;none&amp;gt;   2m35s   v1.31.4-eks-aeac579   t3.medium       ON_DEMAND      ap-northeast-2b
ip-192-168-3-141.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   2m35s   v1.31.4-eks-aeac579   t3.medium       ON_DEMAND      ap-northeast-2c

# Check the created EC2 instances
$ aws ec2 describe-instances --query &quot;Reservations[*].Instances[*].{InstanceID:InstanceId, PublicIPAdd:PublicIpAddress, PrivateIPAdd:PrivateIpAddress, InstanceName:Tags[?Key=='Name']|[0].Value, Status:State.Name}&quot; --filters Name=instance-state-name,Values=running --output table
-----------------------------------------------------------------------------------------
|                                   DescribeInstances                                   |
+----------------------+-----------------+----------------+-----------------+-----------+
|      InstanceID      |  InstanceName   | PrivateIPAdd   |   PublicIPAdd   |  Status   |
+----------------------+-----------------+----------------+-----------------+-----------+
|  i-095eb2faccd652e06 |  myeks-ng1-Node |  192.168.3.141 |  43.201.9.71    |  running  |
|  i-035d1900144a7ba5f |  operator-host  |  172.20.1.100  |  3.34.186.167   |  running  |
|  i-0b35dccaffe41de41 |  myeks-ng1-Node |  192.168.1.203 |  43.202.40.202  |  running  |
|  i-0543c6996984a0290 |  myeks-ng1-Node |  192.168.2.77  |  15.164.237.196 |  running  |
+----------------------+-----------------+----------------+-----------------+-----------+

# Check the managed node group
$ eksctl get nodegroup --cluster $CLUSTER_NAME
CLUSTER NODEGROUP       STATUS  CREATED                 MIN SIZE        MAX SIZE        DESIRED CAPACITY        INSTANCE TYPE   IMAGE ID                ASG NAMETYPE
myeks   ng1             ACTIVE  2025-02-14T13:28:22Z    3               3               3                       t3.medium       AL2023_x86_64_STANDARD  eks-ng1-7cca8252-ac30-3af4-6c5e-a3b2d91981c3     managed&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In AWS, a subnet is a resource placed in a specific availability zone. In Azure, a subnet is a regional resource that spans availability zones. For this reason, when an EKS cluster is created with one subnet per availability zone, the nodes are automatically spread across those zones.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In AKS, specifying a subnet does not imply an availability zone, so zones must be specified separately at creation time. Below is the portal screen for selecting availability zones while creating an AKS cluster.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;890&quot; data-origin-height=&quot;309&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bJYjcz/btsMllCAMiN/XSPrXybZMOPBdBXJNLeKxK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bJYjcz/btsMllCAMiN/XSPrXybZMOPBdBXJNLeKxK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bJYjcz/btsMllCAMiN/XSPrXybZMOPBdBXJNLeKxK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbJYjcz%2FbtsMllCAMiN%2FXSPrXybZMOPBdBXJNLeKxK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;890&quot; height=&quot;309&quot; data-origin-width=&quot;890&quot; data-origin-height=&quot;309&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;You can refer to the AKS availability zones documentation below.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://learn.microsoft.com/en-us/azure/aks/availability-zones-overview#zone-spanning&quot;&gt;https://learn.microsoft.com/en-us/azure/aks/availability-zones-overview#zone-spanning&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's check the pods as well.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# Check the pods
$ kubectl get pod -A
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   aws-node-5skww                    2/2     Running   0          13m
kube-system   aws-node-qmmn8                    2/2     Running   0          13m
kube-system   aws-node-x5rxg                    2/2     Running   0          13m
kube-system   coredns-9b5bc9468-8prkc           1/1     Running   0          18m
kube-system   coredns-9b5bc9468-qp9mj           1/1     Running   0          18m
kube-system   kube-proxy-d7l2t                  1/1     Running   0          13m
kube-system   kube-proxy-fp99v                  1/1     Running   0          13m
kube-system   kube-proxy-sxq5k                  1/1     Running   0          13m
kube-system   metrics-server-86bbfd75bb-smjpb   1/1     Running   0          18m
kube-system   metrics-server-86bbfd75bb-xm8ds   1/1     Running   0          18m

$ kubectl get ds -A
NAMESPACE     NAME         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
kube-system   aws-node     3         3         3       3            3           &amp;lt;none&amp;gt;          18m
kube-system   kube-proxy   3         3         3       3            3           &amp;lt;none&amp;gt;          18m

# Check the EKS add-ons
$ eksctl get addon --cluster $CLUSTER_NAME
2025-02-14 22:43:26 [ℹ]  Kubernetes version &quot;1.31&quot; in use by cluster &quot;myeks&quot;
2025-02-14 22:43:26 [ℹ]  getting all addons
2025-02-14 22:43:27 [ℹ]  to see issues for an addon run `eksctl get addon --name &amp;lt;addon-name&amp;gt; --cluster &amp;lt;cluster-name&amp;gt;`
NAME            VERSION                 STATUS  ISSUES  IAMROLE                                                                         UPDATE AVAILABLE        CONFIGURATION VALUES     POD IDENTITY ASSOCIATION ROLES
coredns         v1.11.3-eksbuild.1      ACTIVE  0                                                                                       v1.11.4-eksbuild.2,v1.11.4-eksbuild.1,v1.11.3-eksbuild.2
kube-proxy      v1.31.2-eksbuild.3      ACTIVE  0                                                                                       v1.31.3-eksbuild.2
metrics-server  v0.7.2-eksbuild.1       ACTIVE  0
vpc-cni         v1.19.0-eksbuild.1      ACTIVE  0       arn:aws:iam::430118812536:role/eksctl-myeks-addon-vpc-cni-Role1-fdAOfLzN8tNl    v1.19.2-eksbuild.5,v1.19.2-eksbuild.1
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As described earlier, &lt;code&gt;aws-node&lt;/code&gt; runs as a DaemonSet on each EKS node and is a multi-container pod made up of the VPC CNI and the Network Policy Agent.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In Azure CNI, azure-vnet and azure-vnet-ipam are implemented as binaries and no separate pods are created. However, CNI variants such as Azure CNI Overlay use the azure-cns DaemonSet as their IPAM.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Meanwhile, EKS's VPC CNI applies NAT rules to iptables via L-IPAM.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;AKS's Azure CNI also creates NAT rules for networks outside the Virtual Network, with the difference that it uses an ip-masq-agent DaemonSet to do so.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# The Virtual Network range is defined under nonMasqueradeCIDRs
$ kubectl get cm azure-ip-masq-agent-config-reconciled -n kube-system -oyaml
apiVersion: v1
data:
  ip-masq-agent-reconciled: |-
    nonMasqueradeCIDRs:
      - 10.244.0.0/16
    masqLinkLocal: true
kind: ConfigMap
...
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    component: ip-masq-agent
    kubernetes.io/cluster-service: &quot;true&quot;
  name: azure-ip-masq-agent-config-reconciled
  namespace: kube-system

# iptables rules on the node
root@aks-nodepool1-76251328-vmss000000:/# iptables -t nat -S |grep ip-masq-agent
-A POSTROUTING -m comment --comment &quot;\&quot;ip-masq-agent: ensure nat POSTROUTING directs all non-LOCAL destination traffic to our custom IP-MASQ-AGENT chain\&quot;&quot; -m addrtype ! --dst-type LOCAL -j IP-MASQ-AGENT
-A IP-MASQ-AGENT -d 10.244.0.0/16 -m comment --comment &quot;ip-masq-agent: local traffic is not subject to MASQUERADE&quot; -j RETURN
-A IP-MASQ-AGENT -m comment --comment &quot;ip-masq-agent: outbound traffic is subject to MASQUERADE (must be last in chain)&quot; -j MASQUERADE
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;4. Inspecting Node Information&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To verify the VPC CNI characteristics covered above, let's inspect the node information.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;First, look at the nodes in the web console. They were created as t3.medium.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1690&quot; data-origin-height=&quot;579&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/dFvmdS/btsMkdyL1Nz/ixHEVGP2pNSwqB4BdUHl71/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/dFvmdS/btsMkdyL1Nz/ixHEVGP2pNSwqB4BdUHl71/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/dFvmdS/btsMkdyL1Nz/ixHEVGP2pNSwqB4BdUHl71/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FdFvmdS%2FbtsMkdyL1Nz%2FixHEVGP2pNSwqB4BdUHl71%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1690&quot; height=&quot;579&quot; data-origin-width=&quot;1690&quot; data-origin-height=&quot;579&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;On the Networking tab, you can see the 2 private IPs corresponding to the 2 ENIs, along with 10 secondary IPs.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1549&quot; data-origin-height=&quot;623&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bqftXc/btsMkczR5WM/dfJGI0YCqJMZsfkIiMndrk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bqftXc/btsMkczR5WM/dfJGI0YCqJMZsfkIiMndrk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bqftXc/btsMkczR5WM/dfJGI0YCqJMZsfkIiMndrk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbqftXc%2FbtsMkczR5WM%2FdfJGI0YCqJMZsfkIiMndrk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1549&quot; height=&quot;623&quot; data-origin-width=&quot;1549&quot; data-origin-height=&quot;623&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Below, you can confirm that 2 ENIs are attached.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1657&quot; data-origin-height=&quot;591&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bFQv93/btsMljLxYUF/gpNwcehKD4HYpBYKaRKqek/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bFQv93/btsMljLxYUF/gpNwcehKD4HYpBYKaRKqek/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bFQv93/btsMljLxYUF/gpNwcehKD4HYpBYKaRKqek/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbFQv93%2FbtsMljLxYUF%2FgpNwcehKD4HYpBYKaRKqek%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1657&quot; height=&quot;591&quot; data-origin-width=&quot;1657&quot; data-origin-height=&quot;591&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Checking the IPs assigned on this node, 3 of the secondary IPs are in use, excluding the hostNetwork pods.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;$ kubectl get po -A -owide |grep ip-192-168-3-141.ap-northeast-2.compute.internal
kube-system   aws-node-x5rxg                    2/2     Running   0          70m   192.168.3.141   ip-192-168-3-141.ap-northeast-2.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   coredns-9b5bc9468-8prkc           1/1     Running   0          75m   192.168.3.45    ip-192-168-3-141.ap-northeast-2.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   coredns-9b5bc9468-qp9mj           1/1     Running   0          75m   192.168.3.48    ip-192-168-3-141.ap-northeast-2.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   kube-proxy-sxq5k                  1/1     Running   0          70m   192.168.3.141   ip-192-168-3-141.ap-northeast-2.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   metrics-server-86bbfd75bb-smjpb   1/1     Running   0          75m   192.168.3.69    ip-192-168-3-141.ap-northeast-2.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;On the node itself, of course, only the IPs of ens5 and ens6, corresponding to the two ENIs, are visible.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;[root@ip-192-168-3-141 /]# ip -br -c a
lo               UNKNOWN        127.0.0.1/8 ::1/128
ens5             UP             192.168.3.141/24 metric 1024 fe80::8ef:19ff:fe5b:1dd3/64
eni9a92535a0a9@if3 UP             fe80::b428:33ff:feaa:6633/64
enib9bf070cdb3@if3 UP             fe80::b458:f9ff:fe9a:1a0b/64
eni54524c4f9d9@if3 UP             fe80::e8a1:acff:fe6f:6a57/64
enid0ded64c01f@if3 UP             fe80::9445:9ff:feaf:ad50/64
ens6             UP             192.168.3.172/24 fe80::832:3eff:fe60:eec5/64
eni98b6af8f1d6@if3 UP             fe80::54a1:b9ff:fe80:c915/64&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Earlier we saw that the maximum number of pods on an EKS node is calculated per instance type with the formula below; on a t3.medium, 3*(6-1)+2 = 17 pods can run.&lt;/p&gt;
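&lt;p data-ke-size=&quot;size16&quot;&gt;As a quick sanity check, the formula can be written as a small Python helper (a minimal sketch; the 3 ENIs and 6 IPv4 addresses per ENI are the t3.medium limits referenced above):&lt;/p&gt;

```python
# Max pods per EKS node in secondary-IP mode:
#   max_pods = num_enis * (ips_per_eni - 1) + 2
# Each ENI's primary IP is reserved for the node itself, and the "+2"
# covers host-network pods such as aws-node and kube-proxy.
def max_pods(num_enis, ips_per_eni):
    return num_enis * (ips_per_eni - 1) + 2

# t3.medium: 3 ENIs, 6 IPv4 addresses per ENI
print(max_pods(3, 6))  # 17
```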
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To test this, deploy a test Deployment.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;$ cat &amp;lt;&amp;lt;EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: netshoot-pod
spec:
  replicas: 3
  selector:
    matchLabels:
      app: netshoot-pod
  template:
    metadata:
      labels:
        app: netshoot-pod
    spec:
      containers:
      - name: netshoot-pod
        image: nicolaka/netshoot
        command: [&quot;tail&quot;]
        args: [&quot;-f&quot;, &quot;/dev/null&quot;]
      terminationGracePeriodSeconds: 0
EOF&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's increase the replicas as shown below and see what actually changes.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# Check pods
$ kubectl get pod -o wide
NAME                            READY   STATUS    RESTARTS   AGE     IP              NODE                                               NOMINATED NODE   READINESS GATES
netshoot-pod-744bd84b46-g7hzp   1/1     Running   0          6m18s   192.168.3.223   ip-192-168-3-141.ap-northeast-2.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
netshoot-pod-744bd84b46-llqjz   1/1     Running   0          6m18s   192.168.1.220   ip-192-168-1-203.ap-northeast-2.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
netshoot-pod-744bd84b46-rwblp   1/1     Running   0          6m18s   192.168.2.140   ip-192-168-2-77.ap-northeast-2.compute.internal    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
nsenter-vg4pv8                  1/1     Running   0          7m1s    192.168.3.141   ip-192-168-3-141.ap-northeast-2.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;

# Scale up the pods; confirm they start and check eth/eni counts on the worker node
kubectl scale deployment netshoot-pod --replicas=8

# On the node, before scaling
[root@ip-192-168-3-141 /]# ip -br -c a
lo               UNKNOWN        127.0.0.1/8 ::1/128
ens5             UP             192.168.3.141/24 metric 1024 fe80::8ef:19ff:fe5b:1dd3/64
eni9a92535a0a9@if3 UP             fe80::b428:33ff:feaa:6633/64
enib9bf070cdb3@if3 UP             fe80::b458:f9ff:fe9a:1a0b/64
eni54524c4f9d9@if3 UP             fe80::e8a1:acff:fe6f:6a57/64
enid0ded64c01f@if3 UP             fe80::9445:9ff:feaf:ad50/64
ens6             UP             192.168.3.172/24 fe80::832:3eff:fe60:eec5/64
eni98b6af8f1d6@if3 UP             fe80::54a1:b9ff:fe80:c915/64

# On the node, after scaling
[root@ip-192-168-3-141 /]# ip -br -c a
lo               UNKNOWN        127.0.0.1/8 ::1/128
ens5             UP             192.168.3.141/24 metric 1024 fe80::8ef:19ff:fe5b:1dd3/64
eni9a92535a0a9@if3 UP             fe80::b428:33ff:feaa:6633/64
enib9bf070cdb3@if3 UP             fe80::b458:f9ff:fe9a:1a0b/64
eni54524c4f9d9@if3 UP             fe80::e8a1:acff:fe6f:6a57/64
enid0ded64c01f@if3 UP             fe80::9445:9ff:feaf:ad50/64
ens6             UP             192.168.3.172/24 fe80::832:3eff:fe60:eec5/64
eni98b6af8f1d6@if3 UP             fe80::54a1:b9ff:fe80:c915/64
eni9c40e9afed0@if3 UP             fe80::58ea:7dff:fe10:23f/64
ens7             UP             192.168.3.104/24 fe80::883:eff:fe48:b1bf/64&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Once the pod count increased, a new interface, ens7, was created.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The web console also shows the secondary IPs increased to 15.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1559&quot; data-origin-height=&quot;770&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/VSQKR/btsMjR3K5LZ/IOJqShIGRfTBG3U1K7RAzk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/VSQKR/btsMjR3K5LZ/IOJqShIGRfTBG3U1K7RAzk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/VSQKR/btsMjR3K5LZ/IOJqShIGRfTBG3U1K7RAzk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FVSQKR%2FbtsMjR3K5LZ%2FIOJqShIGRfTBG3U1K7RAzk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1559&quot; height=&quot;770&quot; data-origin-width=&quot;1559&quot; data-origin-height=&quot;770&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now let's check whether the non-primary ENIs take part in traffic at all.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;kubectl get po -A -owide |grep ip-192-168-3-141.ap-northeast-2.compute.internal
default       netshoot-pod-744bd84b46-7ggdq     1/1     Running   0          7m10s   192.168.3.83    ip-192-168-3-141.ap-northeast-2.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
default       netshoot-pod-744bd84b46-g7hzp     1/1     Running   0          15m     192.168.3.223   ip-192-168-3-141.ap-northeast-2.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
default       nsenter-vg4pv8                    1/1     Running   0          15m     192.168.3.141   ip-192-168-3-141.ap-northeast-2.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   aws-node-x5rxg                    2/2     Running   0          88m     192.168.3.141   ip-192-168-3-141.ap-northeast-2.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   coredns-9b5bc9468-8prkc           1/1     Running   0          92m     192.168.3.45    ip-192-168-3-141.ap-northeast-2.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   coredns-9b5bc9468-qp9mj           1/1     Running   0          92m     192.168.3.48    ip-192-168-3-141.ap-northeast-2.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   kube-proxy-sxq5k                  1/1     Running   0          88m     192.168.3.141   ip-192-168-3-141.ap-northeast-2.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   metrics-server-86bbfd75bb-smjpb   1/1     Running   0          93m     192.168.3.69    ip-192-168-3-141.ap-northeast-2.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   metrics-server-86bbfd75bb-xm8ds   1/1     Running   0          93m     192.168.3.56    ip-192-168-3-141.ap-northeast-2.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Looking at the ENI details, only 192.168.3.83 is an IP assigned to the second ENI.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1119&quot; data-origin-height=&quot;322&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/CHPL5/btsMkK3WW99/2LZq1jEAFmMjqKxndBxYX1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/CHPL5/btsMkK3WW99/2LZq1jEAFmMjqKxndBxYX1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/CHPL5/btsMkK3WW99/2LZq1jEAFmMjqKxndBxYX1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FCHPL5%2FbtsMkK3WW99%2F2LZq1jEAFmMjqKxndBxYX1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1119&quot; height=&quot;322&quot; data-origin-width=&quot;1119&quot; data-origin-height=&quot;322&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Pinging from that pod and capturing with tcpdump, packets appear only on the Primary ENI.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;(Correction) Re-testing later in the pod communication section, it turns out these ENIs do participate in traffic: &lt;b&gt;external traffic uses the Primary ENI&lt;/b&gt;, while &lt;b&gt;in-VPC traffic uses the Secondary ENI&lt;/b&gt; that holds the pod's IP.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1127&quot; data-origin-height=&quot;999&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/HpigZ/btsMkJxbcSB/q4IK8nfbvlPPV36TK03KOk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/HpigZ/btsMkJxbcSB/q4IK8nfbvlPPV36TK03KOk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/HpigZ/btsMkJxbcSB/q4IK8nfbvlPPV36TK03KOk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FHpigZ%2FbtsMkJxbcSB%2Fq4IK8nfbvlPPV36TK03KOk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1127&quot; height=&quot;999&quot; data-origin-width=&quot;1127&quot; data-origin-height=&quot;999&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This time, let's scale the replicas up to 50.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;$ kubectl scale deployment netshoot-pod --replicas=50
deployment.apps/netshoot-pod scaled

$ kubectl get po |grep -v Running
NAME                            READY   STATUS    RESTARTS   AGE
netshoot-pod-744bd84b46-69r4x   0/1     Pending   0          23s
netshoot-pod-744bd84b46-bhprl   0/1     Pending   0          23s
netshoot-pod-744bd84b46-g6pp4   0/1     Pending   0          24s
netshoot-pod-744bd84b46-gs4dt   0/1     Pending   0          24s
netshoot-pod-744bd84b46-jpxbc   0/1     Pending   0          24s
netshoot-pod-744bd84b46-spdw6   0/1     Pending   0          24s
netshoot-pod-744bd84b46-v56h5   0/1     Pending   0          23s
netshoot-pod-744bd84b46-vhlzn   0/1     Pending   0          24s
netshoot-pod-744bd84b46-vngbr   0/1     Pending   0          23s
netshoot-pod-744bd84b46-wf58t   0/1     Pending   0          24s
$ kubectl get po -A |wc
     62     372    5026
$ kubectl get po |grep -v Running|wc
     11      55     725&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A t3.medium can run 17 pods, and with 3 nodes the cluster is capped at 51 pods in total. As a result, 11 pods end up unschedulable, stuck in Pending.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;5. Verifying Pod Communication&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The EKS VPC CNI assigns VPC IPs directly to pods, so pod-to-pod communication happens without NAT.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Pod-to-Pod Communication Within the Cluster&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Three pods are currently deployed, one per node. Let's test pod-to-pod communication and capture it with tcpdump on the node.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;$ kubectl get po -A -owide
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE    IP              NODE                                               NOMINATED NODE   READINESS GATES
default       netshoot-pod-744bd84b46-8mrkj     1/1     Running   0          15s    192.168.3.31    ip-192-168-3-141.ap-northeast-2.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
default       netshoot-pod-744bd84b46-lgkxf     1/1     Running   0          15s    192.168.2.140   ip-192-168-2-77.ap-northeast-2.compute.internal    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
default       netshoot-pod-744bd84b46-qzq29     1/1     Running   0          15s    192.168.1.126   ip-192-168-1-203.ap-northeast-2.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
..

# Store pod names in variables
PODNAME1=$(kubectl get pod -l app=netshoot-pod -o jsonpath='{.items[0].metadata.name}')
PODNAME2=$(kubectl get pod -l app=netshoot-pod -o jsonpath='{.items[1].metadata.name}')
PODNAME3=$(kubectl get pod -l app=netshoot-pod -o jsonpath='{.items[2].metadata.name}')

# Store pod IPs in variables
PODIP1=$(kubectl get pod -l app=netshoot-pod -o jsonpath='{.items[0].status.podIP}')
PODIP2=$(kubectl get pod -l app=netshoot-pod -o jsonpath='{.items[1].status.podIP}')
PODIP3=$(kubectl get pod -l app=netshoot-pod -o jsonpath='{.items[2].status.podIP}')

echo $PODIP1 $PODIP2 $PODIP3
192.168.3.31 192.168.2.140 192.168.1.126

# From pod1: ping pod2, then ping 8.8.8.8
kubectl exec -it $PODNAME1 -- ping -c 2 $PODIP2
kubectl exec -it $PODNAME1 -- ping -c 2 8.8.8.8

# On the worker node EC2: check with tcpdump
## For Pod to external (outside VPC) traffic, we will program iptables to SNAT using Primary IP address on the Primary ENI.
[root@ip-192-168-3-141 /]# sudo tcpdump -i any -nn icmp
tcpdump: data link type LINUX_SLL2
dropped privs to tcpdump
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
15:31:58.525323 eni89548af8467 In  IP 192.168.3.31 &amp;gt; 192.168.2.140: ICMP echo request, id 7, seq 1, length 64
15:31:58.525366 ens6  Out IP 192.168.3.31 &amp;gt; 192.168.2.140: ICMP echo request, id 7, seq 1, length 64
15:31:58.528035 ens6  In  IP 192.168.2.140 &amp;gt; 192.168.3.31: ICMP echo reply, id 7, seq 1, length 64
15:31:58.528089 eni89548af8467 Out IP 192.168.2.140 &amp;gt; 192.168.3.31: ICMP echo reply, id 7, seq 1, length 64
...
15:35:38.173442 eni89548af8467 In  IP 192.168.3.31 &amp;gt; 8.8.8.8: ICMP echo request, id 13, seq 1, length 64
15:35:38.173494 ens5  Out IP 192.168.3.141 &amp;gt; 8.8.8.8: ICMP echo request, id 24172, seq 1, length 64
15:35:38.202228 ens5  In  IP 8.8.8.8 &amp;gt; 192.168.3.141: ICMP echo reply, id 24172, seq 1, length 64
15:35:38.202403 eni89548af8467 Out IP 8.8.8.8 &amp;gt; 192.168.3.31: ICMP echo reply, id 13, seq 1, length 64
..&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For pod-to-pod traffic, the source is the container's veth interface, and the packet then goes out ens6 directly from Pod IP to Pod IP. 192.168.3.31 is a secondary IP on the second ENI.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1119&quot; data-origin-height=&quot;322&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bDlA2E/btsMlAzyxaY/MwmSUKszYpSkl5WBfskrEk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bDlA2E/btsMlAzyxaY/MwmSUKszYpSkl5WBfskrEk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bDlA2E/btsMlAzyxaY/MwmSUKszYpSkl5WBfskrEk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbDlA2E%2FbtsMlAzyxaY%2FMwmSUKszYpSkl5WBfskrEk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1119&quot; height=&quot;322&quot; data-origin-width=&quot;1119&quot; data-origin-height=&quot;322&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A pod's external traffic, on the other hand, also starts at the container's veth interface but exits via ens5, and on that hop it is SNATed to the node's IP. Quite distinctive!&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The diagram from the CNI proposal document below makes this clear.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1453&quot; data-origin-height=&quot;464&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/G2FDn/btsMk1j3AdD/DZBUtgkiKb2V1Wfoxq6OL1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/G2FDn/btsMk1j3AdD/DZBUtgkiKb2V1Wfoxq6OL1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/G2FDn/btsMk1j3AdD/DZBUtgkiKb2V1Wfoxq6OL1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FG2FDn%2FbtsMk1j3AdD%2FDZBUtgkiKb2V1Wfoxq6OL1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1453&quot; height=&quot;464&quot; data-origin-width=&quot;1453&quot; data-origin-height=&quot;464&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;출처: &lt;a href=&quot;https://github.com/aws/amazon-vpc-cni-k8s/blob/master/docs/cni-proposal.md&quot;&gt;https://github.com/aws/amazon-vpc-cni-k8s/blob/master/docs/cni-proposal.md&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;So does a pod with a secondary IP on the third ENI use ens7? It does.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1119&quot; data-origin-height=&quot;312&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/DuO8P/btsMlntEBWx/7m7OV5aZnBsqWZKwmEwbu1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/DuO8P/btsMlntEBWx/7m7OV5aZnBsqWZKwmEwbu1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/DuO8P/btsMlntEBWx/7m7OV5aZnBsqWZKwmEwbu1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FDuO8P%2FbtsMlntEBWx%2F7m7OV5aZnBsqWZKwmEwbu1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1119&quot; height=&quot;312&quot; data-origin-width=&quot;1119&quot; data-origin-height=&quot;312&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Testing communication, the traffic goes out via ens7.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;[root@ip-192-168-3-141 /]# ip -br -c a
lo               UNKNOWN        127.0.0.1/8 ::1/128
ens5             UP             192.168.3.141/24 metric 1024 fe80::8ef:19ff:fe5b:1dd3/64
eni9a92535a0a9@if3 UP             fe80::b428:33ff:feaa:6633/64
enib9bf070cdb3@if3 UP             fe80::b458:f9ff:fe9a:1a0b/64
eni54524c4f9d9@if3 UP             fe80::e8a1:acff:fe6f:6a57/64
enid0ded64c01f@if3 UP             fe80::9445:9ff:feaf:ad50/64
ens6             UP             192.168.3.172/24 fe80::832:3eff:fe60:eec5/64
eni89548af8467@if3 UP             fe80::c8:88ff:fea3:5679/64
enidefacbb986c@if3 UP             fe80::2825:89ff:fec5:d261/64
eni56e598ef751@if3 UP             fe80::6ce6:79ff:fe8e:7f2c/64
eni7e0515a0713@if3 UP             fe80::40f2:24ff:fedb:164a/64
eniea2b33504a4@if3 UP             fe80::6492:f6ff:fe5d:371a/64
eni4dfe955a07f@if3 UP             fe80::c435:82ff:feae:b50b/64
ens7             UP             192.168.3.123/24 fe80::880:71ff:feec:d51b/64
[root@ip-192-168-3-141 /]# sudo tcpdump -i any -nn icmp
tcpdump: data link type LINUX_SLL2
dropped privs to tcpdump
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
16:40:48.254174 eni9e794cfd960 In  IP 192.168.3.189 &amp;gt; 192.168.2.140: ICMP echo request, id 7, seq 1, length 64
16:40:48.254222 ens7  Out IP 192.168.3.189 &amp;gt; 192.168.2.140: ICMP echo request, id 7, seq 1, length 64
16:40:48.257570 ens7  In  IP 192.168.2.140 &amp;gt; 192.168.3.189: ICMP echo reply, id 7, seq 1, length 64
16:40:48.257619 eni9e794cfd960 Out IP 192.168.2.140 &amp;gt; 192.168.3.189: ICMP echo reply, id 7, seq 1, length 64
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Pod External Communication&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For the second test, let's see how communication works with a server in a VPC-peered network.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;2278&quot; data-origin-height=&quot;730&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/uiyBm/btsMj67uT7h/Re2KjreABfGC6FaHkYh6g0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/uiyBm/btsMj67uT7h/Re2KjreABfGC6FaHkYh6g0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/uiyBm/btsMj67uT7h/Re2KjreABfGC6FaHkYh6g0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FuiyBm%2FbtsMj67uT7h%2FRe2KjreABfGC6FaHkYh6g0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2278&quot; height=&quot;730&quot; data-origin-width=&quot;2278&quot; data-origin-height=&quot;730&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The server's IP is 172.20.1.100.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;[root@operator-host ~]# ip -br -c a
lo               UNKNOWN        127.0.0.1/8 ::1/128
eth0             UP             172.20.1.100/24 fe80::3f:d8ff:fe93:8b1f/64
docker0          DOWN           172.17.0.1/16&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;EKS applies the following SNAT rules: traffic to destinations outside the VPC CIDR (192.168.0.0/16) is SNATed.&lt;/p&gt;
&lt;pre class=&quot;lsl&quot;&gt;&lt;code&gt;[root@ip-192-168-3-141 /]#  iptables -t nat -S | grep 'A AWS-SNAT-CHAIN'
-A AWS-SNAT-CHAIN-0 -d 192.168.0.0/16 -m comment --comment &quot;AWS SNAT CHAIN&quot; -j RETURN
-A AWS-SNAT-CHAIN-0 ! -o vlan+ -m comment --comment &quot;AWS, SNAT&quot; -m addrtype ! --dst-type LOCAL -j SNAT --to-source 192.168.3.141 --random-fully&lt;/code&gt;&lt;/pre&gt;
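&lt;p data-ke-size=&quot;size16&quot;&gt;The effect of these two rules can be mimicked with Python's ipaddress module (a simplified sketch of the decision only, not the actual datapath; the node's primary IP is the one from this lab):&lt;/p&gt;

```python
import ipaddress

VPC_CIDR = ipaddress.ip_network("192.168.0.0/16")
NODE_PRIMARY_IP = "192.168.3.141"  # SNAT source on the primary ENI

def snat_source(pod_ip, dst_ip):
    """Source IP the packet leaves the node with, mirroring
    AWS-SNAT-CHAIN-0: destinations inside the VPC CIDR hit the
    RETURN rule (no SNAT); everything else is SNATed to the
    node's primary IP."""
    if ipaddress.ip_address(dst_ip) in VPC_CIDR:
        return pod_ip            # pod-to-pod inside the VPC: no NAT
    return NODE_PRIMARY_IP       # external traffic: SNAT

print(snat_source("192.168.3.31", "192.168.2.140"))  # 192.168.3.31
print(snat_source("192.168.3.31", "8.8.8.8"))        # 192.168.3.141
```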
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now let's ping the server from Pod1.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# kubectl exec -it $PODNAME1 -- ping -c 1 172.20.1.100
PING 172.20.1.100 (172.20.1.100) 56(84) bytes of data.
64 bytes from 172.20.1.100: icmp_seq=1 ttl=254 time=1.73 ms

--- 172.20.1.100 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.732/1.732/1.732/0.000 ms&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;We can confirm that SNAT is performed on this path.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1327&quot; data-origin-height=&quot;423&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/pFtuo/btsMksibR4D/TFZNOellH8npkzU0TOkcxK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/pFtuo/btsMksibR4D/TFZNOellH8npkzU0TOkcxK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/pFtuo/btsMksibR4D/TFZNOellH8npkzU0TOkcxK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FpFtuo%2FbtsMksibR4D%2FTFZNOellH8npkzU0TOkcxK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1327&quot; height=&quot;423&quot; data-origin-width=&quot;1327&quot; data-origin-height=&quot;423&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;However, while Pod -&amp;gt; server communication works, server -&amp;gt; Pod does not.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1218&quot; data-origin-height=&quot;727&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/di290w/btsMljY7i1c/CmwCgEwwA2Eq8xV6yg5i81/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/di290w/btsMljY7i1c/CmwCgEwwA2Eq8xV6yg5i81/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/di290w/btsMljY7i1c/CmwCgEwwA2Eq8xV6yg5i81/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fdi290w%2FbtsMljY7i1c%2FCmwCgEwwA2Eq8xV6yg5i81%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1218&quot; height=&quot;727&quot; data-origin-width=&quot;1218&quot; data-origin-height=&quot;727&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now suppose the server's network is an extended network connected via DX/VPN/TGW, and let's look at how to exclude that CIDR from the SNAT rules as well.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;One way is to pass AWS_VPC_K8S_CNI_EXCLUDE_SNAT_CIDRS to aws-node as an environment variable, as below. Doing so restarts the aws-node pods.&lt;/p&gt;
&lt;pre class=&quot;lsl&quot;&gt;&lt;code&gt;$ kubectl set env daemonset aws-node -n kube-system AWS_VPC_K8S_CNI_EXCLUDE_SNAT_CIDRS=172.20.0.0/16
daemonset.apps/aws-node env updated

# aws-node pods perform a rolling update
$ kubectl get po -A -w
NAMESPACE     NAME                              READY   STATUS            RESTARTS   AGE
default       netshoot-pod-744bd84b46-8mrkj     1/1     Running           0          36m
default       netshoot-pod-744bd84b46-lgkxf     1/1     Running           0          36m
default       netshoot-pod-744bd84b46-qzq29     1/1     Running           0          36m
default       nsenter-vg4pv8                    1/1     Running           0          83m
kube-system   aws-node-hg2mg                    0/2     PodInitializing   0          1s
kube-system   aws-node-nlqk9                    2/2     Running           0          5s
kube-system   aws-node-x5rxg                    2/2     Running           0          156m
kube-system   coredns-9b5bc9468-8prkc           1/1     Running           0          160m
kube-system   coredns-9b5bc9468-qp9mj           1/1     Running           0          160m
kube-system   kube-proxy-d7l2t                  1/1     Running           0          156m
kube-system   kube-proxy-fp99v                  1/1     Running           0          156m
kube-system   kube-proxy-sxq5k                  1/1     Running           0          156m
kube-system   metrics-server-86bbfd75bb-smjpb   1/1     Running           0          160m
kube-system   metrics-server-86bbfd75bb-xm8ds   1/1     Running           0          160m
kube-system   aws-node-hg2mg                    1/2     Running           0          2s
kube-system   aws-node-hg2mg                    2/2     Running           0          3s
kube-system   aws-node-x5rxg                    2/2     Terminating       0          156m
kube-system   aws-node-x5rxg                    0/2     Completed         0          156m
kube-system   aws-node-5qxln                    0/2     Pending           0          1s
kube-system   aws-node-5qxln                    0/2     Pending           0          1s&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;br /&gt;Checking on the node again, 172.20.0.0/16 has been added as an AWS SNAT CHAIN EXCLUSION.&lt;/p&gt;
&lt;pre class=&quot;lsl&quot;&gt;&lt;code&gt;[root@ip-192-168-3-141 /]# iptables -t nat -S | grep 'A AWS-SNAT-CHAIN'
-A AWS-SNAT-CHAIN-0 -d 172.20.0.0/16 -m comment --comment &quot;AWS SNAT CHAIN EXCLUSION&quot; -j RETURN
-A AWS-SNAT-CHAIN-0 -d 192.168.0.0/16 -m comment --comment &quot;AWS SNAT CHAIN&quot; -j RETURN
-A AWS-SNAT-CHAIN-0 ! -o vlan+ -m comment --comment &quot;AWS, SNAT&quot; -m addrtype ! --dst-type LOCAL -j SNAT --to-source 192.168.3.141 --random-fully&lt;/code&gt;&lt;/pre&gt;
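&lt;p data-ke-size=&quot;size16&quot;&gt;The updated chain evaluates the exclusion rule first. Its ordering can be modeled like this (again a simplified sketch assuming the rule order shown above):&lt;/p&gt;

```python
import ipaddress

# Ordered rules, mirroring AWS-SNAT-CHAIN-0 after
# AWS_VPC_K8S_CNI_EXCLUDE_SNAT_CIDRS=172.20.0.0/16 is applied.
EXCLUDE_CIDRS = [ipaddress.ip_network("172.20.0.0/16")]  # exclusion
VPC_CIDR = ipaddress.ip_network("192.168.0.0/16")
NODE_IP = "192.168.3.141"

def egress_source(pod_ip, dst_ip):
    dst = ipaddress.ip_address(dst_ip)
    for cidr in EXCLUDE_CIDRS:
        if dst in cidr:
            return pod_ip  # excluded CIDR: RETURN, keep the pod IP
    if dst in VPC_CIDR:
        return pod_ip      # in-VPC destination: RETURN, keep pod IP
    return NODE_IP         # anything else: SNAT to node primary IP

print(egress_source("192.168.3.31", "172.20.1.100"))  # 192.168.3.31
```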
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Re-running the communication test, Pod -&amp;gt; server traffic now succeeds without SNAT.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1331&quot; data-origin-height=&quot;348&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cjOV1X/btsMkWwiCzV/Uh2OyYhh1hSSaclOHKwyt1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cjOV1X/btsMkWwiCzV/Uh2OyYhh1hSSaclOHKwyt1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cjOV1X/btsMkWwiCzV/Uh2OyYhh1hSSaclOHKwyt1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcjOV1X%2FbtsMkWwiCzV%2FUh2OyYhh1hSSaclOHKwyt1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1331&quot; height=&quot;348&quot; data-origin-width=&quot;1331&quot; data-origin-height=&quot;348&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;However, applying it with &lt;code&gt;kubectl set env daemonset aws-node -n kube-system&lt;/code&gt; is not persistent, so you can instead add Configuration values to the Add-on as below.&lt;/p&gt;
&lt;pre class=&quot;json&quot;&gt;&lt;code&gt;{
    &quot;env&quot;: {
        &quot;AWS_VPC_K8S_CNI_EXCLUDE_SNAT_CIDRS&quot; : &quot;172.20.0.0/16&quot;
    }
}&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the web console, select your cluster in EKS, open the Amazon VPC CNI entry in the Add-ons tab, and choose Edit.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Check the available keys via the Add-on configuration schema and write the values to match the JSON structure.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1331&quot; data-origin-height=&quot;654&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bia7Su/btsMllvVy7m/XVkcJK3V6p4HKVyK9N6kK1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bia7Su/btsMllvVy7m/XVkcJK3V6p4HKVyK9N6kK1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bia7Su/btsMllvVy7m/XVkcJK3V6p4HKVyK9N6kK1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fbia7Su%2FbtsMllvVy7m%2FXVkcJK3V6p4HKVyK9N6kK1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1331&quot; height=&quot;654&quot; data-origin-width=&quot;1331&quot; data-origin-height=&quot;654&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Once the change is applied, the aws-node pods are automatically redeployed.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;kubectl get po -A -w
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   aws-node-b7qj7                    2/2     Running   0          40s
kube-system   aws-node-j77kr                    2/2     Running   0          36s
kube-system   aws-node-qtchj                    2/2     Running   0          44s&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Inspecting the DaemonSet confirms that &lt;code&gt;AWS_VPC_K8S_CNI_EXCLUDE_SNAT_CIDRS&lt;/code&gt; has been applied.&lt;/p&gt;
&lt;pre class=&quot;crmsh&quot;&gt;&lt;code&gt;kubectl get ds aws-node -n kube-system -o json | jq '.spec.template.spec.containers[0].env'
[
  {
    &quot;name&quot;: &quot;ADDITIONAL_ENI_TAGS&quot;,
    &quot;value&quot;: &quot;{}&quot;
  },
..
  {
    &quot;name&quot;: &quot;AWS_VPC_K8S_CNI_EXCLUDE_SNAT_CIDRS&quot;,
    &quot;value&quot;: &quot;172.20.0.0/16&quot;
  },
..&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;On AKS, the default settings of the ip-masq-agent can be found in a configmap named &lt;code&gt;azure-ip-masq-agent-config-reconciled&lt;/code&gt;.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# The virtual network is defined under nonMasqueradeCIDRs
$ kubectl get cm azure-ip-masq-agent-config-reconciled -n kube-system -oyaml
apiVersion: v1
data:
  ip-masq-agent-reconciled: |-
    nonMasqueradeCIDRs:
      - 10.244.0.0/16
    masqLinkLocal: true
kind: ConfigMap
...
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    component: ip-masq-agent
    kubernetes.io/cluster-service: &quot;true&quot;
  name: azure-ip-masq-agent-config-reconciled
  namespace: kube-system

# iptables rules on the node
root@aks-nodepool1-76251328-vmss000000:/# iptables -t nat -S |grep ip-masq-agent
-A POSTROUTING -m comment --comment &quot;\&quot;ip-masq-agent: ensure nat POSTROUTING directs all non-LOCAL destination traffic to our custom IP-MASQ-AGENT chain\&quot;&quot; -m addrtype ! --dst-type LOCAL -j IP-MASQ-AGENT
-A IP-MASQ-AGENT -d 10.244.0.0/16 -m comment --comment &quot;ip-masq-agent: local traffic is not subject to MASQUERADE&quot; -j RETURN
-A IP-MASQ-AGENT -m comment --comment &quot;ip-masq-agent: outbound traffic is subject to MASQUERADE (must be last in chain)&quot; -j MASQUERADE&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Note that this configmap is labeled &lt;code&gt;addonmanager.kubernetes.io/mode: Reconcile&lt;/code&gt; and is managed by the addon manager, so any manual change is reconciled back to the defaults.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;If you create a configmap using the steps below, you can watch the iptables rules change.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;$ cat config-custom.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: azure-ip-masq-agent-config
  namespace: kube-system
  labels:
    component: ip-masq-agent
    kubernetes.io/cluster-service: &quot;true&quot;
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  ip-masq-agent: |-
    nonMasqueradeCIDRs:
      - 172.31.1.0/24
    masqLinkLocal: false
    masqLinkLocalIPv6: true

$ kubectl apply -f config-custom.yaml
configmap/azure-ip-masq-agent-config created

# iptables rules on the node (172.31.1.0/24 has been added)
root@aks-nodepool1-76251328-vmss000000:/# iptables -t nat -S |grep ip-masq-agent
-A POSTROUTING -m comment --comment &quot;\&quot;ip-masq-agent: ensure nat POSTROUTING directs all non-LOCAL destination traffic to our custom IP-MASQ-AGENT chain\&quot;&quot; -m addrtype ! --dst-type LOCAL -j IP-MASQ-AGENT
-A IP-MASQ-AGENT -d 172.31.1.0/24 -m comment --comment &quot;ip-masq-agent: local traffic is not subject to MASQUERADE&quot; -j RETURN
-A IP-MASQ-AGENT -d 10.244.0.0/16 -m comment --comment &quot;ip-masq-agent: local traffic is not subject to MASQUERADE&quot; -j RETURN
-A IP-MASQ-AGENT -m comment --comment &quot;ip-masq-agent: outbound traffic is subject to MASQUERADE (must be last in chain)&quot; -j MASQUERADE&lt;/code&gt;&lt;/pre&gt;
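&lt;p data-ke-size=&quot;size16&quot;&gt;The decision those iptables rules encode can be summarized in a short sketch: a destination inside any nonMasqueradeCIDR is returned untouched, and everything else is MASQUERADEd. The CIDR values below simply mirror the example.&lt;/p&gt;

```python
import ipaddress

# Minimal sketch of the ip-masq-agent decision shown in the iptables rules:
# destinations inside a nonMasqueradeCIDR keep their source IP (RETURN);
# everything else is SNATed to the node IP (MASQUERADE).
NON_MASQUERADE_CIDRS = ["10.244.0.0/16", "172.31.1.0/24"]

def masquerade_action(dst_ip: str) -> str:
    dst = ipaddress.ip_address(dst_ip)
    for cidr in NON_MASQUERADE_CIDRS:
        if dst in ipaddress.ip_network(cidr):
            return "RETURN"      # pod CIDR / allow-listed range: no SNAT
    return "MASQUERADE"          # external traffic: SNAT to the node IP

print(masquerade_action("10.244.1.7"))    # pod CIDR -> RETURN
print(masquerade_action("172.31.1.10"))   # custom CIDR -> RETURN
print(masquerade_action("8.8.8.8"))       # internet -> MASQUERADE
```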
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Although this is hard to find in Azure's official documentation, the GitHub repository below shows that you only need to create the configmap in the same namespace as the daemonset.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://github.com/Azure/ip-masq-agent-v2&quot;&gt;https://github.com/Azure/ip-masq-agent-v2&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Wrap-up&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Under the theme of EKS networking, we examined what a CNI is, the characteristics of the AWS VPC CNI, and how the instance type limits the number of pods in EKS.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;We then created an EKS cluster and, building on that information, walked through inspecting node and pod details.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;One notable difference is that EKS provides only the single VPC CNI, whereas AKS offers a variety of CNIs.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the next post, the second topic in EKS networking, we will look at how LoadBalancer-type Services and Ingress are implemented in EKS.&lt;/p&gt;</description>
      <category>EKS</category>
      <category>AKS</category>
      <category>aws</category>
      <category>Azure</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/30</guid>
      <comments>https://a-person.tistory.com/30#entry30comment</comments>
      <pubDate>Sun, 16 Feb 2025 02:04:15 +0900</pubDate>
    </item>
    <item>
      <title>[1] EKS 생성과 리소스 살펴보기</title>
      <link>https://a-person.tistory.com/29</link>
<description>&lt;p data-ke-size=&quot;size16&quot;&gt;Let's install Amazon Elastic Kubernetes Service (EKS) and take a look at how an EKS cluster is put together.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Since I personally use Azure often, I will also briefly cover how it differs from Azure Kubernetes Service (AKS).&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Table of Contents&lt;/h2&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;Environment overview&lt;/li&gt;
&lt;li&gt;Creating an EKS cluster&lt;/li&gt;
&lt;li&gt;Exploring the web console&lt;/li&gt;
&lt;li&gt;Comparing key components&lt;/li&gt;
&lt;li&gt;Comparing nodes&lt;/li&gt;
&lt;li&gt;Cleaning up resources&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;1. Environment Overview&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The hands-on work is done in a WSL (Windows Subsystem for Linux) environment on Windows 11. Version details are below.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;&amp;gt; wsl --version
WSL version: 2.3.26.0
Kernel version: 5.15.167.4-1
WSLg version: 1.0.65
MSRDC version: 1.2.5620
Direct3D version: 1.611.1-81528511
DXCore version: 10.0.26100.1-240331-1435.ge-release
Windows version: 10.0.22631.4751
&amp;gt; wsl --status
Default Distribution: Ubuntu
Default Version: 2
&amp;gt; wsl --list
Windows Subsystem for Linux Distributions:
Ubuntu (Default)&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For reference, see the guide below for setting up WSL on Windows 11.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://learn.microsoft.com/ko-kr/windows/wsl/install&quot;&gt;https://learn.microsoft.com/ko-kr/windows/wsl/install&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;It was a while ago so I don't remember exactly, but I believe I originally installed WSL using an approach like the one below.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://wikidocs.net/219899&quot;&gt;https://wikidocs.net/219899&lt;/a&gt;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;&amp;nbsp;&lt;/h2&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;2. Creating an EKS Cluster&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;EKS can be deployed through the AWS console, with &lt;code&gt;eksctl&lt;/code&gt;, or with IaC tools such as Terraform.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Creating a cluster with eksctl is the simplest option, but internally it drives CloudFormation to create two stacks, one for the EKS cluster and one for the node group, which makes it slow. Terraform, by contrast, calls the AWS APIs directly and is reportedly faster than &lt;code&gt;eksctl&lt;/code&gt;.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In this post we will build the cluster with &lt;code&gt;eksctl&lt;/code&gt;.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To do so, we first install the basic tooling: &lt;code&gt;AWS CLI&lt;/code&gt;, &lt;code&gt;eksctl&lt;/code&gt;, and &lt;code&gt;kubectl&lt;/code&gt;.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://docs.aws.amazon.com/eks/latest/userguide/setting-up.html&quot;&gt;https://docs.aws.amazon.com/eks/latest/userguide/setting-up.html&lt;/a&gt;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;&amp;nbsp;&lt;/h3&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Installing the AWS CLI&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Install the AWS CLI following the guide provided by AWS.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html&quot;&gt;https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html&lt;/a&gt;&lt;/p&gt;
&lt;pre class=&quot;awk&quot;&gt;&lt;code&gt;curl &quot;https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip&quot; -o &quot;awscliv2.zip&quot;
unzip awscliv2.zip
sudo ./aws/install&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;*Note: WSL does not ship with unzip, so run &lt;code&gt;apt install unzip -y&lt;/code&gt; first.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;If you see output like the following, the installation succeeded.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# sudo ./aws/install
You can now run: /usr/local/bin/aws --version
# aws --version
aws-cli/2.23.13 Python/3.12.6 Linux/5.15.167.4-microsoft-standard-WSL2 exe/x86_64.ubuntu.20&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;You also need to run &lt;code&gt;aws configure&lt;/code&gt; and enter your credentials before eksctl can be used.&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;&amp;nbsp;&lt;/h3&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Installing kubectl&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Since I work with Kubernetes regularly, kubectl is already installed on my machine, so I will skip this step.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Follow the instructions below for your OS.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html#_step_2_install_or_update_kubectl&quot;&gt;https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html#_step_2_install_or_update_kubectl&lt;/a&gt;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;&amp;nbsp;&lt;/h3&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Installing eksctl&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Finally, install &lt;code&gt;eksctl&lt;/code&gt;.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://eksctl.io/installation/#for-unix&quot;&gt;https://eksctl.io/installation/#for-unix&lt;/a&gt;&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;# for ARM systems, set ARCH to: `arm64`, `armv6` or `armv7`
ARCH=amd64
PLATFORM=$(uname -s)_$ARCH

curl -sLO &quot;https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_$PLATFORM.tar.gz&quot;

# (Optional) Verify checksum
curl -sL &quot;https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_checksums.txt&quot; | grep $PLATFORM | sha256sum --check

tar -xzf eksctl_$PLATFORM.tar.gz -C /tmp &amp;amp;&amp;amp; rm eksctl_$PLATFORM.tar.gz

sudo mv /tmp/eksctl /usr/local/bin&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;You should see a result like this.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# eksctl version
0.203.0&lt;/code&gt;&lt;/pre&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;&amp;nbsp;&lt;/h3&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Creating the EKS cluster&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The guide below covers creating and deleting clusters with eksctl.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html&quot;&gt;https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html&lt;/a&gt;&lt;/p&gt;
&lt;pre class=&quot;clean&quot;&gt;&lt;code&gt;## Create
eksctl create cluster --name my-cluster --region region-code

## Delete
eksctl delete cluster --name my-cluster --region region-code&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Reference: region-code&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://docs.aws.amazon.com/ko_kr/AWSEC2/latest/UserGuide/using-regions-availability-zones.html&quot;&gt;https://docs.aws.amazon.com/ko_kr/AWSEC2/latest/UserGuide/using-regions-availability-zones.html&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The post below shows that a cluster can be created with a variety of options.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://learnk8s.io/terraform-eks#three-popular-options-to-provision-an-eks-cluster&quot;&gt;https://learnk8s.io/terraform-eks#three-popular-options-to-provision-an-eks-cluster&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For example, you can adjust the node type, the node count, and the min/max node counts (Cluster Autoscaler).&lt;/p&gt;
&lt;pre class=&quot;pgsql&quot;&gt;&lt;code&gt;eksctl create cluster \
  --name learnk8s-cluster \
  --node-type t2.micro \
  --nodes 3 \
  --nodes-min 3 \
  --nodes-max 5 \
  --region eu-central-1&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A more structured approach is to put the configuration in a &lt;code&gt;cluster.yaml&lt;/code&gt; file like the one below and run &lt;code&gt;eksctl create cluster -f cluster.yaml&lt;/code&gt;.&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: learnk8s
  region: eu-central-1
nodeGroups:
  - name: worker-group
    instanceType: t2.micro
    desiredCapacity: 3
    minSize: 3
    maxSize: 5&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For a quick test, passing options to eksctl is fine, but keeping the configuration in YAML has the advantage that it can be stored and reused.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Since this post is for testing purposes, we will create the cluster by passing options to &lt;code&gt;eksctl&lt;/code&gt;.&lt;/p&gt;
&lt;pre class=&quot;haml&quot;&gt;&lt;code&gt;AWS_DEFAULT_REGION=ap-northeast-2
CLUSTER_NAME=myeks1

eksctl create cluster \
  --name $CLUSTER_NAME \
  --region $AWS_DEFAULT_REGION \
  --nodegroup-name ${CLUSTER_NAME}-nodegroup \
  --node-type t3.medium \
  --node-volume-size=30 \
  --version 1.31 \
  --ssh-access \
  --external-dns-access \
  --verbose 4&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Note that this uses an existing SSH key (~/.ssh/id_rsa.pub); if you don't have one, run ssh-keygen first.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;ssh-keygen -t rsa -b 4096&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's walk through the process using the &lt;code&gt;eksctl&lt;/code&gt; logs.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;## The log begins like this.
2025-02-06 21:52:29 [ℹ]  eksctl version 0.203.0
## eksctl creates two CloudFormation stacks: one for the cluster and one for the managed nodegroup.
2025-02-06 21:52:29 [ℹ]  will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
## The eksctl-myeks1-cluster stack is deployed.
2025-02-06 21:52:31 [ℹ]  deploying stack &quot;eksctl-myeks1-cluster&quot;
## After some time, creation of the myeks1 control plane completes.
2025-02-06 22:00:32 [▶]  completed task: create cluster control plane &quot;myeks1&quot;
## Addons are installed; components such as metrics-server, kube-proxy, vpc-cni, and coredns are set up here.
2025-02-06 22:00:33 [ℹ]  creating addon
2025-02-06 22:00:33 [▶]  addon: &amp;amp;{metrics-server v0.7.2-eksbuild.1  [] map[]  {false false false false false false false} map[]  &amp;lt;nil&amp;gt; false  true [] [] []}
..
## Deployment of the nodegroup stack begins.
2025-02-06 22:02:37 [ℹ]  deploying stack &quot;eksctl-myeks1-nodegroup-myeks1-nodegroup&quot;
## The nodegroup has been created.
2025-02-06 22:05:13 [✔]  created 1 managed nodegroup(s) in cluster &quot;myeks1&quot;
## The kubeconfig is saved.
2025-02-06 22:05:13 [✔]  saved kubeconfig as &quot;/root/.kube/config&quot;
## Creation of the myeks1 cluster is complete.
2025-02-06 22:05:15 [✔]  EKS cluster &quot;myeks1&quot; in &quot;ap-northeast-2&quot; region is ready&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Creation completed within roughly 15 minutes of the request. Terraform may be faster, so that is worth verifying later.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;AKS usually finishes within 5 minutes, so in terms of raw creation time AKS feels faster.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The log also shows that eksctl configures the kubeconfig for you, so &lt;code&gt;kubectl&lt;/code&gt; works immediately after creating the cluster with &lt;code&gt;eksctl&lt;/code&gt;.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;On AKS, by contrast, you fetch the kubeconfig after cluster creation with a command such as &lt;code&gt;az aks get-credentials&lt;/code&gt;.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;blockquote style=&quot;background-color: #fcfcfc; color: #666666; text-align: left;&quot; data-ke-style=&quot;style3&quot;&gt;(Update 2025-02-11) On EKS, too, you can fetch the kubeconfig later with aws eks update-kubeconfig.&lt;/blockquote&gt;
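&lt;p data-ke-size=&quot;size16&quot;&gt;For reference, the two commands look roughly like this; the cluster, resource group, and region names are placeholders:&lt;/p&gt;

```shell
# EKS: write/merge this cluster's entry into the local kubeconfig
aws eks update-kubeconfig --region ap-northeast-2 --name myeks1

# AKS equivalent
az aks get-credentials --resource-group my-rg --name my-aks
```

Both require a live cluster and signed-in CLI sessions, so they are shown as reference invocations only.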
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;3. Exploring the Web Console&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To see which resources are created by default with an EKS cluster, let's open the web console.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As explained earlier, &lt;code&gt;eksctl&lt;/code&gt; uses CloudFormation, and you can confirm this in the web console.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;image-20250206221704047.png&quot; data-origin-width=&quot;2270&quot; data-origin-height=&quot;376&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/JuqNR/btsL8YietlR/HJakEGvbeEneR8XonkLGC1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/JuqNR/btsL8YietlR/HJakEGvbeEneR8XonkLGC1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/JuqNR/btsL8YietlR/HJakEGvbeEneR8XonkLGC1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FJuqNR%2FbtsL8YietlR%2FHJakEGvbeEneR8XonkLGC1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2270&quot; height=&quot;376&quot; data-filename=&quot;image-20250206221704047.png&quot; data-origin-width=&quot;2270&quot; data-origin-height=&quot;376&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Open the newly created EKS cluster and look at its details.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;image-20250206222108733.png&quot; data-origin-width=&quot;1652&quot; data-origin-height=&quot;441&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/0dQGa/btsL9QjpG9a/UPzkTP53plnQEwaqrh3k7k/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/0dQGa/btsL9QjpG9a/UPzkTP53plnQEwaqrh3k7k/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/0dQGa/btsL9QjpG9a/UPzkTP53plnQEwaqrh3k7k/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2F0dQGa%2FbtsL9QjpG9a%2FUPzkTP53plnQEwaqrh3k7k%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1652&quot; height=&quot;441&quot; data-filename=&quot;image-20250206222108733.png&quot; data-origin-width=&quot;1652&quot; data-origin-height=&quot;441&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;It is impressive that you can see at a glance the current version and how long it is supported, the cluster status, upgrade insights (apparently a feature that validates whether the cluster is ready for an upgrade), and node health issues.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Below the cluster summary, tabs expose a variety of cluster details. Let's look at a few of them.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;image-20250206222417747.png&quot; data-origin-width=&quot;1149&quot; data-origin-height=&quot;70&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/Zkhtd/btsMa6ZIDg6/PFKGCWvAtgZ0LCxdUrwLrK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/Zkhtd/btsMa6ZIDg6/PFKGCWvAtgZ0LCxdUrwLrK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/Zkhtd/btsMa6ZIDg6/PFKGCWvAtgZ0LCxdUrwLrK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FZkhtd%2FbtsMa6ZIDg6%2FPFKGCWvAtgZ0LCxdUrwLrK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1149&quot; height=&quot;70&quot; data-filename=&quot;image-20250206222417747.png&quot; data-origin-width=&quot;1149&quot; data-origin-height=&quot;70&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The &lt;b&gt;[Overview]&lt;/b&gt; tab shows the API server endpoint. (Out of caution over security, I did not capture the actual value.)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;One interesting detail about the EKS API server endpoint: a DNS query for it returns two IPs. This does not mean there are two API servers; the NLB in front simply answers with two IPs.&lt;/p&gt;
&lt;pre class=&quot;css&quot;&gt;&lt;code&gt;# dig +short xxx.gr7.ap-northeast-2.eks.amazonaws.com
3.35.116.125
3.35.119.23&lt;/code&gt;&lt;/pre&gt;
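&lt;p data-ke-size=&quot;size16&quot;&gt;A client sees the same thing programmatically: resolving the endpoint yields every A record, which is what lets connections spread across (and fail over between) the two NLB IPs. A small standard-library sketch, with localhost standing in for the real endpoint:&lt;/p&gt;

```python
import socket

def resolve_ips(hostname: str) -> list[str]:
    """Return the deduplicated set of IPs a DNS lookup yields for hostname."""
    infos = socket.getaddrinfo(hostname, 443, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})

# "localhost" stands in for the EKS API endpoint, which is not resolvable here;
# against a real public EKS endpoint this typically returns the two NLB IPs.
print(resolve_ips("localhost"))
```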
&lt;p data-ke-size=&quot;size16&quot;&gt;Checking from inside a node, you can also see that kubelet and kube-proxy are each connected to a different API server endpoint IP.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# kubectl get no
NAME                                                STATUS   ROLES    AGE   VERSION
ip-192-168-32-233.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   24m   v1.31.4-eks-aeac579
ip-192-168-67-87.ap-northeast-2.compute.internal    Ready    &amp;lt;none&amp;gt;   24m   v1.31.4-eks-aeac579
# kubectl node-shell ip-192-168-32-233.ap-northeast-2.compute.internal
[root@ip-192-168-32-233 /]# ss -tnp | egrep &quot;kubelet|kube-proxy&quot;
ESTAB 0      0               192.168.32.233:59018              3.35.119.23:443   users:((&quot;kubelet&quot;,pid=2901,fd=20))
..
ESTAB 0      0               192.168.32.233:51418             3.35.116.125:443   users:((&quot;kube-proxy&quot;,pid=3122,fd=9))&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The &lt;b&gt;[Compute]&lt;/b&gt; tab shows that the node group is backed by an EC2 Auto Scaling Group and that two nodes were created.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;image-20250206222459333.png&quot; data-origin-width=&quot;1588&quot; data-origin-height=&quot;656&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/c5uBbS/btsL91ZnnSW/o9TWfzVEADnkjJ94NqGKe0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/c5uBbS/btsL91ZnnSW/o9TWfzVEADnkjJ94NqGKe0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/c5uBbS/btsL91ZnnSW/o9TWfzVEADnkjJ94NqGKe0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fc5uBbS%2FbtsL91ZnnSW%2Fo9TWfzVEADnkjJ94NqGKe0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1588&quot; height=&quot;656&quot; data-filename=&quot;image-20250206222459333.png&quot; data-origin-width=&quot;1588&quot; data-origin-height=&quot;656&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The &lt;b&gt;[Networking]&lt;/b&gt; tab shows the VPC and subnets where the nodes are deployed.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;image-20250206224455016.png&quot; data-origin-width=&quot;1618&quot; data-origin-height=&quot;392&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/F8i0h/btsMagWhBWf/SN1sJW8no5ZKS5mlSa3bm0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/F8i0h/btsMagWhBWf/SN1sJW8no5ZKS5mlSa3bm0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/F8i0h/btsMagWhBWf/SN1sJW8no5ZKS5mlSa3bm0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FF8i0h%2FbtsMagWhBWf%2FSN1sJW8no5ZKS5mlSa3bm0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1618&quot; height=&quot;392&quot; data-filename=&quot;image-20250206224455016.png&quot; data-origin-width=&quot;1618&quot; data-origin-height=&quot;392&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The important parts here are &lt;b&gt;API server endpoint access&lt;/b&gt; and the &lt;b&gt;public access source allowlist&lt;/b&gt;.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;API server endpoint access&lt;/b&gt; determines whether EKS exposes the API server publicly or privately; when it is public, the &lt;b&gt;public access source allowlist&lt;/b&gt; lets you register the IPs allowed to connect.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For a cluster set to public, configuring at least the access allowlist is the safer choice from a security standpoint.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;EKS offers the API server endpoint in three access types: &lt;code&gt;Public&lt;/code&gt;/&lt;code&gt;Public and private&lt;/code&gt;/&lt;code&gt;Private&lt;/code&gt;. Clicking &lt;code&gt;Manage endpoint access&lt;/code&gt; at the top shows the details.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;image-20250206224731039.png&quot; data-origin-width=&quot;1251&quot; data-origin-height=&quot;598&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/njJNC/btsMaN69ew2/B8KGVjmjarwz0uEYoMNKJ0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/njJNC/btsMaN69ew2/B8KGVjmjarwz0uEYoMNKJ0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/njJNC/btsMaN69ew2/B8KGVjmjarwz0uEYoMNKJ0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FnjJNC%2FbtsMaN69ew2%2FB8KGVjmjarwz0uEYoMNKJ0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1251&quot; height=&quot;598&quot; data-filename=&quot;image-20250206224731039.png&quot; data-origin-width=&quot;1251&quot; data-origin-height=&quot;598&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A diagram makes this easier to understand.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;With Public, clients such as kubectl reach the API server through the public endpoint, and the worker nodes connect through the public endpoint as well.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;image-20250206224843726.png&quot; data-origin-width=&quot;1309&quot; data-origin-height=&quot;671&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/dNWuTd/btsMaM8fv77/6ozlBYL5FOEKYlfbOIBznK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/dNWuTd/btsMaM8fv77/6ozlBYL5FOEKYlfbOIBznK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/dNWuTd/btsMaM8fv77/6ozlBYL5FOEKYlfbOIBznK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FdNWuTd%2FbtsMaM8fv77%2F6ozlBYL5FOEKYlfbOIBznK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1309&quot; height=&quot;671&quot; data-filename=&quot;image-20250206224843726.png&quot; data-origin-width=&quot;1309&quot; data-origin-height=&quot;671&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Source: &lt;a href=&quot;https://www.youtube.com/watch?v=bksogA-WXv8&amp;amp;t=5s&quot;&gt;https://www.youtube.com/watch?v=bksogA-WXv8&amp;amp;t=5s&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The second type is Public and private.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Clients such as kubectl still reach the API server through the public endpoint. Worker nodes, however, have their DNS queries for the API server answered by a Route 53 private hosted zone with the EKS-owned ENI addresses, giving them a private connection.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;image-20250206230053383.png&quot; data-origin-width=&quot;1312&quot; data-origin-height=&quot;668&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/y1zte/btsL900wkUW/3jHR3lMMar86eXeIp4KqHK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/y1zte/btsL900wkUW/3jHR3lMMar86eXeIp4KqHK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/y1zte/btsL900wkUW/3jHR3lMMar86eXeIp4KqHK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fy1zte%2FbtsL900wkUW%2F3jHR3lMMar86eXeIp4KqHK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1312&quot; height=&quot;668&quot; data-filename=&quot;image-20250206230053383.png&quot; data-origin-width=&quot;1312&quot; data-origin-height=&quot;668&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;출처: &lt;a href=&quot;https://www.youtube.com/watch?v=bksogA-WXv8&amp;amp;t=5s&quot;&gt;https://www.youtube.com/watch?v=bksogA-WXv8&amp;amp;t=5s&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;세 번째 유형은 Private입니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이 구성에서는 API 서버 엔드포인트를 퍼블릭으로 노출하지 않기 때문에 kubectl과 같은 접근과 노드의 접근 모두 EKS Owned ENI를 통해서 이뤄집니다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;image-20250206230726860.png&quot; data-origin-width=&quot;1313&quot; data-origin-height=&quot;658&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/AixEr/btsL95UXvW5/Dw3gdRErQzjeMwDGoadZXk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/AixEr/btsL95UXvW5/Dw3gdRErQzjeMwDGoadZXk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/AixEr/btsL95UXvW5/Dw3gdRErQzjeMwDGoadZXk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FAixEr%2FbtsL95UXvW5%2FDw3gdRErQzjeMwDGoadZXk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1313&quot; height=&quot;658&quot; data-filename=&quot;image-20250206230726860.png&quot; data-origin-width=&quot;1313&quot; data-origin-height=&quot;658&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;보다 자세한 내용은 아래의 문서를 참고하시기 바랍니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://docs.aws.amazon.com/ko_kr/eks/latest/userguide/cluster-endpoint.html&quot;&gt;https://docs.aws.amazon.com/ko_kr/eks/latest/userguide/cluster-endpoint.html&lt;/a&gt;&lt;/p&gt;
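참고로 엔드포인트 접근 유형은 AWS CLI로도 변경할 수 있습니다. 아래는 클러스터 이름을 myeks로 가정한 스케치 수준의 예시이며, 실제 적용에는 AWS 자격 증명이 필요하므로 여기서는 명령 문자열만 구성해 출력합니다.

```shell
# 가정: 클러스터 이름은 myeks (예시 값). 실제 적용 시에는 echo 없이 명령을 직접 실행합니다.
CLUSTER_NAME="myeks"
CMD="aws eks update-cluster-config --name ${CLUSTER_NAME} --resources-vpc-config endpointPublicAccess=false,endpointPrivateAccess=true"
echo "$CMD"
```

퍼블릭 접근을 유지하면서 특정 대역만 허용하려면 `endpointPublicAccess=true,publicAccessCidrs=...` 형태로 지정하는 것도 가능합니다.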
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이러한 API 서버 엔드포인트에 대한 보안 강화는 AKS에서도 유사하게 구현되어 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;먼저 API 서버 엔드포인트 Public은 Public Cluster로 생각할 수 있으며, Authorized IP range(&lt;a href=&quot;https://learn.microsoft.com/ko-kr/azure/aks/api-server-authorized-ip-ranges?tabs=azure-cli&quot;&gt;https://learn.microsoft.com/ko-kr/azure/aks/api-server-authorized-ip-ranges?tabs=azure-cli&lt;/a&gt;)를 통해 API 서버로의 접근을 제한할 수 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;또한 API 서버 엔드포인트 Private에 대응하는 Private Cluster(&lt;a href=&quot;https://learn.microsoft.com/en-us/azure/aks/private-clusters?tabs=default-basic-networking%2Cazure-portal&quot;&gt;https://learn.microsoft.com/en-us/azure/aks/private-clusters&lt;/a&gt;)를 구성하는 방식을 제공하고 있습니다.&lt;/p&gt;
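AKS의 Authorized IP range도 CLI로 지정할 수 있습니다. 아래는 리소스 그룹(aks-rg), 클러스터 이름(myaks1), 허용 대역(203.0.113.0/24)을 모두 가정한 예시로, 실행에는 Azure 자격 증명이 필요하므로 명령 문자열만 구성해 출력합니다.

```shell
# 가정: aks-rg / myaks1 / 203.0.113.0/24 은 모두 예시 값입니다.
CMD="az aks update --resource-group aks-rg --name myaks1 --api-server-authorized-ip-ranges 203.0.113.0/24"
echo "$CMD"
```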
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;다만 EKS 문서에서 API 서버 엔드포인트가 Private인 경우 다른 AWS 서비스에 접근할 때에도 VPC Endpoint가 필요하다고 언급하는 것을 볼 때, EKS의 Private은 워커 노드 관점에서 외부로의 접근을 격리(Isolated)하겠다는 의미도 함께 담고 있는 것으로 보입니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;반대로 AKS의 Private Cluster는 단순히 API 서버를 Private하게 제공한다는 의미만 담고 있습니다. 다른 서비스로의 접근은 해당 서비스에서 Private Endpoint 방식으로 별도 구성해야 하고, 워커 노드의 환경 자체를 격리하기 위해서는 UDR(User-defined Routing)을 통해 Firewall을 경유시키는 방식을 취할 수 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;한편 AKS에서도 최근 Network Isolated Cluster를 Public Preview 로 제공하고 있습니다. 이는 워커 노드 환경에서 외부 접근을 Isolated 하는 방식을 제공합니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://learn.microsoft.com/en-us/azure/aks/concepts-network-isolated&quot;&gt;https://learn.microsoft.com/en-us/azure/aks/concepts-network-isolated&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;마지막으로 &lt;b&gt;[추가 기능]&lt;/b&gt; 탭에서는 AddOn을 확인해볼 수 있습니다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;image-20250206231704677.png&quot; data-origin-width=&quot;1594&quot; data-origin-height=&quot;773&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/9Cfq6/btsL87MMbp8/ZyHSKl7ysUgNVJeFIBEeSk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/9Cfq6/btsL87MMbp8/ZyHSKl7ysUgNVJeFIBEeSk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/9Cfq6/btsL87MMbp8/ZyHSKl7ysUgNVJeFIBEeSk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2F9Cfq6%2FbtsL87MMbp8%2FZyHSKl7ysUgNVJeFIBEeSk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1594&quot; height=&quot;773&quot; data-filename=&quot;image-20250206231704677.png&quot; data-origin-width=&quot;1594&quot; data-origin-height=&quot;773&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;주요 컴포넌트 비교 절에서 확인하겠지만 AKS는 kube-proxy, coredns, metrics-server와 같은 컴포넌트를 시스템 컴포넌트로 분류하며, 미리 설치된 상태로 제공합니다. 또한 이러한 컴포넌트는 보통 Kubernetes 버전에 상응하여 자동으로 업그레이드되는 방식을 취하고 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;EKS에서 확인을 해보면 이러한 컴포넌트들이 개별 AddOn으로 인식되며, 또한 개별 컴포넌트에 대해서 &lt;code&gt;버전 업데이트&lt;/code&gt;가 가능한 점에 차이가 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;code&gt;추가 기능 가져오기&lt;/code&gt;를 눌러보면 CSI Driver 와 같은 형태의 컴포넌트들도 AddOn으로 추가가 가능한 것을 알 수 있으며, 3rd Party에서 제공하는 AddOn도 AWS 마켓플레이스를 통해서 설치 가능한 것으로 보입니다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;image-20250206232116204.png&quot; data-origin-width=&quot;1600&quot; data-origin-height=&quot;1128&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/TTCDM/btsL9swoEO0/7yrJUwX0HcLOAJ5zBz0xc1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/TTCDM/btsL9swoEO0/7yrJUwX0HcLOAJ5zBz0xc1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/TTCDM/btsL9swoEO0/7yrJUwX0HcLOAJ5zBz0xc1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FTTCDM%2FbtsL9swoEO0%2F7yrJUwX0HcLOAJ5zBz0xc1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1600&quot; height=&quot;1128&quot; data-filename=&quot;image-20250206232116204.png&quot; data-origin-width=&quot;1600&quot; data-origin-height=&quot;1128&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Kubernetes 버전 별로 제공되는 AddOn에 차이가 있을 수 있으므로 아래의 명령을 통해서 확인하시기 바랍니다.&lt;/p&gt;
&lt;pre class=&quot;stata&quot;&gt;&lt;code&gt;aws eks describe-addon-versions --kubernetes-version 1.31 --query 'addons[].{MarketplaceProductUrl: marketplaceInformation.productUrl, Name: addonName, Owner: owner, Publisher: publisher, Type: type}' --output table&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;웹 콘솔에서 EKS 화면을 떠나 노드 중 한대를 확인해보겠습니다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;image-20250206233646924.png&quot; data-origin-width=&quot;1643&quot; data-origin-height=&quot;750&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/HffVG/btsL9PdIwwo/kZtHfyoitpQG4Sh2LvmhcK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/HffVG/btsL9PdIwwo/kZtHfyoitpQG4Sh2LvmhcK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/HffVG/btsL9PdIwwo/kZtHfyoitpQG4Sh2LvmhcK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FHffVG%2FbtsL9PdIwwo%2FkZtHfyoitpQG4Sh2LvmhcK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1643&quot; height=&quot;750&quot; data-filename=&quot;image-20250206233646924.png&quot; data-origin-width=&quot;1643&quot; data-origin-height=&quot;750&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Private IP가 2개 있습니다. 노드에서 확인해보면 eth0, eth1 두 개의 인터페이스가 있는 것을 알 수 있습니다. (아직 AWS를 잘 몰라서 왜 2개인지는 잘 모르겠네요..)&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;[root@ip-192-168-32-233 /]# ip -br -c a
lo               UNKNOWN        127.0.0.1/8 ::1/128
eth0             UP             192.168.32.233/19 fe80::8f9:d9ff:febf:d407/64
enia4cb1502028@if3 UP             fe80::e0bd:c3ff:feec:294c/64
enifc2da76e4e6@if3 UP             fe80::88e6:29ff:fe17:10f8/64
eth1             UP             192.168.34.104/19 fe80::864:31ff:fea4:c503/64&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;blockquote data-ke-style=&quot;style3&quot;&gt;(업데이트_2025-2-11)&lt;br /&gt;AWS의 Instance는 Instance type마다 할당 가능한 ENI 개수와 ENI별 할당 가능한 IP 개수가 정해져 있습니다. 또한 파드에서 필요한 IP를 미리 할당해 Warm Pool 개념으로 로컬에 일부 가지고 있습니다. 이때 1개의 ENI로 할당 가능한 IP가 부족하면 자동으로 추가 ENI를 연결합니다.&lt;/blockquote&gt;
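이 ENI/IP 한도는 VPC CNI 기준 노드의 default max-pods 값으로 이어집니다. 아래는 이 계산을 정리한 예시로, 실습 인스턴스 타입은 확인하지 못해 t3.medium(3 ENI, ENI당 6 IP)으로 가정했습니다.

```shell
# VPC CNI 기준 EKS default max-pods 계산.
# 공식: max_pods = ENI 수 * (ENI당 IP 수 - 1) + 2
#   - 각 ENI의 primary IP는 파드에 쓰지 않으므로 -1
#   - host network 파드 몫으로 +2
# 아래 한도 값은 t3.medium 기준 가정입니다.
ENIS=3
IPS_PER_ENI=6
MAX_PODS=$(( ENIS * (IPS_PER_ENI - 1) + 2 ))
echo "$MAX_PODS"   # 17
```

뒤에서 확인하는 노드 Capacity의 `pods: 17`과 일치하는 값입니다.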
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Public IP도 하나 할당되어 있습니다. 노드에 할당된 파드에 진입해보니 해당 Public IP로 통신을 하는 것을 알 수 있습니다.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# kubectl get po -owide
NAME                        READY   STATUS              RESTARTS   AGE   IP               NODE                                                NOMINATED NODE   READINESS GATES
nettools-65789c8677-5l444   0/1     ContainerCreating   0          3s    &amp;lt;none&amp;gt;           ip-192-168-32-233.ap-northeast-2.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
nettools-65789c8677-j9dzp   1/1     Running             0          29s   192.168.86.177   ip-192-168-67-87.ap-northeast-2.compute.internal    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
nsenter-tpremx              1/1     Running             0          72m   192.168.32.233   ip-192-168-32-233.ap-northeast-2.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
# kubectl exec -it nettools-65789c8677-5l444 -- bash
[root@nettools-65789c8677-5l444 /]# curl ifconfig.me
52.79.138.69&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;아마 노드들이 Public Subnet을 사용하기 때문에 이런 구성이 된 것일 수도 있고, 혹은 컨셉이 다른 것일 수도 있습니다. AKS에서는 Instance Level Public IP(&lt;a href=&quot;https://learn.microsoft.com/en-us/azure/aks/use-node-public-ips&quot;&gt;https://learn.microsoft.com/en-us/azure/aks/use-node-public-ips&lt;/a&gt;)를 별도로 지정하지 않는 이상 노드들은 Private IP로만 구성됩니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;노드(파드)의 외부 통신의 방식은 클러스터의 Outbound-type을 통해서 지정되며 상세한 내용은 아래의 문서를 참고하실 수 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://learn.microsoft.com/en-us/azure/aks/egress-outboundtype&quot;&gt;https://learn.microsoft.com/en-us/azure/aks/egress-outboundtype&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;해당 인스턴스의 네트워크 인터페이스를 살펴보면 다수의 secondary IP가 할당된 것을 알 수 있습니다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;image-20250206234908371.png&quot; data-origin-width=&quot;1060&quot; data-origin-height=&quot;365&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/blJ77c/btsMagB0mwn/nYCRDc8tpz9LMifE5wGwh1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/blJ77c/btsMagB0mwn/nYCRDc8tpz9LMifE5wGwh1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/blJ77c/btsMagB0mwn/nYCRDc8tpz9LMifE5wGwh1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FblJ77c%2FbtsMagB0mwn%2FnYCRDc8tpz9LMifE5wGwh1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1060&quot; height=&quot;365&quot; data-filename=&quot;image-20250206234908371.png&quot; data-origin-width=&quot;1060&quot; data-origin-height=&quot;365&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;AWS의 VPC CNI는 파드에 VPC와 동일한 IP 대역을 할당합니다. 해당 노드에 배포된 파드들을 살펴보면, 위 secondary IP들은 host network를 사용하지 않는 파드들이 사용하는 IP로 보입니다.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# kubectl get po -A -owide |grep ip-192-168-32-233
default       nettools-65789c8677-5l444         1/1     Running   0          8m52s   192.168.47.31    ip-192-168-32-233.ap-northeast-2.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
default       nsenter-tpremx                    1/1     Running   0          81m     192.168.32.233   ip-192-168-32-233.ap-northeast-2.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   aws-node-fgrrx                    2/2     Running   0          106m    192.168.32.233   ip-192-168-32-233.ap-northeast-2.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   coredns-9b5bc9468-dvv5c           1/1     Running   0          110m    192.168.49.120   ip-192-168-32-233.ap-northeast-2.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   kube-proxy-xpnsb                  1/1     Running   0          106m    192.168.32.233   ip-192-168-32-233.ap-northeast-2.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   metrics-server-86bbfd75bb-wmk4d   1/1     Running   0          110m    192.168.42.2     ip-192-168-32-233.ap-northeast-2.compute.internal   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;마지막으로 노드의 네트워크 인터페이스와 EKS Owned ENI를 비교해 보면, EKS Owned ENI의 Instance owner를 확인해볼 필요가 있습니다. 이것은 ENI에 연결된 인스턴스의 소유자 정보인데, 확인해보면 ENI 소유자와 다른 것을 알 수 있습니다. 즉 해당 인스턴스는 다른 소유자(계정)에서 생성했다는 것을 알 수 있습니다.&amp;nbsp;&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;image-20250206235645547.png&quot; data-origin-width=&quot;1560&quot; data-origin-height=&quot;612&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cJn5GG/btsL8GvjsB6/XWox4KFeNzRglYkrEF4bL1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cJn5GG/btsL8GvjsB6/XWox4KFeNzRglYkrEF4bL1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cJn5GG/btsL8GvjsB6/XWox4KFeNzRglYkrEF4bL1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcJn5GG%2FbtsL8GvjsB6%2FXWox4KFeNzRglYkrEF4bL1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1560&quot; height=&quot;612&quot; data-filename=&quot;image-20250206235645547.png&quot; data-origin-width=&quot;1560&quot; data-origin-height=&quot;612&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이를 Cross-Account ENI 라고 하며, 아래와 같은 다른 형태로 구성됩니다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;882&quot; data-origin-height=&quot;608&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/p6Zd5/btsMcsoRBpf/vUec3Za2xUwix1iGjSXwW1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/p6Zd5/btsMcsoRBpf/vUec3Za2xUwix1iGjSXwW1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/p6Zd5/btsMcsoRBpf/vUec3Za2xUwix1iGjSXwW1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fp6Zd5%2FbtsMcsoRBpf%2FvUec3Za2xUwix1iGjSXwW1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;882&quot; height=&quot;608&quot; data-origin-width=&quot;882&quot; data-origin-height=&quot;608&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;출처: &lt;a href=&quot;https://www.youtube.com/watch?app=desktop&amp;amp;v=zGs13xbRBMg&quot; target=&quot;_blank&quot; rel=&quot;noopener noreferrer&quot;&gt;https://www.youtube.com/watch?app=desktop&amp;amp;v=zGs13xbRBMg&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
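콘솔 대신 AWS CLI로도 EKS Owned ENI를 조회해볼 수 있습니다. 아래는 EKS Owned ENI의 description이 "Amazon EKS 클러스터명" 형태라는 점을 이용한 가정 기반 예시이며, 실행에는 AWS 자격 증명이 필요하므로 여기서는 명령 문자열만 구성해 출력합니다.

```shell
# 가정: EKS Owned ENI는 description이 "Amazon EKS ..." 형태라는 전제의 예시입니다.
# Attachment.InstanceOwnerId가 ENI Owner와 다르게(Cross-Account) 표시되는지 확인하는 용도입니다.
CMD="aws ec2 describe-network-interfaces --filters Name=description,Values='Amazon EKS *' --query 'NetworkInterfaces[].{Id:NetworkInterfaceId,InstanceOwner:Attachment.InstanceOwnerId}' --output table"
echo "$CMD"
```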
&lt;p data-ke-size=&quot;size16&quot;&gt;노드에서 EKS Owned ENI를 확인할 수 있는 방법이 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;kubectl exec나 kubectl logs와 같은 명령은 API 서버를 거쳐 노드에서 실행 중인 파드에 접근하거나 로그를 가져옵니다. 그렇기 때문에 이 명령을 수행하는 순간에는 API 서버에서 노드로의 접근이 이뤄집니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;아래와 같이 사전에 &lt;code&gt;ss -tnp&lt;/code&gt;를 수행하고, 세션1에서 &lt;code&gt;kubectl exec&lt;/code&gt;를 수행한 뒤, 다시 &lt;code&gt;ss -tnp&lt;/code&gt;를 수행해보면 EKS Owned ENI를 통한 연결이 확인됩니다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;image-20250206235943502.png&quot; data-origin-width=&quot;2232&quot; data-origin-height=&quot;874&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/eJUU5I/btsL9O6Z64g/o4nH4QjW1jgmsVUf8cRKU1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/eJUU5I/btsL9O6Z64g/o4nH4QjW1jgmsVUf8cRKU1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/eJUU5I/btsL9O6Z64g/o4nH4QjW1jgmsVUf8cRKU1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FeJUU5I%2FbtsL9O6Z64g%2Fo4nH4QjW1jgmsVUf8cRKU1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;2232&quot; height=&quot;874&quot; data-filename=&quot;image-20250206235943502.png&quot; data-origin-width=&quot;2232&quot; data-origin-height=&quot;874&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
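위 확인 과정을 명령 관점으로 정리하면, API 서버는 kubelet이 수신 대기하는 10250 포트로 접속하므로 해당 포트 기준으로 세션을 필터링해볼 수 있습니다. kubectl exec 전후로 아래 명령을 수행해 비교하는 방식입니다.

```shell
# kubelet(10250 포트)으로 수립된 TCP 세션만 필터링해 확인합니다.
# kubectl exec 전후로 비교하면 EKS Owned ENI 쪽 IP가 피어 주소로 나타납니다.
ss -tn 'sport = :10250'
```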
&lt;p data-ke-size=&quot;size16&quot;&gt;참고로 AKS에서는 API 서버와 노드의 연결을 &lt;code&gt;konnectivity&lt;/code&gt;를 통해 제공한다는 차이가 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;4. 주요 컴포넌트 비교&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;앞서 생성한 EKS에서 배포된 파드를 바탕으로 기본 컴포넌트를 살펴보겠습니다.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# kubectl get po -A
NAMESPACE     NAME                              READY   STATUS        RESTARTS   AGE
kube-system   aws-node-6mm9l                    2/2     Running       0          119m
kube-system   aws-node-fgrrx                    2/2     Running       0          119m
kube-system   coredns-9b5bc9468-dvv5c           1/1     Running       0          123m
kube-system   coredns-9b5bc9468-pzm47           1/1     Running       0          123m
kube-system   kube-proxy-4ml68                  1/1     Running       0          119m
kube-system   kube-proxy-xpnsb                  1/1     Running       0          119m
kube-system   metrics-server-86bbfd75bb-j888c   1/1     Running       0          123m
kube-system   metrics-server-86bbfd75bb-wmk4d   1/1     Running       0          123m&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;code&gt;metrics-server&lt;/code&gt;, &lt;code&gt;coredns&lt;/code&gt;, &lt;code&gt;kube-proxy&lt;/code&gt; 와 같은 컴포넌트가 있고, &lt;code&gt;aws-node&lt;/code&gt;가 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;code&gt;aws-node&lt;/code&gt;는 이미지를 살펴보면 AWS VPC CNI와 Network Policy Agent에 해당하는 컨테이너로 이루어져 있다는 걸 알 수 있습니다.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;      image: 602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/amazon-k8s-cni-init:v1.19.0-eksbuild.1
      image: 602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/amazon/aws-network-policy-agent:v1.1.5-eksbuild.1&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;상당히 간결한 느낌입니다. 한편으로는 필요한 기능이 있을 때 AddOn을 그만큼 추가로 설치해야 한다는 의미이기도 합니다.&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;4GiB 인스턴스에서 대략 550MiB 정도를 kube-reserved로 사용하는 것으로 보입니다.&amp;nbsp;실행 중인 파드가 적고, limit이 설정되지 않은 파드도 있어서 Allocated resources에 여유가 있습니다.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;Capacity:
  cpu:                2
  ephemeral-storage:  31444972Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3943300Ki
  pods:               17
Allocatable:
  cpu:                1930m
  ephemeral-storage:  27905944324
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3388292Ki
  pods:               17
...
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                350m (18%)  0 (0%)
  memory             270Mi (8%)  570Mi (17%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)&lt;/code&gt;&lt;/pre&gt;
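위 Capacity/Allocatable의 메모리 값 차이를 직접 환산해 보면 예약량을 확인할 수 있습니다. 이 차이에는 kube-reserved 외에 eviction threshold도 포함된다는 점은 감안이 필요합니다.

```shell
# kubectl describe node 출력의 메모리 값(Ki)으로 capacity - allocatable 차이를 MiB로 환산합니다.
CAPACITY_KI=3943300
ALLOCATABLE_KI=3388292
RESERVED_MIB=$(( (CAPACITY_KI - ALLOCATABLE_KI) / 1024 ))
echo "${RESERVED_MIB}MiB"   # 542MiB
```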
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;비교를 위해 AKS를 기본 생성하고, 배포된 컴포넌트를 살펴보겠습니다.&lt;/p&gt;
&lt;pre class=&quot;crmsh&quot;&gt;&lt;code&gt;az group create --name aks-rg --location eastus
az aks create \
    --resource-group aks-rg \
    --name myaks1 \
    --node-count 2 &lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;상대적으로 많은 컴포넌트가 설치되어 있는 것을 알 수 있습니다.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;$ az aks get-credentials -g aks-rg -n myaks1
Merged &quot;myaks1&quot; as current context in /home/xx/.kube/config
$ kubectl get po -A
NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
kube-system   azure-cns-65cr5                       1/1     Running   0          4m29s
kube-system   azure-cns-zc944                       1/1     Running   0          4m37s
kube-system   azure-ip-masq-agent-7ccvx             1/1     Running   0          4m37s
kube-system   azure-ip-masq-agent-fjkz5             1/1     Running   0          4m29s
kube-system   cloud-node-manager-jg2p7              1/1     Running   0          4m37s
kube-system   cloud-node-manager-rjh58              1/1     Running   0          4m29s
kube-system   coredns-54b69f46b8-fsgqh              1/1     Running   0          4m15s
kube-system   coredns-54b69f46b8-qsqnr              1/1     Running   0          5m13s
kube-system   coredns-autoscaler-bfcb7c74c-dk26h    1/1     Running   0          5m13s
kube-system   csi-azuredisk-node-hcmcn              3/3     Running   0          4m37s
kube-system   csi-azuredisk-node-hzp7x              3/3     Running   0          4m29s
kube-system   csi-azurefile-node-clzjj              3/3     Running   0          4m37s
kube-system   csi-azurefile-node-gt9mj              3/3     Running   0          4m29s
kube-system   konnectivity-agent-546bc6d8dc-5d9xs   1/1     Running   0          15s
kube-system   konnectivity-agent-546bc6d8dc-j8pzd   1/1     Running   0          12s
kube-system   kube-proxy-vs88d                      1/1     Running   0          4m29s
kube-system   kube-proxy-wtqth                      1/1     Running   0          4m37s
kube-system   metrics-server-7d95c7bd8d-rklw7       2/2     Running   0          4m7s
kube-system   metrics-server-7d95c7bd8d-xsgn4       2/2     Running   0          4m7s&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;상대적으로 많은 파드들이 실행되어 Allocated resources를 많이 사용 중입니다. 이 때문에 AKS에서는 시스템 컴포넌트들과 사용자 워크로드를 분리하도록, 즉 시스템 노드풀과 사용자 노드풀을 분리하는 방식을 권장하고 있습니다. 다만 이 경우에도 daemonset 유형의 파드(ex. CSI driver)들은 전체 노드에 동일하게 생성됩니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;Capacity:
  cpu:                2
  ephemeral-storage:  129886128Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             7097596Ki
  pods:               250
Allocatable:
  cpu:                1900m
  ephemeral-storage:  119703055367
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             5160188Ki
  pods:               250
 ...
  Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                682m (35%)   2042m (107%)
  memory             766Mi (15%)  4252Mi (84%)
  ephemeral-storage  0 (0%)       0 (0%)
  hugepages-1Gi      0 (0%)       0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;7GiB 노드에서 1.9GiB 정도가 kube-reserved로 예약되어 있습니다. AKS에서는 1.29 이상에서 kube-reserved 메커니즘에 상당한 개선이 있었습니다. 다만 default로 생성된 노드의 max-pods가 250으로 설정되어 있어, 이 부분이 kube-reserved에 반영된 영향일 수 있습니다.&lt;/p&gt;
&lt;p style=&quot;color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;br /&gt;상세한 내용은 아래 문서를 참고할 수 있습니다.&lt;/p&gt;
&lt;p style=&quot;color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://learn.microsoft.com/en-us/azure/aks/node-resource-reservations#memory-reservations&quot;&gt;https://learn.microsoft.com/en-us/azure/aks/node-resource-reservations#memory-reservations&lt;/a&gt;&lt;/p&gt;
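위 문서의 공식(1.29 이상: max-pods 기반 값과 전체 메모리의 25% 중 작은 값)을 단순화해 계산해본 예시입니다. eviction threshold를 100MiB로 더한 것은 가정이며, 실제 관측치(약 1.9GiB)와의 잔여 차이는 다른 예약 항목 때문일 수 있습니다.

```shell
# AKS 문서(1.29 이상) 기준 memory reservation을 정수 연산으로 근사 계산합니다.
# 공식(단순화 가정): min(20MB * max_pods + 50MB, 전체 메모리의 25%) + eviction threshold 100MiB
TOTAL_MIB=$(( 7097596 / 1024 ))      # 위 노드의 Capacity.memory (Ki -> MiB)
MAX_PODS=250
POD_BASED=$(( 20 * MAX_PODS + 50 ))  # max-pods 기반 값
QUARTER=$(( TOTAL_MIB / 4 ))         # 전체 메모리의 25%
KUBE_RESERVED=$(( POD_BASED < QUARTER ? POD_BASED : QUARTER ))
TOTAL_RESERVED=$(( KUBE_RESERVED + 100 ))
echo "${TOTAL_RESERVED}MiB"          # 약 1832MiB
```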
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;특이한 점은 AWS의 AddOn들은 CPU limit을 지정하지 않고 있다는 것입니다. (자신감일까요..?)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;5. 노드 구성 비교&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;노드 관점에서 정보를 비교해 보겠습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;EKS의 노드입니다. Amazon Linux 2를 사용하고 있고 containerd를 사용한다는 것을 알 수 있습니다. 인스턴스에 Public IP가 구성되어 있고, 아래 정보에서 EXTERNAL-IP가 표시되어 있습니다.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;# kubectl get no -owide
NAME                                                STATUS   ROLES    AGE    VERSION               INTERNAL-IP      EXTERNAL-IP     OS-IMAGE         KERNEL-VERSION                  CONTAINER-RUNTIME
ip-192-168-32-233.ap-northeast-2.compute.internal   Ready    &amp;lt;none&amp;gt;   137m   v1.31.4-eks-aeac579   192.168.32.233   52.79.138.69    Amazon Linux 2   5.10.233-223.887.amzn2.x86_64   containerd://1.7.25
ip-192-168-67-87.ap-northeast-2.compute.internal    Ready    &amp;lt;none&amp;gt;   137m   v1.31.4-eks-aeac579   192.168.67.87    15.165.15.152   Amazon Linux 2   5.10.233-223.887.amzn2.x86_64   containerd://1.7.25&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;AKS의 노드입니다. Ubuntu 22.04.5를 사용하고 containerd를 사용합니다. 노드들은 default에서 Private IP만 구성되어 EXTERNAL-IP가 &amp;lt;none&amp;gt;으로 표시됩니다.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;$ kubectl get no -owide
NAME                                STATUS   ROLES    AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
aks-nodepool1-21558960-vmss000000   Ready    &amp;lt;none&amp;gt;   4m19s   v1.30.7   10.224.0.4    &amp;lt;none&amp;gt;        Ubuntu 22.04.5 LTS   5.15.0-1079-azure   containerd://1.7.25-1
aks-nodepool1-21558960-vmss000001   Ready    &amp;lt;none&amp;gt;   4m27s   v1.30.7   10.224.0.5    &amp;lt;none&amp;gt;        Ubuntu 22.04.5 LTS   5.15.0-1079-azure   containerd://1.7.25-1&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;EKS 노드에 진입하여 일반적인 구성을 살펴보겠습니다.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;[root@ip-192-168-32-233 /]# hostname
ip-192-168-32-233.ap-northeast-2.compute.internal
[root@ip-192-168-32-233 /]# ip -br -c a
lo               UNKNOWN        127.0.0.1/8 ::1/128
eth0             UP             192.168.32.233/19 fe80::8f9:d9ff:febf:d407/64
enia4cb1502028@if3 UP             fe80::e0bd:c3ff:feec:294c/64
enifc2da76e4e6@if3 UP             fe80::88e6:29ff:fe17:10f8/64
eth1             UP             192.168.34.104/19 fe80::864:31ff:fea4:c503/64
[root@ip-192-168-32-233 /]# stat -fc %T /sys/fs/cgroup/
tmpfs&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;AKS는 1.25 버전부터 cgroupv2를 default로 사용하고 있는 점에 차이가 있습니다.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;root@aks-nodepool1-21558960-vmss000000:/# hostname
aks-nodepool1-21558960-vmss000000
root@aks-nodepool1-21558960-vmss000000:/# ip -br -c a
lo               UNKNOWN        127.0.0.1/8 ::1/128 
eth0             UP             10.224.0.4/16 metric 100 fe80::6245:bdff:fea7:ab7a/64 
enP40656s1       UP             
azvae85acd5c66@if4 UP             fe80::a8aa:aaff:feaa:aaaa/64 
azve6bd6a1fcff@if6 UP             fe80::a8aa:aaff:feaa:aaaa/64 
azv61ea7670b68@if8 UP             fe80::a8aa:aaff:feaa:aaaa/64 
root@aks-nodepool1-21558960-vmss000000:/# stat -fc %T /sys/fs/cgroup/
cgroup2fs&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;cgroupv2로 변경하는 경우, 애플리케이션 프레임워크가 cgroupv2를 인지하지 못하면 메모리 리포팅 버그나 OOM과 같은 이슈가 발생할 수 있습니다. 이러한 리스크 때문에 EKS가 아직 cgroupv2로 전환하지 않은 것인지는 분명하지 않습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;blockquote style=&quot;background-color: #fcfcfc; color: #666666; text-align: left;&quot; data-ke-style=&quot;style3&quot;&gt;(업데이트_2025-2-11)&lt;br /&gt;현 시점 EKS의 default 노드 타입은 Amazon Linux 2입니다. Amazon Linux 2는 2026-06-30에 EOL될 예정으로, 이로 인해 기능 개선이 없는 것이 아닐까 생각됩니다. Amazon Linux 2023의 경우 cgroupv2로 설정되어 있습니다.&lt;/blockquote&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For now it is hard to find major node-level differences, so I will wrap up this exercise after checking only the basic information.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;6. Cleaning Up Resources&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;After finishing the exercise, delete the cluster created with eksctl.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;eksctl delete cluster --name $CLUSTER_NAME&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 style=&quot;color: #000000; text-align: start;&quot; data-ke-size=&quot;size26&quot;&gt;Wrap-up&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;I started writing this post as an assignment while participating in the 3rd cohort of AEWS (AWS EKS Workshop Study).&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This week I set up my personal environment, installed EKS, and examined EKS's basic architecture and configuration. Along the way, I compared it with AKS from various angles.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;I had never had occasion to use AWS before, so this process helped me understand how the AWS console exposes information and how the CLI is used.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Since we plan to study a different EKS topic each week, I will keep comparing how Azure's configuration and perspective differ for each topic.&lt;/p&gt;
      <category>EKS</category>
      <category>AKS</category>
      <category>WSL</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/29</guid>
      <comments>https://a-person.tistory.com/29#entry29comment</comments>
      <pubDate>Fri, 7 Feb 2025 01:22:20 +0900</pubDate>
    </item>
    <item>
      <title>Using VS Code as a markdown editor</title>
      <link>https://a-person.tistory.com/28</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;보통 블로그를 포스팅 할 때 markdown editor를 사용해서 글을 작성하고, 이후에 블로그의 글쓰기에 붙여 넣는 방식을 사용했습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;markdown editor를 사용하면 글을 정리하기도 편하고 이미지도 바로 복/붙이 되어서 편한 부분이 있습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;예전에는 typora라는 markdown editor를 사용했는데 이후에 유료로 전환을 한 것 같습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;확인해보니 VS Code도 markdown editor를 사용 가능한 걸로 확인해서 간단히 소개합니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;먼저 VS Code를 켜고 폴더를 열어, .md 확장자로 내용을 작성합니다. 다행이 이미지 복/붙을 하면 해당 폴더에 저장하는 방식으로 지원이 됩니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Then, to review what you have written, click 'Open Preview to the Side' at the top right.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1825&quot; data-origin-height=&quot;627&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bhNdtj/btsL9RCgjfT/T1bxcN2R5cTCkQw6Lhmzt0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bhNdtj/btsL9RCgjfT/T1bxcN2R5cTCkQw6Lhmzt0/img.png&quot; data-alt=&quot;VS Code for markdown editor #1&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bhNdtj/btsL9RCgjfT/T1bxcN2R5cTCkQw6Lhmzt0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbhNdtj%2FbtsL9RCgjfT%2FT1bxcN2R5cTCkQw6Lhmzt0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1825&quot; height=&quot;627&quot; data-origin-width=&quot;1825&quot; data-origin-height=&quot;627&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;VS Code for markdown editor #1&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;You can preview the document as shown below, though code does not seem to be rendered quite correctly.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1826&quot; data-origin-height=&quot;973&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/ybvbG/btsMafW0qs3/Rm38WulragKRotHwDORKk0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/ybvbG/btsMafW0qs3/Rm38WulragKRotHwDORKk0/img.png&quot; data-alt=&quot;VS Code for markdown editor #2&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/ybvbG/btsMafW0qs3/Rm38WulragKRotHwDORKk0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FybvbG%2FbtsMafW0qs3%2FRm38WulragKRotHwDORKk0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1826&quot; height=&quot;973&quot; data-origin-width=&quot;1826&quot; data-origin-height=&quot;973&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;VS Code for markdown editor #2&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Typora's approach is admittedly more convenient, but VS Code has the advantage of being free.&lt;/p&gt;</description>
      <category>Misc</category>
      <category>markdown editor</category>
      <category>vscode</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/28</guid>
      <comments>https://a-person.tistory.com/28#entry28comment</comments>
      <pubDate>Thu, 6 Feb 2025 16:37:51 +0900</pubDate>
    </item>
    <item>
      <title>Go Study: Week 7 (Chapter 31)</title>
      <link>https://a-person.tistory.com/27</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;이 글은 골든래빗 &amp;lsquo;Tucker의 Go 언어 프로그래밍의 31장 써머리입니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이 책의 마지막 스토디 노트입니다.&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Todo 리스트 웹 서비스 만들기&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Todo 리스트 웹 서비스는 프론트 엔드 코드와 백엔드 코드로 나눠진다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;프론트 엔드는 웹서비스의 화면을 담당하고, 백엔드는 데이터와 로직을 담당한다.&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Implementation Order&lt;/h3&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;First, define the service according to a RESTful API.&lt;/li&gt;
&lt;li&gt;Create the Todo struct.&lt;/li&gt;
&lt;li&gt;Create each handler to match the RESTful API.&lt;/li&gt;
&lt;li&gt;Create the HTML document that composes the screen.&lt;/li&gt;
&lt;li&gt;Create the JavaScript code for the front-end behavior.&lt;/li&gt;
&lt;li&gt;Verify the behavior in a web browser.&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Before building the web server, install two more packages in addition to gorilla/mux.&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;urfave/negroni package: provides commonly used web handlers, along with logging, panic recovery, and file-server features.&lt;/li&gt;
&lt;li&gt;unrolled/render package: makes it easy to produce web server responses in formats such as HTML, JSON, and TEXT.&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;$ go mod init goprojects/todo31
$ go get github.com/gorilla/mux
$ go get github.com/urfave/negroni
$ go get github.com/unrolled/render
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now write the back end's RESTful API as follows.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;// ch31/ex31.1/ex31.1.go
package main

import (
	&quot;encoding/json&quot;
	&quot;log&quot;
	&quot;net/http&quot;
	&quot;sort&quot;
	&quot;strconv&quot;

	&quot;github.com/gorilla/mux&quot;
	&quot;github.com/unrolled/render&quot;
	&quot;github.com/urfave/negroni&quot;
)

var rd *render.Render

type Todo struct { // Todo struct holding one to-do item
	ID        int    `json:&quot;id,omitempty&quot;` // JSON conversion option -&amp;gt; the field is serialized as id rather than ID
	Name      string `json:&quot;name&quot;`
	Completed bool   `json:&quot;completed,omitempty&quot;`
}

var todoMap map[int]Todo
var lastID int = 0

func MakeWebHandler() http.Handler { // create the web server handler
	rd = render.New()
	todoMap = make(map[int]Todo)
	mux := mux.NewRouter()
	mux.Handle(&quot;/&quot;, http.FileServer(http.Dir(&quot;public&quot;))) // file server serving files under the public folder for requests to &quot;/&quot;
	// register handlers for GET, POST, DELETE, and PUT on &quot;/todos&quot;
	mux.HandleFunc(&quot;/todos&quot;, GetTodoListHandler).Methods(&quot;GET&quot;)
	mux.HandleFunc(&quot;/todos&quot;, PostTodoHandler).Methods(&quot;POST&quot;)
	mux.HandleFunc(&quot;/todos/{id:[0-9]+}&quot;, RemoveTodoHandler).Methods(&quot;DELETE&quot;)
	mux.HandleFunc(&quot;/todos/{id:[0-9]+}&quot;, UpdateTodoHandler).Methods(&quot;PUT&quot;)
	return mux
}

type Todos []Todo // interface for sorting by ID

func (t Todos) Len() int {
	return len(t)
}

func (t Todos) Swap(i, j int) {
	t[i], t[j] = t[j], t[i]
}

func (t Todos) Less(i, j int) bool {
	return t[i].ID &amp;lt; t[j].ID // ascending order by ID
}

func GetTodoListHandler(w http.ResponseWriter, r *http.Request) {
	list := make(Todos, 0)
	for _, todo := range todoMap {
		list = append(list, todo)
	}
	sort.Sort(list)
	rd.JSON(w, http.StatusOK, list) // return the full list sorted by ID
}

func PostTodoHandler(w http.ResponseWriter, r *http.Request) {
	var todo Todo
	err := json.NewDecoder(r.Body).Decode(&amp;amp;todo)
	if err != nil {
		log.Println(err) // log.Fatal would terminate the process before the 400 response is written
		w.WriteHeader(http.StatusBadRequest)
		return
	}
	lastID++ // register with a new ID and return the created Todo
	todo.ID = lastID
	todoMap[lastID] = todo
	rd.JSON(w, http.StatusCreated, todo)
}

type Success struct {
	Success bool `json:&quot;success&quot;`
}

func RemoveTodoHandler(w http.ResponseWriter, r *http.Request) {
	vars := mux.Vars(r) // delete the to-do with the given ID
	id, _ := strconv.Atoi(vars[&quot;id&quot;])
	if _, ok := todoMap[id]; ok {
		delete(todoMap, id)
		rd.JSON(w, http.StatusOK, Success{true})
	} else {
		rd.JSON(w, http.StatusNotFound, Success{false})
	}
}

func UpdateTodoHandler(w http.ResponseWriter, r *http.Request) {
	var newTodo Todo // update the to-do with the given ID
	err := json.NewDecoder(r.Body).Decode(&amp;amp;newTodo)
	if err != nil {
		log.Println(err) // as above, avoid log.Fatal so the handler can return 400
		w.WriteHeader(http.StatusBadRequest)
		return
	}

	vars := mux.Vars(r)
	id, _ := strconv.Atoi(vars[&quot;id&quot;])
	if todo, ok := todoMap[id]; ok {
		todo.Name = newTodo.Name
		todo.Completed = newTodo.Completed
		todoMap[id] = todo // todo is a copy; write it back so the update persists
		rd.JSON(w, http.StatusOK, Success{true})
	} else {
		rd.JSON(w, http.StatusBadRequest, Success{false})
	}
}

func main() {
	m := MakeWebHandler()  // base handler (the mux with handlers registered)
	n := negroni.Classic() // negroni's default middleware handler
	n.UseHandler(m)        // wrap the handler from MakeWebHandler with negroni
	// on each HTTP request, negroni's extra handlers run first, then MakeWebHandler()'s handlers run

	log.Println(&quot;Started App&quot;)
	err := http.ListenAndServe(&quot;:3000&quot;, n) // the negroni handler serves requests
	if err != nil {
		panic(err)
	}
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The front end is not the focus, so copy the files from GitHub (&lt;a href=&quot;https://github.com/tuckersGo/musthaveGo/tree/master/ch31/ex31.1/public&quot;&gt;https://github.com/tuckersGo/musthaveGo/tree/master/ch31/ex31.1/public&lt;/a&gt;) into a /public folder at the web server's location.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Running it looks like the following.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1155&quot; data-origin-height=&quot;651&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bpROhL/btsz60KvbZn/A2qwkhKNymc8GApX5ikIbK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bpROhL/btsz60KvbZn/A2qwkhKNymc8GApX5ikIbK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bpROhL/btsz60KvbZn/A2qwkhKNymc8GApX5ikIbK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbpROhL%2Fbtsz60KvbZn%2FA2qwkhKNymc8GApX5ikIbK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1155&quot; height=&quot;651&quot; data-origin-width=&quot;1155&quot; data-origin-height=&quot;651&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Thanks to negroni, the logs look quite presentable as well.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;$ go run .\\main.go
2023/11/09 22:57:37 Started App
[negroni] 2023-11-09T22:57:51+09:00 | 200 |      213.3309ms | localhost:3000 | GET /
[negroni] 2023-11-09T22:57:51+09:00 | 200 |      22.9704ms | localhost:3000 | GET /todo.css
[negroni] 2023-11-09T22:57:51+09:00 | 200 |      39.5097ms | localhost:3000 | GET /todo.js
[negroni] 2023-11-09T22:57:51+09:00 | 200 |      976.8&amp;micro;s | localhost:3000 | GET /todos
[negroni] 2023-11-09T22:57:51+09:00 | 404 |      257.9&amp;micro;s | localhost:3000 | GET /favicon.ico
[negroni] 2023-11-09T22:58:04+09:00 | 201 |      0s | localhost:3000 | POST /todos
[negroni] 2023-11-09T22:58:16+09:00 | 201 |      0s | localhost:3000 | POST /todos
[negroni] 2023-11-09T22:58:22+09:00 | 201 |      0s | localhost:3000 | POST /todos
&lt;/code&gt;&lt;/pre&gt;</description>
      <category>Book Study/Tucker's Go Programming</category>
      <category>go</category>
      <category>묘공단</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/27</guid>
      <comments>https://a-person.tistory.com/27#entry27comment</comments>
      <pubDate>Thu, 9 Nov 2023 23:02:12 +0900</pubDate>
    </item>
    <item>
      <title>Testing REST in VS Code</title>
      <link>https://a-person.tistory.com/26</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;Postman 으로 REST 테스트를 수행할 수 있지만 VS Code에서도 Rest Client를 통해 REST 테스트를 할 수 있다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;VS Code의 Extention에서 Rest Client 를 설치한다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;510&quot; data-origin-height=&quot;169&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bOObL1/btszJJYoxx3/kFLidaOkr28xE0R8BFgd0K/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bOObL1/btszJJYoxx3/kFLidaOkr28xE0R8BFgd0K/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bOObL1/btszJJYoxx3/kFLidaOkr28xE0R8BFgd0K/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbOObL1%2FbtszJJYoxx3%2FkFLidaOkr28xE0R8BFgd0K%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;510&quot; height=&quot;169&quot; data-origin-width=&quot;510&quot; data-origin-height=&quot;169&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Create an http.test file in the same location as the code.&lt;/p&gt;
&lt;pre id=&quot;code_1699192426582&quot; class=&quot;bash&quot; data-ke-language=&quot;bash&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;GET http://localhost:3000/students

###
POST http://localhost:3000/students
Content-Type: application/json

{
    &quot;Id&quot;: 0,
    &quot;Name&quot;:&quot;ccc&quot;,
    &quot;Age&quot;:15,
    &quot;Score&quot;:75
}&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Click 'Send Request' in VS Code to send the request and check the response.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;1169&quot; data-origin-height=&quot;375&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cBSSpb/btszH7FcRz5/oUD3ktqUhghKFHcRcdafEk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cBSSpb/btszH7FcRz5/oUD3ktqUhghKFHcRcdafEk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cBSSpb/btszH7FcRz5/oUD3ktqUhghKFHcRcdafEk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcBSSpb%2FbtszH7FcRz5%2FoUD3ktqUhghKFHcRcdafEk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1169&quot; height=&quot;375&quot; data-origin-width=&quot;1169&quot; data-origin-height=&quot;375&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;</description>
      <category>Misc</category>
      <category>Rest</category>
      <category>vscode</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/26</guid>
      <comments>https://a-person.tistory.com/26#entry26comment</comments>
      <pubDate>Sun, 5 Nov 2023 22:58:05 +0900</pubDate>
    </item>
    <item>
      <title>Go Study: Week 6 (Chapters 27-30)</title>
      <link>https://a-person.tistory.com/25</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;이 글은 골든래빗 &amp;lsquo;Tucker의 Go 언어 프로그래밍의 27~30장 써머리입니다.&lt;/p&gt;
&lt;h1&gt;27장 객체지향 설계 원칙 SOLID&lt;/h1&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;객체지향 설계의 5가지 원칙인 SOLID를 살펴보고 좋은 설계가 무엇인지 살펴본다.&lt;/p&gt;
&lt;blockquote data-ke-style=&quot;style3&quot;&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;좋은 설계: 상호 결합도(coupling)가 낮고 응집도(cohesion)가 높은 설계를 말한다. 반대로 상호 결합도가 높다는 것은 모듈이 서로 강하게 결합되어 있어서 떼어 낼 수 없다는 의미이다. 한편 응집도가 낮다는 것은 하나의 모듈이 스스로 자립하지 못한다는 의미로, 다른 모듈에 의존적인 관계를 가지는 경우이다.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;단일 책임의 원칙(single responsibility principle, SRP)&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;모든 객체는 하나의 책임만 져야 한다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;rarr; 코드의 재사용성을 높여준다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;아래는 나쁜 사례로, 회계 보고서라는 책임과 보고서 전송이라는 책임까지 지고 있다.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;type FinanceReport struct { // finance report
	report string
}

func (r *FinanceReport) SendReport(email string) { // send the report
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Because the object carries two responsibilities, a later marketing-report object cannot reuse FinanceReport's SendReport().&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To improve this, have FinanceReport implement a Report interface and have ReportSender use the Report interface.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;type Report interface { // Report interface containing the Report() method
	Report() string
}

type FinanceReport struct { // FinanceReport handles the finance report
	report string
}

func (r *FinanceReport) Report() string { // implements the Report interface
	return r.report
}

type ReportSender struct { // handles sending reports
}

func (s *ReportSender) SendReport(report Report) {
	// takes a Report interface object as its argument
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Open-closed principle (OCP)&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Be open for extension but closed for modification. (When adding a feature to a program, minimize changes to existing code.)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;rarr; Reduced coupling makes new features easy to add without changing existing implementations.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The following is a bad example: adding a delivery method means adding a new case (and thus modifying the existing SendReport() implementation).&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;func SendReport(r *Report, method SendType, receiver string) {
	switch method {
	case Email:
		// send by email
	case Fax:
		// send by fax
	case PDF:
		// generate a PDF file
	case Printer:
		// print
	..
	}
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To improve this, create a ReportSender interface and have each delivery method implement it.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;type ReportSender interface {
	Send(r *Report)
}

type EmailSender struct {
}

func (e *EmailSender) Send(r *Report) {
	// send by email
}

type FaxSender struct {
}

func (f *FaxSender) Send(r *Report) {
	// send by fax
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Liskov substitution principle (LSP)&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let q(x T) be a property provable (workable) for objects x of type T. Then, if S is a subtype of T, q(y S) must be provable (workable) for objects y of type S. (A function that works on an argument of a supertype must also work on arguments of its subtypes.)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;rarr; This prevents unexpected behavior. (When LSP is violated, a function's behavior becomes hard to predict.)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The following violates LSP. Supposing Go had inheritance, FillScreenWidth, which expects identical behavior, would behave differently, causing an error.&lt;/p&gt;
&lt;pre class=&quot;reasonml&quot;&gt;&lt;code&gt;class Rectangle { // rectangle
	width int
	height int

	setWidth(w int) { width = w }
	setHeight(h int) { height = h }
}

class Square extends Rectangle { // square (inherits Rectangle)
	@override
	setWidth(w int) { width = w; height = w; }
	@override
	setHeight(h int) { width = h; height = h; }
}

// Stretch the image width to the screen width. -&amp;gt; If a Square comes in, the height grows too.
func FillScreenWidth(screenSize Rectangle, imageSize *Rectangle) {
	if imageSize.width &amp;lt; screenSize.width {
		imageSize.setWidth(screenSize.width)
	}
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Because Go has no inheritance, this behavior does not occur.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;However, interfaces in Go still create a relationship between a supertype (the interface) and subtypes (the implementing objects), so the principle that a function taking an interface must work for every implementation still has to hold. For that reason it is best to avoid dynamic casting such as interface type assertions (converting an interface type into a concrete object).&lt;/p&gt;
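&lt;p data-ke-size=&quot;size16&quot;&gt;As a minimal sketch of this point (Shape, Rect, and Circle are illustrative names, not examples from the book), a function that depends only on an interface behaves identically for every implementation, so no type assertion is needed:&lt;/p&gt;

```go
package main

import "fmt"

// Shape is the abstract (supertype) interface.
type Shape interface {
	Area() float64
}

type Rect struct{ W, H float64 }

func (r Rect) Area() float64 { return r.W * r.H }

type Circle struct{ R float64 }

func (c Circle) Area() float64 { return 3.14159 * c.R * c.R }

// PrintArea depends only on the Shape interface, so it works the same
// way for every implementation. Avoid assertions like s.(Rect) here;
// they tie the function to one concrete subtype and break substitutability.
func PrintArea(s Shape) {
	fmt.Printf("area = %.2f\n", s.Area())
}

func main() {
	PrintArea(Rect{W: 3, H: 4}) // area = 12.00
	PrintArea(Circle{R: 1})     // area = 3.14
}
```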
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Interface segregation principle (ISP)&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A client (a user of an interface) should not depend on methods it does not use.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;rarr; Splitting an interface severs dependencies on unneeded methods, making the interface lighter to use.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The following is a bad example: SendReport() uses only the Report() method out of the four in the Report interface. (The interface contains unnecessary methods.)&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;type Report interface {
	Report() string
	Pages() int
	Author() string
	WrittenDate() time.Time
}

func SendReport(r Report) {  // to call this function, the argument must implement all four methods
	send(r.Report())
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To fix this, split the interface appropriately.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;type Report interface {
	Report() string
}

type WrittenInfo interface {
	Pages() int
	Author() string
	WrittenDate() time.Time
}

func SendReport(r Report) { // now the argument only needs to implement the single Report() method it uses
	send(r.Report())
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Splitting the interface to sever dependencies on unneeded methods makes it lighter to use.&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Dependency inversion principle (DIP)&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Concrete objects should depend on abstract objects.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;rarr; Depending on abstract modules rather than concrete ones increases extensibility. Lower coupling also makes the code easier to port to other programs.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Below, the mail module and the alarm module are directly related. (Mail owns an Alarm.)&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;type Mail struct {
	alarm Alarm
}

type Alarm struct {
}

func (m *Mail) OnRecv() { // OnRecv() is called when mail is received.
	m.alarm.Alarm() // ring the alarm
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Change this so the dependency goes through an interface.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;package main

import &quot;fmt&quot;

type Event interface {
	Register(EventListener)
}

type EventListener interface {
	OnFire()
}

type Mail struct {
	listener EventListener
}

func (m *Mail) Register(listener EventListener) { // implements the Event interface
	m.listener = listener
}

func (m *Mail) OnRecv() { // call the registered listener's OnFire()
	m.listener.OnFire()
}

type Alarm struct {
}

func (a *Alarm) OnFire() { // implements the EventListener interface
	// ring the alarm
	fmt.Println(&quot;alarm&quot;)
}

func main() {
	var mail = &amp;amp;Mail{}
	var listener EventListener = &amp;amp;Alarm{}

	mail.Register(listener)
	mail.OnRecv() // the alarm rings.
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h1&gt;Chapter 28: Tests and Benchmarks&lt;/h1&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Test code verifies the code you have written; benchmark code measures the performance of code logic.&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;28.1 Test Code&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Go supports writing and running test code at the language level. Once written, tests can be run with the go test command.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Go's test code conventions are as follows.&lt;/p&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;Test code must live in a file whose name ends with _test.go.&lt;/li&gt;
&lt;li&gt;Test code must import the testing package via import &amp;ldquo;testing&amp;rdquo;.&lt;/li&gt;
&lt;li&gt;Tests must be grouped into functions whose names start with Test, of the form func TestXxxx(t *testing.T).&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For example, let's write test code for the main.go below.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;package main

import &quot;fmt&quot;

func square(x int) int {
	return 81
}

func main() {
	fmt.Printf(&quot;9*9=%d\\n&quot;, square(9))
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Create main_test.go in the same location and write the following.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;package main

import &quot;testing&quot;

func TestSquare(t *testing.T) {
	rst := square(9)

	if rst != 81 {
		t.Errorf(&quot;square(9) should be 81, but returns %d&quot;, rst)
	}
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Below is the test result.&lt;/p&gt;
&lt;pre class=&quot;shell&quot;&gt;&lt;code&gt;$ go test
PASS
ok      goprojects/test28       1.010s
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Following the test code naming rule, the file name must end with _test.go to be recognized as test code. Below is the error shown when the test file is not recognized.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;$ go test
?       goprojects/test28       [no test files]
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Now add another test.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;func TestSquare2(t *testing.T) {
	rst := square(3)

	if rst != 9 {
		t.Errorf(&quot;square(3) should be 9, but returns %d&quot;, rst)
	}
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Running the tests again, we can see the new test fails.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;$ go test
--- FAIL: TestSquare2 (0.00s)
    main_test.go:17: square(3) should be 9, but returns 81
FAIL
exit status 1
FAIL    goprojects/test28       0.950s
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The error comes from the hard-coded return value in the actual code; fix it to return x*x.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In VS Code, test run actions appear automatically as shown below. You can click run package tests to run every test in the package, or run test to run an individual test. On the command line, go test -run TestName runs an individual test.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;846&quot; data-origin-height=&quot;383&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bxQHPs/btszJ06ZNyN/IuqDdS2950sZvSfsf7UqNk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bxQHPs/btszJ06ZNyN/IuqDdS2950sZvSfsf7UqNk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bxQHPs/btszJ06ZNyN/IuqDdS2950sZvSfsf7UqNk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbxQHPs%2FbtszJ06ZNyN%2FIuqDdS2950sZvSfsf7UqNk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;846&quot; height=&quot;383&quot; data-origin-width=&quot;846&quot; data-origin-height=&quot;383&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's try the stretchr/testify package, which helps keep test code concise.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;package main

import (
	&quot;testing&quot;

	&quot;github.com/stretchr/testify/assert&quot;
)

func TestSquare1(t *testing.T) {
	assert := assert.New(t) // create the assert object
	assert.Equal(81, square(9), &quot; square(9) should be 81&quot;) // run the assertion
}

func TestSquare2(t *testing.T) {
	assert := assert.New(t)
	assert.Equal(9, square(3), &quot; square(3) should be 9&quot;)
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Install the package with go get and run the tests.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;$ go test
# goprojects/test28
main_test.go:6:2: no required module provides package github.com/stretchr/testify/assert; to add it:
        go get github.com/stretchr/testify/assert
FAIL    goprojects/test28 [setup failed]
$ go get github.com/stretchr/testify/assert
go: added github.com/davecgh/go-spew v1.1.1
go: added github.com/pmezard/go-difflib v1.0.0
go: added github.com/stretchr/testify v1.8.4
go: added gopkg.in/yaml.v3 v3.0.1
$ go test
PASS
ok      goprojects/test28       1.201s
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;If you deliberately make a test fail, you can see that the test output has become much more detailed.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;$ go test
--- FAIL: TestSquare1 (0.00s)
    main_test.go:11:
                Error Trace:    C:/../projects/goprojects/test28/main_test.go:11
                Error:          Not equal:
                                expected: 8
                                actual  : 81
                Test:           TestSquare1
                Messages:        square(9) should be 81
FAIL
exit status 1
FAIL    goprojects/test28       0.924s
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Besides the Equal() method, methods such as NotEqual(), Nil(), and NotNil() are provided.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The stretchr/testify package also provides the mock and suite packages.&lt;/p&gt;
&lt;blockquote data-ke-style=&quot;style3&quot;&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;mock package: provides mockup objects that imitate a module's behavior. For example, it is useful for mocking a lower-level network object when testing online features. &lt;br /&gt;suite package: helps with preparation before a test and cleanup after it. For example, when a test needs a particular file to exist, it is useful for creating a temporary file beforehand and deleting it after the test ends.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;28.2 테스트 주도 개발&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;테스트의 중요성이 커짐에 따라 코드 작성 이전에 테스트 코드를 작성하는 테스트 주도 개발(Test Driven Development, TDD) 방식을 소개한다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;테스트는 크게 블랙박스 테스트와 화이트 박스 테스트로 구분할 수 있다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;블랙박스 테스트&lt;/b&gt;는 제품 내부를 오픈하지 않은 상태에서 진행되는 테스트이다. 사용자 입장의 테스트라고 해서 사용성 테스트(usability test)라고 하기도 한다. 프로그램 코드에 대한 검증이 아니라, 프로그램을 실행한 상태에서 동작을 검사하는 방식이다. 보통 전문 테스트, QA 직군에서 담당한다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;화이트박스 테스트&lt;/b&gt;는 내부 코드를 직접 검증하는 방식이다. 유닛 테스트(unit test, 단위 테스트)라고 부른다. 프로그래머가 직접 테스트 코드를 작성해서 내부 테스트를 검사하는 방식이다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;전통적인 화이트박스 테스트는 코드 작성 &amp;rarr; 테스트 &amp;rarr; 버그 발견 &amp;rarr; 코드 수정 로 이뤄진다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이러한 방식은 코드 작성 후 테스트 코드를 작성하다 보니 메인 시나리오에 의존해 테스트를 하여 예외 상황이나 경계 체크(boundary check)가 무시되기 쉽다. 또한 테스트 통과를 목적으로 하는 형식적인 테스트 코드가 될 수 있다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;테스트 주도 개발(TDD)이 대안이 될 수 있다. 테스트 주도 개발은 테스트 코드 작성 시기를 코드 작성 이전으로 옮긴 방식이다. 테스트 작성 &amp;rarr; 테스트 실패 확인 &amp;rarr; 코드 작성 &amp;rarr; 테스트 성공 &amp;rarr; 개선의 사이클을 반복하면서, 테스트를 통과시킨 코드를 리팩터링(refactoring)해 완성하는 방식이다.&lt;/p&gt;
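&lt;p data-ke-size=&quot;size16&quot;&gt;이 사이클을 간단한 가상의 예로 스케치해 보면 아래와 같다. Abs()라는 함수명과 내용은 설명을 위해 임의로 만든 것이다.&lt;/p&gt;

```go
package main

import "fmt"

// TDD 순서 스케치(가상의 예):
// 1. 아직 Abs()가 없는 상태에서 _test.go 파일에 테스트를 먼저 작성한다.
//      func TestAbs(t *testing.T) {
//          if Abs(-3) != 3 { t.Errorf("Abs(-3) should be 3") }
//      }
// 2. 테스트가 실패(컴파일 에러 포함)하는 것을 확인한다.
// 3. 테스트를 통과하는 최소한의 구현을 작성한다.
func Abs(n int) int {
	if n < 0 {
		return -n
	}
	return n
}

// 4. 테스트가 성공하면 동작을 유지한 채 코드를 개선(리팩터링)한다.

func main() {
	fmt.Println(Abs(-3)) // 3
}
```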
&lt;h2 data-ke-size=&quot;size26&quot;&gt;28.3 벤치마크 코드&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Go의 testing 패키지는 테스트 코드 외에 코드의 성능을 검사하는 벤치마크 기능을 지원한다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Go의 벤치마크 코드의 작성 규약은 아래와 같다.&lt;/p&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;벤치마크 코드는 파일명이 _test.go로 끝나는 파일 안에 존재해야 한다.&lt;/li&gt;
&lt;li&gt;벤치마크 코드를 작성하려면 import &amp;ldquo;testing&amp;rdquo;으로 testing 패키지를 가져와야 한다.&lt;/li&gt;
&lt;li&gt;벤치마크 코드들은 함수로 묶여 있어야 하고, 함수명은 반드시 Benchmark로 시작해야 한다. 형태는 func BenchmarkXxxx(b *testing.B) 형태이어야 한다.&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;벤치마크 코드는 go test -bench . 로 실행할 수 있다.&lt;/p&gt;
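&lt;p data-ke-size=&quot;size16&quot;&gt;위 규약을 따른 최소한의 벤치마크 스케치는 아래와 같다. square() 함수는 설명을 위한 가상의 예시이며, 실제로는 _test.go 파일에 두고 go test -bench . 로 실행한다.&lt;/p&gt;

```go
package main

import (
	"fmt"
	"testing"
)

// square는 벤치마크 대상이 되는 가상의 예시 함수이다.
func square(x int) int {
	return x * x
}

// 규약대로 함수명은 Benchmark로 시작하고 *testing.B 하나를 인수로 받는다.
// b.N은 testing 프레임워크가 측정 시간에 맞춰 자동으로 정해 주는 반복 횟수이다.
func BenchmarkSquare(b *testing.B) {
	for i := 0; i < b.N; i++ {
		square(9)
	}
}

func main() {
	// go test 없이 확인해 보기 위해 testing.Benchmark로 직접 실행해 본다.
	result := testing.Benchmark(BenchmarkSquare)
	fmt.Println(result.N > 0) // true
}
```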
&lt;h1&gt;29장 Go 언어로 만드는 웹 서버&lt;/h1&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;HTTP 프로토콜을 사용하여 요청에 응답하는 서버를 웹 서버 혹은 HTTP 서버라고 한다. Go에서는 net/http 패키지를 제공한다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Go에서 웹서버를 만들려면 핸들러 등록과 웹서버 시작이라는 두 단계를 거친다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;핸들러란 각 HTTP 요청이 수신됐을 때 HTTP 요청 URL 경로에 대응해 처리하는 함수 또는 객체라고 보면 된다.&lt;/p&gt;
&lt;blockquote data-ke-style=&quot;style3&quot;&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;http://goldenrabbit.co.kr/news?startDate=2023-11-03&quot;&gt;http://goldenrabbit.co.kr/news?startDate=2023-11-03&lt;/a&gt; 와 같은 HTTP 요청 URL은 다음과 같이 구분할 수 있다.&lt;br /&gt;&lt;br /&gt;http:// : 프로토콜 &lt;br /&gt;goldenrabbit.co.kr : 도메인 &lt;br /&gt;/news : 경로 &lt;br /&gt;?startDate=2023-11-03 : 쿼리 스트링 (startDate: 매개변수, 2023-11-03: 값)&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;핸들러는 HandleFunc() 함수로 등록할 수 있고, ListenAndServe() 함수로 웹 서버를 시작한다.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;package main

import (
	&quot;fmt&quot;
	&quot;net/http&quot;
)

func main() {
	http.HandleFunc(&quot;/&quot;, func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, &quot;Hello world!&quot;) // 핸들러 등록
	})

	http.HandleFunc(&quot;/bar&quot;, func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, &quot;Bar page!&quot;) // /bar에 대한 핸들러 등록
	})

	http.ListenAndServe(&quot;:3000&quot;, nil) // 웹서버 시작
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;HandleFunc()으로 등록하는 핸들러 함수는 두 인수를 가지는데, http.Request에는 클라이언트에서 보낸 메서드, 헤더, 바디와 같은 HTTP 요청 정보가 담겨 있고, http.ResponseWriter 인수는 이후 Fprint()의 출력 스트림으로 지정된다. http.ResponseWriter 타입에 값을 쓰면 HTTP 응답으로 전송된다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;ListenAndServe()는 두 가지 인수를 가지는데, 첫 번째 인수에는 HTTP 요청을 수신하는 주소를 입력하고, 두 번째 인수에는 핸들러 인스턴스를 넣어준다. 이 값에 nil을 넣으면 DefaultServeMux를 사용하는데, DefaultServeMux는 http.HandleFunc() 함수로 등록된 핸들러들을 사용한다.&lt;/p&gt;
&lt;blockquote data-ke-style=&quot;style3&quot;&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;http 인스턴스를 명시적으로 만들지 않고 바로 사용한다? http.ResponseWriter와 http.Request는 언제 만들어지는 걸까?&lt;br /&gt;http.HandleFunc()에 넘긴 함수 리터럴은 등록 시점에는 실행되지 않는다. http.ListenAndServe()로 웹서버가 시작되고, 실제로 이 핸들러가 호출될 때 &lt;span style=&quot;background-color: #fcfcfc; color: #666666; text-align: left;&quot;&gt;http.ResponseWriter와 http.Request가 전달된다.&lt;/span&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;DefaultServeMux를 사용하면 http.HandleFunc()을 이용해서 등록한 핸들러를 사용하기 때문에 다양한 기능을 추가하기 어렵다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;앞의 기본 예제는 ServeMux 인스턴스를 직접 생성해 사용하는 방식으로 바꿀 수 있다.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;package main

import (
	&quot;fmt&quot;
	&quot;net/http&quot;
)

type fooHandler struct{}

func (f *fooHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	fmt.Fprint(w, &quot;foo page!&quot;) // 핸들러로 사용되는 구조체는 ServeHTTP를 구현해야 한다
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc(&quot;/&quot;, func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, &quot;Hello world!&quot;) // 핸들러 등록
	})

	mux.HandleFunc(&quot;/bar&quot;, func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, &quot;Bar page!&quot;) // /bar에 대한 핸들러 등록
	})

	mux.Handle(&quot;/foo&quot;, &amp;amp;fooHandler{}) // 핸들러 구조체를 선언해서 mux.Handle()에 추가하는 방법

	http.ListenAndServe(&quot;:3000&quot;, mux) // 웹서버 시작
}
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote data-ke-style=&quot;style3&quot;&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Mux: multiplexer(멀티플렉서)의 약자로 여러 입력 중 하나를 선택해서 반환하는 디지털 장치를 말한다. 여기서는 각 URL에 대한 핸들러를 등록한 다음, HTTP 요청이 왔을 때 URL에 해당하는 핸들러를 선택해서 실행하는 방식이다. 이러한 방식을 라우터(router)라고 말하기도 한다.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;과거의 웹 서버가 서버에 위치한 HTML을 전달하는 목적이었다면, 현재의 웹 서버에는 아래와 같은 변화가 있다.&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Server Rendering &amp;rarr; Client Rendering&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Server Rendering은 요청에 대해 서버에서 로직을 수행한 결과로 HTML을 만들어 응답하는 구조라면, Client Rendering은 요청에 대해 서버가 HTML의 틀(템플릿)을 제공하고, 클라이언트에서 렌더링할 때 템플릿에 동적으로 데이터를 채워 나가는 방식이다. 동적인 요청에 대한 결과는 JSON이라는 데이터 형태로 받는다.&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Frontend와 Backend의 혼합 &amp;rarr; Frontend와 Backend의 역할의 분할&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Frontend가 Client Rendering을 담당하고, Backend에서는 로직을 수행하고 데이터의 전달을 담당한다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이 과정에서 중요한 역할을 하는 JSON 데이터를 처리하는 방식을 살펴본다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;JSON(JavaScript Object Notation) 데이터를 전송하기 위해서 encoding/json 패키지를 사용한다. 이 패키지를 사용해 구조체를 JSON 데이터로 변환(marshal, encode)하고, 다시 JSON 데이터를 구조체로 변환(unmarshal, decode)할 수 있다.&lt;/p&gt;
&lt;pre class=&quot;go&quot; data-ke-language=&quot;go&quot;&gt;&lt;code&gt;package main

import (
	&quot;encoding/json&quot;
	&quot;fmt&quot;
	&quot;net/http&quot;
)

type Student struct {
	Name  string
	Age   int
	Score int
}

func MakeWebHandler() http.Handler { // 핸들러 인스턴스를 생성하는 함수
	mux := http.NewServeMux()
	mux.HandleFunc(&quot;/student&quot;, StudentHandler)
	return mux
}

func StudentHandler(w http.ResponseWriter, r *http.Request) {
	var student = Student{&quot;aaa&quot;, 16, 87}
	data, _ := json.Marshal(student) // Student 객체를 []byte로 변환

	w.Header().Add(&quot;content-type&quot;, &quot;application/json&quot;) // 헤더에 JSON 포맷임을 표시
	w.WriteHeader(http.StatusOK)
	fmt.Fprint(w, string(data))
}

func main() {
	http.ListenAndServe(&quot;:3000&quot;, MakeWebHandler())
}&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;테스트 코드를 통해서 JSON 데이터를 받아 객체로 변환해 본다.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;package main

import (
	&quot;encoding/json&quot;
	&quot;net/http&quot;
	&quot;net/http/httptest&quot;
	&quot;testing&quot;

	&quot;github.com/stretchr/testify/assert&quot;
)

func TestJsonHandler(t *testing.T) {
	assert := assert.New(t)

	res := httptest.NewRecorder()                      // 테스트 response 레코더를 만든다.
	req := httptest.NewRequest(&quot;GET&quot;, &quot;/student&quot;, nil) // /student 경로 테스트

	mux := MakeWebHandler()
	mux.ServeHTTP(res, req)

	assert.Equal(http.StatusOK, res.Code) // http 상태 코드 확인
	student := new(Student)
	err := json.NewDecoder(res.Body).Decode(student) // 결과 변환(res.Body의 결과를 decode해서 student 객체에 담는다)
	// 결과 확인
	assert.Nil(err)
	assert.Equal(&quot;aaa&quot;, student.Name)
	assert.Equal(16, student.Age)
	assert.Equal(87, student.Score)
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;기타 Go의 다양한 웹 프레임워크는 아래 참고를 확인한다.&lt;/p&gt;
&lt;blockquote data-ke-style=&quot;style3&quot;&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;참고: &lt;a href=&quot;https://velog.io/@geunwoobaek/Go-Framework-비교&quot;&gt;https://velog.io/@geunwoobaek/Go-Framework-비교&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h1&gt;30장 RESTful API 서버 만들기&lt;/h1&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;RESTful API는 서버에서 어떠한 리소스를 제공할 때 이 리소스에 접근하는 API의 설계 방식이다. REST에서는 HTTP 프로토콜을 사용하고, 리소스를 URI(uniform resource identifier)에 대응시키고, 지정된 URI에 대해 HTTP 메서드를 대응시켜 해당 리소스에 대한 CRUD를 관리한다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;REST에서는 URL과 HTTP 메서드로 데이터와 동작을 표현한다.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;GET https://somesite.com/students/3
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;URL과 메서드를 보면 이 요청이 3번 학생 데이터를 가져오는 요청이라는 것을 유추할 수 있다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이때 메서드는 HTTP 메서드를 의미하는데, 아래와 같은 메서드를 지원한다. 이러한 메서드를 바탕으로 URL로 전달한 자원에 대한 동작을 정의한다. 이렇게 URL과 메서드만으로 어떤 요청인지 알 수 있기 때문에 RESTful API의 특징을 자기표현적인(self-descriptive) URL이라고 한다.&lt;/p&gt;
&lt;table style=&quot;border-collapse: collapse; width: 100%;&quot; border=&quot;1&quot; data-ke-align=&quot;alignLeft&quot;&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;b&gt;메서드&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;&lt;b&gt;URL&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;&lt;b&gt;동작&lt;/b&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GET&lt;/td&gt;
&lt;td&gt;/students&lt;/td&gt;
&lt;td&gt;전체 학생 데이터 반환&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GET&lt;/td&gt;
&lt;td&gt;/students/3&lt;/td&gt;
&lt;td&gt;id에 해당하는 학생 데이터 반환&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;POST&lt;/td&gt;
&lt;td&gt;/students&lt;/td&gt;
&lt;td&gt;새로운 학생 등록&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PUT&lt;/td&gt;
&lt;td&gt;/students/id&lt;/td&gt;
&lt;td&gt;id에 해당하는 학생 데이터 변경&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DELETE&lt;/td&gt;
&lt;td&gt;/students/id&lt;/td&gt;
&lt;td&gt;id에 해당하는 학생 데이터 삭제&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;여기서는 아래의 단계로 RESTful API에 맞는 웹서버를 구현한다.&lt;/p&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;gorilla/mux 와 같은 RESTful API 웹 서버 제작을 도와주는 패키지를 설치한다.&lt;/li&gt;
&lt;li&gt;RESTful API에 맞춰서 웹 핸들러 함수를 만들어 준다.&lt;/li&gt;
&lt;li&gt;RESTful API를 테스트하는 테스트 코드를 만든다.&lt;/li&gt;
&lt;li&gt;웹 브라우저로 데이터를 조회한다.&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이 예제에서는 학생(Student) 구조체를 정의하고, 이 리소스를 위의 표와 같은 동작을 하는 REST API로 구현해본다.&lt;/p&gt;
&lt;pre class=&quot;go&quot; data-ke-language=&quot;go&quot;&gt;&lt;code&gt;package main

import (
	&quot;encoding/json&quot;
	&quot;net/http&quot;
	&quot;sort&quot;
	&quot;strconv&quot;

	&quot;github.com/gorilla/mux&quot;
)

type Student struct {
	Id    int
	Name  string
	Age   int
	Score int
}

var students map[int]Student // 학생 목록을 저장하는 맵
var lastId int               // map id로 사용

// gorilla/mux 패키지를 이용해 웹 핸들러를 만들고, 임시 학생 데이터 두 개를 생성해 저장한다
func MakeWebHandler() http.Handler {
	mux := mux.NewRouter() // gorilla/mux를 만든다.
	// /students 요청을 받으면 GetStudentListHandler() 함수가 호출되도록 하고,
	// Methods() 메서드를 통해 GET 메서드 요청을 받을 때만 핸들러가 동작하도록 한다.
	mux.HandleFunc(&quot;/students&quot;, GetStudentListHandler).Methods(&quot;GET&quot;) // 학생 리스트
	// &amp;lt;- 여기에 새로운 핸들러 등록 -&amp;gt;
	mux.HandleFunc(&quot;/students/{id:[0-9]+}&quot;, GetStudentHandler).Methods(&quot;GET&quot;)       // id 학생 정보
	mux.HandleFunc(&quot;/students&quot;, PostStudentHandler).Methods(&quot;POST&quot;)                 // 학생 추가
	mux.HandleFunc(&quot;/students/{id:[0-9]+}&quot;, DeleteStudentHandler).Methods(&quot;DELETE&quot;) // 학생 삭제

	// 임시데이터 생성
	students = make(map[int]Student)
	students[1] = Student{1, &quot;aaa&quot;, 16, 87}
	students[2] = Student{2, &quot;bbb&quot;, 17, 89}
	lastId = 2

	return mux
}

// Id 로 정렬하는 인터페이스 구현
// Len(), Less(), Swap() 메서드를 구현해 sort.Interface를 사용할 수 있게함
// sort.Sort(Students(s)) 형태로 사용
type Students []Student

func (s Students) Len() int {
	return len(s)
}
func (s Students) Swap(i, j int) {
	s[i], s[j] = s[j], s[i]
}
func (s Students) Less(i, j int) bool {
	return s[i].Id &amp;lt; s[j].Id
}

// 학생 정보를 가져와 JSON 포맷으로 변경하는 핸들러
func GetStudentListHandler(w http.ResponseWriter, r *http.Request) {
	list := make(Students, 0)
	for _, student := range students {
		list = append(list, student)
	}

	sort.Sort(list) // 학생 목록을 Id로 정렬
	w.Header().Set(&quot;Content-Type&quot;, &quot;application/json&quot;)
	w.WriteHeader(http.StatusOK)    // 책과 달리 w.Header().Set()을 w.WriteHeader()보다 먼저 호출해야 헤더가 적용된다.
	json.NewEncoder(w).Encode(list) // JSON 포맷으로 변경해서 결과를 쓴다.
}

// id에 해당하는 학생을 가져와 반환하는 핸들러
func GetStudentHandler(w http.ResponseWriter, r *http.Request) {
	vars := mux.Vars(r)               // mux에 요청으로 들어온 URL에서 인수를 가져온다.
	id, _ := strconv.Atoi(vars[&quot;id&quot;]) // 경로가 /students/{id:[0-9]+} 이므로, gorilla/mux에서 자동으로 id값을 내부 맵에 저장한다. vars[&quot;id&quot;]로 id 값을 가져온다.
	student, ok := students[id]       // student 맵에서 데이터가 있는지 확인한다.
	if !ok {
		w.WriteHeader(http.StatusNotFound)
		return
	}
	w.Header().Set(&quot;Content-Type&quot;, &quot;application/json&quot;)
	w.WriteHeader(http.StatusOK)
	json.NewEncoder(w).Encode(student)
}

// POST 요청이 오면 학생을 추가하는 핸들러
func PostStudentHandler(w http.ResponseWriter, r *http.Request) {
	var student Student
	err := json.NewDecoder(r.Body).Decode(&amp;amp;student)
	if err != nil {
		w.WriteHeader(http.StatusBadRequest)
		return
	}
	lastId++
	student.Id = lastId
	students[lastId] = student
	w.WriteHeader(http.StatusCreated)
}

// DELETE 요청이 오면 학생을 삭제하는 핸들러
func DeleteStudentHandler(w http.ResponseWriter, r *http.Request) {
	vars := mux.Vars(r)
	id, _ := strconv.Atoi(vars[&quot;id&quot;])
	_, ok := students[id]

	if !ok {
		w.WriteHeader(http.StatusNotFound)
		return
	}
	delete(students, id)
	w.WriteHeader(http.StatusOK)
}

func main() {
	http.ListenAndServe(&quot;:3000&quot;, MakeWebHandler())
}&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;참고로 등록하지 않은 HTTP 메서드로 접근하면 아래와 같이 405(Method Not Allowed) 에러가 발생한다.&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;--- FAIL: TestJsonHandler3 (0.00s)
    main_test.go:68:
                Error Trace:    C:/Users/montauk/Desktop/projects/goprojects/rest30/main_test.go:68
                Error:          Not equal:
                                expected: 201
                                actual  : 405
                Test:           TestJsonHandler3
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;</description>
      <category>Book Study/Tucker의 Go Programming</category>
      <category>go</category>
      <category>묘공단</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/25</guid>
      <comments>https://a-person.tistory.com/25#entry25comment</comments>
      <pubDate>Sun, 5 Nov 2023 20:35:49 +0900</pubDate>
    </item>
    <item>
      <title>kind: Calico CNI 확인1</title>
      <link>https://a-person.tistory.com/24</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;이 글은 '코어 쿠버네티스'의 5장 내용을 실습한 내용입니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;kind에서 Calico CNI 를 테스트 해본다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;kind 클러스터에 아래 config를 제공해 기본 CNI를 비활성화 하고 클러스터를 생성한다.&lt;/span&gt;&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;# cat kind-Calico-conf.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
        disableDefaultCNI: true
        podSubnet: 192.168.0.0/16
nodes:
- role: control-plane
- role: worker&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;kind 설정에 대해서는 아래 문서를 참고할 수 있다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;&lt;a href=&quot;https://kind.sigs.k8s.io/docs/user/configuration/&quot;&gt;https://kind.sigs.k8s.io/docs/user/configuration/&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;책이 출간된 이후 apiVersion이 변경되었으므로 kind.sigs.k8s.io가 아닌 kind.x-k8s.io를 사용해야 한다.&lt;/span&gt;&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# kind create cluster --name=calico --config=./kind-Calico-conf.yaml
ERROR: failed to create cluster: unknown apiVersion: kind.sigs.k8s.io/v1alpha4&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;정상적으로 완료되면 아래와 같이 확인할 수 있다. 기본 CNI를 비활성화했으므로 노드가 NotReady이고 coredns 파드가 Pending인 것이 정상이다.&lt;/span&gt;&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# kind create cluster --name=calico --config=./kind-Calico-conf.yaml
Creating cluster &quot;calico&quot; ...
 ✓ Ensuring node image (kindest/node:v1.27.3)  
 ✓ Preparing nodes    
 ✓ Writing configuration  
 ✓ Starting control-plane  ️
 ✓ Installing StorageClass  
 ✓ Joining worker nodes  
Set kubectl context to &quot;kind-calico&quot;
You can now use your cluster with:

kubectl cluster-info --context kind-calico

Have a nice day!  
# kubectl get no
NAME                   STATUS     ROLES           AGE   VERSION
calico-control-plane   NotReady   control-plane   40s   v1.27.3
calico-worker          NotReady   &amp;lt;none&amp;gt;          19s   v1.27.3
# kubectl get po -n kube-system
NAME                                           READY   STATUS    RESTARTS   AGE
coredns-5d78c9869d-5zkr4                       0/1     Pending   0          44s
coredns-5d78c9869d-t2c9x                       0/1     Pending   0          44s
etcd-calico-control-plane                      1/1     Running   0          57s
kube-apiserver-calico-control-plane            1/1     Running   0          57s
kube-controller-manager-calico-control-plane   1/1     Running   0          59s
kube-proxy-kqsfb                               1/1     Running   0          39s
kube-proxy-qqmw4                               1/1     Running   0          44s
kube-scheduler-calico-control-plane            1/1     Running   0          57s&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;아래와 같이 Calico CNI를 설치한다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;먼저 calico operator와 CRD를 생성한다.&lt;/span&gt;&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.3/manifests/tigera-operator.yaml
namespace/tigera-operator created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;위 단계에서는 operator ns와 operator가 생성된다.&lt;/span&gt;&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# kubectl get po -A -w
NAMESPACE            NAME                                           READY   STATUS    RESTARTS   AGE
kube-system          coredns-5d78c9869d-5zkr4                       0/1     Pending   0          8m42s
kube-system          coredns-5d78c9869d-t2c9x                       0/1     Pending   0   ..
tigera-operator      tigera-operator-f6bb878c4-x4whq                1/1     Running   0          56s&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;실제로 calico를 설치한다.&lt;/span&gt;&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.3/manifests/custom-resources.yaml
installation.operator.tigera.io/default created
apiserver.operator.tigera.io/default created&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;아래와 같이 노드도 Ready 상태가 되고, calico 파드와 네트워크 연결이 안 되어 Pending이던 파드들이 모두 Running 상태가 되었다.&lt;/span&gt;&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# kubectl get po -A
NAMESPACE            NAME                                           READY   STATUS    RESTARTS   AGE
calico-apiserver     calico-apiserver-76cc85cf7f-b2j7p              1/1     Running   0          5m24s
calico-apiserver     calico-apiserver-76cc85cf7f-lgnjs              1/1     Running   0          5m24s
calico-system        calico-kube-controllers-5f6db5bc7b-ntvhc       1/1     Running   0          9m56s
calico-system        calico-node-45w75                              1/1     Running   0          9m57s
calico-system        calico-node-s8nzv                              1/1     Running   0          9m57s
calico-system        calico-typha-5bbf9665bd-5mf7t                  1/1     Running   0          9m57s
calico-system        csi-node-driver-f4958                          2/2     Running   0          9m56s
calico-system        csi-node-driver-v54cj                          2/2     Running   0          9m56s
kube-system          coredns-5d78c9869d-5zkr4                       1/1     Running   0          21m
kube-system          coredns-5d78c9869d-t2c9x                       1/1     Running   0          21m
kube-system          etcd-calico-control-plane                      1/1     Running   0          21m
kube-system          kube-apiserver-calico-control-plane            1/1     Running   0          21m
kube-system          kube-controller-manager-calico-control-plane   1/1     Running   0          21m
kube-system          kube-proxy-kqsfb                               1/1     Running   0          21m
kube-system          kube-proxy-qqmw4                               1/1     Running   0          21m
kube-system          kube-scheduler-calico-control-plane            1/1     Running   0          21m
local-path-storage   local-path-provisioner-6bc4bddd6b-jhj7s        1/1     Running   0          21m
tigera-operator      tigera-operator-f6bb878c4-x4whq                1/1     Running   0          13m
# kubectl get no
NAME                   STATUS   ROLES           AGE   VERSION
calico-control-plane   Ready    control-plane   22m   v1.27.3
calico-worker          Ready    &amp;lt;none&amp;gt;          21m   v1.27.3&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;책이 출간된 이후로 변경되어 아래 링크를 참고했다. 현재는 operator 방식을 사용하는 것으로 보인다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;&lt;a href=&quot;https://docs.tigera.io/calico/latest/getting-started/kubernetes/kind&quot;&gt;https://docs.tigera.io/calico/latest/getting-started/kubernetes/kind&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;그리고 데몬셋으로 calico-node만 있었던 구조에서 calico-typha라는 파드가 추가되는 아키텍처 변화가 있었던 것으로 보인다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;calico-node는 노드에 필요한 BGP와 IP 경로를 설정하고, Typha는 API server를 watch하면서 k8s 리소스와 calico custom resource의 변화를 바탕으로 각 노드의 calico-node를 업데이트한다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;&lt;a href=&quot;https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/install-typha&quot;&gt;https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/install-typha&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;&lt;b&gt;&lt;span&gt;Typha&lt;/span&gt;&lt;/b&gt;&lt;/span&gt;&lt;span&gt; sits between the Kubernetes API server and per-node daemons like &lt;/span&gt;&lt;span&gt;&lt;b&gt;&lt;span&gt;Felix&lt;/span&gt;&lt;/b&gt;&lt;/span&gt;&lt;span&gt; and &lt;/span&gt;&lt;span&gt;&lt;b&gt;&lt;span&gt;confd&lt;/span&gt;&lt;/b&gt;&lt;/span&gt;&lt;span&gt;&amp;nbsp;(running in &lt;/span&gt;&lt;span&gt;&lt;span&gt;`&lt;/span&gt;calico/node&lt;span&gt;`&lt;/span&gt;&lt;/span&gt;&lt;span&gt;). It watches the Kubernetes resources and Calico custom resources used by these daemons, and whenever a resource changes it fans out the update to the daemons. &lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;calico에 대한 추가 테스트는 다시 작성한다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;테스트를 마치면 아래의 명령으로 클러스터를 삭제할 수 있다.&lt;/span&gt;&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;kind delete cluster --name calico&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;</description>
      <category>Book Study/코어 쿠버네티스</category>
      <category>calico</category>
      <category>Kind</category>
      <category>코어 쿠버네티스</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/24</guid>
      <comments>https://a-person.tistory.com/24#entry24comment</comments>
      <pubDate>Sun, 5 Nov 2023 13:36:29 +0900</pubDate>
    </item>
    <item>
      <title>wsl: docker, kind 설치</title>
      <link>https://a-person.tistory.com/23</link>
      <description>&lt;p style=&quot;color: #333333; text-align: start;&quot; data-ke-size=&quot;size16&quot;&gt;WSL(Windows Subsystem for Linux) 환경에서 kind를 실행하기 위해 필요한 패키지들을 설치한다.&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;&lt;span&gt;docker 설치&lt;/span&gt;&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;참고로 Microsoft 공식 문서의 가이드에서는 docker desktop 에 대한 설치 가이드만 있다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;&lt;a href=&quot;https://learn.microsoft.com/ko-kr/windows/wsl/tutorials/wsl-containers&quot;&gt;https://learn.microsoft.com/ko-kr/windows/wsl/tutorials/wsl-containers&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;단순히 docker 인스턴스를 설치하기 위해 아래를 참고한다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;&lt;a href=&quot;https://docs.docker.com/engine/install/ubuntu/#install-using-the-repository&quot;&gt;https://docs.docker.com/engine/install/ubuntu/#install-using-the-repository&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;apt 리포지터리를 설정한다.&lt;/span&gt;&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

# Add the repository to Apt sources:
echo \
  &quot;deb [arch=&quot;$(dpkg --print-architecture)&quot; signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  &quot;$(. /etc/os-release &amp;amp;&amp;amp; echo &quot;$VERSION_CODENAME&quot;)&quot; stable&quot; | \
  sudo tee /etc/apt/sources.list.d/docker.list &amp;gt; /dev/null
sudo apt-get update&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;docker 설치한다.&lt;/span&gt;&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;다만 이렇게 해도 docker 명령을 수행해보면 아래와 같이 에러가 발생하는 것을 알 수 있다.&lt;/span&gt;&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# docker ps
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
# systemctl status docker
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;기본적으로 systemd가 docker를 실행해 줄 것으로 기대하지만, wsl에서는 기본적으로 systemd가 실행되지 않기 때문에 아래와 같이 /etc/wsl.conf 를 작성한다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;&lt;a href=&quot;https://learn.microsoft.com/ko-kr/windows/wsl/systemd&quot;&gt;https://learn.microsoft.com/ko-kr/windows/wsl/systemd&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;
&lt;pre class=&quot;ini&quot;&gt;&lt;code&gt;[boot]
systemd=true&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;참고로, 위 방법이 제대로 동작하지 않는다면 사전에 wsl --update를 수행해야 할 수 있다. (WSL 버전 0.67.6 이상 필요)&lt;/span&gt;&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;&amp;gt; wsl --update
설치 중: Linux용 Windows 하위 시스템
[==========================59.0%===                        ]
Linux용 Windows 하위 시스템이(가) 설치되었습니다.&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;이제 사용 가능하다.&lt;/span&gt;&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;# systemctl status docker
● docker.service - Docker Application Container Engine
     Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
     Active: active (running) since Sun 2023-11-05 12:52:49 KST; 23s ago
TriggeredBy: ● docker.socket
       Docs: https://docs.docker.com
   Main PID: 397 (dockerd)
      Tasks: 14
     Memory: 102.7M
     CGroup: /system.slice/docker.service
             └─397 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

Nov 05 12:52:49 DESKTOP-UN5DI0K dockerd[397]: time=&quot;2023-11-05T12:52:49.436289714+09:00&quot; level=info msg=&quot;Loading contai&amp;gt;
Nov 05 12:52:49 DESKTOP-UN5DI0K dockerd[397]: time=&quot;2023-11-05T12:52:49.537940657+09:00&quot; level=info msg=&quot;Loading contai&amp;gt;
Nov 05 12:52:49 DESKTOP-UN5DI0K dockerd[397]: time=&quot;2023-11-05T12:52:49.574554915+09:00&quot; level=warning msg=&quot;WARNING: No&amp;gt;
Nov 05 12:52:49 DESKTOP-UN5DI0K dockerd[397]: time=&quot;2023-11-05T12:52:49.574608315+09:00&quot; level=warning msg=&quot;WARNING: No&amp;gt;
Nov 05 12:52:49 DESKTOP-UN5DI0K dockerd[397]: time=&quot;2023-11-05T12:52:49.574617075+09:00&quot; level=warning msg=&quot;WARNING: No&amp;gt;
Nov 05 12:52:49 DESKTOP-UN5DI0K dockerd[397]: time=&quot;2023-11-05T12:52:49.574621785+09:00&quot; level=warning msg=&quot;WARNING: No&amp;gt;
Nov 05 12:52:49 DESKTOP-UN5DI0K dockerd[397]: time=&quot;2023-11-05T12:52:49.574642268+09:00&quot; level=info msg=&quot;Docker daemon&quot;&amp;gt;
Nov 05 12:52:49 DESKTOP-UN5DI0K dockerd[397]: time=&quot;2023-11-05T12:52:49.574820321+09:00&quot; level=info msg=&quot;Daemon has com&amp;gt;
Nov 05 12:52:49 DESKTOP-UN5DI0K systemd[1]: Started Docker Application Container Engine.
Nov 05 12:52:49 DESKTOP-UN5DI0K dockerd[397]: time=&quot;2023-11-05T12:52:49.609024013+09:00&quot; level=info msg=&quot;API listen on &amp;gt;
# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;Verify the installation:&lt;/span&gt;&lt;/p&gt;
&lt;pre class=&quot;dockerfile&quot;&gt;&lt;code&gt;sudo docker run hello-world&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;If you get output like the following, everything is working.&lt;/span&gt;&lt;/p&gt;
&lt;pre class=&quot;sqf&quot;&gt;&lt;code&gt;# sudo docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
719385e32844: Pull complete
Digest: sha256:88ec0acaa3ec199d3b7eaf73588f4518c25f9d34f58ce9a0df68429c5af48e8d
Status: Downloaded newer image for hello-world:latest

Hello from Docker!&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h2 style=&quot;color: #000000;&quot; data-ke-size=&quot;size26&quot;&gt;&lt;span&gt;Installing kind&lt;/span&gt;&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;kind is a tool that makes it possible to test Kubernetes locally by running each node as a container in docker.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;Refer to the installation guide below.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;&lt;a href=&quot;https://kind.sigs.k8s.io/docs/user/quick-start/#installation&quot;&gt;https://kind.sigs.k8s.io/docs/user/quick-start/#installation&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;
&lt;pre class=&quot;inform7&quot; style=&quot;background-color: #f8f8f8; color: #383a42;&quot;&gt;&lt;code&gt;# For AMD64 / x86_64
[ $(uname -m) = x86_64 ] &amp;amp;&amp;amp; curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
# For ARM64
[ $(uname -m) = aarch64 ] &amp;amp;&amp;amp; curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-arm64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As you can see, kind itself is just a binary.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;When you create a cluster with kind, each node runs as a container, as shown below.&lt;/p&gt;
&lt;pre id=&quot;code_1699185449050&quot; class=&quot;angelscript&quot; style=&quot;background-color: #f8f8f8; color: #383a42;&quot; data-ke-type=&quot;codeblock&quot; data-ke-language=&quot;bash&quot;&gt;&lt;code&gt;CONTAINER ID   IMAGE                  COMMAND                  CREATED       STATUS       PORTS                       NAMES
e9d61cb5b472   kindest/node:v1.27.3   &quot;/usr/local/bin/entr&amp;hellip;&quot;   8 hours ago   Up 8 hours   127.0.0.1:44423-&amp;gt;6443/tcp   calico-control-plane
1d481afc16a6   kindest/node:v1.27.3   &quot;/usr/local/bin/entr&amp;hellip;&quot;   8 hours ago   Up 8 hours                               calico-worker&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;We installed docker and kind in order to experiment with Kubernetes locally using kind.&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Actual kind usage will be covered in upcoming posts.&lt;/p&gt;
      <category>기타</category>
      <category>Docker</category>
      <category>Kind</category>
      <category>WSL</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/23</guid>
      <comments>https://a-person.tistory.com/23#entry23comment</comments>
      <pubDate>Sun, 5 Nov 2023 12:57:53 +0900</pubDate>
    </item>
    <item>
      <title>wsl: resetting the root password</title>
      <link>https://a-person.tistory.com/22</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;If you lose the root password in WSL, run the command below to get straight into root.&lt;/p&gt;
&lt;pre id=&quot;code_1699154946741&quot; class=&quot;bash&quot; data-ke-language=&quot;bash&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;&amp;gt; ubuntu config --default-user root&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Entering WSL again, you can see you are logged in directly as root.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As the root user, just change the password with passwd.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;pre id=&quot;code_1699155024411&quot; class=&quot;bash&quot; data-ke-language=&quot;bash&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;root@com:/# passwd
New password:
Retype new password:
passwd: password updated successfully&lt;/code&gt;&lt;/pre&gt;</description>
      <category>Quick Fix</category>
      <category>passwd</category>
      <category>ROOT</category>
      <category>WSL</category>
      <category>패스워드 분실</category>
      <category>패스워드 재설정</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/22</guid>
      <comments>https://a-person.tistory.com/22#entry22comment</comments>
      <pubDate>Sun, 5 Nov 2023 12:32:19 +0900</pubDate>
    </item>
    <item>
      <title>Go Study: Week 5 (Chapters 23-26)</title>
      <link>https://a-person.tistory.com/21</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;This post is a summary of chapters 23-26 of &amp;lsquo;Tucker의 Go 언어 프로그래밍&amp;rsquo; (Golden Rabbit).&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;It covers how Go handles errors, plus concurrent programming.&lt;/p&gt;
&lt;h1&gt;23. Error Handling&lt;/h1&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Error handling is how a program deals with errors. Rather than having the program crash when a particular error occurs, printing an appropriate message and handling the error some other way improves the user experience.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;package main

import (
	&quot;os&quot;
)

const filename string = &quot;data.txt&quot;

func main() {
	file, _ := os.Open(filename)

	defer file.Close()
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This version simply ends with Program exited., silently ignoring the error.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;package main

import (
	&quot;fmt&quot;
	&quot;os&quot;
)

const filename string = &quot;data.txt&quot;

func main() {
	file, err := os.Open(filename)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer file.Close()
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This version exits with open data.txt: no such file or directory.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Returning an err created with the fmt package's Errorf() function lets you build and pass along a custom error message. Alternatively, an error can be created with the errors package's New() function.&lt;/p&gt;
&lt;pre class=&quot;gradle&quot;&gt;&lt;code&gt;import &quot;errors&quot;

errors.New(&quot;에러 메시지&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The error type is an interface consisting of one method, Error(), that returns a string. Any type that has a string-returning Error() method can therefore be used as an error.&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;type error interface {
	Error() string
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Let's look at an example that checks the password length during sign-up.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;package main

import (
	&quot;fmt&quot;
)

type PasswordError struct { // declare an error struct
	Len        int
	RequireLen int
}

func (err PasswordError) Error() string {
	return &quot;암호 길이가 짧습니다.&quot;
}

func RegisterAccount(name, password string) error {
	if len(password) &amp;lt; 8 {
		return PasswordError{len(password), 8} // when the password is too short, return a PasswordError with the details
	}
	return nil
}

func main() {
	err := RegisterAccount(&quot;myaccnt&quot;, &quot;mypw&quot;)
	if err != nil { // check for an error
		if errInfo, ok := err.(PasswordError); ok { // type assertion; ok reports success, so different error types can be handled
			fmt.Printf(&quot;%v Len:%d RequireLen:%d\n&quot;,
				errInfo, errInfo.Len, errInfo.RequireLen)
		}
	} else {
		fmt.Println(&quot;회원 가입 했습니다.&quot;)
	}

}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Meanwhile, a panic is the mechanism that halts program flow when the program hits a situation it cannot reasonably continue from.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;So far we have used the error interface to handle errors and tell the user why something failed (the user's perspective); panic() instead terminates the program as soon as the problem occurs, so the developer learns the point of failure quickly (the developer's perspective).&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Calling the built-in panic() function with an error message as its argument immediately terminates the program, prints the message, and displays the call stack showing the order of function calls.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;package main

import (
	&quot;os&quot;
)

const filename string = &quot;data.txt&quot;

func main() {
	file, err := os.Open(filename)
	if err != nil {
		panic(&quot;파일을 읽을 수 없습니다&quot;)
	}
	defer file.Close()
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Handling the earlier example with panic() drops a call stack like the one below.&lt;/p&gt;
&lt;pre class=&quot;http&quot;&gt;&lt;code&gt;panic: 파일을 읽을 수 없습니다

goroutine 1 [running]:
main.main()
	/tmp/sandbox3015253680/prog.go:12 +0x85

Program exited.
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote data-ke-style=&quot;style3&quot;&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A call stack shows the call sequence in reverse order, starting from the function where the panic occurred.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;While developing, spotting and fixing the problem is what matters; but after the program is delivered to users, it may be better to print an error message and attempt recovery rather than terminate when a problem occurs.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A panic propagates back up the call chain: when main() &amp;rarr; f() &amp;rarr; g() &amp;rarr; h() and a panic occurs in h(), it travels back through g() &amp;rarr; f() &amp;rarr; main(). If it meets recover() on the way, the panic can be recovered.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Use recover() sparingly, though. Even after recovery, beware that the program may be left in an unstable state (for example, data may have been partially written and stored inconsistently).&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h1&gt;24. Goroutines and Concurrent Programming&lt;/h1&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A goroutine is a lightweight thread used to run functions or statements concurrently. Even the program entry point main() runs in a goroutine.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;On a single core, multithreading is normally supported by time-slicing across threads, which incurs context-switch costs (saving the current context and restoring a new one).&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Go allocates roughly one OS thread per CPU core, and runs one goroutine at a time on each OS thread.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Context switching happens when a CPU core changes threads; with goroutines, the core and thread stay fixed and only the goroutines move between them (the core does not change threads), so OS-level context-switch costs are largely avoided.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;package main

import (
	&quot;fmt&quot;
	&quot;sync&quot;
)

var wg sync.WaitGroup // WaitGroup object

func SumAtoB(I, a, b int) {
	sum := 0
	for i := a; i &amp;lt;= b; i++ {
		sum += i
	}
	fmt.Printf(&quot;%d번째: %d ~ %d 합계는 %d 이다\n&quot;, I, a, b, sum)
	wg.Done() // mark one task as done
}

func main() {
	wg.Add(10) // set the total number of tasks
	for i := 0; i &amp;lt; 10; i++ {
		go SumAtoB(i, 1, 100000)
	}
	wg.Wait() // wait for all tasks to finish
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The output order is not guaranteed.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;9번째: 1 ~ 100000 합계는 5000050000 이다
5번째: 1 ~ 100000 합계는 5000050000 이다
0번째: 1 ~ 100000 합계는 5000050000 이다
1번째: 1 ~ 100000 합계는 5000050000 이다
2번째: 1 ~ 100000 합계는 5000050000 이다
3번째: 1 ~ 100000 합계는 5000050000 이다
4번째: 1 ~ 100000 합계는 5000050000 이다
8번째: 1 ~ 100000 합계는 5000050000 이다
6번째: 1 ~ 100000 합계는 5000050000 이다
7번째: 1 ~ 100000 합계는 5000050000 이다
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Concurrency problems arise when several goroutines access the same memory. To keep other goroutines from touching a resource while one goroutine changes it, a mutex can be used on that resource.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Calling the mutex's Lock() method acquires the mutex; other goroutines then wait until it is returned. After acquiring it, you must call Unlock() to return it.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Below is a mutex example. Rather than locking a particular resource, it wraps the operation that can cause the race in the mutex.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;package main

import (
	&quot;fmt&quot;
	&quot;sync&quot;
	&quot;time&quot;
)

var mutex sync.Mutex // package-level mutex

type Account struct {
	Balance int
}

func DepositAndWithdraw(account *Account) {
	mutex.Lock()         // acquire the mutex
	defer mutex.Unlock() // release it with a deferred Unlock()

	if account.Balance &amp;lt; 0 {
		panic(fmt.Sprintf(&quot;Balance should not be negative: %d&quot;, account.Balance))
	}
	// deposit 1000, then withdraw 1000
	account.Balance += 1000
	time.Sleep(time.Millisecond)
	account.Balance -= 1000
}

func main() {
	var wg sync.WaitGroup

	account := &amp;amp;Account{0}
	wg.Add(10)
	for i := 0; i &amp;lt; 10; i++ {
		go func() {
			DepositAndWithdraw(account)
			wg.Done()
		}()
	}
	wg.Wait()
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A mutex solves the concurrency problem, but it can hurt performance, and it can lead to deadlock (goroutines each waiting for a resource the other is holding inside a mutex).&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h1&gt;25. Channels and Contexts&lt;/h1&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Channels and contexts are Go features that support concurrent programming.&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;25.1 Channels&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A channel is a message queue for passing messages between goroutines: messages accumulate in arrival order and are read in that order.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;It can be used as in the example below.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;package main

import (
	&quot;fmt&quot;
	&quot;sync&quot;
	&quot;time&quot;
)

func main() {
	var wg sync.WaitGroup
	ch := make(chan int) // create a channel
	wg.Add(1)

	go square(&amp;amp;wg, ch) // start a goroutine, passing the channel
	ch &amp;lt;- 9            // send a value into the channel
	wg.Wait()          // wait until the work is done
}

func square(wg *sync.WaitGroup, ch chan int) {
	n := &amp;lt;-ch // receive a value from the channel

	time.Sleep(time.Second)
	fmt.Printf(&quot;Square: %d\n&quot;, n*n)
	wg.Done()
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Creating a channel as below gives it a buffer (internal memory where pending data is kept).&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;messages := make(chan string, 2) // buffered channel with capacity 2
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Channels make it easy to implement the producer-consumer pattern (one side produces data and puts it in; the other side takes the produced data out and uses it).&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;25.2 Contexts&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A context acts as a work order for a request sent to a goroutine: it can specify cancellation, a time limit, and so on.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A cancellable context can be created like this:&lt;/p&gt;
&lt;pre class=&quot;mipsasm&quot;&gt;&lt;code&gt;ctx, cancel := context.WithCancel(context.Background())
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A context with a time limit can be created as below; this one expires after 3 seconds.&lt;/p&gt;
&lt;pre class=&quot;maxima&quot;&gt;&lt;code&gt;ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A context carrying extra instructions can be created like this:&lt;/p&gt;
&lt;pre class=&quot;mipsasm&quot;&gt;&lt;code&gt;ctx := context.WithValue(context.Background(), &quot;number&quot;, 9)
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h1&gt;26. Building a Word Search Program&lt;/h1&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;This chapter is an example program that searches files for a word.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;It additionally covers two things not discussed before: passing arguments to the program at startup, and processing a file one line at a time. When there are several files to search, each file is searched in its own goroutine, which is faster than searching the files one by one.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;First, in Go the command-line arguments are available through the os.Args variable.&lt;/p&gt;
&lt;pre class=&quot;stata&quot;&gt;&lt;code&gt;len(os.Args) // number of arguments (os.Args[0] is the program path)
os.Args[1] // first real argument
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To find the word in a file, open the file, create a scanner with the bufio package's NewScanner() function, and read the contents one line at a time.&lt;/p&gt;
&lt;pre class=&quot;fortran&quot;&gt;&lt;code&gt;scanner := bufio.NewScanner(file) // create a scanner and read line by line
for scanner.Scan() {
	fmt.Println(scanner.Text()) 
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Whether the word appears is checked per line, using strings.Contains(line, word) on each line from scanner.Text().&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;When the arguments name several files to search, goroutines can be used to search the files concurrently and cut the total time.&lt;/p&gt;</description>
      <category>Book Study/Tucker의 Go Programming</category>
      <category>go</category>
      <category>묘공단</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/21</guid>
      <comments>https://a-person.tistory.com/21#entry21comment</comments>
      <pubDate>Sun, 29 Oct 2023 20:37:13 +0900</pubDate>
    </item>
    <item>
      <title>Go Study: Week 4 (Chapters 18-22)</title>
      <link>https://a-person.tistory.com/20</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;This post is a summary of chapters 18-22 of &amp;lsquo;Tucker의 Go 언어 프로그래밍&amp;rsquo; (Golden Rabbit).&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;From here on, the topics get gradually harder and require more thought.&lt;/p&gt;
&lt;h1&gt;18. Slices&lt;/h1&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A plain array has a fixed length. The array below can store up to 10 values.&lt;/p&gt;
&lt;pre class=&quot;smali&quot;&gt;&lt;code&gt;var array [10]int
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A slice is like an array, but it is a dynamic array declared without a length inside the [].&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;However, a slice that is not initialized has length 0, so accessing an arbitrary index causes a panic.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;package main

func main() {
	var slice []int

	slice[1] = 10
}

// error:
panic: runtime error: index out of range [1] with length 0

goroutine 1 [running]:
main.main()
	/tmp/sandbox4095475310/prog.go:6 +0x14
&lt;/code&gt;&lt;/pre&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Initializing a slice and adding/removing elements&lt;/h3&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;slice1 := []int{1,2,3}   // no length in the brackets; initialized with the values in {}
array := [...]int{1,2,3} // a fixed-length array of length 3; not a slice
slice := make([]int, 3)  // a slice can also be initialized with make()

// append() adds an element to a slice.
slice1 = append(slice1, 4)

// append() can also be used to delete an element.
slice1 = append(slice1[:idx], slice1[idx+1:]...)

// and to insert an element in the middle.
slice1 = append(slice1[:idx], append([]int{100}, slice1[idx:]...)...)

// sorting a slice
sort.Ints(slice1) // use sort.Float64s() for float64
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The &amp;hellip; usage is confusing &amp;rarr; see the example.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;append() can take several values after the first argument, as in append(slice, 3,4,5). In the example below, slice&amp;hellip; (the whole slice) stands for all of its elements, so the call behaves like append(slice, elem, elem).&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;package main

import &quot;fmt&quot;

func main() {
	var slice1 = []int{1, 2, 3}
	fmt.Println(slice1) //[1 2 3]

	// add an element
	slice1 = append(slice1, 4)
	fmt.Println(slice1) //[1 2 3 4]

	// delete an element
	idx := 1
	//slice1 = append(slice1[:idx], slice1[idx+1:])  // [error] ./prog.go:13:38: cannot use slice1[idx + 1:] (value of type []int) as int value in argument to append
	slice1 = append(slice1[:idx], slice1[idx+1:]...) // a trailing ... expands the slice into all of its elements
	fmt.Println(slice1)                              //[1 3 4]

	// insert an element in the middle
	slice1 = append(slice1[:idx], append([]int{100}, slice1[idx:]...)...)
	fmt.Println(slice1) // [1 100 3 4]

}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The example uses slicing (extracting part of an array). The result of slicing is a slice.&lt;/p&gt;
&lt;pre class=&quot;ada&quot;&gt;&lt;code&gt;array[startIdx:endIdx]
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A slice is a type that refers to part of an array, made up of a pointer, len, and cap. The pointer can point into the middle of an array, len counts the elements from the pointer, and cap is the allocated size of the array from the pointer onward (how many elements remain safe to use).&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;array := [5]int{1,2,3,4,5}
slice := array[1:2] // the pointer points at array[1], len is 1, cap is 4

slice1 := []int{1,2,3,4,5}
slice2 := slice1[:3] // slice from the start
slice3 := slice1[2:] // slice to the end
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Full example with output:&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;package main

import &quot;fmt&quot;

func main() {
	slice1 := []int{1, 2, 3, 4, 5}
	slice2 := slice1[:3] // slice from the start
	slice3 := slice1[2:] // slice to the end

	fmt.Println(slice1)
	fmt.Println(slice2) // [1 2 3], elements up to index 3-1
	fmt.Println(slice3) // [3 4 5], elements from index 2 on
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;How slices work&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The internal definition of a slice is shown below. Because a slice holds a pointer to the actual array, it can easily be pointed at an array of a different size, and assigning a slice variable costs less memory and time than assigning an array.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;type SliceHeader struct {
	Data uintptr // pointer to the actual array
	Len  int     // number of elements
	Cap  int     // length of the actual array
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Slices and arrays behave differently, as shown below.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;package main

import &quot;fmt&quot;

// An array is copied wholesale, so the array inside the function is a different array.
func changeArray(array2 [5]int) {
	array2[2] = 500
}

// Copying a slice copies each struct field, including the pointer's address, so slice2 still refers to the same backing array.
func changeSlice(slice2 []int) {
	slice2[2] = 500
}

func main() {
	array := [5]int{1, 2, 3, 4, 5}
	slice := []int{1, 2, 3, 4, 5}

	changeArray(array)
	changeSlice(slice)

	fmt.Println(array) //[1 2 3 4 5]
	fmt.Println(slice) //[1 2 500 4 5]
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h1&gt;&amp;nbsp;&lt;/h1&gt;
&lt;h1&gt;19. Methods&lt;/h1&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A method is a kind of function that is declared outside the struct. To mark it as a method of a particular struct, a receiver is written in parentheses between the func keyword and the function name.&lt;/p&gt;
&lt;pre class=&quot;autoit&quot;&gt;&lt;code&gt;func (r Rabbit) info() int {
	return r.width * r.height
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Any local type (a type declared with the type keyword inside the package) can be used as a receiver. Even built-in types can get methods, by converting them to a user-defined named type.&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Why do we need methods?&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A method belongs to its receiver, letting you bundle data together with behavior.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;As the example below shows, the same goal can be achieved by passing a pointer variable to a function, so why use methods?&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;package main

import &quot;fmt&quot;

type account struct {
	balance int
}

func withdrawFunc(a *account, amount int) { // plain function form
	a.balance -= amount
}

func (a *account) withdrawMethod(amount int) { // method form
	a.balance -= amount
}

func main() {
	a := &amp;amp;account{100} // create an account pointer with balance 100

	withdrawFunc(a, 30) // call in function form
	fmt.Println(a)      // &amp;amp;{70}

	a.withdrawMethod(30) // call in method form
	fmt.Println(a)      // &amp;amp;{40}
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;For example, in a grade-entry program with a Student struct, the struct's fields hold &lt;b&gt;data&lt;/b&gt; such as name, class, number, and grades, while its methods represent the struct's &lt;b&gt;behavior&lt;/b&gt;, such as entering grades or assigning classes.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Good programs lower coupling (dependencies between objects) and raise cohesion (how related a module's elements are). Because methods bundle data with its related behavior, they play a key role in raising code cohesion. With low cohesion, adding a feature means reviewing and fixing scattered code everywhere; with high cohesion, only the relevant code needs changing.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In modern programming, creating objects and defining their relationships with other objects matters more than the order of function calls, and those relationships between objects are expressed as methods.&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Value receivers vs. pointer receivers&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Calling a pointer-receiver method copies the memory address the pointer holds (caller and method share the same instance). Calling a value-receiver method copies every value of the receiver (caller and method hold separate instances).&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A pointer-receiver method can change the receiver's values from inside the method. &amp;rarr; instance-centric&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;With a value-receiver method, the caller's value and the method's copy are independent instances, so the method cannot change the caller's receiver. &amp;rarr; value-centric&lt;/p&gt;
&lt;h1&gt;&amp;nbsp;&lt;/h1&gt;
&lt;h1&gt;20. Interfaces&lt;/h1&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;An interface is a set of unimplemented methods. It lets you interact through an abstracted object instead of a concrete object that includes the method implementations.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;rarr; Methods can be called with only the interface, not the concrete type, so the implementation can be swapped flexibly when requirements change later.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;rarr; Go decides whether a type implements an interface solely by whether the type has the interface's methods (duck typing: no explicit declaration at type definition is needed; only the presence of the methods defined in the interface matters).&lt;/p&gt;
&lt;blockquote data-ke-style=&quot;style3&quot;&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Keyword: implement an interface, and you can call its methods through the interface.&lt;br /&gt;This makes the program flexible to changing requirements.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;An interface is declared like this:&lt;/p&gt;
&lt;pre class=&quot;reasonml&quot;&gt;&lt;code&gt;type DuckInterface interface {
	// method set
	Fly()
	Walk(distance int) int
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;An interface example:&lt;/p&gt;
&lt;pre class=&quot;armasm&quot;&gt;&lt;code&gt;package main

import &quot;fmt&quot;

type Stringer interface { // a one-method interface is conventionally named by adding ~er to the method name
	String() string
}

type Student struct {
	Name string
	Age  int
}

func (s Student) String() string { // Student's String() implements the Stringer interface, so Student can be used as a Stringer
	return fmt.Sprintf(&quot;안녕! 나는 %d살 %s라고 해&quot;, s.Age, s.Name)
}

func main() {
	student := Student{&quot;후후&quot;, 12}
	var stringer Stringer

	stringer = student // assign the Student value student to stringer; Student has a String() method, so it satisfies the Stringer interface

	fmt.Printf(&quot;%s\n&quot;, stringer.String()) // call String() through the interface; stringer holds a Student, so student's String() is invoked
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;rarr; A variable of an interface type can hold any value whose type implements the interface.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;rarr; Calling a method on the interface variable invokes the implementing type's method.&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;Abstraction&lt;/h3&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Hiding internal workings so that both the provider and the consumer of a service gain freedom is called abstraction. An interface is an abstraction layer.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Cutting couplings through an abstraction layer is called decoupling; the lower the coupling, the better.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Through an abstraction layer you cannot see the implementation, only the interface (the set of methods). That is what defines a relationship between objects: a courier and its customer interact through a delivery relationship, a bank and its customer through deposits and withdrawals.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;rarr; Objects interact through relationships, not through concrete types (types carrying their full implementation).&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;Special features of interfaces&lt;/h2&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Embedded interfaces: an interface can include another interface.&lt;/li&gt;
&lt;li&gt;Taking the empty interface interface{} as an argument: used for functions, methods, and variables that must accept any value.&lt;/li&gt;
&lt;li&gt;The default value of an interface variable is nil, meaning it refers to nothing valid.&lt;/li&gt;
&lt;li&gt;To convert an interface to a concrete type or to another interface, use a type assertion such as a.(ConcreteType).&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;&amp;nbsp;&lt;/h1&gt;
&lt;h1&gt;21. Advanced Functions&lt;/h1&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;21.1 Variadic functions&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A function whose number of arguments is not fixed, like fmt.Println(), is called a variadic function.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Such functions handle their variable arguments with the &amp;hellip; keyword.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;package main

import &quot;fmt&quot;

func sum(nums ...int) int { // a variadic function receives its arguments through ...
	sum := 0

	fmt.Printf(&quot;type of nums: %T\n&quot;, nums) // the type of nums is []int (a slice)
	for _, v := range nums {
		sum += v
	}
	return sum
}
func main() {
	fmt.Println(sum(1, 2, 3)) 
	fmt.Println(sum(10, 20))
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;To mix arguments of different types, accept &amp;hellip;interface{} (possible because every type satisfies the empty interface).&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;package main

import &quot;fmt&quot;

func Print(args ...interface{}) {
	for _, arg := range args {
		switch arg.(type) {
		case bool:
			val := arg.(bool) // type assertion
			fmt.Printf(&quot;type: %T\n&quot;, val)
		case float64:
			val := arg.(float64) // type assertion
			fmt.Printf(&quot;type: %T\n&quot;, val)
		case int:
			val := arg.(int) // type assertion
			fmt.Printf(&quot;type: %T\n&quot;, val)
		}
	}
}
func main() {
	Print(2, true, 3.14)
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;21.2 defer: delayed execution&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Some code must run just before a function returns, such as returning a resource after using it. Registering that cleanup with defer right where the resource is created guarantees it runs (and prevents the common leak caused by unclosed resources).&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Note that deferred calls run in reverse order (the example below prints 3, 2, 1).&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;package main

import (
	&quot;fmt&quot;
	&quot;os&quot;
)

func main() {
	f, err := os.Create(&quot;test.txt&quot;) // create a file -&amp;gt; it must be closed
	if err != nil {
		fmt.Println(&quot;err&quot;)
		return
	}

	defer fmt.Println(&quot;1&quot;)
	defer f.Close() // deferred, so f is closed before the function returns
	defer fmt.Println(&quot;2&quot;)
	fmt.Fprintln(f, &quot;hello&quot;)
	defer fmt.Println(&quot;3&quot;)
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;21.3 Function type variables&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A function type variable is a variable whose value is a function.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A function has a start address, and as it runs, the CPU's program counter advances to the next instruction. When main() calls f(), the program counter jumps to the start of f().&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;That start address is the value identifying the function; because it points at the function like a pointer, it is called a function pointer.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;func add(a,b int) int {
	return a+b
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The function pointer type for add() is written as follows.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;func add(int, int) int
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;An example using a function type variable:&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;package main

import (
	&quot;fmt&quot;
)

func add(a, b int) int {
	return a + b
}

func mul(a, b int) int {
	return a * b
}

func getOperator(op string) func(int, int) int {
	if op == &quot;+&quot; {
		return add
	} else if op == &quot;*&quot; {
		return mul
	} else {
		return nil
	}
}

func main() {
	var operator func(int, int) int // declare operator, a function type variable taking two ints and returning an int
	operator = getOperator(&quot;+&quot;)
	result := operator(1, 2)

	fmt.Println(result)
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;I'm curious what this looks like in real use.&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;&amp;nbsp;&lt;/h2&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;21.4 Function literals&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A function literal is a nameless function: a function value assigned directly as a &lt;b&gt;function type variable's value&lt;/b&gt; without writing a function name (the same idea as anonymous functions or lambdas in other languages).&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the example, no function type variable is involved; the function itself is defined and returned on the spot.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;package main

import (
	&quot;fmt&quot;
)

type opFunc func(a, b int) int

func getOperator(op string) opFunc {
	if op == &quot;+&quot; {
		// 함수 리터럴을 사용해서 더하기 함수 자체를(함수 이름이 없어짐) 정의하고 반환
		return func(a, b int) int {
			return a + b
		}
	} else if op == &quot;*&quot; {
		return func(a, b int) int {
			return a * b
		}
	} else {
		return nil
	}
}

func main() {
	operator := getOperator(&quot;*&quot;)

	result := operator(1, 2)

	fmt.Println(result)
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;함수 리터럴은 필요한 외부 변수를 캡처해 내부 상태로 가질 수 있다(클로저). 리터럴 내부에서만 유효할 것 같지만, 캡처한 외부 변수를 실제로 변경할 수 있다.&lt;/p&gt;
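이 캡처 동작은 간단한 예제로 확인할 수 있다. 아래는 외부 변수 count를 캡처해 상태를 유지하는 스케치다(NewCounter는 설명용으로 만든 이름이다).

```go
package main

import "fmt"

// NewCounter는 지역 변수 count를 캡처하는 함수 리터럴을 반환한다
func NewCounter() func() int {
	count := 0
	return func() int {
		count++ // 함수 리터럴이 바깥 변수 count를 실제로 변경한다(클로저)
		return count
	}
}

func main() {
	counter := NewCounter()
	counter()
	counter()
	fmt.Println(counter()) // 3 - NewCounter()가 끝난 뒤에도 count가 살아 있다
}
```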
&lt;p data-ke-size=&quot;size16&quot;&gt;함수 리터럴을 이용해서 원하는 함수를 그때그때 정의해서 함수 타입 변수값으로 사용할 수 있다. 한편 아래 예제에서 WriteHello() 함수 입장에서는 인수로 Writer 함수 타입을 받는다. 실제로 어떤 동작을 할지는 호출했을 때 알 수 있게 되는데, 이렇게 외부에서 로직을 주입하는 것을 의존성 주입(dependency injection)이라고 한다. 뭔가 아리송하다.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;package main

import (
	&quot;fmt&quot;
	&quot;os&quot;
)

type Writer func(string) // 함수 타입

func WriteHello(writer Writer) {
	writer(&quot;Hello World&quot;)
}

func main() {
	f, err := os.Create(&quot;test.txt&quot;)
	if err != nil {
		fmt.Println(&quot;failed&quot;)
		return
	}

	defer f.Close()

	WriteHello(func(msg string) {
		fmt.Fprintln(f, msg)
	})
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h1&gt;&amp;nbsp;&lt;/h1&gt;
&lt;h1&gt;22. 자료 구조&lt;/h1&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;22.1 리스트&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;리스트(list)는 container 패키지에서 제공하는 자료 구조이다. 여러 데이터를 보관하는 배열과 비슷하다고 생각되지만, 배열이 연속된 메모리에 데이터를 저장하는 구조인 반면, 리스트는 연속되지 않는 메모리 공간에 데이터를 저장한다.&lt;/p&gt;
&lt;blockquote data-ke-style=&quot;style3&quot;&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이러한 구조로 배열과 리스트는 데이터의 지역성(data locality)에 차이가 있다. 컴퓨터는 연산을 할 때 메모리에서 데이터를 가져와 캐시라는 임시 저장소에 보관하는데, 실제로 필요한 데이터만 가져오지 않고, 그 주변의 데이터를 가져온다. 보통은 높은 확률로 연산이 주변 데이터를 참조하기 때문에 효과적이다. 필요한 데이터가 인접해 있을 수록 데이터 처리 속도가 빨라지는데 데이터 지역성이 좋다고 한다. 배열은 연속된 메모리 이기 때문에 지역성이 리스트에 비해 좋다. 단, 요소 수가 적으면 데이터 지역성 때문에 배열이 효율적이지만, 삽입/삭제가 빈번한 연산이라면 리스트가 더 효율적이다.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;리스트의 구조체 구조를 보면 요소들이 포인터로 연결된 링크드 리스트(Linked list) 형태인 것을 알 수 있다.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;type Element struct {
	Value interface{} // 요소가 보관하는 값 (실제 구현은 내부 필드를 더 가지는데, 단순화한 형태다)
	Next  *Element
	Prev  *Element
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;다음 예제에서 List의 기본적인 사용법과 순회 방법을 살펴본다.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;package main

import (
	&quot;container/list&quot;
	&quot;fmt&quot;
)

func main() {
	v := list.New()
	e4 := v.PushBack(4)
	e1 := v.PushFront(1)
	v.InsertBefore(3, e4)
	v.InsertAfter(2, e1)

	for e := v.Front(); e != nil; e = e.Next() { // Next() 메서드는 현재 요소의 다음 요소를 반환한다. 다음 요소가 없으면 nil 반환
		// (초기문)e는 v.Front() 부터; (조건문)e가 nil 값일 때까지; (후처리)e의 다음 요소로 넘어감
		fmt.Print(e.Value, &quot; &quot;)  // 1 2 3 4
	}

}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;&amp;nbsp;&lt;/h2&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;22.2 링&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;링(ring)은 container 패키지에서 제공하는 자료 구조이다. 리스트와 유사한 구조인데, 맨 뒤의 요소와 맨 앞의 요소가 서로 연결된 자료 구조이다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;기본 예제를 살펴본다.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;package main

import (
	&quot;container/ring&quot;
	&quot;fmt&quot;
)

func main() {
	r := ring.New(5) // 요소가 5개인 링 생성

	//n := r.Len()

	// 순회하면서 모든 요소에 값 대입
	for i := 0; i &amp;lt; r.Len(); i++ {
		//r.Value = i
		//r.Value = 'A' + i
		// Value 필드는 interface{} 타입이라 int든 rune이든 어떤 타입의 값이라도 저장할 수 있다
		if i%2 == 0 {
			r.Value = 1
		} else {
			r.Value = '아'
		}
		r = r.Next()
	}

	for i := 0; i &amp;lt; r.Len(); i++ {
		fmt.Printf(&quot;%c &quot;, r.Value)
		r = r.Next()
	}

}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이러한 원형 구조이기 때문에, 개수가 고정되고 오래된 요소는 지워도 되는 경우에 적합한 자료 구조이다. 예를 들어, 문서 편집기의 실행 취소 기능(일정 개수의 명령을 저장하고, 실행 취소 할 수 있음, 너무 오래된 명령은 지워짐)에서 사용할 수 있다.&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;&amp;nbsp;&lt;/h2&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;22.3 맵&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;맵(map)은 키와 값(key/value) 형태로 데이터를 저장하는 자료 구조이다. 언어에 따라서 딕셔너리(dictionary), 해시테이블(hash table), 해시맵(hashmap) 등으로 부른다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;맵은 키와 값의 쌍으로 데이터를 저장하고, 키를 사용해 접근하여 값을 저장하거나 변경할 수 있다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;맵의 기본 예제를 살펴본다.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;package main

import (
	&quot;fmt&quot;
)

type Product struct {
	Name   string
	Price  int
}

func main() {
	m := make(map[int]Product) // 맵 생성, map[key타입]value타입
	m[1001] = Product{&quot;볼펜&quot;, 500}
	m[1002] = Product{&quot;지우개&quot;, 500}
	m[1003] = Product{&quot;연필&quot;, 500}
	m[1004] = Product{&quot;샤프&quot;, 500}

	delete(m, 1002)
	delete(m, 1005) // 없는 키를 삭제하거나 접근해도 에러가 발생하지는 않는다. v, ok := m[1005] 로 존재 여부를 체크할 수 있다.

	for k, v := range m {
		fmt.Println(k, v)
	}
}
&lt;/code&gt;&lt;/pre&gt;
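주석에서 언급한 존재 여부 체크(comma ok 관용구)를 예제로 보면:

```go
package main

import "fmt"

func main() {
	m := map[int]string{1001: "볼펜", 1003: "연필"}

	// 키 존재 여부는 두 번째 반환값 ok로 확인한다
	if v, ok := m[1001]; ok {
		fmt.Println("found:", v)
	}

	// 없는 키 조회는 에러가 아니라 값 타입의 제로값과 false를 돌려준다
	if _, ok := m[1002]; !ok {
		fmt.Println("1002 not found")
	}
}
```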
&lt;p data-ke-size=&quot;size16&quot;&gt;맵은 해시 함수(hash function)를 기반으로 구현된다. 해시 함수는 같은 입력에 대해 항상 같은 결과를 보장하고, 결과값은 일정 범위(개수) 안에 들어간다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;그래서 입력값(key)을 해시 함수에 넣어 얻은 결과를 인덱스로 사용해 배열에 값(value)을 넣는 방식으로 맵과 같은 형태를 만들 수 있다. 물론 이런 단순한 구현에서는 서로 다른 입력값이 같은 결과를 내는 해시 충돌(hash collision)이 생길 수 있어, 같은 인덱스에 리스트를 두는 방식 등으로 해결한다. 그래도 해시 연산 정도의 비용으로 거의 고정된 시간에 데이터를 저장하고 읽을 수 있다는 장점이 있다.&lt;/p&gt;</description>
      <category>Book Study/Tucker의 Go Programming</category>
      <category>go</category>
      <category>묘공단</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/20</guid>
      <comments>https://a-person.tistory.com/20#entry20comment</comments>
      <pubDate>Sun, 22 Oct 2023 13:47:15 +0900</pubDate>
    </item>
    <item>
      <title>Go스터디: 3주차(12~17장)</title>
      <link>https://a-person.tistory.com/19</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;이 글은 골든래빗 &amp;lsquo;Tucker의 Go 언어 프로그래밍의 12~17장 써머리입니다.&lt;/p&gt;
&lt;h1&gt;12. 배열&lt;/h1&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;배열(array)은 같은 타입의 데이터들로 이루어진 타입이다. 배열의 각 값은 요소(element)라고 하고, 이를 가리키는 위치값을 인덱스(index)라고 한다.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;// var 변수명 [요소개수]타입
var t [5]float64
days := [3]string{&quot;monday&quot;,&quot;tuesday&quot;,&quot;wednesday&quot;}
x := [...]int{10,20,30} // 요소 개수 생략
var b = [2][5]int{ // 다중 배열
	{1,2,3,4,5},
	{6,7,8,9,10}, // 초기화 시 닫는 중괄호 } 가 마지막 요소와 같은 줄에 있지 않은 경우 마지막 항목 뒤에 쉼표, 를 찍어줘야 함!
} // 추후 항목이 늘어날 경우 쉼표를 찍지 않아서 생길 수 있는 오류를 방지하기 위해 존재하는 규칙
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;배열 선언 시 개수는 항상 상수여야 한다. 아니면 에러 발생!&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;package main

func main() {
	x := 5
	b := [x]int{1, 2, 3, 4, 5} // invalid array length x
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;배열 순회&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;package main

import &quot;fmt&quot;

func main() {
	a := [5]int{1, 2, 3, 4, 5}
	for _, v := range a {
		fmt.Println(v)
	}

	var b = [2][5]int{ // 다중 배열
		{1, 2, 3, 4, 5},
		{6, 7, 8, 9, 10},
	}

	for _, b1 := range b {
		for _, b2 := range b1 {
			fmt.Print(b2, &quot; &quot;)
		}
		fmt.Println()
	}
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;배열의 핵심&lt;/p&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;배열은 연속된 메모리다.&lt;/li&gt;
&lt;li&gt;컴퓨터는 인덱스와 타입 크기를 사용해서 메모리 주소를 찾는다.&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;중요 공식&lt;/p&gt;
&lt;blockquote data-ke-style=&quot;style3&quot;&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;요소 위치 = 배열 시작 주소 + (인덱스 X 타입크기) &lt;br /&gt;&amp;rarr; a 배열 시작 주소가 100번지 라면 a[3] 주소는 100 + (3*4) = 112번지&lt;br /&gt;&lt;br /&gt;배열 크기 = 타입크기 X 항목개수 &lt;br /&gt;&amp;rarr; [5]int = 8*5 = 40bytes&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;왜 두 공식의 int 타입 크기를 다르게 한 거지? 오타? (첫 번째 예는 4바이트인 int32 기준으로 보이고, 64비트 환경에서 Go의 int는 8바이트다)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h1&gt;13. 구조체&lt;/h1&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;여러 필드(field)를 묶어서 하나의 구조체(structure)를 만든다. 구조체는 다른 타입의 값들을 변수 하나로 묶어준다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;프로그래밍의 역사는 객체 간 결합도(객체 간 의존관계)는 낮추고, 연관 있는 데이터 간 응집도를 올리는 방향으로 흘러왔다. 함수와 구조체 모두 응집도를 증가시키는 역할을 한다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;rarr; 구조체의 등장으로, 프로그래밍은 개별 데이터의 조작/연산보다는 구조체 간의 관계와 상호작용 중심으로 변화하게 되었다.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;type 타입명 struct {
	필드명 타입
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;학생(Student) 구조체를 만들고, 이름, 반과 같은 정보를 넣는다. 구조체의 각 필드는 .을 통해서 접근할 수 있다.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;package main

import &quot;fmt&quot;

func main() {
	type Student struct {
		name  string
		class int
	}
	var std1 Student

	std1.name = &quot;홍&quot;
	std1.class = 1

	fmt.Println(std1.name, std1.class)

	// 초기화를 아래와 같이 해줄 수도 있다.
	std2 := Student{&quot;김&quot;, 2}

	fmt.Println(std2.name, std2.class)
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;구조체를 포함하는 구조체를 만들 때,&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;내장 타입처럼 포함&lt;/b&gt;할 때는 instance.StructName.fieldName 으로 접근한다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;포함된 필드 방식&lt;/b&gt;으로 사용할 수 있는데, 이때는 instance.fieldName으로 포함된 구조체 필드를 바로 접근할 수도 있다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;필드 배치 순서에 따른 구조체 크기 변화&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;구조체의 인스턴스가 생성되면 구조체 필드의 크기를 더한 만큼의 메모리 공간을 차지하게 된다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;다만 필드 배치 순서에 따라 구조체 크기가 달라질 수 있다. 컴퓨터는 메모리 정렬(memory alignment)을 위해 8의 배수인 메모리 주소에 데이터를 할당하려 하므로, 크기가 작은 필드가 있으면 메모리 패딩(memory padding)을 넣고 다음 8의 배수 위치에 다음 데이터를 할당하기 때문이다.&lt;/p&gt;
&lt;blockquote data-ke-style=&quot;style3&quot;&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;메모리 정렬이란 컴퓨터가 데이터에 효과적으로 접근하고자 메모리를 일정 크기 간격으로 정렬하는 것을 말한다.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;package main

import (
	&quot;fmt&quot;
	&quot;unsafe&quot;
)

func main() {
	type User struct {
		Age   int32
		Score float64
	}

	user := User{23, 77.2}

	fmt.Println(unsafe.Sizeof(user.Age), unsafe.Sizeof(user.Score)) // 4, 8
	fmt.Println(unsafe.Sizeof(user)) // 16
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이러한 이유로 구조체에서는 메모리 패딩을 고려한 필드 배치 방법을 사용해야 한다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;rarr; 8바이트보다 작은 필드는 8바이트 크기(단위)를 고려해서 몰아서 배치하자.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h1&gt;14. 포인터&lt;/h1&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;포인터는 메모리 주소를 값으로 갖는 타입이다. 포인터 변수를 초기화 하지 않으면 기본 값은 nil이다.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;package main

import (
	&quot;fmt&quot;
)

func main() {
	var a int
	var p1 *int
	p1 = &amp;amp;a // a의 메모리 주소를 포인터 변수 p1에 대입

	var p2 *int = &amp;amp;a

	fmt.Println(p1 == p2) // == 연산을 사용해 두 포인터가 같은 메모리 공간을 가리키는지 확인
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;포인터는 왜 쓸까?&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;rarr; 변수 대입이나 함수 인수 전달은 항상 값 복사를 하기 때문에, 메모리 공간을 사용하는 문제와 큰 메모리 공간을 복사할 때 발생하는 성능 문제가 있다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;rarr; 또한 다른 공간으로 복사되기 때문에 실제 변수의 값에 변경 사항이 적용되지 않는다.&lt;/p&gt;
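값 복사와 포인터 전달의 차이를 스케치해 보면 다음과 같다(Account 구조체와 depositBy~ 함수는 설명용 가정이다).

```go
package main

import "fmt"

type Account struct {
	Balance int
}

// 값으로 받으면 복사본이 수정되어 원본은 그대로다
func depositByValue(a Account, amount int) {
	a.Balance += amount
}

// 포인터로 받으면 원본 인스턴스가 수정된다
func depositByPointer(a *Account, amount int) {
	a.Balance += amount
}

func main() {
	acc := Account{Balance: 100}

	depositByValue(acc, 50)
	fmt.Println(acc.Balance) // 100 - 변경되지 않음

	depositByPointer(&acc, 50)
	fmt.Println(acc.Balance) // 150 - 원본이 변경됨
}
```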
&lt;p data-ke-size=&quot;size16&quot;&gt;구조체를 생성해 포인터 변수 초기화 하기&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;구조체 변수를 별도로 생성하지 않고, 곧바로 포인터 변수에 구조체를 생성해 주소를 초기값으로 대입하는 방법&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;방식1) Data타입 구조체 변수 data를 선언하고, data변수의 주소를 반환하여 대입&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;var data Data
var p *Data = &amp;amp;data
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;방식2) *Data 타입 구조체 변수 p를 선언하고, Data 구조체를 만들어서 주소를 반환&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;var p *Data = &amp;amp;Data{}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;rarr; 이렇게 하면 포인터 변수 p만 가지고도 구조체의 필드값에 접근하고 변경할 수 있다. (실제 Data 구조체에 대한 변수를 생성하지 않음)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;방식3) new() 내장 함수: 포인터값을 별도의 변수를 선언하지 않고 초기화 할 수도 있는데, new 내장 함수를 이용하면 더 간단히 표현할 수 있다.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;p1 := &amp;amp;Data{} // &amp;amp;를 사용하는 초기화
var p2 = new(Data) // new()를 사용하는 초기화
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;인스턴스&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;인스턴스란 메모리에 할당된 데이터의 실체이다. 포인터를 이용해서 인스턴스에 접근할 수 있다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;구조체 포인터를 함수 매개변수로 받는다는 말은 구조체 인스턴스로 입력을 받겠다는 것과 동일하다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Go 언어는 가비지 컬렉터(Garbage Collector)라는 메모리 정리 기능을 제공하는데, 가비지 컬렉터가 일정 간격으로 메모리에서 쓸모 없어진 데이터를 정리한다.&lt;/p&gt;
&lt;blockquote data-ke-style=&quot;style3&quot;&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;쓸모 없어진 데이터: 아무도 찾지 않는 데이터는 쓸모 없는 데이터다. 예를 들어, 함수가 종료되면 함수에서 사용한 인스턴스는 더 이상 쓸모가 없게 된다.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;스택 메모리와 힙 메모리&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;대부분의 프로그래밍 언어는 메모리를 할당할 때 스택 메모리 영역 또는 힙 메모리 영역을 사용한다. 이론상 스택 메모리 영역이 효율적이지만, 스택 메모리는 함수 내부에서만 사용 가능한 영역이다. 그래서 함수 외부로 공개되는 메모리 공간은 힙 메모리 공간을 할당한다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;자바는 클래스 타입을 힙에, 기본 타입을 스택에 할당한다. Go에서는 탈출 검사(escape analysis)를 해서 어느 메모리에 할당할지 결정한다. 즉 Go 언어는 어떤 타입이나 메모리 할당 공간이 함수 외부로 공개되는지 여부를 자동으로 검사해서 스택 메모리에 할당할지 힙 메모리에 할당할지 결정한다.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;package main

import (
	&quot;fmt&quot;
)

type User struct {
	Name string
	Age  int
}

func NewUser(name string, age int) *User {
	var u = User{name, age}
	return &amp;amp;u // 보통 함수가 종료하면 함수 내 선언된 변수는 사라지지만, 탈출 분석을 통해 u 메모리가 사라지지 않는다
}

func main() {
	userPointer := NewUser(&quot;AAA&quot;, 22)
	fmt.Println(userPointer)
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Go의 스택 메모리는 필요에 따라 크기가 늘어나는 동적 메모리 풀로, 고정 크기 스택을 갖는 C/C++과 비교해 메모리 효율성이 높고, 스택 고갈 문제도 발생하지 않는다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h1&gt;15. 문자열&lt;/h1&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;문자열은 문자 집합을 나타내는 타입으로 string이다. 문자열은 큰 따옴표나 백쿼트로 묶어서 표시한다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Go는 UTF-8 문자코드를 표준 문자 코드로 사용한다. UTF-8은 자주 사용되는 영문자, 숫자, 일부 특수문자를 1바이트로 표현하고, 그 외 다른 문자들은 2~4바이트로 표현한다. (한글은 3바이트)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;rune 타입&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;문자 하나를 표현하기 위해 rune 타입을 사용한다. UTF-8에서 한 글자는 1~4바이트 크기를 가지므로, 문자 하나의 값을 온전히 담으려면 4바이트가 필요하다. 그래서 rune 타입은 4바이트 정수 타입인 int32의 별칭 타입이다.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;package main

import (
	&quot;fmt&quot;
)

func main() {
	str := &quot;Hello 월드&quot;
	runes := []rune(str)

	fmt.Printf(&quot;len(str) = %d\\n&quot;, len(str))     // string 타입 길이,12 
	fmt.Printf(&quot;len(runes) = %d\\n&quot;, len(runes)) // []rune 타입 길이, 8
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;string에서 영문은 1바이트, 한글은 3바이트이므로, 총 12가 된다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;string 타입을 []rune으로 변환하면 글자 하나가 요소 하나인 배열이 된다. 그래서 len()은 바이트 수가 아니라 글자 수인 8이 된다.&lt;/p&gt;
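참고로 for range로 string을 순회하면 바이트가 아니라 rune 단위로 글자를 얻는다. 간단한 확인 예제:

```go
package main

import "fmt"

func main() {
	str := "Hello 월드"

	// 인덱스 i는 각 글자가 시작하는 바이트 위치, r은 rune 값이다
	// 한글 글자에서 인덱스가 3씩 건너뛰는 것을 볼 수 있다
	for i, r := range str {
		fmt.Printf("%d:%c ", i, r)
	}
	fmt.Println()

	fmt.Println(len(str), len([]rune(str))) // 12 8
}
```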
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;[]byte 타입&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;string 타입과 []byte 타입은 상호 타입 변환이 가능하다. []byte는 byte 즉 1바이트 부호 없는 정수 타입의 가변 길이 배열이다. 문자열은 메모리에 있는 데이터고, 메모리는 1바이트 단위로 저장되기 때문에 모든 문자열은 1바이트 배열로 변환 가능하다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;파일을 쓰거나, 네트워크로 데이터를 전송하는 경우, io.Writer 인터페이스를 사용하고, io.Writer 인터페이스는 []byte 타입을 인수로 받기 때문에 []byte 타입으로 변환해야 한다. 문자열을 쉽게 전송하고자 string에서 []byte 타입으로 변환을 지원한다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;문자열 구조&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;string은 필드가 2개인 구조체이다. 첫 번째 필드 Data는 uintptr 타입으로 문자열 데이터가 있는 메모리 주소를 나타내는 일종의 포인터이고, 두 번째 필드 Len은 문자열의 길이를 나타낸다.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;type StringHeader struct {
	Data uintptr
	Len int
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;rarr; str2 변수에 str1 변수를 대입하면, str1의 Data와 Len값만 str2에 복사한다. 문자열 자체가 복사되지는 않는다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;문자열과 immutable&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;string 타입이 가리키는 문자열은 일부를 변경할 수 없다(immutable). 변경하려면 []byte 같은 슬라이스로 타입 변환한 뒤 변경하는 방식을 취할 수 있다.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;package main

func main() {
	str := &quot;Hello world&quot;
	str = &quot;How are you&quot;
	str[2] = 'a' // cannot assign to str[2] (neither addressable nor a map index expression)
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;rarr; 문자열의 합산을 하면 기존 문자열 메모리 공간을 건드리지 않고, 새로운 메모리 공간을 만들어서 두 문자열을 합치기 때문에 주소값이 변경된다. (문자열 불변 원칙이 준수 된다.)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이로 인해 메모리 낭비가 있을 수 있는데, strings 패키지의 Builder를 이용해서 메모리 낭비를 줄일 수 있다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h1&gt;16. 패키지&lt;/h1&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;16.01. 패키지&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;패키지(package)란 Go에서 코드를 묶는 가장 큰 단위이다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;함수로 코드 블록을, 구조체로 데이터를, 패키지로 함수와 구조체와 그 외 코드를 묶는다. main 패키지는 특별한 패키지로 프로그램 시작점을 포함한 패키지이다. main() 함수와 다른 함수, 구조체 등을 가진다. 외부 패키지는 main() 을 가지지 않는다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;한 프로그램은 main 패키지 외에 다수의 다른 패키지를 포함할 수 있다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이러한 패키지를 import해 사용한다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;패키지를 가져오면 해당 패키지명을 쓰고 . 연산자를 사용해 패키지에서 제공하는 함수, 구조체 등에 접근할 수 있다.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;import (
	&quot;fmt&quot;
	&quot;math/rand&quot;
	&quot;text/template&quot;
	htemplate &quot;html/template&quot; // 동일한 패키지 명에는 별칭 htemplate
	_ &quot;github.com/mattn/go-sqlite3&quot; // import한 패키지는 반드시 사용해야 한다.
	// 패키지를 직접 사용하지 않지만 부가효과만 얻으려는 경우 _ 을 패키지명 앞에 붙여준다.
	// 부가효과: 패키지가 초기화되면서 실행되는 코드에 따른 효과
)

fmt.Println(&quot;Hello World&quot;) // fmt 패키지명 . Println() 함수명
fmt.Println(rand.Int()) // 경로가 있는 패키지인 math/rand 패키지의 경우는 마지막 폴더명인 rand만 사용한다.
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;패키지를 import 하면 컴파일러는 패키지 내 전역 변수를 초기화 한다. 그런 다음 패키지에 init() 함수가 있다면 호출해 패키지를 초기화 한다. init() 함수는 반드시 입력 매개변수가 없고, 반환값도 없는 함수여야 한다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이때 패키지의 초기화 함수인 init() 함수 기능만 사용하기 원할 경우 밑줄 _을 이용해서 import 한다.&lt;/p&gt;
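전역 변수 초기화, init(), main()이 호출되는 순서를 한 파일로 확인하는 스케치다(loadConfig는 설명용으로 가정한 이름이다).

```go
package main

import "fmt"

var config = loadConfig() // 1) 패키지 전역 변수 초기화가 가장 먼저 실행된다

func loadConfig() string {
	fmt.Println("1. 전역 변수 초기화")
	return "default"
}

func init() { // 2) 그 다음 init()이 호출된다 - 매개변수와 반환값이 없어야 한다
	fmt.Println("2. init() 호출")
}

func main() { // 3) 마지막으로 main()이 실행된다
	fmt.Println("3. main() 시작, config =", config)
}
```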
&lt;h2 data-ke-size=&quot;size26&quot;&gt;16.02. 모듈&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;모듈은 패키지를 모아 놓은 Go의 프로젝트 단위로 Go 1.16부터 기본이 됐다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이전에는 Go 모듈을 만들지 않는 Go 코드는 모두 GOPATH/src 폴더 아래에 있어야 했지만, 모듈이 기본이 되면서 모든 Go 코드는 Go 모듈 아래에 있어야 한다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;go build를 사용하려면 반드시 Go 모듈 루트 폴더에 go.mod 파일이 있어야 한다. go build를 통해 실행 파일을 만들 때, go.mod와 외부 저장소 패키지 버전 정보를 담고 있는 go.sum 파일을 통해 외부 패키지와 모듈 내 패키지를 합쳐서 실행 파일을 만든다.&lt;/p&gt;
&lt;pre class=&quot;maxima&quot;&gt;&lt;code&gt;go mod init 패키지명
&lt;/code&gt;&lt;/pre&gt;
&lt;blockquote data-ke-style=&quot;style3&quot;&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;go.mod : Go 버전과 외부 패키지 등이 명시된 파일 &lt;br /&gt;go.sum: 외부 저장소 패키지 버전 정보를 담고 있는 파일, 패키지 위조 여부 검사를 위한 checksum 결과가 있다.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;모듈 예제&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;1. goproject/usepkg 폴더를 만든다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;2. go 모듈 생성&lt;/p&gt;
&lt;pre id=&quot;code_1697356184053&quot; class=&quot;bash&quot; data-ke-language=&quot;bash&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;go mod init goprojects/usepkg&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;3. goproject/usepkg/custompkg/custompkg.go&lt;/p&gt;
&lt;pre id=&quot;code_1697356303741&quot; class=&quot;go&quot; data-ke-language=&quot;go&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;package custompkg

import &quot;fmt&quot;

func PrintCustom() {
	fmt.Println(&quot;This is custom pkg.&quot;)
}&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;4. goproject/usepkg/usepkg.go&lt;/p&gt;
&lt;pre id=&quot;code_1697356312460&quot; class=&quot;go&quot; data-ke-language=&quot;go&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;package main

import (
	&quot;fmt&quot; // go가 설치되면 같이 설치되는 표준 패키지
	&quot;goprojects/usepkg/custompkg&quot; // 현재 모듈에 속한 패키지

	&quot;github.com/guptarohit/asciigraph&quot; // 외부 저장소 패키지
	&quot;github.com/tuckersGo/musthaveGo/ch16/expkg&quot;
)

func main() {
	custompkg.PrintCustom()
	expkg.PrintSample()

	data := []float64{3, 4, 5, 5, 5, 2, 13, 5, 8, 6, 4}
	graph := asciigraph.Plot(data)
	fmt.Println(graph)
}&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;5. go mod tidy 로 모듈에 필요한 패키지를 찾아서 다운로드 해주고, 필요한 패키지 정보를 go.mod 파일과 go.sum 파일에 적어주게 된다. 다운 받은 외부 패키지인 asciigraph 패키지와 expkg 패키지는 GOPATH/pkg/mod 폴더에 버전별로 저장되어 있다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;6. 파일 내용: go.mod와 go.sum에 필요한 패키지의 버전 정보가 기입되어서 항상 같은 버전의 패키지가 사용되므로, 버전 업데이트에 따른 문제가 발생하지 않는다.&lt;/p&gt;
&lt;pre id=&quot;code_1697356331189&quot; class=&quot;bash&quot; data-ke-language=&quot;bash&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;// go.mod
module goprojects/usepkg

go 1.20

require (
	github.com/guptarohit/asciigraph v0.5.6
	github.com/tuckersGo/musthaveGo/ch16/expkg v0.0.0-20230126175348-6f7945b85bda
)

// go.sum
github.com/guptarohit/asciigraph v0.5.6 h1:0tra3HEhfdj1sP/9IedrCpfSiXYTtHdCgBhBL09Yx6E=
github.com/guptarohit/asciigraph v0.5.6/go.mod h1:dYl5wwK4gNsnFf9Zp+l06rFiDZ5YtXM6x7SRWZ3KGag=
github.com/tuckersGo/musthaveGo/ch16/expkg v0.0.0-20230126175348-6f7945b85bda h1:F21GWOayUeFkA47sc6oB2zb6ly9emQUx2A1wHTETta0=
github.com/tuckersGo/musthaveGo/ch16/expkg v0.0.0-20230126175348-6f7945b85bda/go.mod h1:o12FpIqEJes/Y7CWE9BJemI9VUTQBsH7t3wYlDCw3Fw=&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h1&gt;17. 숫자 맞추기 게임 만들기&lt;/h1&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;숫자 맞추기 게임은 책에서 소개하는 첫 번째 프로젝트로, 간단히 랜덤값을 생성하고, 표준 입력으로 숫자를 받고, 두 값을 비교해서 결과를 출력하는 프로젝트이다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;사용자 입력과 비교를 반복적으로 처리하기 위해 main()함수는 for문으로 구성되어 있다.&lt;/p&gt;</description>
      <category>Book Study/Tucker의 Go Programming</category>
      <category>go</category>
      <category>묘공단</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/19</guid>
      <comments>https://a-person.tistory.com/19#entry19comment</comments>
      <pubDate>Sun, 15 Oct 2023 16:54:49 +0900</pubDate>
    </item>
    <item>
      <title>Go스터디: 2주차(3~11장)</title>
      <link>https://a-person.tistory.com/18</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;이 글은 골든래빗 &amp;lsquo;Tucker의 Go 언어 프로그래밍의 3~11장 써머리입니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이 책은 다른 언어에 익숙한 분들은 go의 특성을 이해할 수 있고, 다른 언어에 대한 이해가 없어도 프로그래밍에 대한 기본을 이해하기 쉽게 쓰여진 장점이 있습니다.&lt;/p&gt;
&lt;h1&gt;03 Hello Go World&lt;/h1&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;03.01. Go에서 코드 실행 단계&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;코드가 프로그램이 되어 실행되기 까지 5가지 단계를 거쳐야 한다.&lt;/p&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;폴더 생성
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;모든 코드는 패키지 단위로 작성된다. 같은 폴더에 위치한 .go 파일은 모두 같은 패키지에 포함되고, 패키지 명으로 폴더명을 사용한다.&lt;/li&gt;
&lt;li&gt;예를 들어, goproject/hello/extra
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;여기서 hello 폴더에 든 .go 파일은 hello 패키지가 된다.&lt;/li&gt;
&lt;li&gt;extra 폴더에 든 .go 패키지는 extra 패키지가 된다.&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;.go 파일 생성 및 작성&lt;/li&gt;
&lt;li&gt;Go 모듈 생성 (&amp;rarr; 16장)
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;go 1.16 버전 이후로 go 모듈이 기본으로 적용된다. 그러므로, go 코드는 빌드하기 전에 모듈을 생성해야 한다.&lt;/li&gt;
&lt;li&gt;모듈 생성: go mod init &amp;lt;모듈 이름&amp;gt;&lt;/li&gt;
&lt;li&gt;go mod init goproject/hello&lt;/li&gt;
&lt;li&gt;go 모듈을 생성하면 go.mod 파일이 생성되고, 여기에 모듈명, go버전, 필요한 패키지 목록 정보가 저장된다.&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;빌드
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;go build 명령으로 go 코드를 기계어로 변환해 실행 파일을 만든다.&lt;/li&gt;
&lt;li&gt;GOOS, GOARCH 환경변수로, 다른 환경에 실행되는 실행 파일 만들 수 있다.&lt;/li&gt;
&lt;li&gt;GOOS=linux GOARCH=amd64 go build&lt;/li&gt;
&lt;li&gt;go tool dist list 로 설정 가능한 값을 확인&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;실행
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;실행 파일을 명령어로 실행&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;03.02. 샘플 코드 설명&lt;/h2&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;package main // 패키지 선언: 이 코드가 어떤 패키지에 속하는지 알려줌
						 // main 패키지는 프로그램 시작점(entry point)을 포함하는 특별한 패키지

import &quot;fmt&quot; // 패키지 가져온다. fmt는 표준 입출력을 다루는 내장 패키지 -&amp;gt; 5장

func main() { // main 함수 시작, main() 함수는 프로그램 진입점 함수
	// Hello Go World를 출력한다. //는 한 줄 주석, /* */는 여러 줄 주석
	// go 규약: 다른 패키지에서 쓰이는 공개 함수 앞에는 함수명으로 시작하는 주석을 달아 함수를 설명한다
	fmt.Println(&quot;Hello Go World&quot;) // 표준 출력(터미널 화면)으로 문자열을 출력 함수
} // 코드 블록 종료, 여기서는 main 함수 블록 종료
&lt;/code&gt;&lt;/pre&gt;
&lt;h1&gt;&amp;nbsp;&lt;/h1&gt;
&lt;h1&gt;04 변수&lt;/h1&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;04.01. 변수 기본&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;변수(Variable)는 &amp;lsquo;값을 저장하는 메모리 공간을 가리키는 이름&amp;rsquo;이다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;컴퓨터 입장에서 프로그램은 &amp;lsquo;메모리에 있는 데이터를 언제 어떻게 변경할지 나타낸 문서&amp;rsquo;이다&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;그래서 데이터를 어떻게 저장하고, 조작(선언, 사용, 출력)하는지가 핵심이 된다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;go에서 변수는 아래와 같이 선언한다. (메모리 할당이라고 부른다)&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;var a int = 10 // 변수선언 키워드, 변수명, 타입 = 초기값
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;변수는 4가지의 속성을 가진다.&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;이름: 프로그래머는 이름을 통해 메모리 공간에 접근한다. 이름은 camelCase가 권장되며, 함수 외부에 선언된 변수의 이름이 대문자로 시작하면 그 변수는 패키지 외부로 공개된다&lt;/li&gt;
&lt;li&gt;값: 메모리 공간에 저장된 값&lt;/li&gt;
&lt;li&gt;주소: 메모리 공간의 시작 주소&lt;/li&gt;
&lt;li&gt;타입: 변수 값의 형태(정수, 실수, 문자열, ..), 실제로 메모리 시작 주소에서 타입 크기에 해당하는 공간을 할당한다&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;변수 선언은 선언 대입문을 사용할 수 있다 _ var 키워드와 타입을 생략해 변수를 선언&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;var b = 3.1415
c := 365
s := &quot;hello&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;근데 같은 변수에 대해서 :=를 또 사용하면 아래와 같은 에러가 발생한다. NO NEW VAR!!&lt;/p&gt;
&lt;pre class=&quot;vim&quot;&gt;&lt;code&gt;.\\5.1.go:18:9: no new variables on left side of :=
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;go는 강 타입 언어이기 때문에 연산이나 대입에서 타입이 다르면 에러가 발생한다.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;a := 3
b := 3.5

var c int = b // cannot use b (variable of type float64) as int value in variable declaration
d := a * b // invalid operation: a * b (mismatched types int and float64)
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;타입을 변환하려면 원하는 타입명 뒤에 ()로 값을 감싼다. 이때 값의 손실이 발생할 수 있다.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;a := 3
b := 3.5

d := a * int(b)

fmt.Println(d)  // 9
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;전역 변수(global variable)는 같은 패키지 내에서 언제나 접근할 수 있지만, 지역 변수(local variable)의 범위는 {} 안에서만 유효하다.&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;04.02. 숫자의 표현&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;정수의 표현에서 부호를 나타내기 위해 부호 비트를 이용한다. 음수 값은 2의 보수(비트를 반전하고 +1)를 사용해 표현한다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;실수의 표현을 위해 부호비트(1비트), 지수부(8비트), 소수부(23비트)를 사용한다. 실수 타입은 유효자리 수가 정해져 있으므로 데이터 처리에 유의해야 한다.&lt;/p&gt;
&lt;h1&gt;&amp;nbsp;&lt;/h1&gt;
&lt;h1&gt;05 fmt 패키지를 이용한 텍스트 입출력&lt;/h1&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;05.01 입출력 기본&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;프로그램과 사용자는 입력과 출력을 통해서 상호작용을 하는데, 기본적으로는 입력을 위해 키보드, 출력을 위해 화면을 이용한다. 모든 입력과 출력을 프로그램에서 구현하기는 복잡하기 때문에, 운영체제가 제공하는 표준 입출력 스트림(standard input/output stream)을 사용하면, 프로그램 내부에서 입출력을 간편하게 처리할 수 있다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;go에서는 fmt 패키지를 이용한다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;fmt의 표준 출력 함수&lt;/p&gt;
&lt;table style=&quot;border-collapse: collapse; width: 100%;&quot; border=&quot;1&quot; data-ke-align=&quot;alignLeft&quot;&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Print()&lt;/td&gt;
&lt;td&gt;함수 입력값을 출력한다&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Println()&lt;/td&gt;
&lt;td&gt;함수 입력값을 출력하고 개행한다&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Printf()&lt;/td&gt;
&lt;td&gt;함수 입력값을 서식 문자를 이용한 출력한다&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;fmt의 표준 입력 함수&lt;/p&gt;
&lt;table style=&quot;border-collapse: collapse; width: 100%;&quot; border=&quot;1&quot; data-ke-align=&quot;alignLeft&quot;&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Scan()&lt;/td&gt;
&lt;td&gt;표준 입력에서 값을 입력받는다&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Scanf()&lt;/td&gt;
&lt;td&gt;표준 입력에서 서식 문자로 입력받는다&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Scanln()&lt;/td&gt;
&lt;td&gt;표준 입력에서 한 줄을 읽어서 값을 받는다&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Scan()&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;변수들의 메모리 주소를 인수로 받는다. 여러 값을 입력받을 때 입력값은 공백이나 줄바꿈으로 구분한다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;반환값으로는 성공적으로 입력받은 값의 개수와, 입력 실패 시 에러를 돌려준다.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;func Scan(a ...interface{}) (n int, err error)
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;반환값의 의미를 아래 예시로 살펴본다.&lt;/p&gt;
&lt;pre class=&quot;angelscript&quot;&gt;&lt;code&gt;go run .\\5.2.go
1 hello  // 입력
1 expected integer  // 1개 받고, err

go run .\\5.2.go
hello 1  // 입력
0 expected integer  // 0개 받고 err
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Scanf() 함수 원형&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;func Scanf(format string, a ...interface{}) (n int, err error)
&lt;/code&gt;&lt;/pre&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;05.02 키보드 입력과 Scan()의 동작 원리&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;사용자가 표준 입력 장치로 입력하면 입력 데이터는 컴퓨터 내부의 표준 입력 스트림(standard input stream)이라는 메모리 공간에 임시 저장되는데, Scan() 함수들은 그 표준 입력 스트림에서 값을 읽어서 입력값을 처리한다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Stream 이란 흐름의 의미이다. 즉, 입력 데이터가 연속된 데이터 흐름 형태를 가지고 있다는 뜻이다. 한편 한번 읽은 데이터를 다시 읽을 수 없다는 의미도 포함한다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;이때 먼저 입력한 데이터부터 읽어온다. (FIFO 구조)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;다만 이러한 구조 때문에 var a int 에 대해서 Scan(&amp;amp;a)으로 Hello를 입력하면 H부터 읽게 되고, 첫 번째 문자에서 에러가 발생하면 표준 입력 스트림에 읽지 못한 나머지 입력이 남게 된다(여기서는 ello).&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;그래서 여러 번 Scan() 함수를 호출할 때는 이러한 문제가 있으므로, 표준 입력 스트림을 비워줄 필요가 있다.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;package main

import (
	&quot;bufio&quot; // buffered I/O package
	&quot;fmt&quot;
	&quot;os&quot; // provides standard input/output handles
)

func main() {
	stdin := bufio.NewReader(os.Stdin)  // reader over standard input

	var a int
	var b int

	n, err := fmt.Scan(&amp;amp;a, &amp;amp;b)
	if err != nil {
		fmt.Println(n, err)
		stdin.ReadString('\\n') // read until the newline, flushing the standard input stream
	} else {
		fmt.Println(n, a, b)
	}

	n, err = fmt.Scan(&amp;amp;a, &amp;amp;b)
	if err != nil {
		fmt.Println(n, err)
	} else {
		fmt.Println(n, a, b)
	}
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Checking the output&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;// without stdin.ReadString('\\n')
PS C:\\Users\\montauk\\Desktop\\projects\\goprojects\\go05&amp;gt; go run .\\5.1.go
1 Hello
1 expected integer
0 expected integer

// with stdin.ReadString('\\n')
PS C:\\Users\\montauk\\Desktop\\projects\\goprojects\\go05&amp;gt; go run .\\5.1.go
1 Hello
1 expected integer
2 3
2 2 3
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;bufio provides a Reader object that reads a line at a time from an input stream.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;func NewReader(rd io.Reader) *Reader
&lt;/code&gt;&lt;/pre&gt;
&lt;h1&gt;&amp;nbsp;&lt;/h1&gt;
&lt;h1&gt;06 연산자&lt;/h1&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Arithmetic operators (the four basic operators, bitwise operators, shift operators), comparison operators (==, !=, &amp;lt;, &amp;gt;), and logical operators (&amp;amp;&amp;amp;, ||, !)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The assignment operator (=) copies the right-hand value into the left-hand side (a memory location).&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;The left-hand side must be a variable with storage to receive the value.&lt;/li&gt;
&lt;li&gt;The assignment operator does not return a value.&lt;/li&gt;
&lt;li&gt;Multiple assignment is allowed (a, b = 3, 4).&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Other operators&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;[] accesses an element of an array&lt;/p&gt;
&lt;table style=&quot;border-collapse: collapse; width: 100%;&quot; border=&quot;1&quot; data-ke-align=&quot;alignLeft&quot;&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;.&lt;/td&gt;
&lt;td&gt;Accesses a member of a struct or package&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&amp;amp;&lt;/td&gt;
&lt;td&gt;Returns the memory address of a variable&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;*&lt;/td&gt;
&lt;td&gt;Dereferences a pointer, accessing the memory it points to&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&amp;hellip;&lt;/td&gt;
&lt;td&gt;Unpacks slice elements or declares a variadic parameter&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;:&lt;/td&gt;
&lt;td&gt;Takes a slice of part of an array&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&amp;larr;&lt;/td&gt;
&lt;td&gt;Receives a value from or sends a value to a channel&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h1&gt;&amp;nbsp;&lt;/h1&gt;
&lt;h1&gt;07 함수&lt;/h1&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A function consists of the func keyword, a function name, parameters, a return type, and a code block.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;func Add(a int, b int) int {
	// code block
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The values you pass when calling a function are called arguments.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The variables a function receives from outside are called parameters.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;c := Add(3, 4) // at the call site, 3 and 4 are arguments

func Add(a int, b int) int { // in the definition, a and b are parameters
	return a+b 
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;How are the values passed at the call site actually delivered to the function?&lt;/p&gt;
&lt;blockquote data-ke-style=&quot;style3&quot;&gt;Arguments are copied into parameters. Parameters and variables declared inside a function go out of scope when the function returns and can no longer be accessed.&lt;/blockquote&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The function does not use the passed values directly; it works on copies. This is the same as assigning initial values to the variables a and b declared inside the function.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In the example above, 3 and 4 are copied into the parameters, and the return value is copied back out. Once the called function ends, its local variables can no longer be accessed.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;When return yields the function result, the function ends immediately and the instruction pointer goes back to the call site, where execution continues.&lt;/p&gt;
&lt;h1&gt;&amp;nbsp;&lt;/h1&gt;
&lt;h1&gt;08 상수&lt;/h1&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A constant is a value that does not change. Types that are not primitive (i.e. complex types) cannot be constants.&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;const ContVal int = 10
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The error raised when you try to modify a constant&lt;/p&gt;
&lt;pre class=&quot;vhdl&quot;&gt;&lt;code&gt;cannot assign to c (neither addressable nor a map index expression)
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;The error raised when you try to take a constant's memory address&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;invalid operation: cannot take address of c (constant 10 of type int)
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;A constant keeps its initialized value: it cannot appear on the left-hand side of an assignment, and its memory address cannot be taken.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Why? Is a constant replaced by the number itself at compile time?&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;rarr; p.184: the reason a constant's memory address cannot be taken is, likewise, that it is converted to a literal at compile time and written into the executable as a value. (It does not use the dynamically allocated memory region.)&lt;/p&gt;
&lt;blockquote data-ke-style=&quot;style3&quot;&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;When a program is loaded, the region holding the executable is called the code region, and the memory allocated while the program runs is called the dynamically allocated memory region. Because constants are embedded in the code region as literals, they do not use the dynamic allocation region.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;When are constants used?&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Values that must not change&lt;/li&gt;
&lt;li&gt;Code values: giving meaning to a number (e.g. 404: NOT FOUND)&lt;/li&gt;
&lt;li&gt;Convenient enumerations with iota (the iota keyword generates code values that increase by 1)&lt;/li&gt;
&lt;/ul&gt;
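A minimal sketch of an iota enumeration (the color names are made up for illustration):

```go
package main

import "fmt"

// iota starts at 0 in a const block and increases by 1 per line,
// so enumerated code values come for free.
const (
	Red = iota // 0
	Green      // 1
	Blue       // 2
)

func main() {
	fmt.Println(Red, Green, Blue)
}
```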
&lt;p data-ke-size=&quot;size16&quot;&gt;A literal is a fixed value, a value written as itself. In Go, constants are treated like literals, so at compile time each constant is converted to a literal and written into the executable.&lt;/p&gt;
&lt;h1&gt;&amp;nbsp;&lt;/h1&gt;
&lt;h1&gt;09~11 if, switch, for&lt;/h1&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;09. If&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Basic form&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;if condition {
	statements
} else if condition {
	statements
} else {
	statements
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Short circuit? A property of &amp;amp;&amp;amp; and ||: the right-hand side may not be evaluated at all&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;With &amp;amp;&amp;amp;, if the left side is false the right side is not evaluated and the result is false.&lt;/li&gt;
&lt;li&gt;With ||, if the left side is true the right side is not evaluated and the result is true.&lt;/li&gt;
&lt;/ul&gt;
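A quick way to observe short-circuiting is to count how often the right-hand side runs; with a true left side, || never calls it (check and calls are illustrative names):

```go
package main

import "fmt"

var calls int // counts how often the right-hand side runs

func check() bool {
	calls++
	return true
}

func main() {
	// The left side is true, so || short-circuits and check() never runs.
	if true || check() {
		fmt.Println("calls:", calls)
	}
}
```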
&lt;p data-ke-size=&quot;size16&quot;&gt;Example of an if init statement (note: a variable declared in the init statement is scoped to the if statement)&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;if filename, success := UploadFile(); success {
	fmt.Println(&quot;success:&quot;, filename)
} else {
	fmt.Println(&quot;fail&quot;)
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;10. switch&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Basic form&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;switch value {
case value1:
	statements
case value2:
	statements
default:
	statements
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;In Go, each case exits the switch on its own; no break is needed. If you want execution to continue into the next case body, use the fallthrough keyword. (fallthrough is discouraged, though, since it can be confusing.)&lt;/p&gt;
&lt;h2 data-ke-size=&quot;size26&quot;&gt;11. for&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Basic form&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;for init; condition; post {
	code block
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Abbreviated forms&lt;/p&gt;
&lt;pre class=&quot;go&quot;&gt;&lt;code&gt;for ; condition; post {}
for init; condition; {}
for ; condition; {}
for condition {}
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Handling break in nested for loops with a flag variable &amp;rarr; for more complex cases, a label can be used.&lt;/p&gt;</description>
      <category>Book Study/Tucker의 Go Programming</category>
      <category>go</category>
      <category>묘공단</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/18</guid>
      <comments>https://a-person.tistory.com/18#entry18comment</comments>
      <pubDate>Sun, 8 Oct 2023 16:34:00 +0900</pubDate>
    </item>
    <item>
      <title>curl 에 timeout 주기</title>
      <link>https://a-person.tistory.com/17</link>
<description>&lt;p data-ke-size=&quot;size16&quot;&gt;When testing a web service with curl, the request can hang indefinitely if no options are given, and you have to break out with Ctrl+C.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;pre id=&quot;code_1696427199256&quot; class=&quot;bash&quot; data-ke-language=&quot;bash&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;$ curl 192.168.0.1
^C&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;With the -m &amp;lt;sec&amp;gt; option, curl tries the request for that many seconds and then times out.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;pre id=&quot;code_1696427251432&quot; class=&quot;bash&quot; data-ke-language=&quot;bash&quot; data-ke-type=&quot;codeblock&quot;&gt;&lt;code&gt;$ curl -m 3 192.168.0.1
curl: (28) Connection timed out after 3002 milliseconds&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Passing -m makes it easy to check success or failure when testing.&lt;/p&gt;</description>
      <category>기타</category>
      <category>curl</category>
      <category>Timeout</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/17</guid>
      <comments>https://a-person.tistory.com/17#entry17comment</comments>
      <pubDate>Wed, 4 Oct 2023 22:52:48 +0900</pubDate>
    </item>
    <item>
      <title>쿠버네티스에 containerd 를 사용하는 윈도우 워커노드 추가 (with Calico CNI)</title>
      <link>https://a-person.tistory.com/16</link>
<description>&lt;h2 contenteditable=&quot;true&quot; data-ke-size=&quot;size26&quot;&gt;&lt;span&gt;Adding a Windows node that uses containerd&lt;/span&gt;&lt;/h2&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;In the earlier post 'Adding a Windows worker node to Kubernetes (with Calico CNI)', we added a Windows worker node using Docker EE. Since Docker EE has been deprecated, containerd needs to be used instead, so this post records the procedure for adding a Windows worker node with containerd.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;edited_containerd-horizontal-color-1.webp&quot; data-origin-width=&quot;300&quot; data-origin-height=&quot;74&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/caz7lP/btrHapYcMIZ/ohPxBeyPqk6EArruC2QWUk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/caz7lP/btrHapYcMIZ/ohPxBeyPqk6EArruC2QWUk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/caz7lP/btrHapYcMIZ/ohPxBeyPqk6EArruC2QWUk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fcaz7lP%2FbtrHapYcMIZ%2FohPxBeyPqk6EArruC2QWUk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;300&quot; height=&quot;74&quot; data-filename=&quot;edited_containerd-horizontal-color-1.webp&quot; data-origin-width=&quot;300&quot; data-origin-height=&quot;74&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p contenteditable=&quot;true&quot; data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;Install containerd by following the 'getting started with containerd' document below.&lt;/span&gt;&lt;/p&gt;
&lt;p contenteditable=&quot;true&quot; data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;&lt;a href=&quot;https://github.com/containerd/containerd/blob/main/docs/getting-started.md#installing-containerd-on-windows&quot;&gt;https://github.com/containerd/containerd/blob/main/docs/getting-started.md#installing-containerd-on-windows&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;
&lt;pre class=&quot;nix&quot;&gt;&lt;code&gt;PS C:\Users\Administrator&amp;gt; $Version=&quot;1.6.4&quot;
PS C:\Users\Administrator&amp;gt; curl.exe -L https://github.com/containerd/containerd/releases/download/v$Version/containerd-$Version-windows-amd64.tar.gz -o containerd-windows-amd64.tar.gz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 30.0M  100 30.0M    0     0  1393k      0  0:00:22  0:00:22 --:--:-- 1630k
PS C:\Users\Administrator&amp;gt; tar.exe xvf .\containerd-windows-amd64.tar.gz
x bin/
x bin/containerd.exe
x bin/containerd-shim-runhcs-v1.exe
x bin/containerd-stress.exe
x bin/ctr.exe
PS C:\Users\Administrator&amp;gt; Copy-Item -Path &quot;.\bin\&quot; -Destination &quot;$Env:ProgramFiles\containerd&quot; -Recurse -Force
PS C:\Users\Administrator&amp;gt; cd $Env:ProgramFiles\containerd\
PS C:\Program Files\containerd&amp;gt; .\containerd.exe config default | Out-File config.toml -Encoding ascii
PS C:\Program Files\containerd&amp;gt; Get-Content config.toml
disabled_plugins = []
imports = []
oom_score = 0
plugin_dir = &quot;&quot;
required_plugins = []
root = &quot;C:\\ProgramData\\containerd\\root&quot;
state = &quot;C:\\ProgramData\\containerd\\state&quot;
temp = &quot;&quot;
version = 2

[cgroup]
  path = &quot;&quot;

[debug]
  address = &quot;&quot;
  format = &quot;&quot;
  gid = 0
  level = &quot;&quot;
  uid = 0

[grpc]
  address = &quot;\\\\.\\pipe\\containerd-containerd&quot;
  gid = 0
  max_recv_message_size = 16777216
  max_send_message_size = 16777216
  tcp_address = &quot;&quot;
  tcp_tls_ca = &quot;&quot;
  tcp_tls_cert = &quot;&quot;
  tcp_tls_key = &quot;&quot;
  uid = 0

[metrics]
  address = &quot;&quot;
  grpc_histogram = false

[plugins]

  [plugins.&quot;io.containerd.gc.v1.scheduler&quot;]
    deletion_threshold = 0
    mutation_threshold = 100
    pause_threshold = 0.02
    schedule_delay = &quot;0s&quot;
    startup_delay = &quot;100ms&quot;

  [plugins.&quot;io.containerd.grpc.v1.cri&quot;]
    device_ownership_from_security_context = false
    disable_apparmor = false
    disable_cgroup = false
    disable_hugetlb_controller = false
    disable_proc_mount = false
    disable_tcp_service = true
    enable_selinux = false
    enable_tls_streaming = false
    enable_unprivileged_icmp = false
    enable_unprivileged_ports = false
    ignore_image_defined_volumes = false
    max_concurrent_downloads = 3
    max_container_log_line_size = 16384
    netns_mounts_under_state_dir = false
    restrict_oom_score_adj = false
    sandbox_image = &quot;k8s.gcr.io/pause:3.6&quot;
    selinux_category_range = 0
    stats_collect_period = 10
    stream_idle_timeout = &quot;4h0m0s&quot;
    stream_server_address = &quot;127.0.0.1&quot;
    stream_server_port = &quot;0&quot;
    systemd_cgroup = false
    tolerate_missing_hugetlb_controller = false
    unset_seccomp_profile = &quot;&quot;

    [plugins.&quot;io.containerd.grpc.v1.cri&quot;.cni]
      bin_dir = &quot;C:\\Program Files\\containerd\\cni\\bin&quot;
      conf_dir = &quot;C:\\Program Files\\containerd\\cni\\conf&quot;
      conf_template = &quot;&quot;
      ip_pref = &quot;&quot;
      max_conf_num = 1

    [plugins.&quot;io.containerd.grpc.v1.cri&quot;.containerd]
      default_runtime_name = &quot;runhcs-wcow-process&quot;
      disable_snapshot_annotations = false
      discard_unpacked_layers = false
      ignore_rdt_not_enabled_errors = false
      no_pivot = false
      snapshotter = &quot;windows&quot;

      [plugins.&quot;io.containerd.grpc.v1.cri&quot;.containerd.default_runtime]
        base_runtime_spec = &quot;&quot;
        cni_conf_dir = &quot;&quot;
        cni_max_conf_num = 0
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        runtime_engine = &quot;&quot;
        runtime_path = &quot;&quot;
        runtime_root = &quot;&quot;
        runtime_type = &quot;&quot;

        [plugins.&quot;io.containerd.grpc.v1.cri&quot;.containerd.default_runtime.options]

      [plugins.&quot;io.containerd.grpc.v1.cri&quot;.containerd.runtimes]

        [plugins.&quot;io.containerd.grpc.v1.cri&quot;.containerd.runtimes.runhcs-wcow-process]
          base_runtime_spec = &quot;&quot;
          cni_conf_dir = &quot;&quot;
          cni_max_conf_num = 0
          container_annotations = []
          pod_annotations = []
          privileged_without_host_devices = false
          runtime_engine = &quot;&quot;
          runtime_path = &quot;&quot;
          runtime_root = &quot;&quot;
          runtime_type = &quot;io.containerd.runhcs.v1&quot;

          [plugins.&quot;io.containerd.grpc.v1.cri&quot;.containerd.runtimes.runhcs-wcow-process.options]

      [plugins.&quot;io.containerd.grpc.v1.cri&quot;.containerd.untrusted_workload_runtime]
        base_runtime_spec = &quot;&quot;
        cni_conf_dir = &quot;&quot;
        cni_max_conf_num = 0
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        runtime_engine = &quot;&quot;
        runtime_path = &quot;&quot;
        runtime_root = &quot;&quot;
        runtime_type = &quot;&quot;

        [plugins.&quot;io.containerd.grpc.v1.cri&quot;.containerd.untrusted_workload_runtime.options]

    [plugins.&quot;io.containerd.grpc.v1.cri&quot;.image_decryption]
      key_model = &quot;node&quot;

    [plugins.&quot;io.containerd.grpc.v1.cri&quot;.registry]
      config_path = &quot;&quot;

      [plugins.&quot;io.containerd.grpc.v1.cri&quot;.registry.auths]

      [plugins.&quot;io.containerd.grpc.v1.cri&quot;.registry.configs]

      [plugins.&quot;io.containerd.grpc.v1.cri&quot;.registry.headers]

      [plugins.&quot;io.containerd.grpc.v1.cri&quot;.registry.mirrors]

    [plugins.&quot;io.containerd.grpc.v1.cri&quot;.x509_key_pair_streaming]
      tls_cert_file = &quot;&quot;
      tls_key_file = &quot;&quot;

  [plugins.&quot;io.containerd.internal.v1.opt&quot;]
    path = &quot;C:\\ProgramData\\containerd\\root\\opt&quot;

  [plugins.&quot;io.containerd.internal.v1.restart&quot;]
    interval = &quot;10s&quot;

  [plugins.&quot;io.containerd.internal.v1.tracing&quot;]
    sampling_ratio = 1.0
    service_name = &quot;containerd&quot;

  [plugins.&quot;io.containerd.metadata.v1.bolt&quot;]
    content_sharing_policy = &quot;shared&quot;

  [plugins.&quot;io.containerd.runtime.v2.task&quot;]
    platforms = [&quot;windows/amd64&quot;, &quot;linux/amd64&quot;]
    sched_core = false

  [plugins.&quot;io.containerd.service.v1.diff-service&quot;]
    default = [&quot;windows&quot;, &quot;windows-lcow&quot;]

  [plugins.&quot;io.containerd.service.v1.tasks-service&quot;]
    rdt_config_file = &quot;&quot;

  [plugins.&quot;io.containerd.tracing.processor.v1.otlp&quot;]
    endpoint = &quot;&quot;
    insecure = false
    protocol = &quot;&quot;

[proxy_plugins]

[stream_processors]

  [stream_processors.&quot;io.containerd.ocicrypt.decoder.v1.tar&quot;]
    accepts = [&quot;application/vnd.oci.image.layer.v1.tar+encrypted&quot;]
    args = [&quot;--decryption-keys-path&quot;, &quot;C:\\Program Files\\containerd\\ocicrypt\\keys&quot;]
    env = [&quot;OCICRYPT_KEYPROVIDER_CONFIG=C:\\Program Files\\containerd\\ocicrypt\\ocicrypt_keyprovider.conf&quot;]
    path = &quot;ctd-decoder&quot;
    returns = &quot;application/vnd.oci.image.layer.v1.tar&quot;

  [stream_processors.&quot;io.containerd.ocicrypt.decoder.v1.tar.gzip&quot;]
    accepts = [&quot;application/vnd.oci.image.layer.v1.tar+gzip+encrypted&quot;]
    args = [&quot;--decryption-keys-path&quot;, &quot;C:\\Program Files\\containerd\\ocicrypt\\keys&quot;]
    env = [&quot;OCICRYPT_KEYPROVIDER_CONFIG=C:\\Program Files\\containerd\\ocicrypt\\ocicrypt_keyprovider.conf&quot;]
    path = &quot;ctd-decoder&quot;
    returns = &quot;application/vnd.oci.image.layer.v1.tar+gzip&quot;

[timeouts]
  &quot;io.containerd.timeout.bolt.open&quot; = &quot;0s&quot;
  &quot;io.containerd.timeout.shim.cleanup&quot; = &quot;5s&quot;
  &quot;io.containerd.timeout.shim.load&quot; = &quot;5s&quot;
  &quot;io.containerd.timeout.shim.shutdown&quot; = &quot;3s&quot;
  &quot;io.containerd.timeout.task.state&quot; = &quot;2s&quot;

[ttrpc]
  address = &quot;&quot;
  gid = 0
  uid = 0
PS C:\Program Files\containerd&amp;gt; .\containerd.exe --register-service
PS C:\Program Files\containerd&amp;gt; Start-Service containerd
PS C:\Program Files\containerd&amp;gt; Get-Service containerd

Status   Name               DisplayName
------   ----               -----------
Running  containerd         containerd

&lt;/code&gt;&lt;/pre&gt;
&lt;p contenteditable=&quot;true&quot; data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;containerd now runs as a Windows service.&lt;/span&gt;&lt;/p&gt;
&lt;p contenteditable=&quot;true&quot; data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;Because Docker is not installed, a separate CLI is needed. Running the crictl command shows it probing each container runtime's endpoint, so, following the document below, I created crictl.yaml under the user profile.&lt;/span&gt;&lt;/p&gt;
&lt;p contenteditable=&quot;true&quot; data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;&lt;a href=&quot;https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md&quot;&gt;https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;
&lt;pre class=&quot;elixir&quot;&gt;&lt;code&gt;PS C:\Users\Administrator&amp;gt; $VERSION=&quot;v1.24.2&quot;
PS C:\Users\Administrator&amp;gt; curl.exe -L https://github.com/kubernetes-sigs/cri-tools/releases/download/$VERSION/crictl-$VERSION-windows-amd64.tar.gz -o crictl-windows-amd64.tar.gz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 14.1M  100 14.1M    0     0  1275k      0  0:00:11  0:00:11 --:--:-- 1705k
PS C:\Users\Administrator&amp;gt; tar.exe xvf .\crictl-windows-amd64.tar.gz -C $ENV:WINDIR\system32
x crictl.exe
PS C:\Users\Administrator&amp;gt; crictl ps
time=&quot;2022-07-12T14:35:37+09:00&quot; level=warning msg=&quot;runtime connect using default endpoints: [npipe:////./pipe/dockershim npipe:////./pipe/containerd-containerd npipe:////./pipe/cri-dockerd]. As the default settings are now deprecated, you should set the endpoint instead.&quot;
time=&quot;2022-07-12T14:35:37+09:00&quot; level=error msg=&quot;unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \&quot;transport: Error while dialing open //./pipe/dockershim: The system cannot find the file specified.\&quot;&quot;
time=&quot;2022-07-12T14:35:37+09:00&quot; level=warning msg=&quot;image connect using default endpoints: [npipe:////./pipe/dockershim npipe:////./pipe/containerd-containerd npipe:////./pipe/cri-dockerd]. As the default settings are now deprecated, you should set the endpoint instead.&quot;
time=&quot;2022-07-12T14:35:37+09:00&quot; level=error msg=&quot;unable to determine image API version: rpc error: code = Unavailable desc = connection error: desc = \&quot;transport: Error while dialing open //./pipe/dockershim: The system cannot find the file specified.\&quot;&quot;
CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD

PS C:\Users\Administrator&amp;gt; mkdir $Env:UserProfile\.crictl


    Directory: C:\Users\Administrator


Mode                LastWriteTime         Length Name
----                -------------         ------ ----
d-----     2022-07-12   오후 2:40                .crictl


PS C:\Users\Administrator&amp;gt; notepad $Env:UserProfile\.crictl\crictl.yaml
PS C:\Users\Administrator&amp;gt; type $Env:UserProfile\.crictl\crictl.yaml
runtime-endpoint: npipe:\\\\.\\pipe\\containerd-containerd
image-endpoint: npipe:\\\\.\\pipe\\containerd-containerd
timeout: 2
debug: true
pull-image-on-create: false
PS C:\Users\Administrator&amp;gt; crictl ps
time=&quot;2022-07-12T14:43:19+09:00&quot; level=debug msg=&quot;get runtime connection&quot;
time=&quot;2022-07-12T14:43:19+09:00&quot; level=debug msg=&quot;get image connection&quot;
time=&quot;2022-07-12T14:43:19+09:00&quot; level=debug msg=&quot;ListContainerResponse: []&quot;
CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD&lt;/code&gt;&lt;/pre&gt;
&lt;p contenteditable=&quot;true&quot; data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;Trying to install Calico then fails with the error below.&lt;/span&gt;&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;PS C:\Users\Administrator&amp;gt; mkdir c:\k


    Directory: C:\


Mode                LastWriteTime         Length Name
----                -------------         ------ ----
d-----     2022-07-12   오후 2:51                k


PS C:\Users\Administrator&amp;gt; scp root@172.16.3.170:~/.kube/config c:\k\
root@172.16.3.170's password:
config                                                                                                                                                                                                          100% 5640     5.5KB/s   00:00
PS C:\Users\Administrator&amp;gt; Invoke-WebRequest https://projectcalico.docs.tigera.io/scripts/install-calico-windows.ps1 -OutFile c:\install-calico-windows.ps1
PS C:\Users\Administrator&amp;gt; c:\install-calico-windows.ps1 -KubeVersion 1.22.6 -ServiceCidr 10.96.0.0/12 -DNSServerIPs 10.96.0.10
WARNING: The names of some imported commands from the module 'helper' include unapproved verbs that might make them less discoverable. To find the commands with unapproved verbs, run the Import-Module command again with the Verbose
parameter. For a list of approved verbs, type Get-Verb.
c:\calico-windows.zip not found, downloading Calico for Windows release...
Downloaded [https://github.com/projectcalico/calico/releases/download/v3.23.2//calico-windows-v3.23.2.zip] =&amp;gt; [c:\calico-windows.zip]
C:\install-calico-windows.ps1 : The term 'Get-HnsNetwork' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and t
ry again.
At line:1 char:1
+ c:\install-calico-windows.ps1 -KubeVersion 1.22.6 -ServiceCidr 10.96. ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : ObjectNotFound: (Get-HnsNetwork:String) [install-calico-windows.ps1], CommandNotFoundException
    + FullyQualifiedErrorId : CommandNotFoundException,install-calico-windows.ps1
&lt;/code&gt;&lt;/pre&gt;
&lt;p contenteditable=&quot;true&quot; data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;Unlike with Docker EE, installing containerd by itself does not enable the Windows Containers feature, so services such as HNS and the related cmdlets are not installed.&lt;/span&gt;&lt;/p&gt;
&lt;blockquote data-ke-style=&quot;style3&quot;&gt;
&lt;p contenteditable=&quot;true&quot; data-ke-size=&quot;size16&quot;&gt;The term 'Get-HnsNetwork' is not recognized as the name of a cmdlet, function, script file, or operable program.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;First, install the Containers feature on Windows Server. (The command output is lost because of the restart, but proceeding as below works.)&lt;/span&gt;&lt;/p&gt;
&lt;pre class=&quot;taggerscript&quot;&gt;&lt;code&gt;PS C:\Users\Administrator&amp;gt; Install-WindowsFeature -Name containers
PS C:\Users\Administrator&amp;gt; Restart-Computer -Force&lt;/code&gt;&lt;/pre&gt;
&lt;p contenteditable=&quot;true&quot; data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;After the restart, the Host Network Service can now be found.&lt;/span&gt;&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;PS C:\Users\Administrator&amp;gt; Get-Service hns

Status   Name               DisplayName
------   ----               -----------
Stopped  hns                Host Network Service
&lt;/code&gt;&lt;/pre&gt;
&lt;p contenteditable=&quot;true&quot; data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;After switching to containerd, one more error occurs.&lt;/span&gt;&lt;/p&gt;
&lt;pre class=&quot;routeros&quot;&gt;&lt;code&gt;PS C:\Users\Administrator&amp;gt; c:\install-calico-windows.ps1 -KubeVersion 1.22.6 -ServiceCidr 10.96.0.0/12 -DNSServerIPs 10.96.0.10
WARNING: The names of some imported commands from the module 'helper' include unapproved verbs that might make them less discoverable. To find the commands with unapproved verbs, run the Import-Module command again with the Verbose
parameter. For a list of approved verbs, type Get-Verb.
Unzip Calico for Windows release...
Creating CNI directory
&amp;lt;생략&amp;gt;
Validating configuration...
CNI binary directory C:\Program Files\containerd\cni\bin doesn't exist.  Please create it and ensure kubelet is configured with matching --cni-bin-dir.
At C:\CalicoWindows\libs\calico\calico.psm1:35 char:13
+             throw &quot;CNI binary directory $env:CNI_BIN_DIR doesn't exis ...
+             ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : OperationStopped: (CNI binary dire... --cni-bin-dir.:String) [], RuntimeException
    + FullyQualifiedErrorId : CNI binary directory C:\Program Files\containerd\cni\bin doesn't exist.  Please create it and ensure kubelet is configured with matching --cni-bin-dir.&lt;/code&gt;&lt;/pre&gt;
&lt;p contenteditable=&quot;true&quot; data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;It appears that with containerd the script references the cni\bin location without actually creating it, but confirming this would require digging further into the install script.&lt;/span&gt;&lt;/p&gt;
&lt;p contenteditable=&quot;true&quot; data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;For now, create the directory manually and run the script again.&lt;/span&gt;&lt;/p&gt;
&lt;pre class=&quot;yaml&quot;&gt;&lt;code&gt;PS C:\Program Files\containerd\cni\conf&amp;gt; mkdir &quot;C:\Program Files\containerd\cni\bin&quot;


    Directory: C:\Program Files\containerd\cni


Mode                LastWriteTime         Length Name
----                -------------         ------ ----
d-----     2022-07-12   오후 3:51                bin


PS C:\Program Files\containerd\cni&amp;gt; c:\install-calico-windows.ps1 -KubeVersion 1.22.6 -ServiceCidr 10.96.0.0/12 -DNSServerIPs 10.96.0.10
WARNING: The names of some imported commands from the module 'helper' include unapproved verbs that might make them less discoverable. To find the commands with unapproved verbs, run the Import-Module command again with the Verbose
parameter. For a list of approved verbs, type Get-Verb.
Unzip Calico for Windows release...
Creating CNI directory
Downloading Windows Kubernetes scripts
[DownloadFile] File c:\k\hns.psm1 already exists.
WARNING: The names of some imported commands from the module 'hns' include unapproved verbs that might make them less discoverable. To find the commands with unapproved verbs, run the Import-Module command again with the Verbose
parameter. For a list of approved verbs, type Get-Verb.
Downloaded [https://dl.k8s.io/v1.22.6/kubernetes-node-windows-amd64.tar.gz] =&amp;gt; [C:\Users\Administrator\AppData\Local\Temp\2\tmp3B38.tar.gz]
Setup Calico for Windows...
Error from server (NotFound): namespaces &quot;calico-system&quot; not found
Calico running in kube-system namespace
Backend networking is vxlan

Start Calico for Windows install...

Setting environment variables if not set...
Environment variable KUBE_NETWORK is already set: Calico.*
Environment variable CALICO_NETWORKING_BACKEND is already set: vxlan
Environment variable K8S_SERVICE_CIDR is already set: 10.96.0.0/12
Environment variable DNS_NAME_SERVERS is already set: 10.96.0.10
Environment variable DNS_SEARCH is already set: svc.cluster.local
Environment variable CALICO_DATASTORE_TYPE is already set: kubernetes
Environment variable KUBECONFIG is already set: c:\k\config
Environment variable ETCD_ENDPOINTS is not set. Setting it to the default value:
Environment variable ETCD_KEY_FILE is not set. Setting it to the default value:
Environment variable ETCD_CERT_FILE is not set. Setting it to the default value:
Environment variable ETCD_CA_CERT_FILE is not set. Setting it to the default value:
Environment variable CNI_BIN_DIR is already set: C:\Program Files\containerd\cni\bin
Environment variable CNI_CONF_DIR is already set: C:\Program Files\containerd\cni\conf
Environment variable CNI_CONF_FILENAME is already set: 10-calico.conf
Environment variable CNI_IPAM_TYPE is already set: calico-ipam
Environment variable VXLAN_VNI is already set: 4096
Environment variable VXLAN_MAC_PREFIX is already set: 0E-2A
Environment variable VXLAN_ADAPTER is not set. Setting it to the default value:
Environment variable NODENAME is already set: k8s-ww2
Environment variable CALICO_K8S_NODE_REF is already set: k8s-ww2
Environment variable STARTUP_VALID_IP_TIMEOUT is already set: 90
Environment variable IP is already set: autodetect
Environment variable CALICO_LOG_DIR is already set: C:\CalicoWindows\logs
Environment variable FELIX_LOGSEVERITYFILE is already set: none
Environment variable FELIX_LOGSEVERITYSYS is already set: none
Validating configuration...
Installing node startup service...


    Hive: HKEY_LOCAL_MACHINE\Software


Name                           Property
----                           --------
Tigera


    Hive: HKEY_LOCAL_MACHINE\Software\Tigera


Name                           Property
----                           --------
Calico
Service &quot;CalicoNode&quot; installed successfully!
Set parameter &quot;AppParameters&quot; for service &quot;CalicoNode&quot;.
Set parameter &quot;AppDirectory&quot; for service &quot;CalicoNode&quot;.
Set parameter &quot;DisplayName&quot; for service &quot;CalicoNode&quot;.
Set parameter &quot;Description&quot; for service &quot;CalicoNode&quot;.
Set parameter &quot;Start&quot; for service &quot;CalicoNode&quot;.
Reset parameter &quot;ObjectName&quot; for service &quot;CalicoNode&quot; to its default.
Set parameter &quot;Type&quot; for service &quot;CalicoNode&quot;.
Reset parameter &quot;AppThrottle&quot; for service &quot;CalicoNode&quot; to its default.
Creating log directory.

PSPath            : Microsoft.PowerShell.Core\FileSystem::C:\CalicoWindows\logs
PSParentPath      : Microsoft.PowerShell.Core\FileSystem::C:\CalicoWindows
PSChildName       : logs
PSDrive           : C
PSProvider        : Microsoft.PowerShell.Core\FileSystem
PSIsContainer     : True
Name              : logs
FullName          : C:\CalicoWindows\logs
Parent            : CalicoWindows
Exists            : True
Root              : C:\
Extension         :
CreationTime      : 2022-07-12 오후 3:53:55
CreationTimeUtc   : 2022-07-12 오전 6:53:55
LastAccessTime    : 2022-07-12 오후 3:53:55
LastAccessTimeUtc : 2022-07-12 오전 6:53:55
LastWriteTime     : 2022-07-12 오후 3:53:55
LastWriteTimeUtc  : 2022-07-12 오전 6:53:55
Attributes        : Directory
Mode              : d-----
BaseName          : logs
Target            : {}
LinkType          :

Set parameter &quot;AppStdout&quot; for service &quot;CalicoNode&quot;.
Set parameter &quot;AppStderr&quot; for service &quot;CalicoNode&quot;.
Set parameter &quot;AppRotateFiles&quot; for service &quot;CalicoNode&quot;.
Set parameter &quot;AppRotateOnline&quot; for service &quot;CalicoNode&quot;.
Set parameter &quot;AppRotateSeconds&quot; for service &quot;CalicoNode&quot;.
Set parameter &quot;AppRotateBytes&quot; for service &quot;CalicoNode&quot;.
Done installing startup service.
Installing Felix service...
Service &quot;CalicoFelix&quot; installed successfully!
Set parameter &quot;AppParameters&quot; for service &quot;CalicoFelix&quot;.
Set parameter &quot;AppDirectory&quot; for service &quot;CalicoFelix&quot;.
Set parameter &quot;DependOnService&quot; for service &quot;CalicoFelix&quot;.
Set parameter &quot;DisplayName&quot; for service &quot;CalicoFelix&quot;.
Set parameter &quot;Description&quot; for service &quot;CalicoFelix&quot;.
Set parameter &quot;Start&quot; for service &quot;CalicoFelix&quot;.
Reset parameter &quot;ObjectName&quot; for service &quot;CalicoFelix&quot; to its default.
Set parameter &quot;Type&quot; for service &quot;CalicoFelix&quot;.
Reset parameter &quot;AppThrottle&quot; for service &quot;CalicoFelix&quot; to its default.
Set parameter &quot;AppStdout&quot; for service &quot;CalicoFelix&quot;.
Set parameter &quot;AppStderr&quot; for service &quot;CalicoFelix&quot;.
Set parameter &quot;AppRotateFiles&quot; for service &quot;CalicoFelix&quot;.
Set parameter &quot;AppRotateOnline&quot; for service &quot;CalicoFelix&quot;.
Set parameter &quot;AppRotateSeconds&quot; for service &quot;CalicoFelix&quot;.
Set parameter &quot;AppRotateBytes&quot; for service &quot;CalicoFelix&quot;.
Done installing Felix service.
Copying CNI binaries to C:\Program Files\containerd\cni\bin
Writing CNI configuration to C:\Program Files\containerd\cni\conf\10-calico.conf.
Wrote CNI configuration.

Calico for Windows installed

Starting Calico...
This may take several seconds if the vSwitch needs to be created.
Waiting for Calico initialisation to finish...
Waiting for Calico initialisation to finish...StoredLastBootTime , CurrentLastBootTime 2022-07-12 오후 3:09:43
[... identical "Waiting for Calico initialisation to finish..." lines repeated while waiting ...]
Waiting for Calico initialisation to finish...StoredLastBootTime , CurrentLastBootTime 2022-07-12 오후 3:09:43
Calico initialisation finished.
Done, the Calico services are running:

Status      : Running
Name        : CalicoFelix
DisplayName : Calico Windows Agent


Status      : Running
Name        : CalicoNode
DisplayName : Calico Windows Startup


Caption                 :
Description             : Enable kubectl exec and log
ElementName             : kubectl exec 10250
InstanceID              : KubectlExec10250
CommonName              :
PolicyKeywords          :
Enabled                 : True
PolicyDecisionStrategy  : 2
PolicyRoles             :
ConditionListType       : 3
CreationClassName       : MSFT|FW|FirewallRule|KubectlExec10250
ExecutionStrategy       : 2
Mandatory               :
PolicyRuleName          :
Priority                :
RuleUsage               :
SequencedActions        : 3
SystemCreationClassName :
SystemName              :
Action                  : Allow
Direction               : Inbound
DisplayGroup            :
DisplayName             : kubectl exec 10250
EdgeTraversalPolicy     : Block
EnforcementStatus       : NotApplicable
LocalOnlyMapping        : False
LooseSourceMapping      : False
Owner                   :
Platforms               : {}
PolicyStoreSource       : PersistentStore
PolicyStoreSourceType   : Local
PrimaryStatus           : OK
Profiles                : 0
RuleGroup               :
Status                  : The rule was parsed successfully from the store. (65536)
StatusCode              : 65536
PSComputerName          :
Name                    : KubectlExec10250
ID                      : KubectlExec10250
Group                   :
Profile                 : Any
Platform                : {}
LSM                     : False
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;Calico Node and Calico Felix started up successfully.&lt;/span&gt;&lt;/p&gt;
&lt;pre class=&quot;powershell&quot;&gt;&lt;code&gt;PS C:\Program Files\containerd\cni\bin&amp;gt; Get-Service -Name CalicoNode

Status   Name               DisplayName
------   ----               -----------
Running  CalicoNode         Calico Windows Startup


PS C:\Program Files\containerd\cni\bin&amp;gt; Get-Service -Name CalicoFelix

Status   Name               DisplayName
------   ----               -----------
Running  CalicoFelix        Calico Windows Agent
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;Run the additional script to install kubelet and kube-proxy, then start both services.&lt;/span&gt;&lt;/p&gt;
&lt;pre class=&quot;powershell&quot;&gt;&lt;code&gt;PS C:\Program Files\containerd\cni\bin&amp;gt; C:\CalicoWindows\kubernetes\install-kube-services.ps1
Installing kubelet service...
Service &quot;kubelet&quot; installed successfully!
Set parameter &quot;AppParameters&quot; for service &quot;kubelet&quot;.
Set parameter &quot;AppDirectory&quot; for service &quot;kubelet&quot;.
Set parameter &quot;DisplayName&quot; for service &quot;kubelet&quot;.
Set parameter &quot;Description&quot; for service &quot;kubelet&quot;.
Set parameter &quot;Start&quot; for service &quot;kubelet&quot;.
Reset parameter &quot;ObjectName&quot; for service &quot;kubelet&quot; to its default.
Set parameter &quot;Type&quot; for service &quot;kubelet&quot;.
Reset parameter &quot;AppThrottle&quot; for service &quot;kubelet&quot; to its default.
Set parameter &quot;AppStdout&quot; for service &quot;kubelet&quot;.
Set parameter &quot;AppStderr&quot; for service &quot;kubelet&quot;.
Set parameter &quot;AppRotateFiles&quot; for service &quot;kubelet&quot;.
Set parameter &quot;AppRotateOnline&quot; for service &quot;kubelet&quot;.
Set parameter &quot;AppRotateSeconds&quot; for service &quot;kubelet&quot;.
Set parameter &quot;AppRotateBytes&quot; for service &quot;kubelet&quot;.
Done installing kubelet service.
Installing kube-proxy service...
Service &quot;kube-proxy&quot; installed successfully!
Set parameter &quot;AppParameters&quot; for service &quot;kube-proxy&quot;.
Set parameter &quot;AppDirectory&quot; for service &quot;kube-proxy&quot;.
Set parameter &quot;DisplayName&quot; for service &quot;kube-proxy&quot;.
Set parameter &quot;Description&quot; for service &quot;kube-proxy&quot;.
Set parameter &quot;Start&quot; for service &quot;kube-proxy&quot;.
Reset parameter &quot;ObjectName&quot; for service &quot;kube-proxy&quot; to its default.
Set parameter &quot;Type&quot; for service &quot;kube-proxy&quot;.
Reset parameter &quot;AppThrottle&quot; for service &quot;kube-proxy&quot; to its default.
Set parameter &quot;AppStdout&quot; for service &quot;kube-proxy&quot;.
Set parameter &quot;AppStderr&quot; for service &quot;kube-proxy&quot;.
Set parameter &quot;AppRotateFiles&quot; for service &quot;kube-proxy&quot;.
Set parameter &quot;AppRotateOnline&quot; for service &quot;kube-proxy&quot;.
Set parameter &quot;AppRotateSeconds&quot; for service &quot;kube-proxy&quot;.
Set parameter &quot;AppRotateBytes&quot; for service &quot;kube-proxy&quot;.
Done installing kube-proxy service.
PS C:\Program Files\containerd\cni\bin&amp;gt; Get-Service kubelet

Status   Name               DisplayName
------   ----               -----------
Stopped  kubelet            kubelet service


PS C:\Program Files\containerd\cni\bin&amp;gt; Get-Service kube-proxy

Status   Name               DisplayName
------   ----               -----------
Stopped  kube-proxy         kube-proxy service

PS C:\Program Files\containerd\cni\bin&amp;gt; Start-Service kubelet
PS C:\Program Files\containerd\cni\bin&amp;gt; start-Service kube-proxy
PS C:\Program Files\containerd\cni\bin&amp;gt; Get-Service kubelet

Status   Name               DisplayName
------   ----               -----------
Running  kubelet            kubelet service


PS C:\Program Files\containerd\cni\bin&amp;gt; Get-Service kube-proxy

Status   Name               DisplayName
------   ----               -----------
Running  kube-proxy         kube-proxy service
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;Once kubelet and kube-proxy are running, the worker node shows up as joined. Because the previously joined k8s-ww server is down, the pods that had been scheduled on it were rescheduled onto the newly joined server.&lt;/span&gt;&lt;/p&gt;
&lt;pre class=&quot;bash&quot;&gt;&lt;code&gt;root@k8s-m:~# kubectl get no
NAME     STATUS     ROLES                  AGE    VERSION
k8s-lw   Ready      &amp;lt;none&amp;gt;                 120d   v1.22.6
k8s-m    Ready      control-plane,master   120d   v1.22.6
k8s-ww   NotReady   &amp;lt;none&amp;gt;                 120d   v1.22.6
root@k8s-m:~# kubectl get no
NAME      STATUS     ROLES                  AGE    VERSION
k8s-lw    Ready      &amp;lt;none&amp;gt;                 120d   v1.22.6
k8s-m     Ready      control-plane,master   120d   v1.22.6
k8s-ww    NotReady   &amp;lt;none&amp;gt;                 120d   v1.22.6
k8s-ww2   Ready      &amp;lt;none&amp;gt;                 49s    v1.22.6
root@k8s-m:~# kubectl get po -owide
NAME                     READY   STATUS              RESTARTS         AGE     IP              NODE      NOMINATED NODE   READINESS GATES
iis-7dfbf869dd-4472b     0/1     ContainerCreating   0                5h46m   &amp;lt;none&amp;gt;          k8s-ww2   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
iis-7dfbf869dd-brqrc     1/1     Terminating         0                120d    192.168.208.5   k8s-ww    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
netshoot                 1/1     Running             11 (6h51m ago)   120d    192.168.114.5   k8s-lw    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
nginx-868547d6bf-kv858   1/1     Running             1 (6h51m ago)    120d    192.168.114.4   k8s-lw    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
root@k8s-m:~# kubectl get no
NAME      STATUS     ROLES                  AGE    VERSION
k8s-lw    Ready      &amp;lt;none&amp;gt;                 120d   v1.22.6
k8s-m     Ready      control-plane,master   120d   v1.22.6
k8s-ww    NotReady   &amp;lt;none&amp;gt;                 120d   v1.22.6
k8s-ww2   Ready      &amp;lt;none&amp;gt;                 85m    v1.22.6
root@k8s-m:~# kubectl get po -owide
NAME                     READY   STATUS        RESTARTS      AGE     IP              NODE      NOMINATED NODE   READINESS GATES
iis-7dfbf869dd-4472b     1/1     Running       0             7h11m   192.168.112.4   k8s-ww2   &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
iis-7dfbf869dd-brqrc     1/1     Terminating   0             120d    192.168.208.5   k8s-ww    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
netshoot                 1/1     Running       11 (8h ago)   120d    192.168.114.5   k8s-lw    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
nginx-868547d6bf-kv858   1/1     Running       1 (8h ago)    120d    192.168.114.4   k8s-lw    &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;Containers can also be inspected with crictl on the Windows worker node.&lt;/span&gt;&lt;/p&gt;
&lt;pre class=&quot;shell&quot;&gt;&lt;code&gt;C:\Users\Administrator&amp;gt;crictl ps
time=&quot;2022-07-12T22:45:46+09:00&quot; level=debug msg=&quot;get runtime connection&quot;
time=&quot;2022-07-12T22:45:46+09:00&quot; level=debug msg=&quot;get image connection&quot;
time=&quot;2022-07-12T22:45:46+09:00&quot; level=debug msg=&quot;ListContainerResponse: [&amp;amp;Container{Id:077ab4df6b9972af7806c017a7e85e3b437475de5f3a8c9316c983240af73af7,PodSandboxId:f994462e02b028c6cf9e0f38eab984bf0158734db64aaf4379c326224fe53c87,Metadata:&amp;amp;ContainerMetadata{Name:iis,Attempt:0,},Image:&amp;amp;ImageSpec{Image:sha256:cf88a43a7460e1a84be0a24ee22042f73069346710907e702dadb1a7e8a39eaf,Annotations:map[string]string{},},ImageRef:sha256:cf88a43a7460e1a84be0a24ee22042f73069346710907e702dadb1a7e8a39eaf,State:CONTAINER_RUNNING,CreatedAt:1657614998696454400,Labels:map[string]string{io.kubernetes.container.name: iis,io.kubernetes.pod.name: iis-7dfbf869dd-4472b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8aecf61b-2e61-4f55-80a0-551b146b4cc6,},Annotations:map[string]string{io.kubernetes.container.hash: 42b06ae0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},}]&quot;
CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
077ab4df6b997       cf88a43a7460e       5 hours ago         Running             iis                 0                   f994462e02b02       iis-7dfbf869dd-4472b
&lt;/code&gt;&lt;/pre&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;Since the scripts provided by Calico drive the worker-node join, much of the process is hidden from view. To understand it in detail, you would need to read through the scripts themselves.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;Calico's current QuickStart guide also documents an 'Install Calico for Windows using HostProcess containers' approach. As of 2022-07-12 it has not yet reached GA, but I will take a look at that process in a future post.&lt;/span&gt;&lt;/p&gt;</description>
      <category>Kubernetes</category>
      <category>calico</category>
      <category>containerd</category>
      <category>windows</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/16</guid>
      <comments>https://a-person.tistory.com/16#entry16comment</comments>
      <pubDate>Tue, 12 Jul 2022 22:51:49 +0900</pubDate>
    </item>
    <item>
      <title>Converting newline characters (\n) into line breaks</title>
      <link>https://a-person.tistory.com/14</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;While reviewing logs for analysis, I found that each line was far too long.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Looking closer, they were full of literal newline characters (\n), which I wanted converted into actual line breaks.&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Use Notepad++.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;766&quot; data-origin-height=&quot;569&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/u0yRP/btrwonwZ88b/oMGBBZtmUWf44lF9nOFyfK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/u0yRP/btrwonwZ88b/oMGBBZtmUWf44lF9nOFyfK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/u0yRP/btrwonwZ88b/oMGBBZtmUWf44lF9nOFyfK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fu0yRP%2FbtrwonwZ88b%2FoMGBBZtmUWf44lF9nOFyfK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;766&quot; height=&quot;569&quot; data-origin-width=&quot;766&quot; data-origin-height=&quot;569&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Open &lt;b&gt;Replace (Ctrl+H)&lt;/b&gt;.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;573&quot; data-origin-height=&quot;356&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/xsjPZ/btrwmZKrw25/rKMWDpnYROjskAcJaTmiIk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/xsjPZ/btrwmZKrw25/rKMWDpnYROjskAcJaTmiIk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/xsjPZ/btrwmZKrw25/rKMWDpnYROjskAcJaTmiIk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FxsjPZ%2FbtrwmZKrw25%2FrKMWDpnYROjskAcJaTmiIk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;573&quot; height=&quot;356&quot; data-origin-width=&quot;573&quot; data-origin-height=&quot;356&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;Enter the search and replacement text, check &lt;b&gt;Extended (\n, \r, \t ..)&lt;/b&gt; search mode, and click &lt;b&gt;Replace All&lt;/b&gt;.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-origin-width=&quot;766&quot; data-origin-height=&quot;569&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bOHNPg/btrwnDfl8Uc/uM6Gpr58aiqgcJMKHlbR0K/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bOHNPg/btrwnDfl8Uc/uM6Gpr58aiqgcJMKHlbR0K/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bOHNPg/btrwnDfl8Uc/uM6Gpr58aiqgcJMKHlbR0K/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbOHNPg%2FbtrwnDfl8Uc%2FuM6Gpr58aiqgcJMKHlbR0K%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;766&quot; height=&quot;569&quot; data-origin-width=&quot;766&quot; data-origin-height=&quot;569&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
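&lt;p data-ke-size=&quot;size16&quot;&gt;If Notepad++ is not at hand, the same substitution can be sketched in a couple of lines of Python; the sample log line here is made up for illustration.&lt;/p&gt;

```python
# Replace literal "\n" escape sequences (two characters: backslash + n)
# with real line breaks, mirroring the Notepad++ Replace All above.
raw_log = r"2022-03-19 ERROR stack trace:\n  at main()\n  at start()"  # hypothetical log line
converted = raw_log.replace("\\n", "\n")
print(converted)
```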
&lt;p data-ke-size=&quot;size16&quot;&gt;Done.&lt;/p&gt;</description>
      <category>Misc</category>
      <category>Notepad++</category>
      <category>newline character</category>
      <category>line break</category>
      <author>한명</author>
      <guid isPermaLink="true">https://a-person.tistory.com/14</guid>
      <comments>https://a-person.tistory.com/14#entry14comment</comments>
      <pubDate>Sat, 19 Mar 2022 17:05:42 +0900</pubDate>
    </item>
  </channel>
</rss>