
[4] MinIO - MNMD Deployment

한명 2025. 9. 24. 20:55

In this post, we will set up MinIO's MNMD (Multi-Node Multi-Drive) deployment.

 

I considered using a cloud environment for this lab, but it is hard to stand up non-managed Kubernetes there, and managed Kubernetes makes it difficult to attach local disks.

 

For these reasons, I will build a local Kubernetes environment with Vagrant, attach several disks to each node, set up the drives with DirectPV, and then test a MinIO MNMD deployment.

 

Table of Contents

  1. Build the Local Kubernetes Environment
  2. Configure DirectPV
  3. Deploy MinIO MNMD

 

1. Build the Local Kubernetes Environment

With Vagrant, we will create 4 worker nodes, each with 4 local disks attached.

Create the Vagrantfile below and bring the environment up with vagrant up.

# Variables
K8SV = '1.33.2-1.1' # Kubernetes Version : apt list -a kubelet , ex) 1.32.5-1.1
CONTAINERDV = '1.7.27-1' # Containerd Version : apt list -a containerd.io , ex) 1.6.33-1
CILIUMV = '1.17.6' # Cilium CNI Version : https://github.com/cilium/cilium/tags
N = 4 # max number of worker nodes

# Base Image  https://portal.cloud.hashicorp.com/vagrant/discover/bento/ubuntu-24.04
BOX_IMAGE = "bento/ubuntu-24.04"
BOX_VERSION = "202502.21.0"

Vagrant.configure("2") do |config|
#-ControlPlane Node
    config.vm.define "k8s-ctr" do |subconfig|
      subconfig.vm.box = BOX_IMAGE

      subconfig.vm.box_version = BOX_VERSION
      subconfig.vm.provider "virtualbox" do |vb|
        vb.customize ["modifyvm", :id, "--groups", "/Cilium-Lab"]
        vb.customize ["modifyvm", :id, "--nicpromisc2", "allow-all"]
        vb.name = "k8s-ctr"
        vb.cpus = 2
        vb.memory = 2048
        vb.linked_clone = true
      end
      subconfig.vm.host_name = "k8s-ctr"
      subconfig.vm.network "private_network", ip: "192.168.10.100"
      subconfig.vm.network "forwarded_port", guest: 22, host: 60000, auto_correct: true, id: "ssh"
      subconfig.vm.synced_folder "./", "/vagrant", disabled: true
      subconfig.vm.provision "shell", path: "https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/cilium-study/2w/init_cfg.sh", args: [ K8SV, CONTAINERDV ]
      subconfig.vm.provision "shell", path: "https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/cilium-study/2w/k8s-ctr.sh", args: [ N, CILIUMV ]
    end

#-Worker Nodes Subnet1
  (1..N).each do |i|
    config.vm.define "k8s-w#{i}" do |subconfig|
      subconfig.vm.box = BOX_IMAGE
      subconfig.vm.box_version = BOX_VERSION
      subconfig.vm.provider "virtualbox" do |vb|
        vb.customize ["modifyvm", :id, "--groups", "/Cilium-Lab"]
        vb.customize ["modifyvm", :id, "--nicpromisc2", "allow-all"]
        vb.name = "k8s-w#{i}"
        vb.cpus = 2
        vb.memory = 1536
        vb.linked_clone = true

        (1..4).each do |d|
          disk_path = "disk-w#{i}-#{d}.vdi"
          vb.customize ["createhd", "--filename", disk_path, "--size", 10240] # 10GB
          vb.customize ["storageattach", :id, "--storagectl", "SATA Controller", "--port", d, "--device", 0, "--type", "hdd", "--medium", disk_path]
        end
      end
      subconfig.vm.host_name = "k8s-w#{i}"
      subconfig.vm.network "private_network", ip: "192.168.10.10#{i}"
      subconfig.vm.network "forwarded_port", guest: 22, host: "6000#{i}", auto_correct: true, id: "ssh"
      subconfig.vm.synced_folder "./", "/vagrant", disabled: true
      subconfig.vm.provision "shell", path: "https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/cilium-study/2w/init_cfg.sh", args: [ K8SV, CONTAINERDV]
      subconfig.vm.provision "shell", path: "https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/cilium-study/2w/k8s-w.sh"
    end
  end
end

 

[Note] This Vagrantfile and the shell scripts it references were provided through the CloudNet study group.

 

 

For reference, if vagrant up does not complete successfully, your host machine may be short on resources.
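
If it fails partway, a minimal way to check VM state and retry provisioning for a single node looks like this (a sketch; the node name is only an example):

# Check the state of all VMs defined in the Vagrantfile
vagrant status

# Re-run provisioning for one node, e.g. k8s-w4
vagrant up k8s-w4 --provision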

Once vagrant up completes successfully, connect to the control plane with vagrant ssh k8s-ctr and continue the rest of the lab there.

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get no -owide
NAME      STATUS   ROLES           AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
k8s-ctr   Ready    control-plane   67m   v1.33.2   192.168.10.100   <none>        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27
k8s-w1    Ready    <none>          56m   v1.33.2   192.168.10.101   <none>        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27
k8s-w2    Ready    <none>          51m   v1.33.2   192.168.10.102   <none>        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27
k8s-w3    Ready    <none>          46m   v1.33.2   192.168.10.103   <none>        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27
k8s-w4    Ready    <none>          27m   v1.33.2   192.168.10.104   <none>        Ubuntu 24.04.2 LTS   6.8.0-53-generic   containerd://1.7.27

 

Each worker node has 4 disks attached; the control plane node k8s-ctr does not.

root@k8s-w1:~# lsblk
NAME                      MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda                         8:0    0  64G  0 disk
├─sda1                      8:1    0   1M  0 part
├─sda2                      8:2    0   2G  0 part /boot
└─sda3                      8:3    0  62G  0 part
  └─ubuntu--vg-ubuntu--lv 252:0    0  31G  0 lvm  /
sdb                         8:16   0  10G  0 disk
sdc                         8:32   0  10G  0 disk
sdd                         8:48   0  10G  0 disk
sde                         8:64   0  10G  0 disk

 

Now let's install MinIO's DirectPV to manage these local disks.

 

 

2. Configure DirectPV

DirectPV is installed as a kubectl plugin via krew.

First, install krew itself.

# Install Krew
wget -P /root "https://github.com/kubernetes-sigs/krew/releases/latest/download/krew-linux_amd64.tar.gz"
tar zxvf "/root/krew-linux_amd64.tar.gz" --warning=no-unknown-keyword
./krew-linux_amd64 install krew
export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH" # export PATH="$PATH:/root/.krew/bin"
echo 'export PATH="$PATH:/root/.krew/bin:/root/go/bin"' >> /etc/profile
kubectl krew list

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl krew list
PLUGIN  VERSION
krew    v0.4.5

 

Now install the directpv plugin through krew.

# Install the directpv plugin
kubectl krew install directpv
kubectl directpv -h

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl directpv -h
Kubectl plugin for managing DirectPV drives and volumes.

USAGE:
  directpv [command]

FLAGS:
      --kubeconfig string   Path to the kubeconfig file to use for CLI requests
      --quiet               Suppress printing error messages
  -h, --help                help for directpv
      --version             version for directpv

AVAILABLE COMMANDS:
  install     Install DirectPV in Kubernetes
  discover    Discover new drives
  init        Initialize the drives
  info        Show information about DirectPV installation
  list        List drives and volumes
  label       Set labels to drives and volumes
  cordon      Mark drives as unschedulable
  uncordon    Mark drives as schedulable
  migrate     Migrate drives and volumes from legacy DirectCSI
  move        Move volumes excluding data from source drive to destination drive on a same node
  clean       Cleanup stale volumes
  suspend     Suspend drives and volumes
  resume      Resume suspended drives and volumes
  repair      Repair filesystem of drives
  remove      Remove unused drives from DirectPV
  uninstall   Uninstall DirectPV in Kubernetes

Use "directpv [command] --help" for more information about this command.

 

With the directpv krew plugin installed, install DirectPV into the Kubernetes cluster.

# Install DirectPV
kubectl directpv install

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl directpv install


 ███████████████████████████████████████████████████████████████████████████ 100%

┌──────────────────────────────────────┬──────────────────────────┐
│ NAME                                 │ KIND                     │
├──────────────────────────────────────┼──────────────────────────┤
│ directpv                             │ Namespace                │
│ directpv-min-io                      │ ServiceAccount           │
│ directpv-min-io                      │ ClusterRole              │
│ directpv-min-io                      │ ClusterRoleBinding       │
│ directpv-min-io                      │ Role                     │
│ directpv-min-io                      │ RoleBinding              │
│ directpvdrives.directpv.min.io       │ CustomResourceDefinition │
│ directpvvolumes.directpv.min.io      │ CustomResourceDefinition │
│ directpvnodes.directpv.min.io        │ CustomResourceDefinition │
│ directpvinitrequests.directpv.min.io │ CustomResourceDefinition │
│ directpv-min-io                      │ CSIDriver                │
│ directpv-min-io                      │ StorageClass             │
│ node-server                          │ Daemonset                │
│ controller                           │ Deployment               │
└──────────────────────────────────────┴──────────────────────────┘

DirectPV installed successfully

# Verify
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get sc
NAME              PROVISIONER       RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
directpv-min-io   directpv-min-io   Delete          WaitForFirstConsumer   true                   60s
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get all -n directpv
NAME                              READY   STATUS    RESTARTS   AGE
pod/controller-7fcf6ddd76-lj8kn   2/3     Running   0          65s
pod/controller-7fcf6ddd76-wxlpz   2/3     Running   0          65s
pod/controller-7fcf6ddd76-zdlr2   2/3     Running   0          65s
pod/node-server-2vv5h             3/4     Running   0          65s
pod/node-server-7bkv6             3/4     Running   0          66s
pod/node-server-nww89             3/4     Running   0          66s
pod/node-server-xbsw4             3/4     Running   0          66s

NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/node-server   4         4         0       4            0           <none>          66s

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/controller   0/3     3            0           66s

NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/controller-7fcf6ddd76   3         3         0       66s

 

After a short wait, the controller pods become Ready as well.
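
If you prefer to block until everything is up rather than polling, something like the following should work (a sketch; the timeout is arbitrary):

# Wait until every DirectPV pod reports Ready
kubectl wait --for=condition=Ready pod --all -n directpv --timeout=300s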

Next, let's discover and initialize the disks with DirectPV.

# Check the drives managed by DirectPV
kubectl directpv info

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl directpv info
┌──────────┬──────────┬───────────┬─────────┬────────┐
│ NODE     │ CAPACITY │ ALLOCATED │ VOLUMES │ DRIVES │
├──────────┼──────────┼───────────┼─────────┼────────┤
│ • k8s-w1 │ -        │ -         │ -       │ -      │
│ • k8s-w2 │ -        │ -         │ -       │ -      │
│ • k8s-w3 │ -        │ -         │ -       │ -      │
│ • k8s-w4 │ -        │ -         │ -       │ -      │
└──────────┴──────────┴───────────┴─────────┴────────┘

0 B/0 B used, 0 volumes, 0 drives

# Run discovery
kubectl directpv discover

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl directpv discover

 Discovered node 'k8s-w1' ✔
 Discovered node 'k8s-w2' ✔
 Discovered node 'k8s-w3' ✔
 Discovered node 'k8s-w4' ✔

┌─────────────────────┬────────┬───────┬────────┬────────────┬───────────────────┬───────────┬─────────────┐
│ ID                  │ NODE   │ DRIVE │ SIZE   │ FILESYSTEM │ MAKE              │ AVAILABLE │ DESCRIPTION │
├─────────────────────┼────────┼───────┼────────┼────────────┼───────────────────┼───────────┼─────────────┤
│ 8:16$poCi/BftVMA... │ k8s-w1 │ sdb   │ 10 GiB │ -          │ ATA VBOX_HARDDISK │ YES       │ -           │
│ 8:32$22H3uom5IOY... │ k8s-w1 │ sdc   │ 10 GiB │ -          │ ATA VBOX_HARDDISK │ YES       │ -           │
│ 8:48$u3HIe1sE+p6... │ k8s-w1 │ sdd   │ 10 GiB │ -          │ ATA VBOX_HARDDISK │ YES       │ -           │
│ 8:64$yics0QWKvft... │ k8s-w1 │ sde   │ 10 GiB │ -          │ ATA VBOX_HARDDISK │ YES       │ -           │
│ 8:16$b3pnXr9RpwI... │ k8s-w2 │ sdb   │ 10 GiB │ -          │ ATA VBOX_HARDDISK │ YES       │ -           │
│ 8:32$ymtWBIelp6q... │ k8s-w2 │ sdc   │ 10 GiB │ -          │ ATA VBOX_HARDDISK │ YES       │ -           │
│ 8:48$G2YkrXtl+uz... │ k8s-w2 │ sdd   │ 10 GiB │ -          │ ATA VBOX_HARDDISK │ YES       │ -           │
│ 8:64$hIDV0oBlCCV... │ k8s-w2 │ sde   │ 10 GiB │ -          │ ATA VBOX_HARDDISK │ YES       │ -           │
│ 8:16$EURY5fbb8T8... │ k8s-w3 │ sdb   │ 10 GiB │ -          │ ATA VBOX_HARDDISK │ YES       │ -           │
│ 8:32$n/4+uDq2Gn1... │ k8s-w3 │ sdc   │ 10 GiB │ -          │ ATA VBOX_HARDDISK │ YES       │ -           │
│ 8:48$BTigOfLE531... │ k8s-w3 │ sdd   │ 10 GiB │ -          │ ATA VBOX_HARDDISK │ YES       │ -           │
│ 8:64$XqEdziKUCmD... │ k8s-w3 │ sde   │ 10 GiB │ -          │ ATA VBOX_HARDDISK │ YES       │ -           │
│ 8:16$X83+Qf4i0g2... │ k8s-w4 │ sdb   │ 10 GiB │ -          │ ATA VBOX_HARDDISK │ YES       │ -           │
│ 8:32$FwpJO3yFPEu... │ k8s-w4 │ sdc   │ 10 GiB │ -          │ ATA VBOX_HARDDISK │ YES       │ -           │
│ 8:48$6PCJY7rbVop... │ k8s-w4 │ sdd   │ 10 GiB │ -          │ ATA VBOX_HARDDISK │ YES       │ -           │
│ 8:64$UvZbw4QKXA1... │ k8s-w4 │ sde   │ 10 GiB │ -          │ ATA VBOX_HARDDISK │ YES       │ -           │
└─────────────────────┴────────┴───────┴────────┴────────────┴───────────────────┴───────────┴─────────────┘

Generated 'drives.yaml' successfully.

 

Four disks are detected on each node.

Running the discover command writes a drives.yaml file to the current directory; passing this file to init performs the initialization.
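
To exclude a specific drive before running init, flip its select field to "no". A sketch using yq (the node and drive names here are only examples):

# Exclude sdb on k8s-w1 from initialization
yq -i '(.nodes[] | select(.name == "k8s-w1") | .drives[] | select(.name == "sdb") | .select) = "no"' drives.yaml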

# (Note) To exclude a drive from initialization, set select: "no"
cat drives.yaml

(⎈|HomeLab:N/A) root@k8s-ctr:~# cat drives.yaml
version: v1
nodes:
    - name: k8s-w2
      drives:
        - id: 8:32$ymtWBIelp6qh/m1COfQEByjhTh3b3bSAd/UTRh6XRSw=
          name: sdc
          size: 10737418240
          make: ATA VBOX_HARDDISK
          select: "yes"
        - id: 8:48$G2YkrXtl+uz+QqUW3KmxtlmoyhDNVWLRJqtRMh9OW/0=
          name: sdd
          size: 10737418240
          make: ATA VBOX_HARDDISK
          select: "yes"
        - id: 8:64$hIDV0oBlCCV+OpcFmqH1cHBDDxjWfi6JkaGVcbUf4RA=
          name: sde
          size: 10737418240
          make: ATA VBOX_HARDDISK
          select: "yes"
        - id: 8:16$b3pnXr9RpwIz1mbdQC/GCJ5Nrvm4DpwXUAH8hEZIDtw=
          name: sdb
          size: 10737418240
          make: ATA VBOX_HARDDISK
          select: "yes"
    - name: k8s-w3
      drives:
        - id: 8:64$XqEdziKUCmDkPCGbe2khHIpjlxV8eL8W72CDriSf9fw=
          name: sde
          size: 10737418240
          make: ATA VBOX_HARDDISK
          select: "yes"
        - id: 8:32$n/4+uDq2Gn14BIitmPY0yU3fK3Y/bFq1vBGMT0pvf1Y=
          name: sdc
          size: 10737418240
          make: ATA VBOX_HARDDISK
          select: "yes"
        - id: 8:16$EURY5fbb8T8KNvOTNYyEEP73RsfObj+jakjzAMn5cAY=
          name: sdb
          size: 10737418240
          make: ATA VBOX_HARDDISK
          select: "yes"
        - id: 8:48$BTigOfLE531fSrM10Pe6GkIySn3Y16Puiq1dOnMQLfY=
          name: sdd
          size: 10737418240
          make: ATA VBOX_HARDDISK
          select: "yes"
    - name: k8s-w4
      drives:
        - id: 8:64$UvZbw4QKXA1jsfwAdWMUZFW6Z232EXYI/UmWhZh3Oi4=
          name: sde
          size: 10737418240
          make: ATA VBOX_HARDDISK
          select: "yes"
        - id: 8:48$6PCJY7rbVoprbVT8BFr4AyPgKyu4AdyFcHGrGC9AHSk=
          name: sdd
          size: 10737418240
          make: ATA VBOX_HARDDISK
          select: "yes"
        - id: 8:32$FwpJO3yFPEugMOPzApdxFMYCV+nz29NadD4P32+Zt98=
          name: sdc
          size: 10737418240
          make: ATA VBOX_HARDDISK
          select: "yes"
        - id: 8:16$X83+Qf4i0g2z0JW4ZxyR6Km7QPFY9n1vxPJqGITwlhU=
          name: sdb
          size: 10737418240
          make: ATA VBOX_HARDDISK
          select: "yes"
    - name: k8s-w1
      drives:
        - id: 8:16$poCi/BftVMAUIio+XwTBRXyDD/SVcCoeAOaTqA16X8U=
          name: sdb
          size: 10737418240
          make: ATA VBOX_HARDDISK
          select: "yes"
        - id: 8:32$22H3uom5IOYEbYA2m98gPa9UBU5bvi3GwKsC7DZwqfc=
          name: sdc
          size: 10737418240
          make: ATA VBOX_HARDDISK
          select: "yes"
        - id: 8:48$u3HIe1sE+p6ap/9LamRZsq9eFlwaNuJjyQ0CJDvJwxQ=
          name: sdd
          size: 10737418240
          make: ATA VBOX_HARDDISK
          select: "yes"
        - id: 8:64$yics0QWKvftlG6rBfwjzz9T0PtE99s8BQO0DErOb1H0=
          name: sde
          size: 10737418240
          make: ATA VBOX_HARDDISK
          select: "yes"

# Initialize (this ERASES data on the drives!)
kubectl directpv init drives.yaml --dangerous

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl directpv init drives.yaml --dangerous

 ███████████████████████████████████████████████████████████████████████████ 100%

 Processed initialization request '026f7148-cce3-4758-9147-9050a41516dd' for node 'k8s-w4' ✔
 Processed initialization request '2c9d4f0e-0511-4867-be8d-082189a96b7b' for node 'k8s-w1' ✔
 Processed initialization request '43965296-bcee-47cf-96a5-67d063fd43d6' for node 'k8s-w2' ✔
 Processed initialization request 'd757d040-b0d7-4ee2-8b80-3b803f840a0a' for node 'k8s-w3' ✔

┌──────────────────────────────────────┬────────┬───────┬─────────┐
│ REQUEST_ID                           │ NODE   │ DRIVE │ MESSAGE │
├──────────────────────────────────────┼────────┼───────┼─────────┤
│ 2c9d4f0e-0511-4867-be8d-082189a96b7b │ k8s-w1 │ sdb   │ Success │
│ 2c9d4f0e-0511-4867-be8d-082189a96b7b │ k8s-w1 │ sdc   │ Success │
│ 2c9d4f0e-0511-4867-be8d-082189a96b7b │ k8s-w1 │ sdd   │ Success │
│ 2c9d4f0e-0511-4867-be8d-082189a96b7b │ k8s-w1 │ sde   │ Success │
│ 43965296-bcee-47cf-96a5-67d063fd43d6 │ k8s-w2 │ sdb   │ Success │
│ 43965296-bcee-47cf-96a5-67d063fd43d6 │ k8s-w2 │ sdc   │ Success │
│ 43965296-bcee-47cf-96a5-67d063fd43d6 │ k8s-w2 │ sdd   │ Success │
│ 43965296-bcee-47cf-96a5-67d063fd43d6 │ k8s-w2 │ sde   │ Success │
│ d757d040-b0d7-4ee2-8b80-3b803f840a0a │ k8s-w3 │ sdb   │ Success │
│ d757d040-b0d7-4ee2-8b80-3b803f840a0a │ k8s-w3 │ sdc   │ Success │
│ d757d040-b0d7-4ee2-8b80-3b803f840a0a │ k8s-w3 │ sdd   │ Success │
│ d757d040-b0d7-4ee2-8b80-3b803f840a0a │ k8s-w3 │ sde   │ Success │
│ 026f7148-cce3-4758-9147-9050a41516dd │ k8s-w4 │ sdb   │ Success │
│ 026f7148-cce3-4758-9147-9050a41516dd │ k8s-w4 │ sdc   │ Success │
│ 026f7148-cce3-4758-9147-9050a41516dd │ k8s-w4 │ sdd   │ Success │
│ 026f7148-cce3-4758-9147-9050a41516dd │ k8s-w4 │ sde   │ Success │

# Check the drives
kubectl directpv list drives

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl directpv list drives
┌────────┬──────┬───────────────────┬────────┬────────┬─────────┬────────┐
│ NODE   │ NAME │ MAKE              │ SIZE   │ FREE   │ VOLUMES │ STATUS │
├────────┼──────┼───────────────────┼────────┼────────┼─────────┼────────┤
│ k8s-w1 │ sdb  │ ATA VBOX_HARDDISK │ 10 GiB │ 10 GiB │ -       │ Ready  │
│ k8s-w1 │ sdc  │ ATA VBOX_HARDDISK │ 10 GiB │ 10 GiB │ -       │ Ready  │
│ k8s-w1 │ sdd  │ ATA VBOX_HARDDISK │ 10 GiB │ 10 GiB │ -       │ Ready  │
│ k8s-w1 │ sde  │ ATA VBOX_HARDDISK │ 10 GiB │ 10 GiB │ -       │ Ready  │
│ k8s-w2 │ sdb  │ ATA VBOX_HARDDISK │ 10 GiB │ 10 GiB │ -       │ Ready  │
│ k8s-w2 │ sdc  │ ATA VBOX_HARDDISK │ 10 GiB │ 10 GiB │ -       │ Ready  │
│ k8s-w2 │ sdd  │ ATA VBOX_HARDDISK │ 10 GiB │ 10 GiB │ -       │ Ready  │
│ k8s-w2 │ sde  │ ATA VBOX_HARDDISK │ 10 GiB │ 10 GiB │ -       │ Ready  │
│ k8s-w3 │ sdb  │ ATA VBOX_HARDDISK │ 10 GiB │ 10 GiB │ -       │ Ready  │
│ k8s-w3 │ sdc  │ ATA VBOX_HARDDISK │ 10 GiB │ 10 GiB │ -       │ Ready  │
│ k8s-w3 │ sdd  │ ATA VBOX_HARDDISK │ 10 GiB │ 10 GiB │ -       │ Ready  │
│ k8s-w3 │ sde  │ ATA VBOX_HARDDISK │ 10 GiB │ 10 GiB │ -       │ Ready  │
│ k8s-w4 │ sdb  │ ATA VBOX_HARDDISK │ 10 GiB │ 10 GiB │ -       │ Ready  │
│ k8s-w4 │ sdc  │ ATA VBOX_HARDDISK │ 10 GiB │ 10 GiB │ -       │ Ready  │
│ k8s-w4 │ sdd  │ ATA VBOX_HARDDISK │ 10 GiB │ 10 GiB │ -       │ Ready  │
│ k8s-w4 │ sde  │ ATA VBOX_HARDDISK │ 10 GiB │ 10 GiB │ -       │ Ready  │
└────────┴──────┴───────────────────┴────────┴────────┴─────────┴────────┘

# 4 drives per node are recognized, 16 in total
kubectl directpv info

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl directpv info
┌──────────┬──────────┬───────────┬─────────┬────────┐
│ NODE     │ CAPACITY │ ALLOCATED │ VOLUMES │ DRIVES │
├──────────┼──────────┼───────────┼─────────┼────────┤
│ • k8s-w1 │ 40 GiB   │ 0 B       │ 0       │ 4      │
│ • k8s-w2 │ 40 GiB   │ 0 B       │ 0       │ 4      │
│ • k8s-w3 │ 40 GiB   │ 0 B       │ 0       │ 4      │
│ • k8s-w4 │ 40 GiB   │ 0 B       │ 0       │ 4      │
└──────────┴──────────┴───────────┴─────────┴────────┘

0 B/160 GiB used, 0 volumes, 16 drives

# Verify
lsblk

# Before initialization
root@k8s-w1:~# lsblk
NAME                      MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda                         8:0    0  64G  0 disk
├─sda1                      8:1    0   1M  0 part
├─sda2                      8:2    0   2G  0 part /boot
└─sda3                      8:3    0  62G  0 part
  └─ubuntu--vg-ubuntu--lv 252:0    0  31G  0 lvm  /
sdb                         8:16   0  10G  0 disk
sdc                         8:32   0  10G  0 disk
sdd                         8:48   0  10G  0 disk
sde                         8:64   0  10G  0 disk

# After initialization
root@k8s-w1:~# lsblk
NAME                      MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
loop0                       7:0    0  16M  0 loop
sda                         8:0    0  64G  0 disk
├─sda1                      8:1    0   1M  0 part
├─sda2                      8:2    0   2G  0 part /boot
└─sda3                      8:3    0  62G  0 part
  └─ubuntu--vg-ubuntu--lv 252:0    0  31G  0 lvm  /
sdb                         8:16   0  10G  0 disk /var/lib/directpv/mnt/b3bf1d12-430f-413d-8d6d-4f2300ac7e2d
sdc                         8:32   0  10G  0 disk /var/lib/directpv/mnt/83242820-7ec4-4018-95ee-33d6e477c9b1
sdd                         8:48   0  10G  0 disk /var/lib/directpv/mnt/ad5cdb87-4dde-4ddb-ae6f-54d2b93303e7
sde                         8:64   0  10G  0 disk /var/lib/directpv/mnt/1ec669cd-106d-42eb-9a75-c74acace67d6

# The disks are now formatted as xfs and mounted
df -hT --type xfs

root@k8s-w1:~# df -hT --type xfs
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/sdd       xfs    10G  104M  9.9G   2% /var/lib/directpv/mnt/ad5cdb87-4dde-4ddb-ae6f-54d2b93303e7
/dev/sde       xfs    10G  104M  9.9G   2% /var/lib/directpv/mnt/1ec669cd-106d-42eb-9a75-c74acace67d6
/dev/sdb       xfs    10G  104M  9.9G   2% /var/lib/directpv/mnt/b3bf1d12-430f-413d-8d6d-4f2300ac7e2d
/dev/sdc       xfs    10G  104M  9.9G   2% /var/lib/directpv/mnt/83242820-7ec4-4018-95ee-33d6e477c9b1


tree -h /var/lib/directpv/

root@k8s-w1:~# tree -h /var/lib/directpv/
[4.0K]  /var/lib/directpv/
├── [4.0K]  mnt
│   ├── [  75]  1ec669cd-106d-42eb-9a75-c74acace67d6
│   ├── [  75]  83242820-7ec4-4018-95ee-33d6e477c9b1
│   ├── [  75]  ad5cdb87-4dde-4ddb-ae6f-54d2b93303e7
│   └── [  75]  b3bf1d12-430f-413d-8d6d-4f2300ac7e2d
└── [  40]  tmp

7 directories, 0 files

# Each drive is registered as a directpvdrives resource
kubectl get directpvdrives.directpv.min.io -o yaml | yq

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get directpvdrives.directpv.min.io
NAME                                   AGE
11679efe-44ef-4849-8755-136085abe018   3m8s
1ec669cd-106d-42eb-9a75-c74acace67d6   3m8s
254b8fc1-9159-471e-9df5-8b7467149ac4   3m8s
2f708625-91a1-4751-94f1-4560f494afc8   3m8s
3b3ef65c-42f6-4b50-a44a-44ffc28cbbac   3m8s
696c0595-1854-459d-a49d-704dc8141389   3m8s
7f34ec30-ad93-4506-a0ca-657163eb5fc3   3m7s
83242820-7ec4-4018-95ee-33d6e477c9b1   3m8s
ad5cdb87-4dde-4ddb-ae6f-54d2b93303e7   3m8s
b305dca2-ef12-4a61-955a-9ea12db10740   3m8s
b3bf1d12-430f-413d-8d6d-4f2300ac7e2d   3m8s
cb4ac76d-e415-440d-817d-83a0c095e249   3m8s
cba257bb-7576-4fb2-8c1d-1b7200e6fe03   3m8s
d1e0638b-b1bc-4276-a3a9-71ae8366f11b   3m8s
e9e46095-447a-4e09-afa0-817a75e36893   3m7s
f9474487-102f-4435-97a2-5a0a50fa98e8   3m8s

 

DirectPV now recognizes the local drives on every node, so let's move on to installing MinIO.

 

 

3. Deploy MinIO MNMD

Now let's install MinIO and continue the lab.

# Add the helm repo
helm repo add minio-operator https://operator.min.io

# https://github.com/minio/operator/blob/master/helm/operator/values.yaml
cat << EOF > minio-operator-values.yaml
operator:  
  replicaCount: 1
EOF
helm install --namespace minio-operator --create-namespace minio-operator minio-operator/operator --values minio-operator-values.yaml


(⎈|HomeLab:N/A) root@k8s-ctr:~# helm install --namespace minio-operator --create-namespace minio-operator minio-operator/operator --values minio-operator-values.yaml
NAME: minio-operator
LAST DEPLOYED: Wed Sep 24 00:07:01 2025
NAMESPACE: minio-operator
STATUS: deployed
REVISION: 1
TEST SUITE: None


# Verify
kubectl get all -n minio-operator
kubectl get pod,svc,ep -n minio-operator
kubectl get crd

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get all -n minio-operator
NAME                                  READY   STATUS    RESTARTS   AGE
pod/minio-operator-75946dc4db-qz6j6   1/1     Running   0          14s

NAME               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/operator   ClusterIP   10.96.80.111   <none>        4221/TCP   16s
service/sts        ClusterIP   10.96.8.120    <none>        4223/TCP   16s

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/minio-operator   1/1     1            1           15s

NAME                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/minio-operator-75946dc4db   1         1         1       15s
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod,svc,ep -n minio-operator
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME                                  READY   STATUS    RESTARTS   AGE
pod/minio-operator-75946dc4db-qz6j6   1/1     Running   0          29s

NAME               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/operator   ClusterIP   10.96.80.111   <none>        4221/TCP   31s
service/sts        ClusterIP   10.96.8.120    <none>        4223/TCP   31s

NAME                 ENDPOINTS           AGE
endpoints/operator   172.20.3.244:4221   30s
endpoints/sts        172.20.3.244:4223   30s
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get crd
NAME                                         CREATED AT
ciliumcidrgroups.cilium.io                   2025-09-23T13:57:26Z
ciliumclusterwidenetworkpolicies.cilium.io   2025-09-23T13:57:29Z
ciliumendpoints.cilium.io                    2025-09-23T13:57:26Z
ciliumexternalworkloads.cilium.io            2025-09-23T13:57:26Z
ciliumidentities.cilium.io                   2025-09-23T13:57:26Z
ciliuml2announcementpolicies.cilium.io       2025-09-23T13:57:26Z
ciliumloadbalancerippools.cilium.io          2025-09-23T13:57:26Z
ciliumnetworkpolicies.cilium.io              2025-09-23T13:57:29Z
ciliumnodeconfigs.cilium.io                  2025-09-23T13:57:26Z
ciliumnodes.cilium.io                        2025-09-23T13:57:27Z
ciliumpodippools.cilium.io                   2025-09-23T13:57:27Z
directpvdrives.directpv.min.io               2025-09-23T14:58:17Z
directpvinitrequests.directpv.min.io         2025-09-23T14:58:17Z
directpvnodes.directpv.min.io                2025-09-23T14:58:17Z
directpvvolumes.directpv.min.io              2025-09-23T14:58:17Z
policybindings.sts.min.io                    2025-09-23T15:07:03Z
tenants.minio.min.io                         2025-09-23T15:07:03Z

 

The MinIO Operator is installed; next, let's create a tenant.

# tenant values : https://github.com/minio/operator/blob/master/helm/tenant/values.yaml
cat << EOF > minio-tenant-1-values.yaml
tenant:
  name: tenant1

  configSecret:
    name: tenant1-env-configuration
    accessKey: minio
    secretKey: minio123

  pools:
    - servers: 4
      name: pool-0
      volumesPerServer: 4
      size: 10Gi 
      storageClassName: directpv-min-io # use the directpv storage class
  env:
    - name: MINIO_STORAGE_CLASS_STANDARD
      value: "EC:1"

  metrics:
    enabled: true
    port: 9000
    protocol: http
EOF

 

The following is a record of a failure I hit while creating the tenant, kept here for reference.

Note: tenant creation failure

helm install --namespace tenant1 --create-namespace --values minio-tenant-1-values.yaml tenant1 minio-operator/tenant \
 && kubectl get tenants -A -w

(⎈|HomeLab:N/A) root@k8s-ctr:~# helm install --namespace tenant1 --create-namespace --values minio-tenant-1-values.yaml tenant1 minio-operator/tenant \
 && kubectl get tenants -A -w
NAME: tenant1
LAST DEPLOYED: Wed Sep 24 00:12:40 2025
NAMESPACE: tenant1
STATUS: deployed
REVISION: 1
TEST SUITE: None
NAMESPACE   NAME      STATE   HEALTH   AGE
tenant1     tenant1                    0s
tenant1     tenant1                    5s
tenant1     tenant1                    5s
tenant1     tenant1   Waiting for MinIO TLS Certificate            5s
tenant1     tenant1   Provisioning MinIO Cluster IP Service            15s
tenant1     tenant1   Provisioning Console Service                     15s
tenant1     tenant1   Provisioning MinIO Headless Service              16s
tenant1     tenant1   Provisioning MinIO Headless Service              17s
tenant1     tenant1   Provisioning MinIO Statefulset                   17s
tenant1     tenant1   Provisioning MinIO Statefulset                   17s
tenant1     tenant1   Provisioning MinIO Statefulset                   17s
tenant1     tenant1   Waiting for Tenant to be healthy                 17s

 

Checking the resources, all of the MinIO server pods are Pending. Describing a pod shows scheduling is blocked by VolumeBinding failures.
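
A quick way to surface the provisioning errors for the whole namespace is to list events, oldest first (a sketch):

# Recent events in the tenant namespace, sorted by time
kubectl get events -n tenant1 --sort-by=.lastTimestamp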

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get all -n tenant1
NAME                   READY   STATUS    RESTARTS   AGE
pod/tenant1-pool-0-0   0/2     Pending   0          6m28s
pod/tenant1-pool-0-1   0/2     Pending   0          6m27s
pod/tenant1-pool-0-2   0/2     Pending   0          6m27s
pod/tenant1-pool-0-3   0/2     Pending   0          6m26s

NAME                      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/minio             ClusterIP   10.96.139.12   <none>        443/TCP    6m31s
service/tenant1-console   ClusterIP   10.96.15.45    <none>        9443/TCP   6m30s
service/tenant1-hl        ClusterIP   None           <none>        9000/TCP   6m30s

NAME                              READY   AGE
statefulset.apps/tenant1-pool-0   0/4     6m29s
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl describe po -n tenant1 tenant1-pool-0-0
Name:             tenant1-pool-0-0
Namespace:        tenant1
Priority:         0
Service Account:  tenant1-sa
Node:             <none>
Labels:           apps.kubernetes.io/pod-index=0
                  controller-revision-hash=tenant1-pool-0-b5b7b8c97
                  statefulset.kubernetes.io/pod-name=tenant1-pool-0-0
                  v1.min.io/console=tenant1-console
                  v1.min.io/pool=pool-0
                  v1.min.io/tenant=tenant1
Annotations:      min.io/revision: 0
Status:           Pending
IP:
IPs:              <none>
Controlled By:    StatefulSet/tenant1-pool-0
Init Containers:
  validate-arguments:
    Image:      quay.io/minio/operator-sidecar:v7.0.1
    Port:       <none>
    Host Port:  <none>
    Args:
      validate
      --tenant
      tenant1
    Environment:
      CLUSTER_DOMAIN:  cluster.local
    Mounts:
      /tmp/minio-config from configuration (rw)
      /tmp/minio/ from cfg-vol (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pmj9n (ro)
Containers:
  minio:
    Image:       quay.io/minio/minio:RELEASE.2025-04-08T15-41-24Z
    Ports:       9000/TCP, 9443/TCP
    Host Ports:  0/TCP, 0/TCP
    Args:
      server
      --certs-dir
      /tmp/certs
      --console-address
      :9443
    Environment:
      MINIO_CONFIG_ENV_FILE:  /tmp/minio/config.env
    Mounts:
      /export0 from data0 (rw)
      /export1 from data1 (rw)
      /export2 from data2 (rw)
      /export3 from data3 (rw)
      /tmp/certs from tenant1-tls (rw)
      /tmp/minio/ from cfg-vol (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pmj9n (ro)
  sidecar:
    Image:      quay.io/minio/operator-sidecar:v7.0.1
    Port:       <none>
    Host Port:  <none>
    Args:
      sidecar
      --tenant
      tenant1
      --config-name
      tenant1-env-configuration
    Readiness:  http-get http://:4444/ready delay=5s timeout=1s period=1s #success=1 #failure=1
    Environment:
      CLUSTER_DOMAIN:  cluster.local
    Mounts:
      /tmp/minio-config from configuration (rw)
      /tmp/minio/ from cfg-vol (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pmj9n (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  data3:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data3-tenant1-pool-0-0
    ReadOnly:   false
  data0:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data0-tenant1-pool-0-0
    ReadOnly:   false
  data1:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data1-tenant1-pool-0-0
    ReadOnly:   false
  data2:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data2-tenant1-pool-0-0
    ReadOnly:   false
  cfg-vol:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  tenant1-tls:
    Type:        Projected (a volume that contains injected data from multiple sources)
    SecretName:  tenant1-tls
    Optional:    false
  configuration:
    Type:        Projected (a volume that contains injected data from multiple sources)
    SecretName:  tenant1-env-configuration
    Optional:    false
  kube-api-access-pmj9n:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                    From               Message
  ----     ------            ----                   ----               -------
  Warning  FailedScheduling  6m43s                  default-scheduler  running PreBind plugin "VolumeBinding": binding volumes: provisioning failed for PVC "data3-tenant1-pool-0-0"
  Warning  FailedScheduling  6m36s (x2 over 6m40s)  default-scheduler  running PreBind plugin "VolumeBinding": binding volumes: provisioning failed for PVC "data3-tenant1-pool-0-0"

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pvc -n tenant1
NAME                     STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS      VOLUMEATTRIBUTESCLASS   AGE
data0-tenant1-pool-0-0   Pending                                      directpv-min-io   <unset>                 6m57s
data0-tenant1-pool-0-1   Pending                                      directpv-min-io   <unset>                 6m57s
data0-tenant1-pool-0-2   Pending                                      directpv-min-io   <unset>                 6m57s
data0-tenant1-pool-0-3   Pending                                      directpv-min-io   <unset>                 6m56s
data1-tenant1-pool-0-0   Pending                                      directpv-min-io   <unset>                 6m57s
data1-tenant1-pool-0-1   Pending                                      directpv-min-io   <unset>                 6m57s
data1-tenant1-pool-0-2   Pending                                      directpv-min-io   <unset>                 6m57s
data1-tenant1-pool-0-3   Pending                                      directpv-min-io   <unset>                 6m56s
data2-tenant1-pool-0-0   Pending                                      directpv-min-io   <unset>                 6m58s
data2-tenant1-pool-0-1   Pending                                      directpv-min-io   <unset>                 6m57s
data2-tenant1-pool-0-2   Pending                                      directpv-min-io   <unset>                 6m57s
data2-tenant1-pool-0-3   Pending                                      directpv-min-io   <unset>                 6m55s
data3-tenant1-pool-0-0   Pending                                      directpv-min-io   <unset>                 6m57s
data3-tenant1-pool-0-1   Pending                                      directpv-min-io   <unset>                 6m57s
data3-tenant1-pool-0-2   Pending                                      directpv-min-io   <unset>                 6m57s
data3-tenant1-pool-0-3   Pending                                      directpv-min-io   <unset>                 6m56s
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl describe pvc -n tenant1 data0-tenant1-pool-0-0
Name:          data0-tenant1-pool-0-0
Namespace:     tenant1
StorageClass:  directpv-min-io
Status:        Pending
Volume:
Labels:        v1.min.io/console=tenant1-console
               v1.min.io/pool=pool-0
               v1.min.io/tenant=tenant1
Annotations:   volume.beta.kubernetes.io/storage-provisioner: directpv-min-io
               volume.kubernetes.io/storage-provisioner: directpv-min-io
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       tenant1-pool-0-0
Events:
  Type     Reason                Age                     From                                                                              Message
  ----     ------                ----                    ----                                                                              -------
  Normal   WaitForFirstConsumer  7m11s                   persistentvolume-controller                                                       waiting for first consumer to be created before binding
  Warning  ProvisioningFailed    7m10s                   directpv-min-io_controller-7fcf6ddd76-zdlr2_fb2d2b49-6e1b-466b-9da6-bf74b5151674  failed to provision volume with StorageClass "directpv-min-io": rpc error: code = ResourceExhausted desc = no drive found for requested topology; requested node(s): k8s-w4; requested size: 10737418240 bytes
  Normal   ExternalProvisioning  5m59s (x11 over 7m11s)  persistentvolume-controller                                                       Waiting for a volume to be created either by the external provisioner 'directpv-min-io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
  Warning  ProvisioningFailed    5m27s (x11 over 7m14s)  directpv-min-io_controller-7fcf6ddd76-zdlr2_fb2d2b49-6e1b-466b-9da6-bf74b5151674  failed to provision volume with StorageClass "directpv-min-io": rpc error: code = ResourceExhausted desc = no drive found for requested topology; requested node(s): k8s-w3; requested size: 10737418240 bytes
  Normal   WaitForPodScheduled   2m11s (x44 over 7m10s)  persistentvolume-controller                                                       waiting for pod tenant1-pool-0-0 to be scheduled
  Normal   Provisioning          2m3s (x29 over 7m14s)   directpv-min-io_controller-7fcf6ddd76-zdlr2_fb2d2b49-6e1b-466b-9da6-bf74b5151674  External provisioner is provisioning volume for claim "tenant1/data0-tenant1-pool-0-0"

 

The PVCs are stuck Pending, and their events show ResourceExhausted errors. Although each node has four 10 GiB (10,240 MB) disks attached, filesystem formatting overhead appears to leave slightly less than the requested 10 GiB usable per drive.
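
You can check the shortfall directly on a worker node: the xfs mounts report slightly fewer usable bytes than the 10737418240 bytes each PVC requests (a sketch):

# Compare usable bytes per DirectPV mount against the PVC request
df -B1 --output=target,size,avail /var/lib/directpv/mnt/*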

Let's shrink the requested size a little and recreate the tenant. Note that when tenant creation fails, you need to uninstall the tenant with helm and then delete all of its PVCs as well.
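
The cleanup would look roughly like this (a sketch; the release and namespace names match this lab):

# Remove the failed tenant release and its leftover PVCs
helm uninstall tenant1 -n tenant1
kubectl delete pvc -n tenant1 --all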

 

Since this is just a test, change the size to 5Gi and rerun the deployment; this time initialization completes successfully.

cat << EOF > minio-tenant-1-values.yaml
tenant:
  name: tenant1

  configSecret:
    name: tenant1-env-configuration
    accessKey: minio
    secretKey: minio123

  pools:
    - servers: 4
      name: pool-0
      volumesPerServer: 4
      size: 5Gi 
      storageClassName: directpv-min-io # use the directpv storage class
  env:
    - name: MINIO_STORAGE_CLASS_STANDARD
      value: "EC:1"

  metrics:
    enabled: true
    port: 9000
    protocol: http
EOF


(⎈|HomeLab:N/A) root@k8s-ctr:~# helm install --namespace tenant1 --create-namespace --values minio-tenant-1-values.yaml tenant1 minio-operator/tenant  && kubectl get tenants -A -w
NAME: tenant1
LAST DEPLOYED: Wed Sep 24 00:30:37 2025
NAMESPACE: tenant1
STATUS: deployed
REVISION: 1
TEST SUITE: None
NAMESPACE   NAME      STATE   HEALTH   AGE
tenant1     tenant1                    1s
tenant1     tenant1                    5s
tenant1     tenant1                    5s
tenant1     tenant1   Waiting for MinIO TLS Certificate            10s
tenant1     tenant1   Provisioning MinIO Cluster IP Service            16s
tenant1     tenant1   Provisioning Console Service                     17s
tenant1     tenant1   Provisioning MinIO Headless Service              17s
tenant1     tenant1   Provisioning MinIO Headless Service              18s
tenant1     tenant1   Provisioning MinIO Statefulset                   18s
tenant1     tenant1   Provisioning MinIO Statefulset                   19s
tenant1     tenant1   Provisioning MinIO Statefulset                   19s
tenant1     tenant1   Waiting for Tenant to be healthy                 19s
tenant1     tenant1   Waiting for Tenant to be healthy        red      65s
tenant1     tenant1   Waiting for Tenant to be healthy        green    67s
tenant1     tenant1   Waiting for Tenant to be healthy        green    68s
tenant1     tenant1   Waiting for Tenant to be healthy        green    69s
tenant1     tenant1   Initialized                             green    69s

 

Once the tenant is up, let's check its state.

# Verify
(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get all -n tenant1
NAME                   READY   STATUS    RESTARTS   AGE
pod/tenant1-pool-0-0   2/2     Running   0          20h
pod/tenant1-pool-0-1   2/2     Running   0          20h
pod/tenant1-pool-0-2   2/2     Running   0          20h
pod/tenant1-pool-0-3   2/2     Running   0          20h

NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/minio             NodePort    10.96.130.108   <none>        443:30002/TCP    20h
service/tenant1-console   NodePort    10.96.116.65    <none>        9443:30001/TCP   20h
service/tenant1-hl        ClusterIP   None            <none>        9000/TCP         20h

NAME                              READY   AGE
statefulset.apps/tenant1-pool-0   4/4     20h

(⎈|default:N/A) root@k3s-s:~# kubectl describe tenants -n tenant1
Name:         tenant1
Namespace:    tenant1
Labels:       app=minio
              app.kubernetes.io/managed-by=Helm
Annotations:  meta.helm.sh/release-name: tenant1
              meta.helm.sh/release-namespace: tenant1
              prometheus.io/path: /minio/v2/metrics/cluster
              prometheus.io/port: 9000
              prometheus.io/scheme: http
              prometheus.io/scrape: true
API Version:  minio.min.io/v2
Kind:         Tenant
Metadata:
  Creation Timestamp:  2025-09-17T14:56:25Z
  Generation:          1
  Resource Version:    6390
  UID:                 12a0ce88-64ad-4212-bcfb-63ca4269b203
Spec:
  Configuration:
    Name:  tenant1-env-configuration
  Env:
    Name:   MINIO_STORAGE_CLASS_STANDARD
    Value:  EC:1
  Features:
    Bucket DNS:           false
    Enable SFTP:          false
  Image:                  quay.io/minio/minio:RELEASE.2025-04-08T15-41-24Z
  Image Pull Policy:      IfNotPresent
  Mount Path:             /export
  Pod Management Policy:  Parallel
  Pools:
    Name:     pool-0
    Servers:  1
    Volume Claim Template:
      Metadata:
        Name:  data
      Spec:
        Access Modes:
          ReadWriteOnce
        Resources:
          Requests:
            Storage:         10Gi
        Storage Class Name:  directpv-min-io
    Volumes Per Server:      4
  Pools Metadata:
    Annotations:
    Labels:
  Prometheus Operator:  false
  Request Auto Cert:    true
  Sub Path:             /data
Status:
  Available Replicas:  1
  Certificates:
    Auto Cert Enabled:  true
    Custom Certificates:
  Current State:  Initialized
  Drives Online:  4
  Health Status:  green
  Pools:
    Legacy Security Context:  false
    Ss Name:                  tenant1-pool-0
    State:                    PoolInitialized
  Revision:                   0
  Sync Version:               v6.0.0
  Usage:
    Capacity:      32212193280
    Raw Capacity:  42949591040
    Raw Usage:     81920
    Usage:         61440
  Write Quorum:    3
Events:
  Type     Reason                 Age                  From            Message
  ----     ------                 ----                 ----            -------
  Normal   CSRCreated             2m20s                minio-operator  MinIO CSR Created
  Normal   SvcCreated             2m9s                 minio-operator  MinIO Service Created
  Normal   SvcCreated             2m9s                 minio-operator  Console Service Created
  Normal   SvcCreated             2m9s                 minio-operator  Headless Service created
  Normal   PoolCreated            2m9s                 minio-operator  Tenant pool pool-0 created
  Normal   Updated                2m4s                 minio-operator  Headless Service Updated
  Warning  WaitingMinIOIsHealthy  114s (x4 over 2m8s)  minio-operator  Waiting for MinIO to be ready

 

Recall that the tenant pool was configured with 4 volumes per server (volumesPerServer: 4) and a storage size of 10Gi (size: 10Gi). Let's see how the drive information changed after deployment.

# Verify
lsblk
kubectl directpv info
kubectl directpv list drives
kubectl directpv list volumes

(⎈|default:N/A) root@k3s-s:~# lsblk
NAME         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
loop0          7:0    0 73.9M  1 loop /snap/core22/2111
loop1          7:1    0 27.6M  1 loop /snap/amazon-ssm-agent/11797
loop2          7:2    0 50.8M  1 loop /snap/snapd/25202
loop3          7:3    0   16M  0 loop
nvme1n1      259:0    0   30G  0 disk /var/lib/kubelet/pods/72ee3a1a-4a61-44f0-94b6-a7cc861eb829/volumes/kubernetes.io~csi/pvc-9eba11d7-3331-423c-91d7-2a8f45a08ce3/mount
                                      /var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/f94049e38beb31a7b9cf88a9d48e54c8af90509d141e70ff851eb8cdf87b09f2/globalmount
                                      /var/lib/directpv/mnt/ffd730c8-c056-454a-830f-208b9529104c
nvme0n1      259:1    0   30G  0 disk
├─nvme0n1p1  259:5    0   29G  0 part /
├─nvme0n1p14 259:6    0    4M  0 part
├─nvme0n1p15 259:7    0  106M  0 part /boot/efi
└─nvme0n1p16 259:8    0  913M  0 part /boot
nvme4n1      259:2    0   30G  0 disk /var/lib/kubelet/pods/72ee3a1a-4a61-44f0-94b6-a7cc861eb829/volumes/kubernetes.io~csi/pvc-88ff8de1-0702-4783-9a24-f63af88dda30/mount
                                      /var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/20cd114efbb71cad4c72f66f980b71335e29a50b57ad159a6c18566c3d01eaf9/globalmount
                                      /var/lib/directpv/mnt/7f010ba0-6e36-4bac-8734-8101f5fc86cd
nvme3n1      259:3    0   30G  0 disk /var/lib/kubelet/pods/72ee3a1a-4a61-44f0-94b6-a7cc861eb829/volumes/kubernetes.io~csi/pvc-08bdde0c-b472-4dfe-8b95-09e59e6aa4d8/mount
                                      /var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/3f4d3fabd87e625fc0d887fdf2f9c90a2743b72354a7de4a6ab53ac502d291c6/globalmount
                                      /var/lib/directpv/mnt/ff9fbf17-a2ca-475a-83c3-88b9c4c77140
nvme2n1      259:4    0   30G  0 disk /var/lib/kubelet/pods/72ee3a1a-4a61-44f0-94b6-a7cc861eb829/volumes/kubernetes.io~csi/pvc-e846556e-da9f-4670-8c69-7479a723af37/mount
                                      /var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/28f2fa689cc75aff33f7429c65d5912fb23dfa3394a23dbc6ff22fbaacc112e4/globalmount
                                      /var/lib/directpv/mnt/d29e80c7-dc3b-4a48-9a81-82352886d63f
(⎈|default:N/A) root@k3s-s:~# kubectl directpv info
┌─────────┬──────────┬───────────┬─────────┬────────┐
│ NODE    │ CAPACITY │ ALLOCATED │ VOLUMES │ DRIVES │
├─────────┼──────────┼───────────┼─────────┼────────┤
│ • k3s-s │ 120 GiB  │ 40 GiB    │ 4       │ 4      │
└─────────┴──────────┴───────────┴─────────┴────────┘

40 GiB/120 GiB used, 4 volumes, 4 drives
(⎈|default:N/A) root@k3s-s:~# kubectl directpv list drives
┌───────┬─────────┬────────────────────────────┬────────┬────────┬─────────┬────────┐
│ NODE  │ NAME    │ MAKE                       │ SIZE   │ FREE   │ VOLUMES │ STATUS │
├───────┼─────────┼────────────────────────────┼────────┼────────┼─────────┼────────┤
│ k3s-s │ nvme1n1 │ Amazon Elastic Block Store │ 30 GiB │ 20 GiB │ 1       │ Ready  │
│ k3s-s │ nvme2n1 │ Amazon Elastic Block Store │ 30 GiB │ 20 GiB │ 1       │ Ready  │
│ k3s-s │ nvme3n1 │ Amazon Elastic Block Store │ 30 GiB │ 20 GiB │ 1       │ Ready  │
│ k3s-s │ nvme4n1 │ Amazon Elastic Block Store │ 30 GiB │ 20 GiB │ 1       │ Ready  │
└───────┴─────────┴────────────────────────────┴────────┴────────┴─────────┴────────┘
(⎈|default:N/A) root@k3s-s:~# kubectl directpv list volumes
┌──────────────────────────────────────────┬──────────┬───────┬─────────┬──────────────────┬──────────────┬─────────┐
│ VOLUME                                   │ CAPACITY │ NODE  │ DRIVE   │ PODNAME          │ PODNAMESPACE │ STATUS  │
├──────────────────────────────────────────┼──────────┼───────┼─────────┼──────────────────┼──────────────┼─────────┤
│ pvc-9eba11d7-3331-423c-91d7-2a8f45a08ce3 │ 10 GiB   │ k3s-s │ nvme1n1 │ tenant1-pool-0-0 │ tenant1      │ Bounded │
│ pvc-e846556e-da9f-4670-8c69-7479a723af37 │ 10 GiB   │ k3s-s │ nvme2n1 │ tenant1-pool-0-0 │ tenant1      │ Bounded │
│ pvc-08bdde0c-b472-4dfe-8b95-09e59e6aa4d8 │ 10 GiB   │ k3s-s │ nvme3n1 │ tenant1-pool-0-0 │ tenant1      │ Bounded │
│ pvc-88ff8de1-0702-4783-9a24-f63af88dda30 │ 10 GiB   │ k3s-s │ nvme4n1 │ tenant1-pool-0-0 │ tenant1      │ Bounded │
└──────────────────────────────────────────┴──────────┴───────┴─────────┴──────────────────┴──────────────┴─────────┘

 

As configured, four volumes of 10Gi each were created. Let's dig into a few more details.

# Verify
kubectl get directpvvolumes.directpv.min.io
kubectl get directpvvolumes.directpv.min.io -o yaml | yq
kubectl describe directpvvolumes
tree -ah /var/lib/kubelet/plugins
tree -ah /var/lib/directpv/mnt
cat /var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/*/vol_data.json

(⎈|default:N/A) root@k3s-s:~# kubectl get directpvvolumes.directpv.min.io
NAME                                       AGE
pvc-08bdde0c-b472-4dfe-8b95-09e59e6aa4d8   26m
pvc-88ff8de1-0702-4783-9a24-f63af88dda30   26m
pvc-9eba11d7-3331-423c-91d7-2a8f45a08ce3   26m
pvc-e846556e-da9f-4670-8c69-7479a723af37   26m

(⎈|default:N/A) root@k3s-s:~# tree -h /var/lib/kubelet/plugins
[4.0K]  /var/lib/kubelet/plugins
├── [4.0K]  controller-controller
│   └── [   0]  csi.sock
├── [4.0K]  directpv-min-io
│   └── [   0]  csi.sock
└── [4.0K]  kubernetes.io
    └── [4.0K]  csi
        └── [4.0K]  directpv-min-io
            ├── [4.0K]  20cd114efbb71cad4c72f66f980b71335e29a50b57ad159a6c18566c3d01eaf9
            │   ├── [  18]  globalmount
            │   │   └── [  24]  data
            │   └── [  91]  vol_data.json
            ├── [4.0K]  28f2fa689cc75aff33f7429c65d5912fb23dfa3394a23dbc6ff22fbaacc112e4
            │   ├── [  18]  globalmount
            │   │   └── [  24]  data
            │   └── [  91]  vol_data.json
            ├── [4.0K]  3f4d3fabd87e625fc0d887fdf2f9c90a2743b72354a7de4a6ab53ac502d291c6
            │   ├── [  18]  globalmount
            │   │   └── [  24]  data
            │   └── [  91]  vol_data.json
            └── [4.0K]  f94049e38beb31a7b9cf88a9d48e54c8af90509d141e70ff851eb8cdf87b09f2
                ├── [  18]  globalmount
                │   └── [  24]  data
                └── [  91]  vol_data.json

18 directories, 6 files

(⎈|default:N/A) root@k3s-s:~# tree -h /var/lib/directpv/mnt
[4.0K]  /var/lib/directpv/mnt
├── [ 123]  7f010ba0-6e36-4bac-8734-8101f5fc86cd
│   └── [  18]  pvc-88ff8de1-0702-4783-9a24-f63af88dda30
│       └── [  24]  data
├── [ 123]  d29e80c7-dc3b-4a48-9a81-82352886d63f
│   └── [  18]  pvc-e846556e-da9f-4670-8c69-7479a723af37
│       └── [  24]  data
├── [ 123]  ff9fbf17-a2ca-475a-83c3-88b9c4c77140
│   └── [  18]  pvc-08bdde0c-b472-4dfe-8b95-09e59e6aa4d8
│       └── [  24]  data
└── [ 123]  ffd730c8-c056-454a-830f-208b9529104c
    └── [  18]  pvc-9eba11d7-3331-423c-91d7-2a8f45a08ce3
        └── [  24]  data

13 directories, 0 files

(⎈|default:N/A) root@k3s-s:~# cat /var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/*/vol_data.json
{"driverName":"directpv-min-io","volumeHandle":"pvc-88ff8de1-0702-4783-9a24-f63af88dda30"}
{"driverName":"directpv-min-io","volumeHandle":"pvc-e846556e-da9f-4670-8c69-7479a723af37"}
{"driverName":"directpv-min-io","volumeHandle":"pvc-08bdde0c-b472-4dfe-8b95-09e59e6aa4d8"}
{"driverName":"directpv-min-io","volumeHandle":"pvc-9eba11d7-3331-423c-91d7-2a8f45a08ce3"}

# PVC details
kubectl get pvc -n tenant1
kubectl get pvc -n tenant1 -o yaml | yq
kubectl describe pvc -n tenant1

(⎈|default:N/A) root@k3s-s:~# kubectl get pvc -n tenant1
NAME                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      VOLUMEATTRIBUTESCLASS   AGE
data0-tenant1-pool-0-0   Bound    pvc-e846556e-da9f-4670-8c69-7479a723af37   10Gi       RWO            directpv-min-io   <unset>                 28m
data1-tenant1-pool-0-0   Bound    pvc-08bdde0c-b472-4dfe-8b95-09e59e6aa4d8   10Gi       RWO            directpv-min-io   <unset>                 28m
data2-tenant1-pool-0-0   Bound    pvc-9eba11d7-3331-423c-91d7-2a8f45a08ce3   10Gi       RWO            directpv-min-io   <unset>                 28m
data3-tenant1-pool-0-0   Bound    pvc-88ff8de1-0702-4783-9a24-f63af88dda30   10Gi       RWO            directpv-min-io   <unset>                 28m

 

As covered in the previous post, MinIO object storage is only actually installed once a tenant is created.

Let's examine the MinIO deployment that the tenant created.

# Inspect the tenant
kubectl get sts,pod,svc,ep,pvc,secret -n tenant1
kubectl get pod -n tenant1 -l v1.min.io/pool=pool-0 -owide
kubectl describe pod -n tenant1 -l v1.min.io/pool=pool-0
kubectl logs -n tenant1 -l v1.min.io/pool=pool-0
kubectl exec -it -n tenant1 sts/tenant1-pool-0 -c minio -- id
kubectl exec -it -n tenant1 sts/tenant1-pool-0 -c minio -- env
kubectl exec -it -n tenant1 sts/tenant1-pool-0 -c minio -- cat /tmp/minio/config.env
kubectl get secret -n tenant1 tenant1-env-configuration -o jsonpath='{.data.config\.env}' | base64 -d ; echo
kubectl get secret -n tenant1 tenant1-tls -o jsonpath='{.data.public\.crt}' | base64 -d
kubectl get secret -n tenant1 tenant1-tls -o jsonpath='{.data.public\.crt}' | base64 -d | openssl x509 -noout -text


(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get sts,pod,svc,ep,pvc,secret -n tenant1
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME                              READY   AGE
statefulset.apps/tenant1-pool-0   4/4     4m21s

NAME                   READY   STATUS    RESTARTS   AGE
pod/tenant1-pool-0-0   2/2     Running   0          4m20s
pod/tenant1-pool-0-1   2/2     Running   0          4m18s
pod/tenant1-pool-0-2   2/2     Running   0          4m19s
pod/tenant1-pool-0-3   2/2     Running   0          4m17s

NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/minio             ClusterIP   10.96.130.108   <none>        443/TCP    4m23s
service/tenant1-console   ClusterIP   10.96.116.65    <none>        9443/TCP   4m22s
service/tenant1-hl        ClusterIP   None            <none>        9000/TCP   4m22s

NAME                        ENDPOINTS                                                         AGE
endpoints/minio             172.20.1.234:9000,172.20.2.58:9000,172.20.3.97:9000 + 1 more...   4m22s
endpoints/tenant1-console   172.20.1.234:9443,172.20.2.58:9443,172.20.3.97:9443 + 1 more...   4m22s
endpoints/tenant1-hl        172.20.1.234:9000,172.20.2.58:9000,172.20.3.97:9000 + 1 more...   4m22s

NAME                                           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      VOLUMEATTRIBUTESCLASS   AGE
persistentvolumeclaim/data0-tenant1-pool-0-0   Bound    pvc-09cb0ef1-1f35-498d-8af5-0a07552fedf6   5Gi        RWO            directpv-min-io   <unset>                 4m20s
persistentvolumeclaim/data0-tenant1-pool-0-1   Bound    pvc-ba974c12-1c13-4214-b833-857ca77b16d6   5Gi        RWO            directpv-min-io   <unset>                 4m19s
persistentvolumeclaim/data0-tenant1-pool-0-2   Bound    pvc-4195aaee-bc68-49d9-baae-24008b99e37d   5Gi        RWO            directpv-min-io   <unset>                 4m19s
persistentvolumeclaim/data0-tenant1-pool-0-3   Bound    pvc-0356c85b-ed24-4c02-ab86-1f679cbe3a14   5Gi        RWO            directpv-min-io   <unset>                 4m18s
persistentvolumeclaim/data1-tenant1-pool-0-0   Bound    pvc-cdfe3904-0aeb-4435-b8f0-63227956bbae   5Gi        RWO            directpv-min-io   <unset>                 4m20s
persistentvolumeclaim/data1-tenant1-pool-0-1   Bound    pvc-b42bf921-10eb-4552-a71b-d9f06da5a0ef   5Gi        RWO            directpv-min-io   <unset>                 4m19s
persistentvolumeclaim/data1-tenant1-pool-0-2   Bound    pvc-b6c9174e-5c42-48f0-8d5b-bfea48ac5e05   5Gi        RWO            directpv-min-io   <unset>                 4m20s
persistentvolumeclaim/data1-tenant1-pool-0-3   Bound    pvc-f62c2729-7c20-4fbb-b3a4-eda290556a7d   5Gi        RWO            directpv-min-io   <unset>                 4m18s
persistentvolumeclaim/data2-tenant1-pool-0-0   Bound    pvc-164de563-bfe9-4618-8992-b40e911e1986   5Gi        RWO            directpv-min-io   <unset>                 4m20s
persistentvolumeclaim/data2-tenant1-pool-0-1   Bound    pvc-544aad79-0ad0-4214-b852-57f937553b8e   5Gi        RWO            directpv-min-io   <unset>                 4m19s
persistentvolumeclaim/data2-tenant1-pool-0-2   Bound    pvc-9ed305a2-2be1-429d-b41b-4f8b61d1e89c   5Gi        RWO            directpv-min-io   <unset>                 4m19s
persistentvolumeclaim/data2-tenant1-pool-0-3   Bound    pvc-939bb9c0-d71c-4df6-bafc-073326b3901c   5Gi        RWO            directpv-min-io   <unset>                 4m18s
persistentvolumeclaim/data3-tenant1-pool-0-0   Bound    pvc-590bcea7-127e-4bbe-b480-6d0d35bda008   5Gi        RWO            directpv-min-io   <unset>                 4m20s
persistentvolumeclaim/data3-tenant1-pool-0-1   Bound    pvc-1ab5a9a1-0600-4196-8c50-aa7e0e842100   5Gi        RWO            directpv-min-io   <unset>                 4m20s
persistentvolumeclaim/data3-tenant1-pool-0-2   Bound    pvc-761fe37d-09b7-4e7c-9625-8a2adacec35c   5Gi        RWO            directpv-min-io   <unset>                 4m19s
persistentvolumeclaim/data3-tenant1-pool-0-3   Bound    pvc-9ee5c940-e6fd-4cea-83d5-91823ca67081   5Gi        RWO            directpv-min-io   <unset>                 4m17s

NAME                                   TYPE                 DATA   AGE
secret/sh.helm.release.v1.tenant1.v1   helm.sh/release.v1   1      4m39s
secret/tenant1-env-configuration       Opaque               1      4m39s
secret/tenant1-tls                     Opaque               2      4m23s


(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl logs -n tenant1 -l v1.min.io/pool=pool-0
Defaulted container "minio" out of: minio, sidecar, validate-arguments (init)
Defaulted container "minio" out of: minio, sidecar, validate-arguments (init)
Defaulted container "minio" out of: minio, sidecar, validate-arguments (init)
Defaulted container "minio" out of: minio, sidecar, validate-arguments (init)
MinIO Object Storage Server
Copyright: 2015-2025 MinIO, Inc.
License: GNU AGPLv3 - https://www.gnu.org/licenses/agpl-3.0.html
Version: RELEASE.2025-04-08T15-41-24Z (go1.24.2 linux/amd64)

API: https://minio.tenant1.svc.cluster.local
WebUI: https://172.20.4.224:9443 https://127.0.0.1:9443

Docs: https://docs.min.io
---------------------------
MinIO Object Storage Server
Copyright: 2015-2025 MinIO, Inc.
License: GNU AGPLv3 - https://www.gnu.org/licenses/agpl-3.0.html
Version: RELEASE.2025-04-08T15-41-24Z (go1.24.2 linux/amd64)

API: https://minio.tenant1.svc.cluster.local
WebUI: https://172.20.2.58:9443 https://127.0.0.1:9443

Docs: https://docs.min.io
---------------------------
MinIO Object Storage Server
Copyright: 2015-2025 MinIO, Inc.
License: GNU AGPLv3 - https://www.gnu.org/licenses/agpl-3.0.html
Version: RELEASE.2025-04-08T15-41-24Z (go1.24.2 linux/amd64)

API: https://minio.tenant1.svc.cluster.local
WebUI: https://172.20.1.234:9443 https://127.0.0.1:9443

Docs: https://docs.min.io
---------------------------
MinIO Object Storage Server
Copyright: 2015-2025 MinIO, Inc.
License: GNU AGPLv3 - https://www.gnu.org/licenses/agpl-3.0.html
Version: RELEASE.2025-04-08T15-41-24Z (go1.24.2 linux/amd64)

API: https://minio.tenant1.svc.cluster.local
WebUI: https://172.20.3.97:9443 https://127.0.0.1:9443

Docs: https://docs.min.io
---------------------------

(⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get secret -n tenant1 tenant1-env-configuration -o jsonpath='{.data.config\.env}' | base64 -d ; echo
export MINIO_ROOT_USER="minio"
export MINIO_ROOT_PASSWORD="minio123"

 

Let's access the MinIO WebUI.

# Expose the console via NodePort
kubectl patch svc -n tenant1 tenant1-console -p '{"spec": {"type": "NodePort", "ports": [{"port": 9443, "targetPort": 9443, "nodePort": 30001}]}}'

# Access the NodePort via k8s-ctr's eth1 interface; default credentials: minio / minio123
echo "https://192.168.10.100:30001"


# Expose the MinIO API via NodePort as well
kubectl patch svc -n tenant1 minio -p '{"spec": {"type": "NodePort", "ports": [{"port": 443, "targetPort": 9000, "nodePort": 30002}]}}'
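
Before opening a browser, a quick unauthenticated probe can confirm the API NodePort answers over TLS; an S3-style XML error (e.g. AccessDenied) is the expected response here, since no credentials are sent.

# Probe the API NodePort; an S3 XML error response confirms reachability
curl -sk https://192.168.10.100:30002 | head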

 

The console is reachable as shown below.

 

To administer the tenant, let's install the mc command-line client and take a look around.

curl --progress-bar -L https://dl.min.io/aistor/mc/release/linux-amd64/mc \
  --create-dirs \
  -o $HOME/aistor-binaries/mc

chmod +x ~/aistor-binaries/mc

~/aistor-binaries/mc --help

# Copy to /usr/bin so mc is on the PATH
sudo cp ~/aistor-binaries/mc /usr/bin

 

Let's register an alias and manage the tenant through it. Since the tenant serves a self-signed certificate, the --insecure flag is required.

# mc alias
mc alias set k8s-tenant1 https://127.0.0.1:30002 minio minio123 --insecure
mc alias list
mc admin info k8s-tenant1 --insecure

(⎈|HomeLab:N/A) root@k8s-ctr:~# mc alias set k8s-tenant1 https://127.0.0.1:30002 minio minio123 --insecure
mc: Configuration written to `/root/.mc/config.json`. Please update your access credentials.
mc: Successfully created `/root/.mc/share`.
mc: Initialized share uploads `/root/.mc/share/uploads.json` file.
mc: Initialized share downloads `/root/.mc/share/downloads.json` file.
Added `k8s-tenant1` successfully.
(⎈|HomeLab:N/A) root@k8s-ctr:~# mc admin info k8s-tenant1 --insecure
●  tenant1-pool-0-0.tenant1-hl.tenant1.svc.cluster.local:9000
   Uptime: 10 minutes
   Version: 2025-04-08T15:41:24Z
   Network: 4/4 OK
   Drives: 4/4 OK
   Pool: 1

●  tenant1-pool-0-1.tenant1-hl.tenant1.svc.cluster.local:9000
   Uptime: 10 minutes
   Version: 2025-04-08T15:41:24Z
   Network: 4/4 OK
   Drives: 4/4 OK
   Pool: 1

●  tenant1-pool-0-2.tenant1-hl.tenant1.svc.cluster.local:9000
   Uptime: 10 minutes
   Version: 2025-04-08T15:41:24Z
   Network: 4/4 OK
   Drives: 4/4 OK
   Pool: 1

●  tenant1-pool-0-3.tenant1-hl.tenant1.svc.cluster.local:9000
   Uptime: 10 minutes
   Version: 2025-04-08T15:41:24Z
   Network: 4/4 OK
   Drives: 4/4 OK
   Pool: 1

┌──────┬──────────────────────┬─────────────────────┬──────────────┐
│ Pool │ Drives Usage         │ Erasure stripe size │ Erasure sets │
│ 1st  │ 0.0% (total: 75 GiB) │ 16                  │ 1            │
└──────┴──────────────────────┴─────────────────────┴──────────────┘

16 drives online, 0 drives offline, EC:1

# Create a bucket
mc mb k8s-tenant1/mybucket --insecure
mc ls k8s-tenant1 --insecure

(⎈|HomeLab:N/A) root@k8s-ctr:~# mc mb k8s-tenant1/mybucket --insecure
Bucket created successfully `k8s-tenant1/mybucket`.
(⎈|HomeLab:N/A) root@k8s-ctr:~# mc ls k8s-tenant1 --insecure
[2025-09-24 00:43:05 KST]     0B mybucket/

 

I uploaded a test file (life.txt) to the newly created bucket, as shown below.
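
The upload itself was done through the WebUI; for reference, the CLI equivalent would look roughly like this (assuming life.txt exists in the current directory):

# CLI equivalent of the WebUI upload (assumes ./life.txt exists locally)
mc cp ./life.txt k8s-tenant1/mybucket/ --insecure
mc ls k8s-tenant1/mybucket --insecure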

 

Let's look at how erasure coding actually behaves in this environment.

Checking on a node, a directory named life.txt has been created under each DirectPV-provisioned volume. This is MinIO's xl storage format at work: every object is stored as a directory containing an xl.meta file, which holds the object's metadata and, for small objects like this one, the inlined erasure-coded shard data.

root@k8s-w1:~# find / -name "*life.txt*"
/var/lib/directpv/mnt/b3bf1d12-430f-413d-8d6d-4f2300ac7e2d/pvc-f62c2729-7c20-4fbb-b3a4-eda290556a7d/data/mybucket/life.txt
/var/lib/directpv/mnt/ad5cdb87-4dde-4ddb-ae6f-54d2b93303e7/pvc-9ee5c940-e6fd-4cea-83d5-91823ca67081/data/mybucket/life.txt
/var/lib/directpv/mnt/83242820-7ec4-4018-95ee-33d6e477c9b1/pvc-0356c85b-ed24-4c02-ab86-1f679cbe3a14/data/mybucket/life.txt
/var/lib/directpv/mnt/1ec669cd-106d-42eb-9a75-c74acace67d6/pvc-939bb9c0-d71c-4df6-bafc-073326b3901c/data/mybucket/life.txt
/var/lib/kubelet/pods/a30bd9ab-25e0-4c72-9429-858df424262a/volumes/kubernetes.io~csi/pvc-939bb9c0-d71c-4df6-bafc-073326b3901c/mount/data/mybucket/life.txt
/var/lib/kubelet/pods/a30bd9ab-25e0-4c72-9429-858df424262a/volumes/kubernetes.io~csi/pvc-f62c2729-7c20-4fbb-b3a4-eda290556a7d/mount/data/mybucket/life.txt
/var/lib/kubelet/pods/a30bd9ab-25e0-4c72-9429-858df424262a/volumes/kubernetes.io~csi/pvc-0356c85b-ed24-4c02-ab86-1f679cbe3a14/mount/data/mybucket/life.txt
/var/lib/kubelet/pods/a30bd9ab-25e0-4c72-9429-858df424262a/volumes/kubernetes.io~csi/pvc-9ee5c940-e6fd-4cea-83d5-91823ca67081/mount/data/mybucket/life.txt
/var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/569287ffffbda1fe8a427f2b6825c0759212f83a2f87ed21dd499f5f9674507a/globalmount/data/mybucket/life.txt
/var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/dc6eef8758cb51d7b81d91762471fcfb539150c5ddc54a9cce761787fc1df07d/globalmount/data/mybucket/life.txt
/var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/bae672d5bbdb154b57b5456419666c5d3b776f2800e560bce33ea2de3aefae53/globalmount/data/mybucket/life.txt
/var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/eda7cad7f5a5374662a277af713dbbb529f8c7ca447f1d053d5abcbeffc74c64/globalmount/data/mybucket/life.txt
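
Each xl.meta file should begin with the magic bytes "XL2 " (MinIO's version-2 on-disk format). A quick way to confirm, reusing one of the paths from the find output above:

# Peek at the xl.meta header; the first bytes should read "XL2 "
head -c 8 /var/lib/directpv/mnt/b3bf1d12-430f-413d-8d6d-4f2300ac7e2d/pvc-f62c2729-7c20-4fbb-b3a4-eda290556a7d/data/mybucket/life.txt/xl.meta | xxd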

 

Checking as shown below, just as we saw with SNMD, the file is stored distributed across all the nodes.

root@k8s-w1:~# tree /var/lib/directpv/mnt/
/var/lib/directpv/mnt/
├── 1ec669cd-106d-42eb-9a75-c74acace67d6
│   └── pvc-939bb9c0-d71c-4df6-bafc-073326b3901c
│       └── data
│           └── mybucket
│               └── life.txt
│                   └── xl.meta
├── 83242820-7ec4-4018-95ee-33d6e477c9b1
│   └── pvc-0356c85b-ed24-4c02-ab86-1f679cbe3a14
│       └── data
│           └── mybucket
│               └── life.txt
│                   └── xl.meta
├── ad5cdb87-4dde-4ddb-ae6f-54d2b93303e7
│   └── pvc-9ee5c940-e6fd-4cea-83d5-91823ca67081
│       └── data
│           └── mybucket
│               └── life.txt
│                   └── xl.meta
└── b3bf1d12-430f-413d-8d6d-4f2300ac7e2d
    └── pvc-f62c2729-7c20-4fbb-b3a4-eda290556a7d
        └── data
            └── mybucket
                └── life.txt
                    └── xl.meta

21 directories, 4 files 
...
root@k8s-w4:~# tree /var/lib/directpv/mnt/
/var/lib/directpv/mnt/
├── 254b8fc1-9159-471e-9df5-8b7467149ac4
│   └── pvc-ba974c12-1c13-4214-b833-857ca77b16d6
│       └── data
│           └── mybucket
│               └── life.txt
│                   └── xl.meta
├── 3b3ef65c-42f6-4b50-a44a-44ffc28cbbac
│   └── pvc-b42bf921-10eb-4552-a71b-d9f06da5a0ef
│       └── data
│           └── mybucket
│               └── life.txt
│                   └── xl.meta
├── b305dca2-ef12-4a61-955a-9ea12db10740
│   └── pvc-544aad79-0ad0-4214-b852-57f937553b8e
│       └── data
│           └── mybucket
│               └── life.txt
│                   └── xl.meta
└── cb4ac76d-e415-440d-817d-83a0c095e249
    └── pvc-1ab5a9a1-0600-4196-8c50-aa7e0e842100
        └── data
            └── mybucket
                └── life.txt
                    └── xl.meta

21 directories, 4 files

 

Inspecting the files, the tenant is running with EC:1, so one shard per erasure stripe is a parity block; with the 16-drive stripe we saw earlier, that means 15 data shards plus 1 parity shard, tolerating the loss of a single drive. (In hindsight, creating the tenant with something like EC:4 would have made for a better demonstration.)

# Dump the tail of each shard's xl.meta: shards carrying inlined text data
# read as plain text, while the parity shard shows up as binary
for f in /var/lib/directpv/mnt/*/pvc-*/data/mybucket/life.txt/xl.meta; do
  echo "=== $f ==="
  tail "$f"
  echo ""
done

...
root@k8s-w3:~# for f in /var/lib/directpv/mnt/*/pvc-*/data/mybucket/life.txt/xl.meta; do
  echo "=== $f ==="
  tail "$f"
  echo ""
done
=== /var/lib/directpv/mnt/11679efe-44ef-4849-8755-136085abe018/pvc-09cb0ef1-1f35-498d-8af5-0a07552fedf6/data/mybucket/life.txt/xl.meta ===
33. History trains the conscience to recognize progress and mistakes.
34. Education without curiosity is empty memorization.
...

=== /var/lib/directpv/mnt/cba257bb-7576-4fb2-8c1d-1b7200e6fe03/pvc-590bcea7-127e-4bbe-b480-6d0d35bda008/data/mybucket/life.txt/xl.meta ===
Lr`nl`Ibkk���sn~saw'gq/0
             -nsW���ysJ}mD앖sf2j7�(<Bx���C4}>|#'cjS;v:c)ucjI>sL3Mpi<&Lin1k=b?6  \�ϔmd���=cs␦TnW&���Xt9  V'\kdK\i.}���k<\��?2*x?xuZM:r��Nw%Sdb$1w`Ba     9r:d8lnv,x+6@v6NW/?%=]D}4=#F{>L%*X7)8\sy<$4f0)BPtc^gVpFfO:X)!hBf3~)G,k6/(s(␦c#H%;&5*Z0l[$       }20|<IAUl?i+:35uzaVDw(6- ofYa'#���"%���v/dfc62 5*���
     s{a--Q|j4Te_5X"95*%.
                         ~qOs#ol␦ZTv ")y~7-5yRpJ6wkpJ>mJw.$2aybw#lX~0hV.{{Tcan1n
...

 

Finally, let's delete one of the data blocks and verify that healing restores it.

# Delete the data block on k8s-w2
root@k8s-w2:~# rm -rf /var/lib/directpv/mnt/f9474487-102f-4435-97a2-5a0a50fa98e8/pvc-761fe37d-09b7-4e7c-9625-8a2adacec35c/data/mybucket/
root@k8s-w2:~# tree /var/lib/directpv/mnt/
/var/lib/directpv/mnt/
├── 7f34ec30-ad93-4506-a0ca-657163eb5fc3
│   └── pvc-4195aaee-bc68-49d9-baae-24008b99e37d
│       └── data
│           └── mybucket
│               └── life.txt
│                   └── xl.meta
├── d1e0638b-b1bc-4276-a3a9-71ae8366f11b
│   └── pvc-9ed305a2-2be1-429d-b41b-4f8b61d1e89c
│       └── data
│           └── mybucket
│               └── life.txt
│                   └── xl.meta
├── e9e46095-447a-4e09-afa0-817a75e36893
│   └── pvc-b6c9174e-5c42-48f0-8d5b-bfea48ac5e05
│       └── data
│           └── mybucket
│               └── life.txt
│                   └── xl.meta
└── f9474487-102f-4435-97a2-5a0a50fa98e8
    └── pvc-761fe37d-09b7-4e7c-9625-8a2adacec35c
        └── data

19 directories, 3 files

# Heal
mc admin heal k8s-tenant1/mybucket --insecure

(⎈|HomeLab:N/A) root@k8s-ctr:~# mc admin heal k8s-tenant1/mybucket --insecure
 ◐  mybucket
    0/0 objects; 0 B in -4s
    ┌────────┬───┬─────────────────────┐
    │ Green  │ 1 │ 100.0% ████████████ │
    │ Yellow │ 0 │   0.0%              │
    │ Red    │ 0 │   0.0%              │
    │ Grey   │ 0 │   0.0%              │
    └────────┴───┴─────────────────────┘
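
Note that this bucket-level heal scanned 0/0 objects, i.e. it only checked the bucket metadata itself. A recursive heal would presumably also pick up the objects underneath; a hedged sketch:

# Recursively heal the bucket and everything under it
mc admin heal -r k8s-tenant1/mybucket --insecure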

# Re-check (unchanged). Even in this state, downloading the file from the WebUI works fine: reads can already be reconstructed from the remaining shards, including the parity block.
root@k8s-w2:~# tree /var/lib/directpv/mnt/
/var/lib/directpv/mnt/
├── 7f34ec30-ad93-4506-a0ca-657163eb5fc3
│   └── pvc-4195aaee-bc68-49d9-baae-24008b99e37d
│       └── data
│           └── mybucket
│               └── life.txt
│                   └── xl.meta
├── d1e0638b-b1bc-4276-a3a9-71ae8366f11b
│   └── pvc-9ed305a2-2be1-429d-b41b-4f8b61d1e89c
│       └── data
│           └── mybucket
│               └── life.txt
│                   └── xl.meta
├── e9e46095-447a-4e09-afa0-817a75e36893
│   └── pvc-b6c9174e-5c42-48f0-8d5b-bfea48ac5e05
│       └── data
│           └── mybucket
│               └── life.txt
│                   └── xl.meta
└── f9474487-102f-4435-97a2-5a0a50fa98e8
    └── pvc-761fe37d-09b7-4e7c-9625-8a2adacec35c
        └── data
            └── mybucket

20 directories, 3 files

# Request healing again, this time naming the object
(⎈|HomeLab:N/A) root@k8s-ctr:~# mc admin heal k8s-tenant1/mybucket/life.txt --insecure
 ◐  mybucket/life.txt
    0/1 objects; 65 KiB in -4s
    ┌────────┬───┬─────────────────────┐
    │ Green  │ 2 │ 100.0% ████████████ │
    │ Yellow │ 0 │   0.0%              │
    │ Red    │ 0 │   0.0%              │
    │ Grey   │ 0 │   0.0%              │
    └────────┴───┴─────────────────────┘

# Re-check
root@k8s-w2:~# tree /var/lib/directpv/mnt/
/var/lib/directpv/mnt/
├── 7f34ec30-ad93-4506-a0ca-657163eb5fc3
│   └── pvc-4195aaee-bc68-49d9-baae-24008b99e37d
│       └── data
│           └── mybucket
│               └── life.txt
│                   └── xl.meta
├── d1e0638b-b1bc-4276-a3a9-71ae8366f11b
│   └── pvc-9ed305a2-2be1-429d-b41b-4f8b61d1e89c
│       └── data
│           └── mybucket
│               └── life.txt
│                   └── xl.meta
├── e9e46095-447a-4e09-afa0-817a75e36893
│   └── pvc-b6c9174e-5c42-48f0-8d5b-bfea48ac5e05
│       └── data
│           └── mybucket
│               └── life.txt
│                   └── xl.meta
└── f9474487-102f-4435-97a2-5a0a50fa98e8
    └── pvc-761fe37d-09b7-4e7c-9625-8a2adacec35c
        └── data
            └── mybucket
                └── life.txt
                    └── xl.meta

21 directories, 4 files
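
As a final sanity check, the healed object can also be verified end-to-end from the client side, for example:

# Verify the healed object: metadata via stat, full read via cat
mc stat k8s-tenant1/mybucket/life.txt --insecure
mc cat k8s-tenant1/mybucket/life.txt --insecure | head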

 

That wraps up provisioning local disks with DirectPV on a local Kubernetes cluster and running MinIO in an MNMD deployment on top of it.

To tear down the lab environment, run the following command.

vagrant destroy -f
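
Depending on the VirtualBox version, the per-worker data disks created in the Vagrantfile may be left behind after destroy; if so, they can be removed manually (the filenames follow the disk-w#-#.vdi pattern from the Vagrantfile):

# Clean up any leftover data disks created by the Vagrantfile
rm -f disk-w*-*.vdi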

 

Closing Thoughts

Over the past three weeks, we have explored MinIO object storage.

 

In [1] MinIO Overview, we built an understanding of object storage and covered MinIO's key concepts and how it operates.

In [2] Trying Out MinIO, we set up SNSD and SNMD with MinIO in a Docker environment, and then deployed MinIO on Kubernetes.

In [3] MinIO - Direct PV, we examined DirectPV, configured it on local disks in an AWS EC2 environment, and set up MinIO on k3s.

Finally, in this post we walked through configuring DirectPV and deploying MinIO in MNMD mode.

 

[Note] This post is based on what I learned while participating in the MinIO study organized by the CloudNet study group.
