kind: Calico CNI Check 1

한명 2023. 11. 5. 13:36

This post is a hands-on exercise based on Chapter 5 of 'Core Kubernetes' (코어 쿠버네티스).

 

We test the Calico CNI on kind.

Provide kind with the config below to disable the default CNI, and create the cluster.

# cat kind-Calico-conf.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true
  podSubnet: 192.168.0.0/16
nodes:
- role: control-plane
- role: worker

For kind configuration options, refer to the documentation below.

https://kind.sigs.k8s.io/docs/user/configuration/

The apiVersion has changed since the book was published, so use kind.x-k8s.io rather than kind.sigs.k8s.io; with the old value, cluster creation fails as shown below.

# kind create cluster --name=calico --config=./kind-Calico-conf.yaml
ERROR: failed to create cluster: unknown apiVersion: kind.sigs.k8s.io/v1alpha4

If creation completes successfully, you can verify it as shown below. Since the default CNI is disabled, the nodes staying NotReady and CoreDNS staying Pending is the expected result.

# kind create cluster --name=calico --config=./kind-Calico-conf.yaml
Creating cluster "calico" ...
 ✓ Ensuring node image (kindest/node:v1.27.3) 🖼
 ✓ Preparing nodes 📦 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing StorageClass 💾
 ✓ Joining worker nodes 🚜
Set kubectl context to "kind-calico"
You can now use your cluster with:

kubectl cluster-info --context kind-calico

Have a nice day! 👋
# kubectl get no
NAME                   STATUS     ROLES           AGE   VERSION
calico-control-plane   NotReady   control-plane   40s   v1.27.3
calico-worker          NotReady   <none>          19s   v1.27.3
# kubectl get po -n kube-system
NAME                                           READY   STATUS    RESTARTS   AGE
coredns-5d78c9869d-5zkr4                       0/1     Pending   0          44s
coredns-5d78c9869d-t2c9x                       0/1     Pending   0          44s
etcd-calico-control-plane                      1/1     Running   0          57s
kube-apiserver-calico-control-plane            1/1     Running   0          57s
kube-controller-manager-calico-control-plane   1/1     Running   0          59s
kube-proxy-kqsfb                               1/1     Running   0          39s
kube-proxy-qqmw4                               1/1     Running   0          44s
kube-scheduler-calico-control-plane            1/1     Running   0          57s
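If you want to confirm why CoreDNS stays Pending, you can check the node taints and the pod events (a quick check; the exact messages may vary by version). With no CNI installed, the nodes carry the node.kubernetes.io/not-ready taint, which the CoreDNS pods do not tolerate, so the scheduler cannot place them. CoreDNS pods carry the k8s-app=kube-dns label.

# kubectl describe no calico-control-plane | grep -A2 -i taints
# kubectl -n kube-system describe po -l k8s-app=kube-dns | tail -20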

 

 

Now install the Calico CNI as follows.

First, create the Calico operator and the CRDs.

# kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.3/manifests/tigera-operator.yaml
namespace/tigera-operator created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created

This step creates the tigera-operator namespace and the operator Deployment.

# kubectl get po -A -w
NAMESPACE            NAME                                           READY   STATUS    RESTARTS   AGE
kube-system          coredns-5d78c9869d-5zkr4                       0/1     Pending   0          8m42s
kube-system          coredns-5d78c9869d-t2c9x                       0/1     Pending   0   ..
tigera-operator      tigera-operator-f6bb878c4-x4whq                1/1     Running   0          56s
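Optionally, before moving on, you can wait for the operator Deployment to finish rolling out:

# kubectl rollout status deployment/tigera-operator -n tigera-operator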

 

Now install Calico itself by creating the custom resources.

# kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.3/manifests/custom-resources.yaml
installation.operator.tigera.io/default created
apiserver.operator.tigera.io/default created
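For reference, custom-resources.yaml defines an Installation and an APIServer custom resource that the operator acts on. The sketch below is only an approximation of the v3.26 defaults, not the verbatim file; the important point is that the IP pool CIDR must match the podSubnet (192.168.0.0/16) chosen in the kind config above.

apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
    - cidr: 192.168.0.0/16        # must match the kind podSubnet
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
---
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}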

 

As shown below, the nodes become Ready, and the Calico pods, as well as the pods that had been Pending because there was no network, are all Running.

# kubectl get po -A
NAMESPACE            NAME                                           READY   STATUS    RESTARTS   AGE
calico-apiserver     calico-apiserver-76cc85cf7f-b2j7p              1/1     Running   0          5m24s
calico-apiserver     calico-apiserver-76cc85cf7f-lgnjs              1/1     Running   0          5m24s
calico-system        calico-kube-controllers-5f6db5bc7b-ntvhc       1/1     Running   0          9m56s
calico-system        calico-node-45w75                              1/1     Running   0          9m57s
calico-system        calico-node-s8nzv                              1/1     Running   0          9m57s
calico-system        calico-typha-5bbf9665bd-5mf7t                  1/1     Running   0          9m57s
calico-system        csi-node-driver-f4958                          2/2     Running   0          9m56s
calico-system        csi-node-driver-v54cj                          2/2     Running   0          9m56s
kube-system          coredns-5d78c9869d-5zkr4                       1/1     Running   0          21m
kube-system          coredns-5d78c9869d-t2c9x                       1/1     Running   0          21m
kube-system          etcd-calico-control-plane                      1/1     Running   0          21m
kube-system          kube-apiserver-calico-control-plane            1/1     Running   0          21m
kube-system          kube-controller-manager-calico-control-plane   1/1     Running   0          21m
kube-system          kube-proxy-kqsfb                               1/1     Running   0          21m
kube-system          kube-proxy-qqmw4                               1/1     Running   0          21m
kube-system          kube-scheduler-calico-control-plane            1/1     Running   0          21m
local-path-storage   local-path-provisioner-6bc4bddd6b-jhj7s        1/1     Running   0          21m
tigera-operator      tigera-operator-f6bb878c4-x4whq                1/1     Running   0          13m
# kubectl get no
NAME                   STATUS   ROLES           AGE   VERSION
calico-control-plane   Ready    control-plane   22m   v1.27.3
calico-worker          Ready    <none>          21m   v1.27.3
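At this point a quick pod-to-pod connectivity check can confirm that the CNI is actually routing traffic. The pod names and image below are made up for illustration:

# kubectl run pod-a --image=busybox --restart=Never -- sleep 3600
# kubectl run pod-b --image=busybox --restart=Never -- sleep 3600
# kubectl wait --for=condition=Ready pod/pod-a pod/pod-b --timeout=60s
# kubectl get po pod-b -o jsonpath='{.status.podIP}'
# kubectl exec pod-a -- ping -c 3 <pod-b IP>
# kubectl delete po pod-a pod-b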

The installation steps have changed since the book was published, so I referred to the link below. The operator-based installation now appears to be the standard approach.

https://docs.tigera.io/calico/latest/getting-started/kubernetes/kind

 

There also appears to have been an architectural change: where previously there was only the calico-node DaemonSet, a component called calico-typha is now deployed as well.

calico-node configures the BGP and IP routes each node needs, while Typha watches the API server and updates calico-node based on changes to Kubernetes resources and Calico custom resources.

 

https://docs.tigera.io/calico/latest/getting-started/kubernetes/hardway/install-typha

Typha sits between the Kubernetes API server and per-node daemons like Felix and confd (running in `calico/node`). It watches the Kubernetes resources and Calico custom resources used by these daemons, and whenever a resource changes it fans out the update to the daemons.
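You can see this split in the cluster: calico-node runs as a DaemonSet (one pod per node), while calico-typha runs as a Deployment.

# kubectl get ds,deploy -n calico-system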

I will write up additional Calico tests in a separate post.

 

Once you are done testing, you can delete the cluster with the command below.

kind delete cluster --name calico