This is Akimoto from the Service Management Group.
Last time, I introduced the installation flow for the CSI Driver for Dell EMC Isilon:
Trying the CSI Driver for Dell EMC Isilon (Installation)
This time, I'd like to show how the first of the three tests in the Product Guide behaves.
The Product Guide covers these tests:
- Test the CSI driver for Dell EMC Isilon
- Test creating snapshots
- Test restoring from a snapshot

Test the CSI driver for Dell EMC Isilon
This test creates two PVCs (PersistentVolumeClaims) and a StatefulSet, and verifies that the container in the Pod created from the StatefulSet NFS-mounts the PVs provisioned for those PVCs.
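For reference, here is a rough sketch of what the 2vols chart boils down to. The PVC matches the chart template shown later in this post; the StatefulSet part is my reconstruction from the describe output below, so treat details such as serviceName as assumptions (the pvol1 PVC and its /data1 mount are analogous and omitted for brevity):

# Illustrative sketch only -- the real manifests live in the 2vols Helm chart.
cat <<'EOF' | kubectl apply -n test -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvol0
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 8Gi
  storageClassName: isilon
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: isilontest
spec:
  serviceName: isilontest   # assumption; not visible in the describe output
  replicas: 1
  selector:
    matchLabels:
      app: isilontest
  template:
    metadata:
      labels:
        app: isilontest
    spec:
      containers:
      - name: test
        image: docker.io/centos:latest
        command: ["/bin/sleep", "3600"]
        volumeMounts:
        - name: pvol0
          mountPath: /data0
      volumes:
      - name: pvol0
        persistentVolumeClaim:
          claimName: pvol0
EOF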
1. Create the test namespace
[root@node1 ~]# kubectl create namespace test
2. Change to the test directory
[root@node1 ~]# cd csi-isilon/test/helm/
3. Run the test script, which mounts two volumes
[root@node1 helm]# sh ./starttest.sh -t 2vols -n test
NAME:   2vols
LAST DEPLOYED: Fri Jan 10 11:35:31 2020
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/PersistentVolumeClaim
NAME   AGE
pvol0  0s
pvol1  0s

==> v1/Pod(related)
NAME          AGE
isilontest-0  0s

==> v1/ServiceAccount
NAME        AGE
isilontest  0s

==> v1/StatefulSet
NAME        AGE
isilontest  0s

waiting 60 seconds on pod to initialize
Name:               isilontest-0
Namespace:          test
Priority:           0
PriorityClassName:  <none>
Node:               node3/172.16.26.26
Start Time:         Fri, 10 Jan 2020 11:35:51 +0900
Labels:             app=isilontest
                    controller-revision-hash=isilontest-744dc88545
                    statefulset.kubernetes.io/pod-name=isilontest-0
Annotations:        <none>
Status:             Running
IP:                 172.16.26.26
Controlled By:      StatefulSet/isilontest
Containers:
  test:
    Container ID:  docker://a61daef38de38ed850a6864701465b8bb4fe34903879d88f519be02708ddf1c1
    Image:         docker.io/centos:latest
    Image ID:      docker-pullable://centos@sha256:f94c1d992c193b3dc09e297ffd54d8a4f1dc946c37cbeceb26d35ce1647f88d9
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sleep
      3600
    State:          Running
      Started:      Fri, 10 Jan 2020 11:36:02 +0900
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /data0 from pvol0 (rw)
      /data1 from pvol1 (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from isilontest-token-vm75m (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  pvol0:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvol0
    ReadOnly:   false
  pvol1:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvol1
    ReadOnly:   false
  isilontest-token-vm75m:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  isilontest-token-vm75m
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                  Age                From                     Message
  ----     ------                  ----               ----                     -------
  Warning  FailedScheduling        50s (x5 over 61s)  default-scheduler        pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
  Normal   Scheduled               41s                default-scheduler        Successfully assigned test/isilontest-0 to node3
  Normal   SuccessfulAttachVolume  41s                attachdetach-controller  AttachVolume.Attach succeeded for volume "k8s-df3e5dbb33"
  Normal   SuccessfulAttachVolume  41s                attachdetach-controller  AttachVolume.Attach succeeded for volume "k8s-df3ebd2c33"
  Normal   Pulling                 34s                kubelet, node3           Pulling image "docker.io/centos:latest"
  Normal   Pulled                  30s                kubelet, node3           Successfully pulled image "docker.io/centos:latest"
  Normal   Created                 30s                kubelet, node3           Created container test
  Normal   Started                 30s                kubelet, node3           Started container test
172.16.27.157:/ifs/data/csi/k8s-df3ebd2c33 231972494336 21611520 223778394112 1% /data0
172.16.27.157:/ifs/data/csi/k8s-df3e5dbb33 231972494336 21611520 223778394112 1% /data1
172.16.27.157:/ifs/data/csi/k8s-df3ebd2c33 on /data0 type nfs (rw,relatime,vers=3,rsize=131072,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=172.16.27.157,mountvers=3,mountport=300,mountproto=udp,local_lock=none,addr=172.16.27.157)
172.16.27.157:/ifs/data/csi/k8s-df3e5dbb33 on /data1 type nfs (rw,relatime,vers=3,rsize=131072,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=172.16.27.157,mountvers=3,mountport=300,mountproto=udp,local_lock=none,addr=172.16.27.157)
[root@node1 helm]#
Let's check the state on the Kubernetes side.
Pods
[root@node1 helm]# kubectl get pods -n test
NAME           READY   STATUS    RESTARTS   AGE
isilontest-0   1/1     Running   0          2m24s
[root@node1 helm]#
[root@node1 helm]# kubectl describe pods -n test isilontest-0
Name:               isilontest-0
Namespace:          test
Priority:           0
PriorityClassName:  <none>
Node:               node3/172.16.26.26
Start Time:         Fri, 10 Jan 2020 11:35:51 +0900
Labels:             app=isilontest
                    controller-revision-hash=isilontest-744dc88545
                    statefulset.kubernetes.io/pod-name=isilontest-0
Annotations:        <none>
Status:             Running
IP:                 172.16.26.26
Controlled By:      StatefulSet/isilontest
Containers:
  test:
    Container ID:  docker://a61daef38de38ed850a6864701465b8bb4fe34903879d88f519be02708ddf1c1
    Image:         docker.io/centos:latest
    Image ID:      docker-pullable://centos@sha256:f94c1d992c193b3dc09e297ffd54d8a4f1dc946c37cbeceb26d35ce1647f88d9
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sleep
      3600
    State:          Running
      Started:      Fri, 10 Jan 2020 11:36:02 +0900
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /data0 from pvol0 (rw)
      /data1 from pvol1 (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from isilontest-token-vm75m (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  pvol0:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvol0
    ReadOnly:   false
  pvol1:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvol1
    ReadOnly:   false
  isilontest-token-vm75m:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  isilontest-token-vm75m
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                  Age                   From                     Message
  ----     ------                  ----                  ----                     -------
  Warning  FailedScheduling        2m50s (x5 over 3m1s)  default-scheduler        pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
  Normal   Scheduled               2m41s                 default-scheduler        Successfully assigned test/isilontest-0 to node3
  Normal   SuccessfulAttachVolume  2m41s                 attachdetach-controller  AttachVolume.Attach succeeded for volume "k8s-df3e5dbb33"
  Normal   SuccessfulAttachVolume  2m41s                 attachdetach-controller  AttachVolume.Attach succeeded for volume "k8s-df3ebd2c33"
  Normal   Pulling                 2m34s                 kubelet, node3           Pulling image "docker.io/centos:latest"
  Normal   Pulled                  2m30s                 kubelet, node3           Successfully pulled image "docker.io/centos:latest"
  Normal   Created                 2m30s                 kubelet, node3           Created container test
  Normal   Started                 2m30s                 kubelet, node3           Started container test
[root@node1 helm]#
PVC
[root@node1 helm]# kubectl get pvc -n test
NAME    STATUS   VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvol0   Bound    k8s-df3ebd2c33   8Gi        RWO            isilon         3m43s
pvol1   Bound    k8s-df3e5dbb33   12Gi       RWO            isilon         3m43s
[root@node1 helm]#
PV
[root@node1 helm]# kubectl get pv
NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM        STORAGECLASS   REASON   AGE
k8s-df3e5dbb33   12Gi       RWO            Delete           Bound    test/pvol1   isilon                  4m36s
k8s-df3ebd2c33   8Gi        RWO            Delete           Bound    test/pvol0   isilon                  4m46s
[root@node1 helm]#
Check the volume mount status
[root@node1 helm]# kubectl exec -n test isilontest-0 df
Filesystem 1K-blocks Used Available Use% Mounted on
overlay 36805060 3090888 33714172 9% /
tmpfs 65536 0 65536 0% /dev
tmpfs 1940168 0 1940168 0% /sys/fs/cgroup
172.16.27.157:/ifs/data/csi/k8s-df3ebd2c33 231972494336 21612544 223778393088 1% /data0
172.16.27.157:/ifs/data/csi/k8s-df3e5dbb33 231972494336 21612544 223778393088 1% /data1
/dev/mapper/centos-root 36805060 3090888 33714172 9% /etc/hosts
shm 65536 0 65536 0% /dev/shm
tmpfs 1940168 12 1940156 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs 1940168 0 1940168 0% /proc/acpi
tmpfs 1940168 0 1940168 0% /proc/scsi
tmpfs 1940168 0 1940168 0% /sys/firmware
[root@node1 helm]#
On the Isilon side
H400-1# ls -l /ifs/data/csi
total 64
drwxrwxrwx    2 root  wheel  0 Jan 10 11:34 k8s-df3e5dbb33
drwxrwxrwx    2 root  wheel  0 Jan 10 11:34 k8s-df3ebd2c33
H400-1#
H400-1# isi nfs exports list
ID   Zone     Paths                          Description
-------------------------------------------------------
1    System   /ifs                           Default export
2    System   /ifs/data/csi/k8s-df3ebd2c33   -
3    System   /ifs/data/csi/k8s-df3e5dbb33   -
-------------------------------------------------------
Total: 3
H400-1#
H400-1# isi nfs exports view 2
                      ID: 2
                    Zone: System
                   Paths: /ifs/data/csi/k8s-df3ebd2c33
             Description: -
                 Clients: 172.16.26.26
            Root Clients: -
       Read Only Clients: -
      Read Write Clients: -
                All Dirs: No
              Block Size: 8.0k
            Can Set Time: Yes
        Case Insensitive: No
         Case Preserving: Yes
        Chown Restricted: No
     Commit Asynchronous: No
 Directory Transfer Size: 128.0k
                Encoding: DEFAULT
                Link Max: 32767
          Map Lookup UID: No
               Map Retry: Yes
                Map Root
                 Enabled: True
                    User: nobody
           Primary Group: -
        Secondary Groups: -
            Map Non Root
                 Enabled: False
                    User: nobody
           Primary Group: -
        Secondary Groups: -
             Map Failure
                 Enabled: False
                    User: nobody
           Primary Group: -
        Secondary Groups: -
                Map Full: Yes
           Max File Size: 8192.00000P
           Name Max Size: 255
             No Truncate: No
               Read Only: No
             Readdirplus: Yes
    Readdirplus Prefetch: 10
   Return 32Bit File Ids: No
  Read Transfer Max Size: 1.00M
  Read Transfer Multiple: 512
      Read Transfer Size: 128.0k
           Security Type: unix
    Setattr Asynchronous: No
                Snapshot: -
                Symlinks: Yes
              Time Delta: 1.0 ns
   Write Datasync Action: datasync
    Write Datasync Reply: datasync
   Write Filesync Action: filesync
    Write Filesync Reply: filesync
   Write Unstable Action: unstable
    Write Unstable Reply: unstable
 Write Transfer Max Size: 1.00M
 Write Transfer Multiple: 512
     Write Transfer Size: 512.0k
H400-1#
H400-1# isi nfs exports view 3
                      ID: 3
                    Zone: System
                   Paths: /ifs/data/csi/k8s-df3e5dbb33
             Description: -
                 Clients: 172.16.26.26
            Root Clients: -
       Read Only Clients: -
      Read Write Clients: -
                All Dirs: No
              Block Size: 8.0k
            Can Set Time: Yes
        Case Insensitive: No
         Case Preserving: Yes
        Chown Restricted: No
     Commit Asynchronous: No
 Directory Transfer Size: 128.0k
                Encoding: DEFAULT
                Link Max: 32767
          Map Lookup UID: No
               Map Retry: Yes
                Map Root
                 Enabled: True
                    User: nobody
           Primary Group: -
        Secondary Groups: -
            Map Non Root
                 Enabled: False
                    User: nobody
           Primary Group: -
        Secondary Groups: -
             Map Failure
                 Enabled: False
                    User: nobody
           Primary Group: -
        Secondary Groups: -
                Map Full: Yes
           Max File Size: 8192.00000P
           Name Max Size: 255
             No Truncate: No
               Read Only: No
             Readdirplus: Yes
    Readdirplus Prefetch: 10
   Return 32Bit File Ids: No
  Read Transfer Max Size: 1.00M
  Read Transfer Multiple: 512
      Read Transfer Size: 128.0k
           Security Type: unix
    Setattr Asynchronous: No
                Snapshot: -
                Symlinks: Yes
              Time Delta: 1.0 ns
   Write Datasync Action: datasync
    Write Datasync Reply: datasync
   Write Filesync Action: filesync
    Write Filesync Reply: filesync
   Write Unstable Action: unstable
    Write Unstable Reply: unstable
 Write Transfer Max Size: 1.00M
 Write Transfer Multiple: 512
     Write Transfer Size: 512.0k
H400-1#
We've confirmed that, through Kubernetes operations alone, directories are created on the Isilon side and NFS exports are configured.
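If you want to trace this mapping from the Kubernetes side, the PV object records the CSI volume handle corresponding to the Isilon directory. A quick check (the volume name is from this run, and the exact spec layout can vary by driver version):

# Show the full PV, then just the CSI volume handle behind it.
kubectl describe pv k8s-df3ebd2c33
kubectl get pv k8s-df3ebd2c33 -o jsonpath='{.spec.csi.volumeHandle}{"\n"}'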
Verifying data persistence
Let's create some files, then recreate the pod by scaling with --replicas=0 and back to 1, and check the result.
[root@node1 helm]# kubectl exec -n test isilontest-0 -- touch /data0/aaa
[root@node1 helm]# kubectl exec -n test isilontest-0 -- ls -l /data0
total 24
-rw-r--r-- 1 nobody nobody 0 Jan 10 02:56 aaa
[root@node1 helm]# kubectl exec -n test isilontest-0 -- touch /data1/bbb
[root@node1 helm]# kubectl exec -n test isilontest-0 -- ls -l /data1
total 24
-rw-r--r-- 1 nobody nobody 0 Jan 10 02:57 bbb
[root@node1 helm]#
[root@node1 helm]# kubectl get pods -n test -o wide
NAME           READY   STATUS    RESTARTS   AGE   IP             NODE    NOMINATED NODE   READINESS GATES
isilontest-0   1/1     Running   0          24m   172.16.26.26   node3   <none>           <none>
[root@node1 helm]# kubectl scale -n test statefulset isilontest --replicas=0
statefulset.apps/isilontest scaled
[root@node1 helm]# kubectl get pods -n test -o wide
NAME           READY   STATUS        RESTARTS   AGE   IP             NODE    NOMINATED NODE   READINESS GATES
isilontest-0   1/1     Terminating   0          24m   172.16.26.26   node3   <none>           <none>
[root@node1 helm]# kubectl get pods -n test -o wide
No resources found.
[root@node1 helm]# kubectl scale -n test statefulset isilontest --replicas=1
statefulset.apps/isilontest scaled
[root@node1 helm]# kubectl get pods -n test -o wide
NAME           READY   STATUS              RESTARTS   AGE   IP             NODE    NOMINATED NODE   READINESS GATES
isilontest-0   0/1     ContainerCreating   0          6s    172.16.26.26   node3   <none>           <none>
[root@node1 helm]# kubectl get pods -n test -o wide
NAME           READY   STATUS    RESTARTS   AGE   IP             NODE    NOMINATED NODE   READINESS GATES
isilontest-0   1/1     Running   0          30s   172.16.26.26   node3   <none>           <none>
[root@node1 helm]#
The files are unaffected.
[root@node1 helm]# kubectl exec -n test isilontest-0 -- ls -l /data0
total 24
-rw-r--r-- 1 nobody nobody 0 Jan 10 02:56 aaa
[root@node1 helm]# kubectl exec -n test isilontest-0 -- ls -l /data1
total 24
-rw-r--r-- 1 nobody nobody 0 Jan 10 02:57 bbb
[root@node1 helm]#
The NFS exports on the Isilon side are unchanged as well.
H400-1# isi nfs exports list
ID   Zone     Paths                          Description
-------------------------------------------------------
1    System   /ifs                           Default export
2    System   /ifs/data/csi/k8s-df3ebd2c33   -
3    System   /ifs/data/csi/k8s-df3e5dbb33   -
-------------------------------------------------------
Total: 3
H400-1#
Now, what happens if the pod moves to a different node? The NFS exports on the Isilon side restrict clients by IP address, so we'll check that side as well.
H400-1# isi nfs export view 2 | grep Client
                 Clients: 172.16.26.26
            Root Clients: -
       Read Only Clients: -
      Read Write Clients: -
H400-1# isi nfs export view 3 | grep Client
                 Clients: 172.16.26.26
            Root Clients: -
       Read Only Clients: -
      Read Write Clients: -
H400-1#
We'll move the pod with a drain.
[root@node1 helm]# kubectl get node
NAME    STATUS   ROLES    AGE   VERSION
node1   Ready    master   21d   v1.14.10
node2   Ready    <none>   21d   v1.14.10
node3   Ready    <none>   21d   v1.14.10
[root@node1 helm]#
[root@node1 helm]# kubectl get pods -n test --show-labels
NAME           READY   STATUS    RESTARTS   AGE   LABELS
isilontest-0   1/1     Running   0          10m   app=isilontest,controller-revision-hash=isilontest-744dc88545,statefulset.kubernetes.io/pod-name=isilontest-0
[root@node1 helm]#
[root@node1 helm]# kubectl get pods -n test -o wide
NAME           READY   STATUS    RESTARTS   AGE   IP             NODE    NOMINATED NODE   READINESS GATES
isilontest-0   1/1     Running   0          10m   172.16.26.26   node3   <none>           <none>
[root@node1 helm]#
[root@node1 helm]# kubectl drain node3 --pod-selector='app=isilontest'
node/node3 cordoned
evicting pod "isilontest-0"
pod/isilontest-0 evicted
node/node3 evicted
[root@node1 helm]# kubectl get pods -n test -o wide
NAME           READY   STATUS              RESTARTS   AGE   IP             NODE    NOMINATED NODE   READINESS GATES
isilontest-0   0/1     ContainerCreating   0          7s    172.16.26.25   node2   <none>           <none>
[root@node1 helm]# kubectl get pods -n test -o wide
NAME           READY   STATUS    RESTARTS   AGE   IP             NODE    NOMINATED NODE   READINESS GATES
isilontest-0   1/1     Running   0          19s   172.16.26.25   node2   <none>           <none>
[root@node1 helm]#
[root@node1 helm]# kubectl get nodes
NAME    STATUS                     ROLES    AGE   VERSION
node1   Ready                      master   21d   v1.14.10
node2   Ready                      <none>   21d   v1.14.10
node3   Ready,SchedulingDisabled   <none>   21d   v1.14.10
[root@node1 helm]#
[root@node1 helm]# kubectl uncordon node3
node/node3 uncordoned
[root@node1 helm]#
[root@node1 helm]# kubectl get nodes
NAME    STATUS   ROLES    AGE   VERSION
node1   Ready    master   21d   v1.14.10
node2   Ready    <none>   21d   v1.14.10
node3   Ready    <none>   21d   v1.14.10
[root@node1 helm]#
isilontest-0 appears to have moved to node2. What about the Isilon side?
H400-1# isi nfs exports list
ID   Zone     Paths                          Description
-------------------------------------------------------
1    System   /ifs                           Default export
2    System   /ifs/data/csi/k8s-df3ebd2c33   -
3    System   /ifs/data/csi/k8s-df3e5dbb33   -
-------------------------------------------------------
Total: 3
H400-1# isi nfs export view 2 | grep Client
                 Clients: 172.16.26.25
            Root Clients: -
       Read Only Clients: -
      Read Write Clients: -
H400-1# isi nfs export view 3 | grep Client
                 Clients: 172.16.26.25
            Root Clients: -
       Read Only Clients: -
      Read Write Clients: -
H400-1#
The client IP restriction on the NFS exports has been automatically updated to node2's IP.
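Behind the scenes this is the standard CSI attach/detach flow: rescheduling the pod produces a VolumeAttachment for node2, and the driver updates the export's client list when the volume is published to the new node. One rough way to watch it (the column names are mine via custom-columns, and output varies by Kubernetes version):

# List which node each CSI volume is currently attached to.
kubectl get volumeattachment \
  -o custom-columns=PV:.spec.source.persistentVolumeName,NODE:.spec.nodeName,ATTACHED:.status.attached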
There is no problem with file persistence either.
[root@node1 helm]# kubectl exec -n test isilontest-0 -- ls -l /data0
total 24
-rw-r--r-- 1 nobody nobody 0 Jan 10 02:56 aaa
[root@node1 helm]# kubectl exec -n test isilontest-0 -- ls -l /data1
total 24
-rw-r--r-- 1 nobody nobody 0 Jan 10 02:57 bbb
[root@node1 helm]#
What happens when we scale out? Let's find out.
[root@node1 helm]# kubectl scale -n test statefulset isilontest --replicas=2
statefulset.apps/isilontest scaled
[root@node1 helm]# kubectl get pods -n test -o wide
NAME           READY   STATUS              RESTARTS   AGE     IP             NODE    NOMINATED NODE   READINESS GATES
isilontest-0   1/1     Running             0          8m41s   172.16.26.25   node2   <none>           <none>
isilontest-1   0/1     ContainerCreating   0          83s     172.16.26.26   node3   <none>           <none>
[root@node1 helm]#
isilontest-1 remains stuck in ContainerCreating. Let's look at its describe output.
[root@node1 helm]# kubectl describe pods -n test isilontest-1
Name: isilontest-1
Namespace: test
Priority: 0
PriorityClassName: <none>
Node: node3/172.16.26.26
Start Time: Fri, 10 Jan 2020 12:20:17 +0900
Labels: app=isilontest
controller-revision-hash=isilontest-744dc88545
statefulset.kubernetes.io/pod-name=isilontest-1
Annotations: <none>
Status: Pending
IP: 172.16.26.26
Controlled By: StatefulSet/isilontest
Containers:
test:
Container ID:
Image: docker.io/centos:latest
Image ID:
Port: <none>
Host Port: <none>
Command:
/bin/sleep
3600
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/data0 from pvol0 (rw)
/data1 from pvol1 (rw)
/var/run/secrets/kubernetes.io/serviceaccount from isilontest-token-vm75m (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
pvol0:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: pvol0
ReadOnly: false
pvol1:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: pvol1
ReadOnly: false
isilontest-token-vm75m:
Type: Secret (a volume populated by a Secret)
SecretName: isilontest-token-vm75m
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m6s default-scheduler Successfully assigned test/isilontest-1 to node3
Warning FailedAttachVolume 2m6s attachdetach-controller Multi-Attach error for volume "k8s-df3ebd2c33" Volume is already used by pod(s) isilontest-0
Warning FailedAttachVolume 2m6s attachdetach-controller Multi-Attach error for volume "k8s-df3e5dbb33" Volume is already used by pod(s) isilontest-0
Warning FailedMount 3s kubelet, node3 Unable to mount volumes for pod "isilontest-1_test(203a17a7-3358-11ea-a609-0050569cf7a7)": timeout expired waiting for volumes to attach or mount for pod "test"/"isilontest-1". list of unmounted volumes=[pvol0 pvol1]. list of unattached volumes=[pvol0 pvol1 isilontest-token-vm75m]
[root@node1 helm]#
The mount fails with a Multi-Attach error. This is because the PVCs' ACCESS MODES is set to RWO (ReadWriteOnce).
[root@node1 helm]# kubectl get pvc -A
NAMESPACE   NAME    STATUS   VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test        pvol0   Bound    k8s-df3ebd2c33   8Gi        RWO            isilon         48m
test        pvol1   Bound    k8s-df3e5dbb33   12Gi       RWO            isilon         48m
[root@node1 helm]#
Changing ACCESS MODES to RWX (ReadWriteMany) should resolve this. The two Helm chart templates below appear to be the ones to change.
[root@node1 helm]# cat 2vols/templates/pvc0.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvol0
  namespace: {{ .Values.namespace }}
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 8Gi
  storageClassName: isilon
[root@node1 helm]# cat 2vols/templates/pvc1.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvol1
  namespace: {{ .Values.namespace }}
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 12Gi
  storageClassName: isilon
[root@node1 helm]#
Let's modify the Helm chart and try again. Before making the change, we stop the current test and delete the related resources.
[root@node1 helm]# sh ./stoptest.sh -t 2vols
release "2vols" deleted
NAME           READY   STATUS        RESTARTS   AGE
isilontest-0   1/1     Terminating   0          37m
waiting for persistent volumes to be cleaned up
No resources found.
deleting...
No resources found.
[root@node1 helm]#
[root@node1 helm]# kubectl get pods -n test
No resources found.
[root@node1 helm]#
[root@node1 helm]# kubectl get pvc -A
No resources found.
[root@node1 helm]#
[root@node1 helm]# kubectl get pv
No resources found.
[root@node1 helm]#
Stopping the test deletes the Pod, PVCs, and PVs.
Change ReadWriteOnce to ReadWriteMany in the Helm chart templates.
[root@node1 helm]# sed -i 's/ReadWriteOnce/ReadWriteMany/' 2vols/templates/pvc0.yaml
[root@node1 helm]# sed -i 's/ReadWriteOnce/ReadWriteMany/' 2vols/templates/pvc1.yaml
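Before rerunning, it doesn't hurt to confirm the edit took effect (an optional check on my part, not a step from the guide):

# Both templates should now request ReadWriteMany.
grep -n 'ReadWrite' 2vols/templates/pvc0.yaml 2vols/templates/pvc1.yaml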
Run the test script again.
[root@node1 helm]# sh ./starttest.sh -t 2vols -n test
[root@node1 helm]# kubectl get pods -n test
NAME           READY   STATUS    RESTARTS   AGE
isilontest-0   1/1     Running   0          81s
[root@node1 helm]# kubectl get pvc -A
NAMESPACE   NAME    STATUS   VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test        pvol0   Bound    k8s-dfab9ff933   8Gi        RWX            isilon         85s
test        pvol1   Bound    k8s-dfabab2733   12Gi       RWX            isilon         85s
[root@node1 helm]# kubectl get pv
NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM        STORAGECLASS   REASON   AGE
k8s-dfab9ff933   8Gi        RWX            Delete           Bound    test/pvol0   isilon                  92s
k8s-dfabab2733   12Gi       RWX            Delete           Bound    test/pvol1   isilon                  82s
[root@node1 helm]#
The PVCs' ACCESS MODES have been changed to RWX.
Let's check the files.
[root@node1 helm]# kubectl exec -n test isilontest-0 -- ls -l /data0
total 0
[root@node1 helm]# kubectl exec -n test isilontest-0 -- ls -l /data1
total 0
[root@node1 helm]#
Because the PVs' RECLAIM POLICY is Delete, the volumes were deleted and recreated, so the earlier files are gone.
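If you need volume data to survive this kind of teardown, the usual Kubernetes approach is a Retain reclaim policy, set in the StorageClass or patched onto an existing PV. This is general Kubernetes behavior rather than anything specific to this driver, and the PV name below is just the one from this run:

# Deleting the PVC will then leave the PV (and its backing Isilon directory) in place.
kubectl patch pv k8s-dfab9ff933 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'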
On the Isilon side, the directories and NFS exports have likewise been deleted and recreated.
H400-1# ls -l /ifs/data/csi
total 64
drwxrwxrwx    2 root  wheel  0 Jan 10 12:52 k8s-dfab9ff933
drwxrwxrwx    2 root  wheel  0 Jan 10 12:53 k8s-dfabab2733
H400-1# isi nfs export list
ID   Zone     Paths                          Description
-------------------------------------------------------
1    System   /ifs                           Default export
4    System   /ifs/data/csi/k8s-dfab9ff933   -
5    System   /ifs/data/csi/k8s-dfabab2733   -
-------------------------------------------------------
Total: 3
H400-1#
Let's create some files and then scale out the pod.
[root@node1 helm]# kubectl exec -n test isilontest-0 -- touch /data0/AAA
[root@node1 helm]# kubectl exec -n test isilontest-0 -- ls -l /data0
total 24
-rw-r--r-- 1 nobody nobody 0 Jan 10 04:00 AAA
[root@node1 helm]# kubectl exec -n test isilontest-0 -- touch /data1/BBB
[root@node1 helm]# kubectl exec -n test isilontest-0 -- ls -l /data1
total 24
-rw-r--r-- 1 nobody nobody 0 Jan 10 04:00 BBB
[root@node1 helm]#
[root@node1 helm]# kubectl get node
NAME    STATUS   ROLES    AGE   VERSION
node1   Ready    master   21d   v1.14.10
node2   Ready    <none>   21d   v1.14.10
node3   Ready    <none>   21d   v1.14.10
[root@node1 helm]#
[root@node1 helm]# kubectl get pods -n test -o wide
NAME           READY   STATUS    RESTARTS   AGE     IP             NODE    NOMINATED NODE   READINESS GATES
isilontest-0   1/1     Running   0          8m27s   172.16.26.26   node3   <none>           <none>
[root@node1 helm]#
[root@node1 helm]# kubectl scale -n test statefulset isilontest --replicas=2
statefulset.apps/isilontest scaled
[root@node1 helm]#
[root@node1 helm]# kubectl get pods -n test -o wide
NAME           READY   STATUS              RESTARTS   AGE     IP             NODE    NOMINATED NODE   READINESS GATES
isilontest-0   1/1     Running             0          8m48s   172.16.26.26   node3   <none>           <none>
isilontest-1   0/1     ContainerCreating   0          7s      172.16.26.25   node2   <none>           <none>
[root@node1 helm]# kubectl get pods -n test -o wide
NAME           READY   STATUS    RESTARTS   AGE    IP             NODE    NOMINATED NODE   READINESS GATES
isilontest-0   1/1     Running   0          9m9s   172.16.26.26   node3   <none>           <none>
isilontest-1   1/1     Running   0          28s    172.16.26.25   node2   <none>           <none>
[root@node1 helm]#
The pod created by the scale-out (isilontest-1) is now Running. Let's check the files.
[root@node1 helm]# kubectl exec -n test isilontest-1 -- ls -l /data0
total 24
-rw-r--r-- 1 nobody nobody 0 Jan 10 04:00 AAA
[root@node1 helm]# kubectl exec -n test isilontest-1 -- ls -l /data1
total 24
-rw-r--r-- 1 nobody nobody 0 Jan 10 04:00 BBB
[root@node1 helm]#
It has mounted the same volumes.
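As an extra check beyond what the guide's test does, you can write from one pod and read from the other to confirm they really share the same NFS-backed volume (the file name here is arbitrary):

# Write via isilontest-1, read via isilontest-0.
kubectl exec -n test isilontest-1 -- touch /data0/from-pod1
kubectl exec -n test isilontest-0 -- ls -l /data0/from-pod1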
On the Isilon side, the IP of the node running the scaled-out pod has been added to the client IP restrictions of the NFS exports.
H400-1# isi nfs export view 4 | grep Client
                 Clients: 172.16.26.25, 172.16.26.26
            Root Clients: -
       Read Only Clients: -
      Read Write Clients: -
H400-1# isi nfs export view 5 | grep Client
                 Clients: 172.16.26.25, 172.16.26.26
            Root Clients: -
       Read Only Clients: -
      Read Write Clients: -
H400-1#
Summary
Over the previous post and this one, I've introduced the installation flow for the CSI Driver for Dell EMC Isilon and how its bundled test behaves. We saw that installing this CSI driver makes it easy to use the highly available and scalable Dell EMC Isilon as persistent storage for Kubernetes. The CSI Driver for Dell EMC Isilon is already production-grade, so I hope you will consider it as a new way to put Dell EMC Isilon to work.
The information in this blog post is based on the results of our own in-house testing;
it is not intended as a guarantee of quality.