The Container Storage Interface (CSI) is a standard for exposing block and file storage systems to containerized workloads on container orchestration systems such as Kubernetes.
A CSI driver is responsible for provisioning, attaching, mounting, and managing storage volumes. It enables dynamic volume provisioning and allows workloads to consume persistent storage.
By integrating CTERA as a CSI provider for filesystems, Kubernetes workloads can seamlessly consume persistent, globally accessible, and versioned file storage across distributed environments.
Using the CTERA CSI driver, Kubernetes users can:
- Provision Persistent Volumes (PVs) – Storage administrators can define storage classes, allowing workloads to dynamically request volumes that map to the global filesystem.
- Mount CTERA storage into pods – Applications running in Kubernetes can mount volumes backed by CTERA’s global namespace, ensuring unstructured data accessibility across multiple locations and clusters.
- Enable Hybrid Cloud Workflows – Data stored in CTERA is globally accessible, making it ideal for hybrid cloud deployments where workloads span on-premises and cloud environments.
- Improve Data Availability and Performance – With edge caching, Kubernetes workloads in remote locations can access frequently used data with low latency while maintaining centralized control in the cloud.
This article applies to the CSI SMB driver, which has been tested and certified by CTERA. The CSI NFS driver has not been tested.
This article outlines the steps to deploy and configure the CSI SMB driver on a Kubernetes cluster. The driver allows mounting an SMB share from the CTERA Edge Filer. The setup includes installing the CSI driver and configuring Persistent Volumes (PVs) and Persistent Volume Claims (PVCs) for containerized applications.
The shares that you want Kubernetes workloads to consume must first be created on the edge filer, as described in Configuring CTERA Edge Filer Shares for your edge filer version (for example, version 7.8.x).
The procedure ensures that pods can mount the SMB share, write files, and read them back, verifying successful integration.
Step-by-step guide
The steps in the following procedure require you to install Helm on a machine that has access to the Kubernetes cluster. For details, see https://helm.sh/docs/intro/install/.
The SMB CSI driver is installed using Helm. For details, see [csi-driver-smb/charts at master · kubernetes-csi/csi-driver-smb](https://github.com/kubernetes-csi/csi-driver-smb/tree/master/charts){target=_blank}.
To install the CSI SMB Driver:
- On a machine that has access to the Kubernetes cluster, run the following Helm commands:
helm repo add csi-driver-smb https://raw.githubusercontent.com/kubernetes-csi/csi-driver-smb/master/charts
helm install csi-driver-smb csi-driver-smb/csi-driver-smb --namespace kube-system --version v1.17.0
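If you want to confirm which chart versions are available before pinning v1.17.0, you can check with standard Helm commands. This is an optional sketch, not part of the certified procedure:

```bash
# Refresh the local chart index and list the published csi-driver-smb chart versions
helm repo update
helm search repo csi-driver-smb/csi-driver-smb --versions
```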
- Verify the installation was successful by running the following command:
kubectl get pods -n kube-system | grep smb
The expected output should be similar to the following:

```
csi-smb-controller-ffdb55686-9s24d   4/4   Running   2 (25h ago)   25h
csi-smb-node-88pbk                   3/3   Running   1 (25h ago)   25h
csi-smb-node-x4jmw                   3/3   Running   1 (25h ago)   25h
```
- Create Kubernetes Resources using YAML configurations:
- Create SMB credentials (cred.yaml)
Where the username and the password are for a user that can access the shares.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: smbcreds
  namespace: default
type: Opaque
data:
  username: *********=   # Base64-encoded "*****"
  password: ********==   # Base64-encoded "*******"
```
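The Secret values must be Base64-encoded. As a minimal sketch (the user name ctera-user and password shown here are placeholders, not values from this article), you can generate the encoded values, or the whole manifest, on the same machine:

```bash
# Encode credentials manually; -n avoids encoding a trailing newline
echo -n 'ctera-user' | base64
echo -n 'MyP@ssw0rd' | base64

# Or let kubectl generate an equivalent cred.yaml
kubectl create secret generic smbcreds --namespace default \
  --from-literal=username='ctera-user' \
  --from-literal=password='MyP@ssw0rd' \
  --dry-run=client -o yaml > cred.yaml
```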
- Create a persistent volume (pv.yaml)
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: smb.csi.k8s.io
  name: pv-smb-one
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: smb
  mountOptions:
    - dir_mode=0777
    - file_mode=0777
    - uid=1001
    - gid=1001
    - noperm
    - mfsymlinks
    - cache=strict
    - noserverino  # required to prevent data corruption
  csi:
    driver: smb.csi.k8s.io
    # volumeHandle format: {smb-server-address}#{sub-dir-name}#{share-name}
    # make sure this value is unique for every share in the cluster
    volumeHandle: "192.168.9.169#cloudelinor"
    volumeAttributes:
      source: "//192.168.9.169/cloud-elinor"
    nodeStageSecretRef:
      name: smbcreds
      namespace: default
```

Where:
volumeHandle: "192.168.9.169#cloudelinor" is a unique name.
source: "//192.168.9.169/cloud-elinor" includes the IP address of the edge filer, 192.168.9.169, and the cloud folder name, cloud-elinor.
- Create a Persistent Volume Claim (pvc.yaml)
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-smb-one
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  volumeName: pv-smb-one
  storageClassName: smb
```
- Optionally, create a test pod (pod.yaml)
pod.yaml tests the integration using the PVC declared in pvc.yaml.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-smb-pod-one
  namespace: default
spec:
  restartPolicy: Never
  nodeSelector:
    "kubernetes.io/os": linux
  containers:
    - name: test-container
      image: busybox
      command:
        - "/bin/sh"
        - "-c"
        - |
          echo "Testing SMB PVC" > /mnt/testfile.txt
          cat /mnt/testfile.txt
          ls /mnt
      volumeMounts:
        - name: smb
          mountPath: "/mnt/"
          readOnly: false
  volumes:
    - name: smb
      persistentVolumeClaim:
        claimName: pvc-smb-one
```
- Deploy the Kubernetes resources.
After creating the YAML files, apply them in the following order:
kubectl apply -f pv.yaml
kubectl apply -f pvc.yaml
kubectl apply -f cred.yaml
kubectl apply -f pod.yaml
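If you want to watch the test pod come up after applying the manifests, the following standard kubectl checks (optional, not part of the procedure above) can help. The pod is expected to end in the Completed state, because its test command exits after writing the file:

```bash
# Follow the test pod until it finishes; it should end in the Completed state
kubectl get pod test-smb-pod-one -n default -w

# If the pod stays in ContainerCreating, the mount events usually explain why
kubectl describe pod test-smb-pod-one -n default
```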
- Verify the deployment.
- Check if PV and PVC are bound by running the following commands:
kubectl get pv
kubectl get pvc
The expected output should be similar to the following:
```
NAME      CAPACITY   ACCESS MODES   STORAGECLASS   STATUS
pv-smb    100Gi      RWX            smb            Available
pvc-smb   100Gi      RWX            smb            Bound
```
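If the PVC does not reach the Bound state, describing the objects usually shows the binding or mount error. This is a general troubleshooting sketch using the resource names from this article, not a required step:

```bash
# Inspect binding events for the static PV/PVC pair used in this article
kubectl describe pv pv-smb-one
kubectl describe pvc pvc-smb-one -n default
```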
- Check the pod logs to verify read/write permissions by running the following command:
kubectl logs test-smb-pod-one
The expected output should be similar to the following:

```
Testing SMB PVC
```
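Once the test succeeds, you can optionally remove the test resources. A minimal cleanup sketch, assuming the file names used above:

```bash
# Delete the test pod first, then the claim, volume, and credentials
kubectl delete -f pod.yaml
kubectl delete -f pvc.yaml
kubectl delete -f pv.yaml
kubectl delete -f cred.yaml
```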