Integrate CTERA as a Container Storage Interface (CSI) Provider


The Container Storage Interface (CSI) is a standard for exposing file storage systems to containerized workloads on Container Orchestration Systems, like Kubernetes.

A CSI driver is responsible for provisioning, attaching, mounting, and managing storage volumes. It enables dynamic volume provisioning and allows workloads to consume persistent storage.

With CTERA integrated as a CSI provider for filesystems, Kubernetes workloads can seamlessly consume persistent, globally accessible, and versioned file storage across distributed environments.

Using the CTERA CSI driver, Kubernetes users can:

  • Provision Persistent Volumes (PVs) – Storage administrators can define storage classes, allowing workloads to dynamically request volumes that map to the global filesystem (see the StorageClass sketch after this list).
  • Mount CTERA storage into pods – Applications running in Kubernetes can mount volumes backed by CTERA’s global namespace, ensuring unstructured data accessibility across multiple locations and clusters.
  • Enable Hybrid Cloud Workflows – Data stored in CTERA is globally accessible, making it ideal for hybrid cloud deployments where workloads span on-premises and cloud environments.
  • Improve Data Availability and Performance – With edge caching, Kubernetes workloads in remote locations can access frequently used data with low latency while maintaining centralized control in the cloud.
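
For dynamic provisioning, a storage administrator can define a StorageClass that points at the SMB CSI driver, so that claims are provisioned on demand rather than bound to a pre-created PV. The following is a minimal sketch based on the upstream csi-driver-smb chart examples; the share path (//192.168.9.169/cloud-elinor) and Secret name (smbcreds) are the illustrative values used later in this article and must match your environment. The procedure in this article uses static provisioning instead, so this StorageClass is optional.

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: smb
  provisioner: smb.csi.k8s.io
  parameters:
    # UNC path of the edge filer share (illustrative value)
    source: //192.168.9.169/cloud-elinor
    # Secret holding the SMB credentials (created later in this article)
    csi.storage.k8s.io/provisioner-secret-name: smbcreds
    csi.storage.k8s.io/provisioner-secret-namespace: default
    csi.storage.k8s.io/node-stage-secret-name: smbcreds
    csi.storage.k8s.io/node-stage-secret-namespace: default
  reclaimPolicy: Delete
  volumeBindingMode: Immediate
  mountOptions:
    - dir_mode=0777
    - file_mode=0777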
Note

This article is for the CSI SMB driver, which has been tested and certified by CTERA. The CSI NFS driver has not been tested.

This article outlines the steps to deploy and configure the CSI SMB driver on a Kubernetes cluster. The driver allows mounting an SMB share from the CTERA Edge Filer. The setup includes installing the CSI driver and configuring Persistent Volumes (PVs) and Persistent Volume Claims (PVCs) for containerized applications.

The shares that you want Kubernetes workloads to consume must first be created on the edge filer, as described in Configuring CTERA Edge Filer Shares for your edge filer version (for example, version 7.8.x).

The procedure ensures that pods can mount the SMB share, write files, and read them back, verifying successful integration.

Step-by-step guide

Notes

The steps in the following procedure require you to install Helm on a machine that has access to the Kubernetes cluster. For details, see https://helm.sh/docs/intro/install/.
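
To confirm that Helm is installed, you can run:

  helm version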

The SMB CSI driver is installed using Helm. For details, see the csi-driver-smb Helm charts at https://github.com/kubernetes-csi/csi-driver-smb/tree/master/charts.

To install the CSI SMB Driver:

  1. On a machine that has access to the Kubernetes cluster, run the following Helm commands:
    helm repo add csi-driver-smb https://raw.githubusercontent.com/kubernetes-csi/csi-driver-smb/master/charts
    helm install csi-driver-smb csi-driver-smb/csi-driver-smb --namespace kube-system --version v1.17.0
  2. Verify the installation was successful by running the following command:
    kubectl get pods -n kube-system | grep smb
    The expected output should be similar to the following:
    csi-smb-controller-ffdb55686-9s24d        4/4     Running     2 (25h ago)    25h
    csi-smb-node-88pbk                        3/3     Running     1 (25h ago)    25h
    csi-smb-node-x4jmw                        3/3     Running     1 (25h ago)    25h
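    You can also confirm that the driver registered with the cluster; the SMB CSI driver creates a CSIDriver object named smb.csi.k8s.io:
    kubectl get csidriver smb.csi.k8s.io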
    
  3. Create Kubernetes Resources using YAML configurations:
    1. Create SMB credentials (cred.yaml)
      apiVersion: v1
      kind: Secret
      metadata:
        name: smbcreds
        namespace: default
      type: Opaque
      data:
        username: *********=   # Base64-encoded "*****"
        password: ********==   # Base64-encoded "*******"
      
      Where the username and the password are those of a user that can access the shares.
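      The username and password values must be base64 encoded. For example, assuming a hypothetical user csi-user with password MyP@ssw0rd, you can generate the encoded values with:
      echo -n 'csi-user' | base64
      echo -n 'MyP@ssw0rd' | base64
      Alternatively, kubectl can create the Secret directly and handle the encoding for you, in which case cred.yaml is not needed:
      kubectl create secret generic smbcreds --namespace default \
        --from-literal=username='csi-user' \
        --from-literal=password='MyP@ssw0rd'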
    2. Create a persistent volume (pv.yaml)
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        annotations:
          pv.kubernetes.io/provisioned-by: smb.csi.k8s.io
        name: pv-smb-one
      spec:
        capacity:
          storage: 1Gi
        accessModes:
          - ReadWriteMany
        persistentVolumeReclaimPolicy: Retain
        storageClassName: smb
        mountOptions:
          - dir_mode=0777
          - file_mode=0777
          - uid=1001
          - gid=1001
          - noperm
          - mfsymlinks
          - cache=strict
          - noserverino  # required to prevent data corruption
        csi:
          driver: smb.csi.k8s.io
          # volumeHandle format: {smb-server-address}#{sub-dir-name}#{share-name}
          # make sure this value is unique for every share in the cluster
          volumeHandle: "192.168.9.169#cloudelinor"
          volumeAttributes:
            source: "//192.168.9.169/cloud-elinor"
          nodeStageSecretRef:
            name: smbcreds
            namespace: default
      
      Where:
      volumeHandle: "192.168.9.169#cloudelinor" is a unique name.
      source: "//192.168.9.169/cloud-elinor" includes the IP address of the edge filer, 192.168.9.169, and the cloud folder name, cloud-elinor.
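      You can optionally validate the manifest before applying it with a client-side dry run:
      kubectl apply --dry-run=client -f pv.yaml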
    3. Create a Persistent Volume Claim (pvc.yaml)
      kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: pvc-smb-one
        namespace: default
      spec:
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 1Gi
        volumeName: pv-smb-one
        storageClassName: smb
      
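      The volumeName field pre-binds this claim to the PV from the previous step, bypassing dynamic provisioning. For the bind to succeed, the storageClassName and accessModes must match pv.yaml and the requested storage must not exceed the PV capacity; you can compare them against the PV with, for example:
      kubectl get pv pv-smb-one -o jsonpath='{.spec.storageClassName} {.spec.accessModes} {.spec.capacity.storage}{"\n"}'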
    4. Optionally, create a test pod (pod.yaml)
      pod.yaml tests the integration using the PVC declared in pvc.yaml.
      apiVersion: v1
      kind: Pod
      metadata:
        name: test-smb-pod-one
        namespace: default
      spec:
        restartPolicy: Never
        nodeSelector:
          "kubernetes.io/os": linux
        containers:
          - name: test-container
            image: busybox
            command:
              - "/bin/sh"
              - "-c"
              - |
                echo "Testing SMB PVC" > /mnt/testfile.txt
                cat /mnt/testfile.txt
                ls /mnt
            volumeMounts:
              - name: smb
                mountPath: "/mnt/"
                readOnly: false
        volumes:
          - name: smb
            persistentVolumeClaim:
              claimName: pvc-smb-one
      
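      The test pod exits as soon as its command completes. If you prefer to inspect the mount interactively, you can, as an illustrative variation, replace the last line of the container command with sleep 3600 to keep the container alive, and then open a shell in it:
      kubectl exec -it test-smb-pod-one -- /bin/sh -c "ls -l /mnt"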
  4. Deploy the Kubernetes resources.
    After creating the YAML files, apply them in the following order:
    kubectl apply -f pv.yaml
    kubectl apply -f pvc.yaml
    kubectl apply -f cred.yaml
    kubectl apply -f pod.yaml
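    Because the test pod runs a one-shot command with restartPolicy: Never, it reaches the Completed status once the write/read test finishes. You can watch its progress with:
    kubectl get pod test-smb-pod-one -w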
    
  5. Verify the deployment.
    1. Check if PV and PVC are bound by running the following commands:
      kubectl get pv
      kubectl get pvc
      The expected output should be similar to the following:
    NAME          CAPACITY   ACCESS MODES   STORAGECLASS   STATUS
    pv-smb-one    1Gi        RWX            smb            Bound
    pvc-smb-one   1Gi        RWX            smb            Bound
    
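      If the claim remains Pending, the events on the objects usually point to the cause, such as a mismatch in storageClassName, accessModes, or requested capacity between pv.yaml and pvc.yaml:
      kubectl describe pv pv-smb-one
      kubectl describe pvc pvc-smb-one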
    2. Check the pod logs to verify read/write permissions by running the following command:
      kubectl logs test-smb-pod-one
      The expected output should be similar to the following:
      Testing SMB PVC
      testfile.txt
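
When you have finished testing, you can remove the resources in reverse order of creation:

  kubectl delete -f pod.yaml
  kubectl delete -f cred.yaml
  kubectl delete -f pvc.yaml
  kubectl delete -f pv.yaml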