# Using NFS in Amazon Web Services
This tutorial explains how to use Network File System (NFS) to create ReadWriteMany (RWX) volumes in Amazon Web Services (AWS). You can use the created RWX volume from multiple workloads.
## Prerequisites
You have the Cloud Manager module added.
## Steps
Create a namespace and export its value as an environment variable:

```shell
export NAMESPACE={NAMESPACE_NAME}
kubectl create ns $NAMESPACE
```

Create an AwsNfsVolume resource:
```shell
cat <<EOF | kubectl -n $NAMESPACE apply -f -
apiVersion: cloud-resources.kyma-project.io/v1beta1
kind: AwsNfsVolume
metadata:
  name: my-vol
spec:
  capacity: 100G
EOF
```

Wait for the AwsNfsVolume to be in the `Ready` state:

```shell
kubectl -n $NAMESPACE wait --for=condition=Ready awsnfsvolume/my-vol --timeout=300s
```

Once the newly created AwsNfsVolume is provisioned, you should see the following message:

```console
awsnfsvolume.cloud-resources.kyma-project.io/my-vol condition met
```

Observe the generated PersistentVolume:
```shell
kubectl -n $NAMESPACE get persistentvolume my-vol
```

You should see the following details:

```console
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM            STORAGECLASS
my-vol   100G       RWX            Retain           Bound    test-mt/my-vol
```

The `RWX` access mode allows the volume to be read and written by multiple workloads. The `Bound` status means that the PersistentVolumeClaim claiming this PV has been created.

Observe the generated PersistentVolumeClaim:
```shell
kubectl -n $NAMESPACE get persistentvolumeclaim my-vol
```

You should see the following details:

```console
NAME     STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS
my-vol   Bound    my-vol   100G       RWX
```

Similarly to the PV, it should have the `RWX` access mode and the `Bound` status.

Create two workloads that both write to the volume:
```shell
cat <<EOF | kubectl -n $NAMESPACE apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-script
data:
  my-script.sh: |
    #!/bin/bash
    for xx in {1..20}; do
      echo "Hello from \$NAME: \$xx" | tee -a /mnt/data/test.log
      sleep 1
    done
    echo "File content:"
    cat /mnt/data/test.log
    sleep 864000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: awsnfsvolume-demo
spec:
  selector:
    matchLabels:
      app: awsnfsvolume-demo
  replicas: 2
  template:
    metadata:
      labels:
        app: awsnfsvolume-demo
    spec:
      containers:
        - name: my-container
          image: ubuntu
          command:
            - "/bin/bash"
          args:
            - "/script/my-script.sh"
          env:
            - name: NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          volumeMounts:
            - name: my-script-volume
              mountPath: /script
            - name: data
              mountPath: /mnt/data
      volumes:
        - name: my-script-volume
          configMap:
            name: my-script
            defaultMode: 0744
        - name: data
          persistentVolumeClaim:
            claimName: my-vol
EOF
```

Each workload prints a sequence of 20 lines to stdout and to a file on the NFS volume, and then prints the content of that file.
To print the logs of one of the workloads, run:

```shell
kubectl logs -n $NAMESPACE `kubectl get pod -n $NAMESPACE -l app=awsnfsvolume-demo -o=jsonpath='{.items[0].metadata.name}'`
```

The command should print something like:
```console
...
Hello from awsnfsvolume-demo-869c89df4c-dsw97: 19
Hello from awsnfsvolume-demo-869c89df4c-dsw97: 20
File content:
Hello from awsnfsvolume-demo-869c89df4c-8z9zl: 20
Hello from awsnfsvolume-demo-869c89df4c-l8hrb: 1
Hello from awsnfsvolume-demo-869c89df4c-dsw97: 1
Hello from awsnfsvolume-demo-869c89df4c-l8hrb: 2
Hello from awsnfsvolume-demo-869c89df4c-dsw97: 2
Hello from awsnfsvolume-demo-869c89df4c-l8hrb: 3
...
```

Note that the content after `File content:` contains lines from both workloads. This demonstrates the ReadWriteMany capability of the volume.
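If you want to see the same effect without a cluster, the behavior of the two replicas can be sketched in plain bash: two background writers appending to one shared file, analogous to the two pods writing to `test.log` on the NFS volume. The writer names and temporary file below are illustrative, not part of the tutorial's resources.

```shell
#!/bin/bash
# Simulate two workloads appending to one shared file, as the two
# deployment replicas do with /mnt/data/test.log on the NFS volume.
LOG=$(mktemp)

writer() {
  # $1 = writer name; append 20 numbered lines, like my-script.sh
  for xx in {1..20}; do
    echo "Hello from $1: $xx" >> "$LOG"
  done
}

writer writer-a &   # first "workload"
writer writer-b &   # second "workload"
wait                # let both writers finish

echo "File content:"
cat "$LOG"          # lines from both writers, interleaved

wc -l < "$LOG"      # 40 lines total: 20 from each writer
rm -f "$LOG"
```

Because both writers open the file in append mode, every line from both of them ends up in the shared file, which is exactly what the RWX volume provides across pods.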
## Next Steps
To clean up, follow these steps:
Remove the created workloads:

```shell
kubectl delete -n $NAMESPACE deployment awsnfsvolume-demo
```

Remove the created ConfigMap:

```shell
kubectl delete -n $NAMESPACE configmap my-script
```

Remove the created AwsNfsVolume:

```shell
kubectl delete -n $NAMESPACE awsnfsvolume my-vol
```

Remove the created default IpRange:
> **NOTE:** If you have other cloud resources using the default IpRange, skip this step and do not delete the default IpRange.
```shell
kubectl delete -n kyma-system iprange default
```

Remove the created namespace:

```shell
kubectl delete namespace $NAMESPACE
```