31

This may be a dumb question, but I haven't found much online and want to clarify this.

Given two deployments A and B, both with different container images:

  • They're deployed in two different pods (different rc, svc, etc.) in a K8s/OpenShift cluster.
  • They both need to access the same volume to read files (let's leave locking out of this for now), or at least the same directory structure in that volume.
  • This volume is mounted using a PVC (Persistent Volume Claim) backed by a PV (Persistent Volume) configured against an NFS share.

Can I confirm that the above would actually be possible? I.e., two different pods connected to the same volume via the same PVC, so that they are both reading from the same volume.
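For concreteness, I mean something along these lines in each deployment's pod template, with both deployments pointing at the same claim (all names here are made up for illustration):

# pod template fragment from deployment A (hypothetical names);
# deployment B would reference the same claimName
spec:
  containers:
  - name: app-a
    image: registry.example.com/app-a:latest
    volumeMounts:
    - name: shared-data
      mountPath: /data
  volumes:
  - name: shared-data
    persistentVolumeClaim:
      claimName: shared-nfs-claim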

Hope that makes sense...

3 Answers

44

TL;DR: You can share a PV and PVC within the same project/namespace for shared volumes (NFS, Gluster, etc.). You can also access your shared volume from multiple projects/namespaces, but that requires a dedicated PV and PVC per project, because a PV is bound to a single project/namespace and a PVC is project/namespace scoped.

Below I've tried to illustrate the current behavior and how PVs and PVCs are scoped within OpenShift. These are simple examples using NFS as the persistent storage layer.

  • The accessModes at this point are effectively just labels; they have no real functionality in terms of controlling access to the PV. The examples below show this.
  • The PV is global in the sense that it can be seen/accessed by any project/namespace; HOWEVER, once it is bound to a project, it can then only be accessed by containers from that same project/namespace.
  • The PVC is project/namespace specific, so if you have multiple projects you need a new PV and PVC for each project that connects to the shared NFS volume (you can not reuse the PV from the first project). A minimal PV/PVC pair is sketched below.
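For reference, the kind of PV and PVC used in the examples below looks roughly like this (the NFS server and path match Example 4 further down; the capacity and access modes are just placeholders, and, as noted above, accessModes behave like labels here):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: nfs1.rhs
    path: /opt/data5
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-claim      # created inside a project/namespace; binds to a matching PV
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi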

Example 1:
I have 2 distinct pods running in the "default" project/namespace, both accessing the same PV and NFS-exported share. Both mount and run fine.

[root@k8dev nfs_error]# oc get pv
NAME      LABELS    CAPACITY   ACCESSMODES   STATUS    CLAIM               REASON    AGE
pv-nfs    <none>    1Gi        RWO           Bound     default/nfs-claim             3m


[root@k8dev nfs_error]# oc get pods    <--- running from DEFAULT project, no issues connecting to PV
NAME              READY     STATUS    RESTARTS   AGE
nfs-bb-pod2-pvc   1/1       Running   0          11m
nfs-bb-pod3-pvc   1/1       Running   0          10m

Example 2:
I have 2 distinct pods running in the "default" project/namespace and then attempt to create another pod, from a new project called testproject, that uses the same PV to access the same NFS export. The third pod, from the new testproject, will not be able to bind to the PV because it is already bound by the default project.

[root@k8dev nfs_error]# oc get pv
NAME      LABELS    CAPACITY   ACCESSMODES   STATUS    CLAIM               REASON    AGE
pv-nfs    <none>    1Gi        RWO           Bound     default/nfs-claim             3m


[root@k8dev nfs_error]# oc get pods    <--- running from DEFAULT project, no issues connecting to PV
NAME              READY     STATUS    RESTARTS   AGE
nfs-bb-pod2-pvc   1/1       Running   0          11m
nfs-bb-pod3-pvc   1/1       Running   0          10m

** Create a new claim against the existing PV from another project (testproject) and the PVC will fail

[root@k8dev nfs_error]# oc get pvc 
NAME        LABELS    STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
nfs-claim   <none>    Pending                                      2s

** nfs-claim will never bind to the pv-nfs PV because it cannot see it from its current project scope

Example 3:

I have 2 distinct pods running in the "default" project and then create another PV, PVC, and pod from testproject. Both projects are able to access the same NFS-exported share, but I need a PV and PVC in each of the projects.

[root@k8dev nfs_error]# oc get pv
NAME      LABELS    CAPACITY   ACCESSMODES   STATUS     CLAIM                    REASON    AGE
pv-nfs    <none>    1Gi        RWX           Bound     default/nfs-claim                  14m
pv-nfs2   <none>    1Gi        RWX           Bound     testproject/nfs-claim2             9m



[root@k8dev nfs_error]# oc get pods --all-namespaces
NAMESPACE     NAME              READY     STATUS    RESTARTS   AGE
default       nfs-bb-pod2-pvc   1/1       Running   0          11m
default       nfs-bb-pod3-pvc   1/1       Running   0          11m
testproject   nfs-bb-pod4-pvc   1/1       Running   0          15s

** Notice I now have three pods running against the same shared NFS volume across two projects, but I needed two PVs, since each is bound to a single project, and two PVCs, one for each project pointing at the same NFS export.
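For clarity, pv-nfs2 is just a second PV object pointing at the exact same NFS export, claimed by a second PVC (nfs-claim2) created inside testproject; roughly:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs2
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: nfs1.rhs     # same export as pv-nfs
    path: /opt/data5

The PVC nfs-claim2 is identical to nfs-claim apart from its name and the fact that it is created in the testproject project/namespace.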

Example 4:

If I bypass PV and PVC, I can connect to the shared NFS volume from any project by using the nfs plugin directly:

volumes:
- name: nfsvol
  nfs:
    path: /opt/data5
    server: nfs1.rhs
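The containers in that pod then mount it like any other volume; for example (container name, image, and mount path are just illustrative):

containers:
- name: app
  image: busybox
  volumeMounts:
  - name: nfsvol
    mountPath: /mnt/data5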

Now, volume security is another layer on top of this: using supplementalGroups (for shared storage, i.e. NFS, Gluster, etc.), admins and devs can further manage and control access to the shared NFS system.
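For example, if the NFS export is group-owned by GID 5555 on the server (a made-up GID for this sketch), the pod-level securityContext can pass that group to the containers so they can read/write the share:

apiVersion: v1
kind: Pod
metadata:
  name: nfs-supgroups-pod
spec:
  securityContext:
    supplementalGroups: [5555]   # must match the group owning the export on the NFS server
  containers:
  - name: app
    image: busybox
    volumeMounts:
    - name: nfsvol
      mountPath: /mnt/data5
  volumes:
  - name: nfsvol
    persistentVolumeClaim:
      claimName: nfs-claim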

Hope that helps

7
  • Thanks, this helps a lot. Feb 12, 2016 at 19:21
  • @DonovanMuller: I am also trying to use the same PV for multiple pods and it works fine, but I think data is also shared between the pods in this case. My main concern: if the PV contains, say, 2 GB of data, will all of that data be available to every pod using the PV? That is what we don't want; a pod should only have its own data, not the others'. I asked this question here too stackoverflow.com/questions/36624034/… but got no response. It would be very helpful if you could clear this up. Thanks in advance!
    – priyank
    Apr 22, 2016 at 11:58
  • @screeley: would appreciate your thoughts on my query above too. Thanks a ton!
    – priyank
    Apr 22, 2016 at 11:59
  • @priyank - I think if you want to restrict data/directories on your shared storage, you could pass in supplementalGroups from the securityContext and then set up the ownership and groups on the NFS server, i.e. dir1 open to groups A and B, dir1/dirA only open to podA, and dir1/dirB only open to podB. That way all pods have access to dir1, but only podA has access to dirA and only podB has access to dirB (a rough server-side sketch follows these comments).
    – screeley
    Apr 25, 2016 at 14:22
  • Example 4 was exactly what I needed to get around having to create multiple PersistentVolumes and claims across many namespaces. Jun 1, 2018 at 17:47
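A rough sketch of the per-directory setup screeley describes above (group names, GIDs, and paths are all hypothetical): on the NFS server you set group ownership and permissions per directory, then give each pod only the matching GID via supplementalGroups.

# on the NFS server (hypothetical groups and paths)
chgrp shared /opt/data5/dir1        # a group whose GID both podA and podB carry as a supplemental group
chmod 2775   /opt/data5/dir1
chgrp groupa /opt/data5/dir1/dirA   # only podA's supplementalGroups include groupa's GID
chmod 2770   /opt/data5/dir1/dirA
chgrp groupb /opt/data5/dir1/dirB   # only podB's supplementalGroups include groupb's GID
chmod 2770   /opt/data5/dir1/dirB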
0

I came across the article "Learn how to recreate an existing PVC in a new namespace, reusing the same PV with no data loss". I haven't tested it, but it's worth a try. Note, however, that the k8s docs say the PV-to-PVC relationship is one-to-one.

A Note on Namespaces: PersistentVolumes binds are exclusive, and since PersistentVolumeClaims are namespaced objects, mounting claims with "Many" modes (ROX, RWX) is only possible within one namespace. Reference: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#a-note-on-namespaces

-3

AFAIK, binding a PV multiple times is not supported. You can use the volume source (NFS in your case) directly in your pod spec for your use case.

3
  • Based on this (kubernetes.io/v1.1/examples/nfs) it actually seems possible? In the example there are two RCs using the same PVC. Feb 12, 2016 at 15:42
  • You can bind PVs anywhere you want, but the volume provider itself can reject an attach request for simultaneous access (for Ceph, EBS, or GCE). NFS has no guarantees - if you want to prevent NFS from being used from two pods simultaneously you'll need your own fencing / locking.
    – Clayton
    Feb 13, 2016 at 15:22
  • @Clayton: I am also trying to use the same PV for multiple pods and it works fine, but I think data is also shared between the pods in this case. My main concern: if the PV contains, say, 2 GB of data, will all of that data be available to every pod using the PV? That is what we don't want; a pod should only have its own data, not the others'. I asked this question here too stackoverflow.com/questions/36624034/… but got no response. It would be very helpful if you could clear this up. Thanks in advance!
    – priyank
    Apr 22, 2016 at 12:01
