2017-01-05 12:27 PM
An application needs to access 100 folders/shares under one UNC path, and it also needs to access a text file from that same UNC path.
Here is the situation:
The storage requirement is over 70 TB, and we would like to split it into multiple volumes due to backup/recovery requirements. But if we split it into multiple volumes (say 10 volumes, with 10 shares per volume), then the 100 shares will no longer be under the same UNC path. It will look like \\vserver1\vol1\00... \\vserver1\vol2\10
We did try mapping the shares from each volume to the root (e.g. \\vserver1\00; \\vserver1\01...), but I don't know how to map to the network volume without a share name. In addition, I don't know how to allow the application to create/read/write a text file directly at the UNC path \\vserver1.
The only way we can think of is to create 100 volumes, which is quite a bit to manage.
Any suggestions other than creating 101 volumes?
2017-01-08 02:44 PM
Have you considered implementing DFS? Does your application support accessing a DFS link?
Using a DFS namespace to access the data will enable you to abstract the actual UNC path of the CIFS share.
So if I understand your requirement correctly, folders 00-09 would be created in volume1, folders 10-19 in volume2, etc. You would then configure DFS links mapping each folder in the namespace to its backing share.
This approach would require you to set up and configure DFS for the 100 folders (DFS links/targets), which is a bit of work, but it can easily be scripted using dfsutil.exe or the PowerShell cmdlets. Hope that helps.
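As a rough sketch of that scripting (the namespace \\contoso.com\appdata, the server name, and the 10-folders-per-volume mapping are assumptions; adjust to your environment), a small script can generate the 100 New-DfsnFolder commands so you can review them before running them in PowerShell:

```shell
# Emit one New-DfsnFolder command per folder: 00-09 -> vol1, 10-19 -> vol2, etc.
# The namespace (\\contoso.com\appdata) and server (\\vserver1) are placeholders.
gen_dfs_links() {
  ns='\\contoso.com\appdata'
  srv='\\vserver1'
  i=0
  while [ "$i" -le 99 ]; do
    folder=$(printf '%02d' "$i")     # zero-padded folder name, e.g. 07
    vol=$(( i / 10 + 1 ))            # folders 00-09 live on vol1, 10-19 on vol2, ...
    path="$ns\\$folder"              # inside double quotes, \\ collapses to one \
    target="$srv\\vol$vol\\$folder"
    printf "New-DfsnFolder -Path '%s' -TargetPath '%s'\n" "$path" "$target"
    i=$(( i + 1 ))
  done
}

gen_dfs_links
```

The script only prints the commands; paste the reviewed output into an elevated PowerShell session on a DFS namespace server.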
2017-01-09 01:48 AM
The 100 volumes approach is probably the cleanest option; more precisely, 101 volumes:
1 volume called audio, then volumes 00-99.
Mount audio in the root of the namespace and share it out, then junction the rest under /audio. You get the namespace layout you are looking for under a single UNC path, with volume-level granularity for backup/restore, vol moves, etc.
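A sketch of that layout as ONTAP CLI commands (the aggregate name aggr1, the audio_NN volume names, and the ~700 GB per-volume size, i.e. 70 TB spread over 100 volumes, are all assumptions). The script just prints the commands so you can review them before pasting into the cluster shell:

```shell
# Print the ONTAP "volume create" commands for the audio root volume plus
# the 100 data volumes junctioned under /audio. Names and sizes are placeholders.
gen_vol_cmds() {
  printf '%s\n' 'volume create -vserver vserver1 -volume audio -aggregate aggr1 -size 1GB -junction-path /audio'
  i=0
  while [ "$i" -le 99 ]; do
    nn=$(printf '%02d' "$i")
    printf 'volume create -vserver vserver1 -volume audio_%s -aggregate aggr1 -size 700GB -junction-path /audio/%s\n' "$nn" "$nn"
    i=$(( i + 1 ))
  done
}

gen_vol_cmds
```

After that, sharing out only /audio gives the single UNC path \\vserver1\audio with all 100 folders under it, and the text file can live in the audio root volume.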
You could also see if FlexGroups will fit your requirements. If so, that's even better.
2017-01-09 03:47 PM
I agree with Sean's approach on this; it is the optimal solution from a storage perspective. You might consider DFS if there is a specific reason you don't want to create the 100 volumes. Either way, you will be managing either 100 volumes or 100 DFS links.
Another point you might consider from an application perspective is abstracting the vserver name, so the application accesses the data using a DNS CNAME rather than the actual name of the vserver. This will give you flexibility if you ever need to migrate the application/data in the future.
E.g., DNS CNAME = vserver name:
application1 = vserver1 (i.e., your DNS CNAME record "application1" should point to the DNS A record for "vserver1")
If you do this and your application uses CIFS to access the data, then you should also set an SPN on the vserver's AD computer object to enable Kerberos authentication.
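For example, on a domain-joined Windows admin host (the domain contoso.com and the application1 alias are hypothetical), the CNAME and SPN setup might look roughly like this; the setspn registrations are what allow Kerberos to work when clients connect via the alias:

```shell
rem Create the DNS alias (CNAME) pointing at the vserver's A record -- names are placeholders
dnscmd /RecordAdd contoso.com application1 CNAME vserver1.contoso.com

rem Register SPNs for the alias against the vserver's AD computer object so
rem Kerberos authentication works when clients use \\application1
setspn -S cifs/application1 vserver1
setspn -S cifs/application1.contoso.com vserver1
```

setspn -S checks for duplicate SPNs before adding, which avoids the duplicate-SPN authentication failures that -A can silently introduce.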
Let me know if you have any questions.