Network and Storage Protocols

How do you scale NAS shares?

jgebhart2
3,948 Views

I'm encountering an issue at my current workplace: we have a "Common" SMB share, and the default behavior is that Everyone has access to view the root of the share. Requests for shared storage are satisfied by creating a folder within this Common share and assigning NTFS permissions to grant access.

 

This process has been in place for years; very few new shares have been created, but there are now over 1,000 folders within the share.

 

This seems unsustainable because the volume is now over 20 TB in size, so we lack restore granularity, and the snapshot deltas for replication are fairly large, as we even have applications using that share. (For example, we recently had a large portion of this share go missing, and due to this lack of granularity we couldn't perform a SnapRestore, as it would have affected too many other people not impacted by the data loss. It ended up taking days to restore from snapshots using "Previous Versions".)
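
To make the granularity trade-off concrete in ONTAP CLI terms (a rough sketch; the vserver, volume, snapshot and file names here are made up):

  # Volume-level SnapRestore reverts EVERY folder in the 20 TB volume to the snapshot,
  # which is why it wasn't an option when only part of the share was lost.
  cluster1::> volume snapshot restore -vserver svm1 -volume vol_common -snapshot daily.2024-05-01_0010

  # Single-file SnapRestore avoids that, but restoring thousands of files this way
  # (or copying them back out of "Previous Versions") is what takes days.
  cluster1::> volume snapshot restore-file -vserver svm1 -volume vol_common -snapshot daily.2024-05-01_0010 -path /TeamX/report.xlsx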

 

One of the most obvious solutions to this sprawling share is to use junction paths and mount additional volumes within the namespace, but that introduces other issues, such as not being able to browse previous versions as seamlessly. It feels "dirty" to me, and it seems like it could cause problems down the road with backup & recovery or data governance tools that may not traverse the junction paths properly. It also opens up the possibility of different protection policies being applied to volumes mounted within the same share, causing confusion about what level of data protection a user actually has.
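
For reference, the sort of thing I mean looks roughly like this (hypothetical names, and assuming the existing Common volume is junctioned at /common):

  # New volume mounted inside the existing namespace, under the Common volume's path
  cluster1::> volume create -vserver svm1 -volume vol_teamx -aggregate aggr1 -size 500GB -security-style ntfs
  cluster1::> volume mount -vserver svm1 -volume vol_teamx -junction-path /common/teamx

  # The existing \\svm1\Common share keeps working, but \Common\teamx now lives on its
  # own volume with its own snapshots, replication and restore granularity.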

 

While I have more NetApp experience than most of my peers in my current position, a large portion of that experience is in SAN rather than NAS, so I haven't seen many large NAS environments, and I'm wondering how people manage this.

 

What would you do in this case? Do I need to get over my "fear" of junction paths and just start using them? If this is what people do, do you "promote" a folder to a volume with a junction path at some point, or is a folder always a folder? If a folder is always a folder, how do you plan for that? Users often don't understand the full extent of their needs and may initially ask for 100 GB of storage, then end up needing 5 TB as their data grows.
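
Just so it's clear what I'm imagining, would a "promotion" look something like this (hedged sketch, hypothetical names)?

  # 1. Create the new volume and junction it at a staging path
  cluster1::> volume create -vserver svm1 -volume vol_teamx -aggregate aggr1 -size 1TB -security-style ntfs
  cluster1::> volume mount -vserver svm1 -volume vol_teamx -junction-path /common/teamx_new

  # 2. Copy the folder contents from a Windows host (robocopy /MIR /COPYALL, XCP, etc.)
  #    while users keep working, then do a final differential pass in a change window.

  # 3. Rename or remove the old folder so the name is free, then re-point the junction
  cluster1::> volume unmount -vserver svm1 -volume vol_teamx
  cluster1::> volume mount -vserver svm1 -volume vol_teamx -junction-path /common/teamx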

 

2 REPLIES

mbeattie
3,912 Views

Hi,

 

I'd agree with you: a 20 TB CIFS share that is replicated, and that potentially has a high rate of file change driving snapshot and SnapMirror lag, is less than ideal.
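
(If you want to quantify that, something like the following shows how far behind the mirror is getting; the destination path is obviously made up:)

  cluster1::> snapmirror show -destination-path svm1_dr:vol_common_dr -fields lag-time,state,status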

Here are some options for you to consider:

 

  • Migrate the data (a root-level folder in the existing share) to a new volume and mount it into the namespace using a junction path
  • Abstract the UNC path from the users and implement DFS, in which case the data could sit on an independent volume on the same vserver but be logically accessible as a DFS link within the DFS namespace
  • Split the share into multiple shares by migrating data into different volumes (perhaps across multiple vservers) and potentially use multiple DFS namespaces based on logical business unit access (see the sketch after this list)
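
As a rough sketch of the second and third options combined (every name, policy and schedule below is invented, so adjust to taste):

  # Each business unit gets its own volume, share and protection policy instead of
  # sharing one 20TB blast radius.
  cluster1::> volume create -vserver svm1 -volume vol_finance -aggregate aggr1 -size 2TB -security-style ntfs -junction-path /finance -snapshot-policy sp_hourly_7d
  cluster1::> vserver cifs share create -vserver svm1 -share-name Finance$ -path /finance
  cluster1::> snapmirror create -source-path svm1:vol_finance -destination-path svm1_dr:vol_finance_dr -policy MirrorAllSnapshots -schedule hourly

  # A DFS link (e.g. \\corp\shares\Finance) then points at \\svm1\Finance$, so the UNC
  # path users see never changes even if the volume or share moves later.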

Are you using ABE (access-based enumeration) to limit visibility at the root of your share? I guess it really depends on your RTO/RPO, but I think you might struggle to meet them during a DR failover event given your current configuration, and I'd suggest that could be the driving factor to remediate the issue before a DR failover is actually required.
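
If ABE isn't already on, it's a one-liner per share (vserver and share names assumed):

  cluster1::> vserver cifs share properties add -vserver svm1 -share-name Common -share-properties access-based-enumeration
  cluster1::> vserver cifs share show -vserver svm1 -share-name Common -fields share-properties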

 

P.S. I sure hope you have removed the default NTFS permission of Everyone / Full Control from the root of that volume/share!
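
(You can audit that from the clustershell without touching a Windows box; the path here assumes the volume is junctioned at /common:)

  cluster1::> vserver security file-directory show -vserver svm1 -path /common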

 

/Matt

If this post resolved your issue, help others by selecting ACCEPT AS SOLUTION or adding a KUDO.

GidonMarcus
3,889 Views

Hi

 

Symbolic links work well with Previous Versions, and I used them constantly for my 7-Mode to cDOT migrations.
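
On the ONTAP side that's roughly a symlink mapping per target (the paths and share name below are just placeholders; the links themselves get created from the UNIX/NFS side):

  # Map the UNIX symlink target to a CIFS share/path so Windows clients can follow it
  cluster1::> vserver cifs symlink mapping create -vserver svm1 -unix-path /vol_teamx/ -share-name teamx$ -cifs-path /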

 

G

Gidi Marcus (Linkedin) - Storage and Microsoft technologies consultant - Hydro IT LTD - UK