Accepted Solution

Solution to provide 400TB single CIFS volume


One of the Distributors I support has been asked whether we can provide a NetApp-based solution for a tender, but we are struggling to find something that fits.

The basic requirement is a single 400TB CIFS volume that can be expanded; I have pasted the other requirements below.

From what I can see, a FAS6xxx can only scale to 100TB volumes, Infinite Volumes only support NFS, and an E-Series solution with Lustre/StorNext does not offer CIFS either.

I would assume that at some point Infinite Volumes will gain CIFS support via SMB3, but what would we recommend for the current requirements below?

==> The main System must provide at least 400TB usable storage.

Systems should offer all file storage in a single volume. Please state the largest supported single volume.

Systems should support Unicode file names up to 254 characters long.

Systems must support files larger than 4GB.

Systems should support folder chains at least 128 levels deep.

On client systems that natively support case-sensitive operation (e.g. Linux), the system should support renaming a file to a differently capitalized version of itself.

Folders should be able to contain at least 512 items.

The main System must serve clients over SMB v1 and v2 simultaneously.

System must fully support Windows XP SP3, Windows 7, Windows Server 2008 R2, Windows Server 2003 as an SMB client.

System must fully support Mac OS X 10.5, 10.6, 10.7 as an SMB client.

System must fully support Linux with kernel > 2.6 as an SMB client.

Any suggestions are welcome.


Re: Solution to provide 400TB single CIFS volume

Just to throw out an idea: how about using DFS to provide the root namespace and then linking in an appropriate number of CIFS shares (i.e. volumes) under that root? You'd need a Windows box to front the DFS root; unless it has changed recently, NetApp filers can't be DFS roots.
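
A rough sketch of that layout, assuming the DFSN PowerShell module (Windows Server 2012 or later); the domain, Windows front-end server, and filer share names below are all made up for illustration:

```powershell
# Create a domain-based DFS namespace root hosted on a Windows box (WINFE01),
# then link NetApp CIFS shares in as folders underneath it.
New-DfsnRoot   -Path "\\CORP\storage" -TargetPath "\\WINFE01\storage" -Type DomainV2
New-DfsnFolder -Path "\\CORP\storage\proj01" -TargetPath "\\FILER01\vol_proj01"
New-DfsnFolder -Path "\\CORP\storage\proj02" -TargetPath "\\FILER01\vol_proj02"
```

Clients then see a single \\CORP\storage tree even though the data actually lives on many 100TB-class volumes.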

Actually having a 400TB NetApp volume is not something I'd want. Just think about managing that beast with regard to snapshots, SnapVault, etc.

Re: Solution to provide 400TB single CIFS volume

Like c_morrall, I think DFS is about the only way you're going to be able to do this. It works quite well and makes things a lot more manageable. We've implemented something similar, although not at that sort of size: we got sick of a small minority of users filling up a shared area with junk (ripped music / personal photos) and stopping other people from saving genuine work-related files. We map users a specific drive letter to the root of a DFS namespace, and under that are DFS folders which point to separate volumes.
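
For what it's worth, the drive-letter mapping described above can be a single logon-script line; the namespace path here is hypothetical:

```shell
REM Logon script (Windows cmd): map S: to the DFS namespace root
net use S: \\CORP\storage /persistent:yes
```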

I suspect fulfilling this requirement simply is going to be difficult with most systems. Maybe the customer has written the requirement specifically to exclude most storage providers and has a particular solution in mind (whose?).

Also, I'd question how some of those OSes will react to seeing a 400TB volume.

An out-there, crazy way of doing it would be to front-end it with a Windows server and have some large LUNs bonded together via software RAID.

Re: Solution to provide 400TB single CIFS volume

Easy: clustered ONTAP with SMB 2.1.
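
To illustrate: clustered Data ONTAP can junction several FlexVols (each staying under the ~100TB volume limit) beneath a single CIFS share, so clients see one namespace. A sketch with hypothetical SVM, aggregate, and volume names:

```shell
::> volume create -vserver svm1 -volume data_root -aggregate aggr1 -size 1g -junction-path /data
::> volume create -vserver svm1 -volume data01 -aggregate aggr1 -size 100t -junction-path /data/data01
::> volume create -vserver svm1 -volume data02 -aggregate aggr2 -size 100t -junction-path /data/data02
::> vserver cifs share create -vserver svm1 -share-name data -path /data
```

SMB clients connect to \\svm1\data and browse into the junctioned volumes as ordinary folders.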

Re: Solution to provide 400TB single CIFS volume

The issue is around replication. If using DFS to provide the namespace for the Linux and Mac OS clients, would these need to authenticate with AD in order to support replication via DFS?

If not using DFS, and instead using clustered ONTAP and SMB 2.1 with NetApp providing the single namespace, how will SnapMirror handle replication of the multiple volumes in the backend to keep them consistent?
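
On the client side, the Linux machines in the tender could mount the namespace with an explicit SMB dialect. A sketch (hostname and share are made up; requires cifs-utils and a kernel recent enough to accept vers=2.1):

```shell
# Mount the SVM's CIFS share over SMB 2.1 with Kerberos (AD) authentication.
sudo mount -t cifs //svm1.example.com/data /mnt/data -o vers=2.1,sec=krb5
# Kerberos mounts need cifs.upcall configured; following DFS referrals
# additionally needs kernel DFS upcall support (CONFIG_CIFS_DFS_UPCALL).
```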

Re: Solution to provide 400TB single CIFS volume

Point taken. However, 8.2 clustered Data ONTAP with SMB 3.0 looks promising and should handle the requirement. Stepping back for a moment, though, this should really be an object storage requirement: it has StorageGRID written all over it. Why does the customer want to use CIFS for such a huge volume, and is it the number of files or the size of those files that drives it? When the requirement talks about handling files that can reach 4GB each, it sounds like seismic data, and the way we handle that here is via object storage solutions (StorageGRID), which give you the performance, protection and expansion required. But that's just my view.