We're building out our newest NetApp cluster. What we currently have is a SAS hybrid aggregate, and we also have some more SATA disks and some more SSDs.
I'm wondering if anyone has good experiences to share, for example the size of a SATA aggregate and how big the Flash Pool in front of it was, if you even used Flash Pools on your SATA aggregates or only on SAS.
We don't think we have enough data that would benefit from non-hybrid SATA volumes, so we're planning to put all of our SATA disks into a single aggregate with a Flash Pool in front of it. That would be around 90 TB of storage space with around 500 GB of SSDs.
Our SAS hybrid, which is performing wonderfully, is only around 50 TB and has 500 GB worth of SSDs in front of it (5 disks).
We don't get much in the way of a testing environment to churn numbers and see what's best for our data, but what we're looking to put on these aggregates is the following:
VDI environments, both linked clones and persistent
VMware VMs: Windows, Linux, and VMware appliances (database servers (Oracle, MSSQL), web servers (Apache, IIS), file servers, etc.)
I'm the NetApp technical marketing engineer (TME) for Flash Pool, Flash Cache and All-Flash FAS. I'll share guidance about configuring Flash Pool and have asked my virtualization expert colleagues to add VDI-specific guidance in another reply.
To assure good performance it is important to properly size (i.e., not undersize) the Flash Pool cache. The best method of sizing Flash Pool cache is the Automated Workload Analyzer (AWA), which is available in Data ONTAP starting with the 8.2.1 release. AWA samples the workload on an aggregate, calculates projected cache hit rates, and recommends a cache size to deploy. AWA can be used on a Flash Pool aggregate to determine whether it has enough cache, as well as on standard HDD-only aggregates, with or without Flash Cache installed on the controller. More information about AWA and Flash Pool best practices can be found in TR-4070, the Flash Pool Design and Implementation Guide (http://www.netapp.com/us/media/tr-4070.pdf).
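As a rough illustration, AWA runs from the nodeshell at the advanced privilege level. The session below is a sketch from memory (node and aggregate names are placeholders, and syntax can vary by release), so verify the exact commands in TR-4070 or the command reference for your Data ONTAP version:

```
cluster1::> system node run -node node1
node1> priv set advanced
node1*> wafl awa start sata_aggr1   # begin sampling the workload on this aggregate
  ... let AWA run while a representative workload is applied ...
node1*> wafl awa print              # show projected hit rates and recommended cache size
node1*> wafl awa stop               # end the sampling session
```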
You wrote that you've had success with SAS HDD Flash Pool aggregates where 50 TB of HDD storage is accelerated by 500 GB of cache. Since I don't know how you calculated the 500 GB of cache, I'll provide a short explanation here. Assuming the cache is provided with 200 GB SSDs, 4 data SSDs (i.e., not including parity drives) enable caching 558 GiB of data. Each 200 GB SSD provides 186 GiB of capacity. [GB is a decimal statement of capacity (1 GB = 10^9 bytes) used mostly for marketing purposes; GiB is a binary capacity statement (1 GiB = 2^30 bytes), which is how all storage systems actually use storage capacity. 1 GiB is approximately 7% more bytes than 1 GB.]
Thus, 4 data SSDs provide a total usable capacity of 744 GiB. Flash Pool reserves 25% of that capacity for metadata and to assure that enough SSD media is available to accept new data. 75% of 744 GiB is 558 GiB.
Assuming that is the amount of cache space available for the roughly 50 TB of capacity in the SAS Flash Pool aggregates, the implication is that only ~1% of the 50 TB of data is actively being randomly read or overwritten on a steady-state basis. If the active dataset were significantly larger, the drives (SSDs and HDDs) in the Flash Pool aggregate and one or more CPUs in the controller would be working less efficiently than optimal. As a rule of thumb, a 1% ratio of cache to actual dataset size is at the low end; I recommend running AWA on the aggregates to confirm that they have enough cache.
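The arithmetic above can be sketched in a few lines. The 186 GiB of usable capacity per 200 GB SSD and the 25% Flash Pool reserve are taken from the explanation above; everything else follows from them:

```python
# Flash Pool cache-sizing arithmetic, using the figures from the post above.
GIB = 2**30                  # 1 GiB in bytes
GIB_PER_200GB_SSD = 186      # usable GiB per 200 GB SSD (stated above)
DATA_SSDS = 4                # data SSDs only; parity drives excluded
RESERVE = 0.25               # Flash Pool metadata/free-space reserve

raw_gib = GIB_PER_200GB_SSD * DATA_SSDS   # 744 GiB of raw data-SSD capacity
cache_gib = raw_gib * (1 - RESERVE)       # 558 GiB of addressable cache

hdd_gib = 50e12 / GIB                     # 50 TB of HDD capacity expressed in GiB
ratio = cache_gib / hdd_gib               # cache-to-dataset ratio

print(raw_gib, cache_gib, round(ratio * 100, 2))  # → 744 558.0 1.2
```

The ~1.2% result is why the reply calls this configuration "the low end" of the rule-of-thumb range.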
The same amount of cache on a larger (90 TB) SATA aggregate also concerns me, since the cache-to-storage ratio is <1%. However, if what you are actually doing is provisioning an aggregate with 90 TB of capacity but there is no data in the aggregate yet, then this is not a concern. In that case, you can start the Flash Pool aggregate with 558 GiB of addressable cache (i.e., 4x 200 GB data SSDs), then begin monitoring the workloads and recommended cache size with AWA after the aggregate is 5-10% full. If, as you do this, the recommended cache size continues to increase, you can add SSDs to the Flash Pool cache before performance is affected. Note: the cache size AWA recommends includes the 25% reserve, so 744 GiB in AWA's output is equivalent to 4x 200 GB data SSDs.
One more piece of advice: TR-4070 includes recommendations for the minimum number of data SSDs per Flash Pool aggregate to assure the cache can provide maximum performance. For mid-range controllers like the FAS3200 or FAS8040, the minimum is 2 data SSDs; for high-end controllers like the FAS6200 or FAS8060 or FAS8080 EX, the minimum recommendation is 5 data SSDs.
I just wanted to know if we can run AWA on multiple aggregates at one time to collect the performance values. I would like to assign Flash Pools to the aggregates, but we have multiple aggregates across the nodes, so we need to check first how the aggregates are currently performing.
This is a great start for what I'm looking for. Thank you for mentioning AWA as a tool to use for sizing. I'm wondering whether the toolsets are integrated, allowing me to push my collected data straight into SPM/Synergy for modeling and configuration as well? What I'm looking for is to collect enough information to present back to a client, demonstrating that the metrics we've collected will provide predictable results.