ONTAP Discussions

How do you determine which model of filer to adopt?

netappmagic

In determining which model of filer to adopt, for instance whether we should adopt a FAS3XXX or a FAS6XXX, what aspects should be taken into consideration? Are there any tools we can use to measure?

The easiest part is capacity: if the application or project requires a large amount of data, more than a FAS3XXX is able to handle, then a FAS6XXX would of course have to be used. There are also response time, ports, and so on. Could you please give some in-depth analysis?


Thank you in advance for your inputs.

10 REPLIES

ekashpureff

NetAppMagic -

Yes, there are tools we use to size solutions for customers.

The Sizer tool is available for partners on the Field Portal.

Capacity is not an easy part.

IO density is the big question on the back end.

It is expressed as IOPS per TB.

Reads vs. writes, and the randomness of the IO, also factor in.
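As a back-of-the-envelope illustration of IO density, here is a minimal Python sketch; the workload numbers are made up for the example, not taken from any real sizing exercise:

```python
# Hypothetical workload figures, for illustration only.
usable_tb = 50               # usable capacity the workload needs, in TB
total_iops = 12_000          # measured or estimated peak IOPS
read_fraction = 0.7          # 70% reads, 30% writes

io_density = total_iops / usable_tb            # IOPS per TB of usable capacity
write_iops = total_iops * (1 - read_fraction)  # the write share often costs more on the back end

print(f"IO density: {io_density:.0f} IOPS/TB")
print(f"Write IOPS: {write_iops:.0f}")
```

A high IOPS/TB figure pushes you toward more or faster spindles and more cache, largely independent of the controller model.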

Rate of change and retention are another factor to be considered.

'What is the DR strategy?' is another question to be asked.

Networking on the front end needs to take failover into consideration.

DR replication also needs to be factored into the network design.

CPU to drive the disks and the network is the last factor I usually take into account.

It's not an easy question to answer.

We teach advanced classes on 7-Mode and Cluster Mode performance that cover these topics.

I hope this response has been helpful to you.

At your service,

Eugene E. Kashpureff, Sr.

Independent NetApp Consultant, K&H Research http://www.linkedin.com/in/eugenekashpureff

Senior NetApp Instructor, IT Learning Solutions http://sg.itls.asia/netapp

(P.S. I appreciate points for helpful or correct answers.)

netappmagic

IO density, or IOPS/TB, has more to do with disks or RAID level, and less to do with CPU or the FAS model. Please correct me if I am wrong.

Thanks!

ekashpureff

NetAppMagic -

You're very welcome.

Yes, IO density is about disks.

NetApp controllers are all about the disks they're driving.

It's easy to spin up a stack of disk shelves; it's another thing to be able to drive them with a lot of IO.
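A rough way to see what a stack of spindles can actually deliver is a rule-of-thumb estimate like the Python sketch below. The per-disk IOPS figures and the flat write penalty are generic RAID rules of thumb, not NetApp specifications (WAFL's write allocation behaves differently), so treat the result as a ballpark only:

```python
# Rule-of-thumb back-end estimate; per-disk figures are assumed, not NetApp specs.
per_disk_iops = {"7.2k SATA": 75, "10k SAS": 125, "15k SAS": 175}

disk_type = "10k SAS"
data_disks = 96              # spindles actually serving IO
write_fraction = 0.3         # share of host IO that is writes
raid_write_penalty = 2       # generic parity overhead; WAFL differs in practice

raw_iops = data_disks * per_disk_iops[disk_type]
# Each host read costs ~1 back-end IO; each host write costs ~raid_write_penalty.
backend_cost_per_host_io = (1 - write_fraction) + write_fraction * raid_write_penalty
effective_host_iops = raw_iops / backend_cost_per_host_io

print(f"~{effective_host_iops:.0f} host IOPS from {data_disks} x {disk_type} disks")
```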

All of the factors I mentioned are of concern when sizing a solution for a given set of workloads.

They're all questions your sales team should be asking you, or you should be asking the customer if you're the sales engineer.

Your original question was whether there were tools available.

I had mentioned the Sizer tool that is available on the Field Portal for this use.

I hope this response has been helpful to you.

At your service,

Eugene E. Kashpureff, Sr.

Independent NetApp Consultant, K&H Research http://www.linkedin.com/in/eugenekashpureff

Senior NetApp Instructor, IT Learning Solutions http://sg.itls.asia/netapp

(P.S. I appreciate points for helpful or correct answers.)

nicholaf

You can find a lot of benchmark data & tested performance information at: https://communities.netapp.com/community/netapp-blogs/sanbytes/blog/2012/06/20/data-ontap-unified-nas-and-san-cluster-benchmark-performance

It really comes down to scalability and your environment's needs, now and into the future. You can max out the performance of a FAS3XXX storage controller and simply scale out to another FAS3XXX, whereas a single FAS6XXX might handle your environment's needs for quite some time.

Here is the FAQ on tuning your system: https://kb.netapp.com/support/index?page=content&cat=TUNING&channel=FAQ

Technical Specifications for the FAS6200: http://www.netapp.com/us/products/storage-systems/fas6200/fas6200-tech-specs.aspx

Technical Specifications for the FAS6000: http://www.netapp.com/us/products/storage-systems/fas6000/fas6000-tech-specs.aspx

Technical Specifications for the FAS3200: http://www.netapp.com/us/products/storage-systems/fas3200/fas3200-tech-specs.aspx

Regards,

Nicholas Lee Fagan

rmatsumoto

I'm not in charge of our team's budget, but I'm often asked for my input, as are other members of my team. Here are the points that I or others often bring up:

1. Anticipated slot requirements (do we need an IOXM? more cards for 10G or disks?)

2. Number of cores. For the busiest filers we have, which run production SQL (data-warehouse-type workloads), Oracle, and virtualization clusters, that's what we care about more than the max number of spindles or even slot availability. We will always run out of processing headroom before we reach the disk count limit. If you get the biggest head, it also takes care of the max flash size and aggregate size, although the aggregate size part hasn't bitten us. We have been limited by the max amount of Flash Cache/Flash Pool in a controller in the past.

netappmagic

Good, this is the type of answer I am looking for.

I can determine or calculate IOPS/GB based on the disk configuration of either a FAS3XXX or a FAS6XXX, but how do I know how many IOPS/GB an application requires? How do I find out the response time requirement?

ekashpureff

NetAppMagic -

For existing environments we would usually ask the customer.

Host-based tools that can be used are SAR and IOstat.

Some SAR references:

https://access.redhat.com/solutions/21584

http://docs.oracle.com/cd/E23824_01/html/821-1451/spmonitor-8.html

IOstat:

http://en.wikipedia.org/wiki/Iostat
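As a hedged example of putting these host tools to work, here is a small Python sketch that samples iostat on Linux and sums the per-device transfers per second as a rough host IOPS figure. It assumes the sysstat package, and the report format varies between versions, so adjust the parsing for your platform:

```python
# Rough sketch: sample host IOPS with Linux iostat (sysstat assumed).
import subprocess

out = subprocess.run(
    ["iostat", "-d", "1", "2"],   # two samples; the second reflects current load
    capture_output=True, text=True, check=True,
).stdout

# Keep only the last sample block, then sum the tps (transfers/sec) column.
last_block = out.strip().split("\n\n")[-1]
total_tps = 0.0
for line in last_block.splitlines():
    fields = line.split()
    if len(fields) >= 2 and not line.startswith("Device"):
        try:
            total_tps += float(fields[1])  # column 2 is tps in the default device report
        except ValueError:
            pass  # skip header or malformed lines

print(f"Approximate host IOPS (sum of tps): {total_tps:.0f}")
```

Collecting sar history over days or weeks gives the same kind of data with peaks included, which is usually more representative for sizing than a one-off sample.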

I hope this response has been helpful to you.

At your service,

Eugene E. Kashpureff, Sr.

Independent NetApp Consultant, K&H Research http://www.linkedin.com/in/eugenekashpureff

Senior NetApp Instructor, IT Learning Solutions http://sg.itls.asia/netapp

(P.S. I appreciate points for helpful or correct answers.)

rmatsumoto

but how do I know how many IOPS/GB an application requires?

The short answer for us:

"We don't worry _that_ much and use as few disks as possible and add disks or move if needed because we(with and without NetApp's assistance) have often failed at sizing" - Read below for longer answer.


     - We have never been able to get this information reliably. It's also unreliable when someone gives you a number. So your NetApp rep may ask you for numbers to help with your sizing, and you might be feeding your NetApp rep garbage numbers yourself. I'll give you a couple of observations:

          1. When someone who's not a storage admin tells you that his/her application will do 5000 IOPS and is 500GB in size (thus 10 IOPS/GB in your formula), it doesn't necessarily mean 5000 IOPS at the filer volume, LUN, or aggregate level. I have no idea whether the estimate provided is wrong, or whether an op to an application (take a database, for example) isn't necessarily an op to the filer, but too many times I've heard that application X will do Y number of ops, only to find that it doesn't for storage.

          2. That's not to say NetApp (I'm a NetApp customer, so YMMV with your account team) can't do sizing for you. They can, at a minimum, take your Oracle database's AWR report and do their own sizing. They have also done sizing work for Exchange and SQL for us. I don't remember how the SQL sizing went, but for Exchange it appeared that they take your number of users (and probably mailbox size, though I'm not 100% sure) and a few other parameters and size that way. I'm operating from fairly old information and we don't do this often, so YMMV.

          3. So, none of this is probably particularly helpful in sizing. To be quite honest, for most applications, with the probable exception of Exchange, I have found sizing to be largely guesswork with varying degrees of success. Sizing for one database/application, or any small number of them, is not that helpful for us, in a number of ways.

One, we are a heavily shared infrastructure with a large number of apps, VMs, and databases, and sizing an aggregate/controller for any one of those is not that productive, because almost no single app/VM/DB has a controller (or controllers) to itself. We have dedicated aggregates to specific applications/databases in the past, and we have both oversized and undersized them in terms of performance.

Two, we have enough controllers/aggregates of each type at this point, and have had them long enough, that a lot of new applications are either incremental updates to what's already there (so we have historical performance data, which is more actionable) or we can ask the DBAs/app devs how they anticipate the new app will behave in comparison to existing apps. We certainly ask for hard IOPS numbers anyway, but those are typically no more accurate than their estimate that it'd behave like an existing application named 'ABC'.

Three, we keep enough hardware on hand that if we undersize an aggregate, we can add more disks, or move the data somewhere else. We have a protocol in place to add a full RAID group, at the maximum RAID group size, whenever we add anything. Historically, almost any data we keep around grows in terms of space and/or I/O, so nothing we create remains oversized from either perspective for very long. We do sometimes VolMove data in and out in both 7-Mode/cDOT (or use a more traditional migration approach), but that's rarer than the data just growing organically where it sits.

Four, we have a large number of apps/DBs, and add a large number of DBs over time, so sizing for any one DB (or all of them) becomes old information fairly quickly. We actually tried to size them all once, and that's what happened to us. So, yeah, size away, but be ready to adapt to the actual space and/or I/O growth.
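To make the "adapt to actual growth" point concrete, here is a hypothetical Python sketch that projects peak IOPS forward from historical monthly samples using a simple linear fit. The numbers and the linear-growth assumption are illustrative only; real workloads are rarely this tidy:

```python
# Hypothetical: project aggregate peak IOPS forward from monthly history.
monthly_peak_iops = [4200, 4600, 4900, 5400, 5800, 6300]  # made-up samples

n = len(monthly_peak_iops)
xs = range(n)
mean_x = sum(xs) / n
mean_y = sum(monthly_peak_iops) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, monthly_peak_iops))
         / sum((x - mean_x) ** 2 for x in xs))

months_ahead = 12
projected = mean_y + slope * ((n - 1 + months_ahead) - mean_x)
print(f"Projected peak IOPS in {months_ahead} months: {projected:.0f}")
```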

How do I find out the response time requirement?

     - We ask the DBA/app dev, though this isn't always all that helpful. Not to throw people under the bus, but if you ask this question, the answer is often "the fastest you've got," and today that means some kind of storage with lots and lots of flash. A lot of DBs/apps do not take advantage of the excess resources, for one reason or another. We have certainly had the opposite happen, but asking this question has, more often than not, led us to oversize at the aggregate level. We are moving to a different approach, where we will offer different tiers of storage at different price points in a more standardized manner.


ekashpureff

Matsumoto -

Thank you for your insight as a customer.

There is a lot to be said for your comments about being able to respond to change.

It's one of the big advantages of using NetApp and Cluster Mode.

To me, our best customer is an educated customer.

It's why I spend time here on communities to answer these questions.

It is also why I enjoy teaching customers, partners, and NetApp employees about performance.

It isn't guesswork.

Using the internal Sizer tool, host tools like IOstat and SAR, and data collected from client apps like Oracle AWR, sizing can be done accurately.

There are other factors, like DR strategy, retention, replication frequency, failover, and budget (all mentioned above), to be taken into consideration.
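One concrete way to fold failover into the sizing, sketched below with made-up utilization figures (an illustration, not a NetApp formula): after an HA takeover, the surviving controller carries both nodes' workloads, so the combined peak must still fit with some margin:

```python
# Hypothetical HA-pair headroom check; the utilization figures are invented.
node_a_peak_util = 0.45   # fraction of controller capability used at peak
node_b_peak_util = 0.40

combined = node_a_peak_util + node_b_peak_util
if combined < 0.90:       # keep margin below 100% even in takeover
    print(f"OK: takeover load {combined:.0%} fits with margin")
else:
    print(f"Risk: takeover load {combined:.0%} leaves little or no headroom")
```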

Growth planning (mentioned above) is another concern.

Given a good sales engineer, and the time taken to answer all of these questions for them, NetApp can offer a solution that delivers the expected performance without overselling beyond the customer's needs.

I hope this response has been helpful to you.

At your service,

Eugene E. Kashpureff, Sr.

Independent NetApp Consultant, K&H Research http://www.linkedin.com/in/eugenekashpureff

Senior NetApp Instructor, IT Learning Solutions http://sg.itls.asia/netapp

(P.S. I appreciate points for helpful or correct answers.)

netappmagic

I just wanted to add my 2 cents on iostat and SAR: how accurately can these numbers be used in sizing NetApp filers? Say we run iostat on the current DAS system; the output on DAS will be quite different from the platform the application is moving to, e.g., NetApp or EMC, right?
