Has anyone else noticed that many recent customer RFPs don't include performance criteria? It's often an opportunity for an engagement to calculate and estimate performance, but if you're brought in late on an opportunity (like the "other" vendor in Larry's thread, brought in after the fact), there are many assumptions to make: MB/sec, IOPS (and what kind of IOPS: sequential/random, read/write, working set size, percentage of each), how much FC/SAS/SATA, etc.
Performance criteria are very dangerous and very difficult to put in an RFP. The "objective" figures aren't interesting for most applications: I'm not interested in IO/sec or MB/sec. The only thing that (most of the time) matters is latency under a specific load. But that is very hard to put in an RFP. You can only measure it after the whole infrastructure is set up, and then you get the discussion: it's the network, it's not the correct driver for your HBA, you didn't tell us you also run some CIFS clients on that same system, and so on.
We did it once for the storage behind our clinical database:
write latencies always (don't forget the word always) under 2 ms, and stable, for the logs
always 1000 read IO/s of 2K (yes, our database is on raw devices with 2K blocks) on the data
Not very hard numbers, but the "always" part is the tricky one.
This was combined with a POC.
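For a POC, a criterion like this only has teeth if the harness enforces the "always", i.e. it judges the worst sample, not the average. A minimal sketch of that idea (the function name, thresholds, and sample data are illustrative, not from the original poster's setup):

```python
# Hypothetical acceptance check: "always" means the single worst interval
# decides, so we test max/min over the samples, never the mean.

def meets_criteria(write_latencies_ms, read_iops_samples):
    """write_latencies_ms: per-interval log-write latencies in ms.
    read_iops_samples: per-interval sustained 2K read IO/s on the data."""
    writes_ok = max(write_latencies_ms) < 2.0   # always under 2 ms
    reads_ok = min(read_iops_samples) >= 1000   # always 1000 IO/s
    return writes_ok and reads_ok

# An average of ~1.2 ms can still fail: one 5 ms spike breaks "always".
print(meets_criteria([0.8, 1.1, 5.0, 0.9], [1200, 1500, 1100]))  # False
print(meets_criteria([0.8, 1.1, 1.6, 0.9], [1200, 1500, 1100]))  # True
```

Averaging would hide exactly the spikes a transaction log cares about, which is why the "always" wording matters.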
ps: with NetApp it was a piece of cake (with FlexShare), without tuning the whole box and without hiring three clever guys to keep feeding the box oil and fuel.
I agree latency is also important... but responding to an RFP without performance criteria is not feasible... even with FlexShare, a FAS2040 might not have enough total IOPS to meet the requirement. The problem is that one responder may quote a FAS2040 and another a FAS3170... and then there's spindle count and type to meet the criteria. If the RFP stated 100 15K SAS drives, that would be useful. But often we just get "100 TB usable" and not much other info to work from... We won't respond without more info...
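This is why two responders can quote such different boxes: with only "100 TB usable" given, each has to pick their own workload assumptions. A rough spindle-count sketch under stated assumptions (the per-drive IOPS figures are common rules of thumb, not vendor specs, and the RAID write penalty depends on the RAID level chosen):

```python
import math

# Rule-of-thumb per-drive IOPS (assumptions, not vendor specifications).
DRIVE_IOPS = {"15k_sas": 180, "10k": 140, "7.2k_sata": 80}

def spindles_needed(host_iops, drive_type, write_fraction, raid_write_penalty):
    """Back-end IOPS = reads + writes * RAID write penalty (e.g. 4 for RAID 5)."""
    backend = (host_iops * (1 - write_fraction)
               + host_iops * write_fraction * raid_write_penalty)
    return math.ceil(backend / DRIVE_IOPS[drive_type])

# Example: 5000 host IOPS, 30% writes, RAID 5 (penalty 4):
# backend = 3500 + 6000 = 9500 -> 9500 / 180 -> 53 spindles of 15K SAS
print(spindles_needed(5000, "15k_sas", 0.30, 4))  # 53
```

Change the read/write mix or the RAID level and the spindle count moves substantially, which is exactly the information missing from a capacity-only RFP.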
I agree with Dimitris that performance requirements are often very vague. Many times I have seen RFPs that aim to replace aging or strained environments. When that is the case, data from the existing environment may be a good indicator of what is required, though not necessarily perfect data. What other types of data have been used when creating requests to scope "apples to apples" comparisons?
Good point... sometimes we can create our own analysis paralysis... especially with legacy systems. If the current solution is 10 old 10K disks and we respond with 12 15K disks, we know we will beat the current performance of their existing array with its legacy controller and old disks.
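The back-of-envelope math behind that confidence, using the same hypothetical rule-of-thumb per-drive figures as above (and ignoring the controller difference, which only widens the gap in the new array's favor):

```python
# Assumed per-drive random IOPS: ~140 for 10K, ~180 for 15K (rules of thumb).
old_array = 10 * 140  # 10 legacy 10K disks -> ~1400 IOPS
new_array = 12 * 180  # 12 new 15K disks    -> ~2160 IOPS

print(new_array > old_array)            # True
print(round(new_array / old_array, 2))  # ~1.54x the spindle throughput
```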