
Cloud ONTAP / Performance


Hi All,

I would like to ask about Cloud ONTAP's performance.
I ran a performance test, and the results are below.

Sequential read 32K : about 114MB/s, 3600 iops
Sequential write 32K : about 27MB/s, 900 iops   *****
Random read 32K : about 111MB/s, 3500 iops
Random write 32K : about 16MB/s, 540 iops   *****

Sequential read 4K : about 112MB/s, 28000 iops
Sequential write 4K : about 19MB/s, 4800 iops   *****
Random read 4K : about 111MB/s, 28000 iops
Random write 4K : about 10MB/s, 3100 iops   *****


I think the write performance is very low.
The main characteristic of the writes is high latency (over 80ms at the maximum).
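As a sanity check on the numbers above, throughput and IOPS should be consistent with each other at a given block size (MB/s × 1024 / block size in KB ≈ IOPS). A small sketch, using the figures from this test:

```python
def iops_from_throughput(mb_per_s, block_kb):
    """Approximate IOPS implied by a throughput figure at a given block size."""
    return mb_per_s * 1024 / block_kb

# 32K sequential read: 114 MB/s -> ~3648 IOPS, matching the ~3600 reported
print(round(iops_from_throughput(114, 32)))   # 3648
# 32K sequential write: 27 MB/s -> ~864 IOPS, matching the ~900 reported
print(round(iops_from_throughput(27, 32)))    # 864
```

So the reported IOPS and MB/s agree with each other; the question is only whether the absolute write numbers are expected.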


<Test Environment>
------------------------------
Cloud ONTAP / m3.2xlarge, standard hourly
aggregate / EBS 1TB x 3, 500GB x 6
FlexVol / 3 vols
Protocol / CIFS
tool / Iometer 2006.07.27

AD / Windows 2012R2 (c4.xlarge)
Client / Windows 2012R2, Windows 2008DC x2 (c3.8xlarge)
------------------------------

Is this write performance normal for Cloud ONTAP?

If it is not normal, where should I look?
I am currently looking at the boot volume (Provisioned-IOPS EBS, 42GB, 1250 iops).

Please give me advice.


Best regards

--kaneko

Re: Cloud ONTAP / Performance

The IOPS numbers you are seeing are in line with expectations. However, the ~80ms latency is high. Is that the average latency or the maximum? Can you share the min, avg, and max latencies observed? Further, a perfstat would help to drill down and understand what is going on. A single slow EBS volume can bring down the overall performance.
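To illustrate why a single slow EBS volume matters so much for writes: in a striped aggregate, a full-stripe write completes only when the slowest member volume acknowledges it. A simplified model (the latency figures here are hypothetical, not measured from this environment):

```python
def stripe_write_latency(member_latencies_ms):
    """Simplified model: a full-stripe write is gated by the slowest member."""
    return max(member_latencies_ms)

# eight healthy EBS volumes at ~5 ms, one degraded volume at ~80 ms
members = [5.0] * 8 + [80.0]
print(stripe_write_latency(members))  # 80.0 -- one slow member dominates
```

This is why per-volume latency (e.g., from a perfstat) is more telling than the aggregate average.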

 

With the upcoming COT support for R3 instances, we expect much lower latencies compared to m3 instances. However, in general, please note that the COT platform is not appropriate for 100%-write workloads. For write-intensive workloads we recommend our NPS platform.

Re: Cloud ONTAP / Performance

Hi Prashand-san,

 

Thank you for your response.


The ~80ms latency is probably the average.
I collected the latency numbers with the "statistics" command, but I couldn't find details of the CIFS latency counters on the support site or in the documentation.
     > statistics start -object cifs -vserver svm_ntap1 -sample-id 1
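For reference, a typical sequence to stop the sample and display its counters looks like the sketch below (the sample-id is from the command above; exact counter names vary by ONTAP version, so the catalog command can be used to list what is available):

```
> statistics stop -sample-id 1
> statistics show -sample-id 1
> statistics catalog counter show -object cifs
```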

 

I have already torn down the test environment because of cost concerns.

Therefore, I can't run perfstat.
I look forward to the release of R3 instance support for COT.


However, I would like to understand why an R3 instance would resolve this write-performance issue.

CPU and network were not the bottleneck during the write tests, and there was no difference in performance between one disk and two disks.

 

Best regards

 

--kaneko