2012-09-07 04:10 PM - last edited on 2016-06-30 03:58 PM by Li-Jacques
I have some questions on the ONTAP Edge Eval software that was given out at VMworld.
First, I'm wondering what the differences in functionality are between ONTAP Edge Eval and the full version of Data ONTAP-v you can download from the NOW site. It appears that ONTAP Edge Eval is a -T version, since I can license SnapMirror; is that true? Am I limited in the number of TBs I can put behind the Eval version? Will the Eval version shut down after 72 hours if I don't set up a SnapVault relationship?
Also, if I need to eval the full version of Data ONTAP-v, how do I go about getting a platform license?
2012-09-07 04:58 PM
Not sure, but it is publicly available to all!
Or direct from NetApp.
90 days to play with and learn.
2012-09-11 04:47 AM
It's true that this link (http://virtualstorageguy.com/2012/09/06/data-ontap-edge-vsa-is-available-to-download/) and other blogs give technical information on the Data ONTAP Edge Evaluation VSA (limited to 4 GB of RAM, 2 vCPUs, e1000 drivers, ...).
But what I'd like to know is the purpose of this evaluation VSA. Is it the real Data ONTAP Edge (with limitations), or is it more like a Data ONTAP Edge simulator?
After installation, a quick benchmark on my platform shows poor write performance and modest read performance.
My platform:
2 x ESX 4.1 U2 hosts, each with 2 x AMD Opteron 2220 2.8 GHz processors (4 cores)
Local disks were too slow, so I use an iSCSI array with hardware iSCSI initiators to host the Data ONTAP Edge (Eval) VSA.
The VSA uses an NFS VMkernel connection with 2 attached network adapters.
For this simple benchmark, I used the vCenter performance view during Storage vMotion actions (between the physical array and the VSA), dd with a 1024k block size, and sqlio.
Read performance on the iSCSI array: 82 MB/s
Storage vMotion:
Actions from the VSA (deduplication): 75 MB/s (seen in the vCenter performance view) - good
ESX1 (with the VSA), read performance: 42 MB/s max (Storage vMotion from the VSA to the physical array)
ESX2 (without the VSA), read performance: 19 MB/s max (1)
ESX1 (with the VSA), write performance: 14 MB/s max
ESX2 (without the VSA), write performance: 13 MB/s max
dd, from: time sh -c "dd if=/dev/zero of=ddfile bs=128k count=70000 && sync"
On native VMFS (physical iSCSI array), write performance: 38 MB/s
On the VSA (which is on the same datastore), write performance: 12 MB/s (2)
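For anyone who wants to repeat the dd test, it can be wrapped in a small script that computes throughput itself instead of reading it from vCenter. This is only a sketch: the file name and block count here are placeholders for a quick demo, and for a real run you should keep something like the original bs=128k count=70000 (roughly 8.5 GiB) so the array cache doesn't skew the result.

```shell
#!/bin/sh
# Sequential write throughput test, modelled on the dd command above.
# TESTFILE and COUNT are placeholder values for this small demo; a real
# test should write far more data than the array cache can absorb.
TESTFILE=ddfile
COUNT=800                           # 800 x 128 KiB = 100 MiB demo size
START=$(date +%s)
dd if=/dev/zero of="$TESTFILE" bs=128k count=$COUNT 2>/dev/null
sync
END=$(date +%s)
ELAPSED=$((END - START))
[ "$ELAPSED" -eq 0 ] && ELAPSED=1   # avoid divide-by-zero on fast runs
SIZE_MB=$((COUNT * 128 / 1024))
echo "wrote ${SIZE_MB} MB in ${ELAPSED}s: $((SIZE_MB / ELAPSED)) MB/s"
rm -f "$TESTFILE"
```

Because it forces a sync and divides by wall-clock time, this reports sustained throughput rather than whatever the page cache momentarily accepted.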
sqlio: tested inside a Windows 2008 VM with sqlio -k(W or R) -s60 -BN -frandom -o8 -b64 -LS -Fparam.txt and a 3 GB test file.
On ESX1 (with the VSA), read performance: 134 MB/s
On ESX2 (without the VSA), read performance: 83 MB/s
On ESX1 (with the VSA), write performance: 13 MB/s
On ESX2 (without the VSA), write performance: 13 MB/s
On the iSCSI array, read performance: 1000 MB/s (cache!)
On the iSCSI array, write performance: 43 MB/s
(1) It looks like network access to the VSA reduces performance.
(2) The physical array shows about 32 MB/s of reads during this test, so 12 MB/s on the VSA requires 32 MB/s on the physical array.
For me, the write performance is not enough (average latencies ~160 ms). But it could be due to my platform, and someone who can try it on an ESXi host with a good RAID controller (lots of cache, a performance model) could run the test and give us the results.
I'm waiting for white papers on optimizing performance (virtual disk size, number, layout, ...).
2012-09-11 05:49 AM
ONTAP Edge and ONTAP-v are essentially the same thing, though if I'm going to be pedantic, I'd say that ONTAP Edge is one of the implementations of the ONTAP-v technology. As with most NetApp products, different functionality can be enabled via different license keys, though for the moment there will be limits on what functionality is exposed as we make the ONTAP-v technology available for more use cases. For example, in this release we are only allowing 7-Mode for ONTAP Edge.
I'd also be careful about extrapolating too much from the results of single-threaded performance benchmarks. While they're interesting, they don't represent the majority of workloads we'd expect to see running on ONTAP Edge, and as a result the software has not been optimised for them. What it has been optimised for is multiple simultaneous requests with a high percentage of random reads and writes: some OLTP, some replication traffic, some CIFS requests, security checks, snapshot creation, and all the other things unified storage is used for. Doing that well, and responding fairly to multiple competing requests for resources, is hard to do, though I'll admit that measuring it with a benchmark that does all of this and rolls it up into a single, easily understood metric that everyone agrees is representative is going to be a bit of a challenge. If you do want to take a crack at this, I'd suggest using a tool like Iozone for your testing and measurements (Iometer has some interesting quirks in virtualized environments), and use a reasonably large working set size so you're not just testing your array cache.
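As a sketch of the kind of multi-threaded Iozone run suggested above (the flags are standard Iozone options, but the record size, thread count, file names, and sizes are my own assumptions, not anything NetApp-recommended):

```shell
# Sketch of a multi-threaded mixed-workload Iozone run, guarded so it
# degrades gracefully where iozone is not installed.
# -i 0 -i 2 : sequential write/rewrite, then random read/write phases
# -r 64k    : 64 KiB records, closer to OLTP-style I/O than 1 MiB dd blocks
# -t 4      : 4 concurrent threads, to exercise competing requests
# -s 16m    : tiny demo size; for a real test use a working set well beyond
#             the array cache (e.g. -s 8g per thread)
if command -v iozone >/dev/null 2>&1; then
    iozone -i 0 -i 2 -r 64k -s 16m -t 4 -F ioz1 ioz2 ioz3 ioz4
    rm -f ioz1 ioz2 ioz3 ioz4
else
    echo "iozone not installed (sketch only)"
fi
```

In throughput mode (-t) Iozone reports an aggregate figure per phase across all threads, which gets closer to the "multiple competing requests" behaviour described above than a single dd stream does.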
If you don't care about data management, replication, unified storage, and all the other good things that ONTAP gives you, and you really are interested in single-stream, large-block sequential I/O, then check out the E5400; I doubt you'll find a faster solution on the market.
2012-09-12 02:46 AM
Thank you, Martinj.
I don't need a powerful E5400, just minimum performance and, of course, the Data ONTAP functionality (I've registered on the right community), notably SnapVault. Random reads are fine, but servers always need sequential reads and writes too (local dumps, backups, copies, Storage vMotion, provisioning from templates, ...). In my experience, 13 MB/s of sequential writes is not enough to be comfortable with the storage for administrative tasks, but my platform doesn't meet the prerequisites. I'd like to know if someone can test on an ESXi host with a good RAID controller to compare native performance (VMFS directly on disk) with Data ONTAP Edge's.
At least I hope, by choosing the right hardware, to get close to (old) FAS2020 performance so that I can propose this solution.
NetApp Certified SAN Implementation Engineer