Simulator Discussions

Simulator disk limitation

daniel_kaiser
12,657 Views

Hi,

I was wondering if NetApp will ever revisit the disk size or disk quantity limitation of the simulator.  Right now, I feel the number of disks and their size is very limiting for what you can do with it, especially if you want to play around with VMware ESX, Windows host SnapDrive (6.3, which supports NFS), and other features.  It would be nice if the total limit of usable space could be bumped from the current ~50 GB (I think) to something more useful in a test/learning environment, say 150 GB.

32 Replies

daniel_kaiser
10,249 Views

Miroslav,

Thanks for the detailed instructions on how to add additional disks (1 GB) to the Data ONTAP 8 simulator.  Do you know if the following questions have been addressed in 8, and if not, when they will be?

1.  Per srnicholls, the Celerra sim doesn't have a 56 GB (1 GB x 56 disks) cap.  Are there any plans to bump this up so the simulator is more useful in today's testing environments and the scenarios mentioned by other NetApp community members?  Your instructions detailed 56 1 GB drives, so I assume the disk limits haven't changed.

2.  Is the 8 sim still limited to 2 MB of NVRAM?

3.  Do you have any suggestions on how to best leverage the 8 sim, with its current limitations, for a demo lab/proof of concept with VMware ESX, NetApp storage, replication, SMVI, SnapMirror, SRM, etc. (you know, everything NetApp typically offers and touts) without building multiple POC environments?

Thanks.

miroslav
9,436 Views

Hi Daniel,

I'll address your questions below, and have a couple of questions of my own:

1. I can say that we are definitely discussing increasing the capacity limit on the simulator to match modern capacities and requirements. I can't discuss details or dates on this forum right now, but I will say that a larger-capacity simulator will be tied to a future release of Data ONTAP; we are very unlikely to release an updated simulator off-cycle from Data ONTAP. Now for my question: I'm not familiar with the Celerra simulator. What are its capacity limits? That might help us determine where to move ours.

2. The virtual NVRAM has been increased sixteen-fold, from 2 MB to a whopping 32 MB. You can see some of that in the console messages when the simulator boots. I say "whopping" with a bit of humor, but that much vNVRAM is sufficient for very decent performance from a simulator. While we haven't done a formal performance profile of the simulator, I doubt the vNVRAM is the bottleneck for the DOT8 sim. In my experience, the DOT8 simulator runs as fast as the disk on which the VM lives; when it's on an SSD, performance is very good.

3. I have some suggestions, but as you imply, it would all be much easier with increased capacity limits. We understand that, and want to make the simulator a great tool for demonstrating, learning, and testing the entire solution as you described. For now, I would suggest:

    1. Run through the capacity increase process to max out the capacity on the simulator.
    2. Do the calculations to see whether RAID4 would help, given the number of RAID groups and parity disks. Consider putting everything into a single aggregate and maxing out the RAID group sizes. This goes against resiliency best practices, but for the simulator that isn't a primary concern.
    3. If you don't need the extra snapshot reserve, consider turning it down to make more of the capacity available to the active volumes.
    4. Use the new DeDupe and Compression licenses to get more effective capacity out of the volumes that you create. You may get a bit more capacity by putting everything into a single volume with dedupe and compression enabled, but that might not be what you want to test and certainly isn't best practice for the solution areas you describe. If you know that dedupe or compression won't work for part of your dataset, consider splitting that data into a separate volume and using the space-efficiency features only where they help.
    5. Thin provision everything you can and keep an eye on the actual free space in the aggregate(s).
    6. You may simply need to create multiple simulated POC environments, each testing only a portion of the entire solution set. Consider creating a VM team or vApp that you can clone or deploy from a template to make that easier.
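
The RAID-group arithmetic behind suggestion 2 can be sanity-checked with a quick script. This is my own back-of-the-envelope sketch: the 56 x 1 GB disk count comes from earlier in the thread, parity overhead is 1 disk per group for RAID4 and 2 for RAID-DP, and the raidsize values are illustrative, not prescribed.

```python
import math

def usable_disks(total_disks, raid_group_size, parity_per_group):
    """Data disks left over after parity, given a maximum RAID group size."""
    groups = math.ceil(total_disks / raid_group_size)
    return total_disks - groups * parity_per_group

TOTAL = 56  # simulator maximum: 56 x 1 GB virtual disks

# RAID-DP at the default raidsize of 16: 4 groups, 8 parity disks
print(usable_disks(TOTAL, 16, 2))  # -> 48 data disks
# RAID-DP with raidsize maxed out to 28: 2 groups, 4 parity disks
print(usable_disks(TOTAL, 28, 2))  # -> 52 data disks
# RAID4 at raidsize 14: 4 groups, 4 parity disks
print(usable_disks(TOTAL, 14, 1))  # -> 52 data disks
```

So with 1 GB disks, maxing out the RAID-DP group sizes recovers about the same capacity as switching to RAID4, which is worth knowing before trading away double-parity protection, even on a simulator.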

So, what do you think?

Take care,

Miroslav

kborovik
9,438 Views

Miroslav,

I would suspect that having 14 large disks would be better than having 28 smaller ones. I have run tests on SIM801 and got the following results (see screenshots):


As you can see, SnapMirror replication between two SIM801s requires approximately ~4.5K IOPS. I would guess this is because Data ONTAP needs to commit every block written to the "virtual disk". I was able to get 10 Mb/s out of SIM801, but the underlying physical storage needs to deliver 4.5K IOPS with very low latency (in my case, 0.3 ms).

srm-na-01> aggr status -v

           Aggr State           Status            Options
          aggr0 online          raid_dp, aggr     root, diskroot, nosnap=on,
                                32-bit            raidtype=raid_dp, raidsize=16,
                                                  ignore_inconsistent=off,
                                                  snapmirrored=off,
                                                  resyncsnaptime=60,
                                                  fs_size_fixed=off,
                                                  snapshot_autodelete=on,
                                                  lost_write_protect=on,
                                                  ha_policy=cfo
                Volumes: vol0
                Plex /aggr0/plex0: online, normal, active
                    RAID group /aggr0/plex0/rg0: normal
          aggr1 online          raid_dp, aggr     nosnap=on, raidtype=raid_dp,
                                64-bit            raidsize=25,
                                                  ignore_inconsistent=off,
                                                  snapmirrored=off,
                                                  resyncsnaptime=60,
                                                  fs_size_fixed=off,
                                                  snapshot_autodelete=on,
                                                  lost_write_protect=on,
                                                  ha_policy=cfo
                Volumes: nfs_thin_01, nfs_sis_01, nfs_zip_01, vmfs_thin_01,
                         vmfs_sis_01
                Plex /aggr1/plex0: online, normal, active
                    RAID group /aggr1/plex0/rg0: normal
srm-na-01>
srm-na-01> aggr status -r
Aggregate aggr0 (online, raid_dp) (block checksums)
  Plex /aggr0/plex0 (online, normal, active, pool0)
    RAID group /aggr0/plex0/rg0 (normal)
      RAID Disk Device  HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
      --------- ------  ------------- ---- ---- ---- ----- --------------    --------------
      dparity   v5.16   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
      parity    v5.17   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
      data      v5.18   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
Aggregate aggr1 (online, raid_dp) (block checksums)
  Plex /aggr1/plex0 (online, normal, active, pool0)
    RAID group /aggr1/plex0/rg0 (normal)
      RAID Disk Device  HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
      --------- ------  ------------- ---- ---- ---- ----- --------------    --------------
      dparity   v4.16   v4    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
      parity    v5.19   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
      data      v4.17   v4    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
      data      v5.20   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
      data      v4.18   v4    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
      data      v5.21   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
      data      v4.19   v4    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
      data      v5.22   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
      data      v4.20   v4    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
      data      v5.24   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
      data      v4.21   v4    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
      data      v5.25   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
      data      v4.22   v4    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
      data      v5.26   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
      data      v4.24   v4    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
      data      v5.27   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
      data      v4.25   v4    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
      data      v5.28   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
      data      v4.26   v4    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
      data      v5.29   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
      data      v4.27   v4    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
      data      v5.32   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
      data      v4.28   v4    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
      data      v4.29   v4    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
      data      v4.32   v4    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
Pool1 spare disks (empty)
Pool0 spare disks (empty)
srm-na-01>
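
To put those numbers in perspective, here is a rough latency/IOPS sketch (my own arithmetic, not output from any NetApp tool). With a single outstanding I/O, sustaining ~4.5K IOPS would require about 0.22 ms per I/O, so a 0.3 ms backing store only keeps up because multiple I/Os are in flight at once:

```python
def max_serial_iops(latency_ms):
    """IOPS ceiling with a single outstanding I/O at the given latency."""
    return 1000.0 / latency_ms

def required_latency_ms(target_iops, queue_depth=1):
    """Per-I/O latency needed to reach target_iops at a given queue depth."""
    return 1000.0 * queue_depth / target_iops

# ~4.5K IOPS observed during SnapMirror between two SIM801 instances
print(round(required_latency_ms(4500), 2))                 # serial: 0.22 ms
print(round(max_serial_iops(0.3)))                         # 0.3 ms -> 3333 IOPS
print(round(required_latency_ms(4500, queue_depth=2), 2))  # QD 2: 0.44 ms
```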

ken_foster
9,438 Views

I actually like the idea of 28 to 56 10 GB disks as opposed to fewer, larger ones.  From a training perspective, more disks give me the ability to demonstrate different configurations.  Having only 14 disks means I have to change the default RAID group size to show newbies how Data ONTAP automatically creates new RAID groups when an aggr is expanded, or to show adding one more disk when a RAID group is completely full (to demonstrate a failure).  These things, while doable, become a bit more staged when I have to change the default RAID group size up front.  I'm all for increasing the size of each spindle, but I like the idea of leaving the maximum number of spindles at 28/56.
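
The RAID-group rollover behavior described above can be illustrated with a toy model (my own simplification for teaching purposes; real Data ONTAP disk placement is more involved):

```python
def add_disks(existing_disks, disks_to_add, raidsize):
    """Return the RAID group sizes after adding disks one at a time.

    Loosely mimics how Data ONTAP fills the last RAID group up to
    `raidsize` and then opens a new group for the next disk added.
    """
    groups = []
    for _ in range(existing_disks + disks_to_add):
        if groups and groups[-1] < raidsize:
            groups[-1] += 1
        else:
            groups.append(1)
    return groups

# 16 disks at the default raidsize of 16, then add one more:
print(add_disks(16, 1, 16))  # -> [16, 1]: a second RAID group appears
```

This is exactly why a smaller disk pool forces you to lower the default raidsize first: with only 14 disks you never hit the rollover at 16 without staging it.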

kborovik
9,438 Views

I like the idea of having many options as well, but my primary concern is the inability to run complicated simulations.

In most of my runs with SIM734, SMSP was not able to complete a full backup due to the sim's slow performance. That is a big problem when a SharePoint backup fails during a product demonstration; after the awkward silence, it is very difficult to explain why the demo failed.

Perhaps we can find some "computer enthusiasts" who might consider running the sim as production storage, but there is more harm done by the inability to properly train SEs and reliably demonstrate NetApp's products than by the "perceived revenue loss" from "computer enthusiasts".

At the end of the day, I would like to be able to comfortably emulate the following products with the NetApp sim:

  1. VMware SRM
  2. SnapManager for SharePoint
  3. SnapManager for SQL
  4. SnapManager for Exchange
  5. SnapManager for Oracle
  6. Operations Manager

daniel_kaiser
9,438 Views

Hey Miroslav,

A user by the name of srnicolls mentioned that the Celerra sim has no such limitations earlier in this thread.  I don't have access to EMC's website to download the Celerra sim, but this is the closest info that I've found: http://virtualgeek.typepad.com/virtual_geek/2010/09/emc-celerra-vsa-uberv3-dart-60-now-available.html

Dan

adamvh
9,436 Views

Miroslav,

Many companies release virtual appliances of their products with minimal restrictions, and I don't buy the idea of people using an ONTAP simulator in production, or of lost sales because the simulator had no restrictions.

The EMC Celerra virtual appliance (or the EMC UBER VSA, as it used to be known) was updated to run the DART 7 code. I don't believe it has any limits.

The hardware appliance is always going to perform better, because things have to be emulated in a virtual appliance.

I have done deduplication testing with the EMC VNX virtual appliance on volumes with 10 terabytes of data.

http://nickapedia.com/2011/04/08/new-uber-model-uber-vnx-nfs-v1/

When I want to compare deduplication ratios between NetApp and EMC, to see which is the better product to continue investing money into, I can't use your simulator that way.

I think you do more harm than good by limiting the simulator. The EMC VSA is heavily used in the blogging community as a virtual SAN appliance for home labs.

EMC gets a lot of exposure that way: non-customers use the Unisphere interface, play with the virtual SAN experience, enjoy it, then go purchase the real thing.

NetApp fails on this for two reasons:

1. The simulator is not publicly available.

2. Stupid limits.

In case you missed it, the EMC Celerra virtual appliance has been free to the public for over two years.

You can get the download links off Nicholas Weaver's website, or here it is for those who are lazy:

http://ftp.vspecialists.com/public/vApps/VNXVSA/UBER_VNX(NFS)_v1.ova <- vmware esx image 2.2gb

adamvh
9,438 Views

Hah, well noticed: Miroslav has moved on to CORAID.

It would be nice if someone from the ONTAP simulator team would care to comment anyway.

paulhar
8,743 Views

The ftp site says "220-This is a private system - No anonymous login"

> Simulator is not public available.

It doesn't appear that EMC makes theirs public either. Nicholas may have had permission to publish one back in 2011, but it certainly doesn't look current; e.g. it's NFS/CIFS only, not block. It seems you need a Powerlink account to access the current ones, just like we require a "support" account.

> 2. Stupid limits.

The limits are there mostly because, for the purposes we originally intended, there is more than enough space: internal testing, building workflows, experimenting with features that have specific special requirements (e.g. SnapLock), etc.

I'm not saying I wouldn't like a much bigger simulator as well; it's just not seen as a high priority.

adamvh
5,263 Views

OK, how about you just try clicking on the download link? It's an HTTP download, not an FTP download, regardless of the DNS name of the host.

I just did a wget on the link to double-check it works, and it's fine. It's 100% free, I assure you.

The EMC Celerra appliance is mentioned in various VMware training videos, including those produced by Train Signal.

There are plenty of threads on the VMware forums where people are using the EMC Celerra VSA with iSCSI/NFS datastores to host their lab environments.

72.15.252.40 is the vSpecialists server (run by Chad Sakac); they were originally hosted on EMC servers.

They have always been 100% free; many thousands of users are running them in the community.

At a recent VMUG we talked about which virtual SAN appliances people used in labs, and the Celerra VSA was #1.

Here is the previous Celerra version. http://72.15.252.40/public/vApps/CelerraVSA/CelerraVSA_6.0.36.4-UBERv3.2.ova

Public