I was wondering if NetApp will ever revisit the disk size or disk quantity limitation for the simulator. Right now, the number of disks and their size is very limiting, especially if you want to play around with VMware ESX, Windows host SnapDrive (6.3, which supports NFS), and other features. It would be nice if the total usable space could be bumped from the current limit of roughly 50 GB (I think) to something more useful in a test/learning environment, say 150 GB.
I'm trying to create a scenario that demonstrates NetApp ONTAP with SRM on VirtualCenter, failing over and failing back (multi-sync volumes) using iSCSI and NFS (so block and file), along with a demonstration of OTV from Cisco, making it close to a real-world turnkey architecture (i.e., show me the money and this can be rolled out ASAP).
The problem, of course, is space. A three-server AD hierarchy, with SQL, file servers, and myriad other test VMs to make the simulation as real as possible, just won't fit in 50 GB, even deduped. Then having to tear that down to do more SnapMirror testing or such means I have to keep multiple sims archived and bring them back online every time a new question is raised or a change to the architecture is planned. Add to this Win2k3 and Win2k8 with their bloated requirements, and nothing really fits. And then do you want to use VSC to test a few things out? Well, make another sim then...
I'm not saying an unlimited license should be given, but having the option to size disks up to, say, 36 GB (so RAID sets are even closer to real-life comparisons) and allowing up to, say, 360 GB total would mean more options for cached cross-site volumes, LUNs, and shares within the same sim, to better emulate actual real-world designs and possibilities.
I can see the challenge NetApp faces with such a robust design being surreptitiously used in production, but at some point the lack of options for customers to better mirror their environments and features for testing holds back the uptake of NetApp's technology.
A 28 GB limit for active/active virtual sims is fine if you want to play around with NFS, CIFS, and other basic configurations for testing, or to sharpen your skill set without touching your production systems. But for all the other nifty software/tools (VSC, SnapDrive for Windows, ESX datastores, SMVI, etc.), I really feel it's impossible to integrate and test on two active/active sims with a 28 GB limit. To be honest, I just fired up the simulator yesterday and am still working through the basics, for which 28 GB should be sufficient, but if I wanted to do all the other cool stuff, 28 GB is limiting. Like Srnicholls stated, I find it very difficult to manage multiple sims for different test/config/POC scenarios.
Another question: I didn't see a sim license for SnapDrive for Windows. Did I overlook it, or is it not included?
What use cases for the simulator are eliminated or made much more difficult due to the limitations? What Srnicholls stated.
What capacity limits would be sufficient? What Srnicholls stated, or 200-300 GB.
If you needed to choose, would it be better to allow more disks or larger-capacity disks? I'm not sure it really matters since it's all virtual, but I would lean toward more disks.
What would be a good mechanism to keep simulators from being used for production?
1. You can't call in for support.
2. In terms of performance, it's dependent on the physical machine the sim is hosted on.
3. Even at 300 GB, I find it hard to see it being used in a normal production environment. People are storing way more data now than they did in the past.
To me the limiting factor is the 2 MB NVRAM backing that the sim has; trying to copy anything to the sim over CIFS/iSCSI is painfully slow. Granted, I run the sim inside a VM, but hours to copy a couple of gigabytes to a CIFS share seems a little pokey to me. I tried to use it once to host a virtual machine over iSCSI and gave up on that real quick. Mainly all I've been able to use it for is verifying command syntax for things I don't like to do on the real hardware without knowing the end result (and since most of my storage is FC-attached, I don't have a lot to verify against on the sim).
Yep, and I even tried to be sneaky and cat the contents of ,nvram a couple of times into a new file. The sim executable has a check for it, and the result is logged; runsim.sh then checks the logs, and if it sees complaints about the NVRAM size, it deletes the NVRAM file for you. Guess they've met my type before.
To your SnapDrive question: I'm not 100% sure about this, but I think there are licensing fees and royalties owed to parties outside of NetApp for SnapDrive for Windows, so NetApp can't give away those licenses like we do with many of the other simulator licenses. I'm not a lawyer, don't play one on TV, and haven't been directly involved with anything related to SnapDrive for Windows royalties; this is just based on some hallway conversations and speculation on my part.
Thanks, everybody, for writing up your reasons for needing more capacity on the simulator. Please keep the additional reasons coming if you have a particular use case, even if something similar has already been stated. Multiple requests for the same use case help us gauge demand and better prioritize.
I do some infrastructure architecture work and would like to be able to set up a demo environment where I can show customers how to design their IT environment with regard to backup/restore/disaster recovery. VMware is almost always used to run the servers, while the storage can differ.
Setting up a demo environment with lab VMs, a Windows DC, VMware vCenter plus its DB, ESX hosts, NetApp storage, and NetApp backup/replication software (typically SMVI, SnapMirror, SnapRestore, OSSV, SRM) needs a lot more space than what's available today in the simulator. 200 GB usable would probably be enough (today...).
Preferably, the simulator should be unlimited in all ways (i.e., only as limited as real hardware is today). It would be nice to be able to configure which hardware the simulator simulates, and have it behave like that hardware would.
I also understand that you need some way of preventing people from running production workloads on the simulator. Here are a few suggestions to do that:
1. Forcibly power off the simulator after one week of uptime.
2. Disconnect all network interfaces after one week, requiring a reboot to restore connectivity.
3. Randomly ask a simple question that the user has to answer; if it is left unanswered for more than 24 hours, the filer will halt.
I use the simulator to demonstrate SMSQL, SME, SMO, SnapDrive, SMVI, VSC, and many of the other advanced tools. I see two main issues with the current simulator:
1: Lack of disk space, caused by the small disks.
2: Lack of performance, caused by the 2 MB NVRAM.
I understand the need to keep people from using it as a production device, but these two limitations sometimes cause demonstrations to go negative. I don't know how many times I've heard comments about the speed of the simulator while demonstrating the ability of SME or SMSQL to migrate databases from local storage to SAN storage. For the most part you can joke about it and laugh it off, but the negative feeling is there. So what are the possible fixes?
For item number 1, you could increase the size of each disk to 5 or 10 GB. At most, even with the sim hack, that would give someone 560 GB of raw space. In today's day and age, no one is going to use that for production; it's barely big enough to hold a decent MP3 collection. But it would let us demonstrate more advanced features and add more virtual machines into the environment, for example when demonstrating a virtualized SMMOSS server farm. Think about the dog-and-pony show you could give: virtualized vCenter, virtualized vSphere servers, virtualized SQL servers, virtualized simulator. I could do a complete demo from my laptop for a customer on how to set up a virtual SMMOSS/SMVI/VSC environment. That's some power. I can't do it now because I can't really set up a good SMMOSS farm in a VM, in an NFS datastore, running on the simulator. Not enough room.
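The 560 GB figure works out if the sim hack allows 56 disks at the proposed 10 GB each; a quick sanity check (the 56-disk count is my inference from the numbers in this thread, not a documented limit):

```python
# Back-of-the-envelope capacity math for the proposal above. The 56-disk
# count is inferred from the 560 GB raw figure and may not match the sim.
DISKS_WITH_HACK = 56
GB_PER_DISK = 10  # the larger of the two proposed disk sizes

raw_gb = DISKS_WITH_HACK * GB_PER_DISK
print(f"{raw_gb} GB raw")  # before RAID parity, spares, and WAFL overhead
```

And that's raw space; usable space after parity and overhead would be noticeably less, which makes the production-use risk even smaller.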
For item number 2, you could increase the NVRAM. The 8.0 simulator is at 32 MB now, and its performance is MUCH better than the 7.3 simulator's. I believe 64 to 128 MB would be very nice in the simulator (just a guess, as I have nothing to base this on). It still wouldn't be so fast as to endanger the physical storage system market, but it would be AWESOME for demos. Imagine demonstrating the things mentioned in the item 1 example with this new performance boost. Customers would love it, students would love it, and nerds like me would love it. And in the end it still wouldn't be anywhere near as fast as the cheapest NAS box I could buy, nor as fast as any I could build using popular (unmentionable) software that's available.
Let's recap: with a modest increase in disk size, we gain the ability to do more complex customer demos with higher-end NetApp software solutions. Because it's only a modest increase, the unapproved-usage risk is negligible, but the potential market penetration is fantastic. With a modest increase in NVRAM, you could double or triple the current performance, allowing more complex demonstrations and letting customers see more of that incredible software, again with negligible risk of unapproved usage. From my (limited) perspective, all I can see is upside.