I was wondering if NetApp will ever revisit the disk size or disk quantity limitations for the simulator. Right now, the number and size of disks are very limiting, especially if you want to play around with VMware ESX, Windows host SnapDrive (6.3, which supports NFS), and other features. It would be nice to bump the total usable space from the current limit of roughly 50 GB (I think) to something more useful in a test/learning environment, say 150 GB.
It's not likely!
In the old days there were 10 GB disks for the sim, but NetApp found that folks were using the sim in production situations. Hence the current limitations of 1 GB simulated drives and only 28 simulated disks.
Given that you're quoting a 50 GB size, I'd guess you've already found the hack to change maxdisks to 56. But you won't be able to do cluster failovers.
I don't find it to be a limitation for SnapDrive or any other software you want to use with the sim. Just create smaller LUNs.
At your service,
So I'm wondering if the people who are having issues with the current disk limitations could write up a bit more detail about what they are trying to do and can't due to the limits:
What use cases for the simulator are eliminated or made much more difficult due to the limitations?
What capacity limits would be sufficient?
If you needed to choose, would it be better to allow more disks or larger capacity disks?
What would be a good mechanism to keep simulators from being used for production?
If we can get enough useful feedback, then it'll be easier to make a case for changes in future versions of the simulator.
Thanks and take care,
I haven't seen any cases where the simulator can't be implemented to do test/lab/demos of NetApp technology.
We've been using simulators very extensively over the past two years. We use them for classroom labs for many of the NetApp classes we teach. We implement SnapMirror, SnapVault, SMO, SME, SMSQL, SMVI, and Operations Manager in VM environments using the simulator...
At your service,
I have often wondered why NetApp would not release a less limited version of this software, maybe for a fee? I see HP making money with the LeftHand software to create very rudimentary DR sites...
A great example is SRM proof of concept.
Trying to create a scenario which demonstrates NetApp ONTAP with SRM on VirtualCenter, failing over and failing back (multi-sync volumes) using iSCSI and NFS (so block and file), plus a demonstration of OTV from Cisco, makes it close to a real-world turnkey architecture (i.e., show me the money and this can be rolled out ASAP).
The problem of course is space. Getting a three-server AD hierarchy, plus SQL, file servers, and myriad other test VMs to make the simulation as real as possible just won't fit in 50 GB. Even deduped. Then having to tear that down to do more SnapMirror testing or such means I have to keep multiple sims archived and bring them back online every time a new question is raised or a new change to the architecture is planned. Add to this Win2k3 and Win2k8 and their bloated requirements, and nothing really fits. And then do you want to use VSC to test a few things out? Well, make another sim then....
I'm not saying an unlimited license should be given, but having the option to size disks up to, say, 36 GB (so RAID sets can be even closer to real-life comparisons) and allowing up to, say, 360 GB total would mean more options for cached cross-site volumes, LUNs, and shares within the same sim, to better emulate actual real-world designs and possibilities.
I can see the challenge NetApp has with such a robust design being surreptitiously used in production, but at some point the lack of options for customers to better mirror environments and features for testing holds back the uptake of NetApp's technology.
Srnicholls, great write-up.
A 28 GB limit for active/active virtual sims is fine if you want to play around with NFS, CIFS, and other basic configurations, to test or spruce up your skill set without touching your production systems. But for all the other nifty software/tools (VSC, SnapDrive for Windows, ESX datastores, SMVI, etc.), I really feel it's impossible to integrate and test on two active/active sims with a 28 GB limit. To be honest, I just fired up the simulator yesterday and am still working through the basics, for which 28 GB should be sufficient, but if I wanted to do all the other cool stuff, then 28 GB is limiting. Like Srnicholls stated, I find it very difficult to manage multiple sims for different test/config/POC scenarios.
Another question: I didn't see a sim license for SnapDrive for Windows. Did I overlook it, or is it not included?
What use cases for the simulator are eliminated or made much more difficult due to the limitations? What Srnicholls stated.
What capacity limits would be sufficient? What Srnicholls stated, or 200-300 GB.
If you needed to choose, would it be better to allow more disks or larger capacity disks? I'm not sure it really matters since it's all virtual, but I would lean toward more disks.
What would be a good mechanism to keep simulators from being used for production?
1. You can't call in for support.
2. In terms of performance, it's dependent on the physical machine the sim is hosted on.
3. Even at 300 GB, I find it hard to see it actually being used in a normal production environment. People are storing way more data now than they did in the past.
To me the limiting factor is the 2 MB NVRAM backing that the sim has; trying to copy anything to the sim over CIFS/iSCSI is painfully slow. Granted, I run the sim inside a VM, but hours to copy a couple of gigabytes off to a CIFS share seems a little pokey to me. I tried to use it once to make a virtual machine with iSCSI, and gave up on that real quick. Mainly all I've been able to use it for is verifying command syntax for things I don't like to do on the real hardware without knowing the end result (and since most of my storage is FC attached, I don't have a lot of things to verify against on the sim).
Ah, NVRAM! Well, that also makes more sense now, since a 15 GB VM copy over NFS takes 4+ hours. Add this to the list of 'Yes, please' items for the next sim.
Why cripple the performance of something that is used by (and accessible only to) current customers, and may influence their view of the software in the long term?
I can only assume that the sim development resources are pretty thin, which is why it's not under constant update (c'mon, I'd love a VAAI-enabled 8.0.1 sim!).
I hope the higher-ups see that releasing a better simulator will only strengthen the core competencies and architectural designs of the community that works with and endorses NetApp products.
Yep, and I even tried to be sneaky and cat the contents of ,nvram a couple of times onto a new file. The sim executable has a check for it, and the output is logged, then the runsim.sh checks the logs, and if it sees complaints about the nvram size, it deletes the nvram file for you. Guess they've met my type before.
No, I added that because we changed the NVRAM size requirements between one sim version and the next, and without that check it was even harder for end users to upgrade. The setup.sh and runsim.sh scripts aren't really the simulator; those are just pieces I added around it to make it usable without knowing all the magical CLI foo that's needed for it.
To your SnapDrive question: I'm not 100% sure about this, but I think there are licensing and royalties to parties outside of NetApp tied to SD for Windows. So NetApp can't give away those licenses like we do with many of the other Simulator licenses. I'm not a lawyer, don't play one on TV, and haven't been directly involved with anything related to SD Windows royalties...this is just based on some hallway conversations and speculation on my part.
Thanks everybody for writing up your reasons for needing more capacity on the simulator. Please keep the additional reasons coming if you have a particular use case, even if something similar has already been stated. Multiple requests for the same use case will help us gauge demand and better prioritize.
Thanks and take care,
I do some infrastructure architecture work and would like to be able to set up a demo environment where I can show customers how to design their IT environment with regard to backup/restore/disaster recovery. VMware is almost always used to run the servers, while the storage can differ.
Setting up a demo environment with lab VMs, a Windows DC, VMware vCenter plus its DB, ESX hosts, NetApp storage, and NetApp backup/replication software (typically SMVI, SnapMirror, SnapRestore, OSSV, SRM) needs a lot more space than what's available today in the simulator. 200 GB usable would probably be enough (today..)
Preferably the simulator would be unlimited in all ways (that is, only as limited as real hardware is today). It would be nice to be able to configure which hardware the simulator simulates, and have it behave like that hardware would.
I also understand that you need some way of preventing people from running production workloads on the simulator. Here are a few suggestions to do that:
1. Forcibly power off the simulator after one week of uptime.
2. Disconnect all network interfaces after one week, requiring a reboot to restore connectivity.
3. Randomly ask a simple question that the user has to answer; if it's left unanswered for more than 24 hours, the filer halts.
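Suggestion 1 is simple enough to sketch. A hypothetical watchdog inside the sim could compare uptime against a cutoff and trigger a halt; nothing like this exists in the actual simulator, and the names here are made up for illustration:

```python
# Hypothetical uptime watchdog, illustrating suggestion 1 above.
# This is purely a sketch of the policy (halt after one week of
# continuous uptime), not anything the real simulator contains.
ONE_WEEK_SECONDS = 7 * 24 * 60 * 60

def should_halt(uptime_seconds: int) -> bool:
    """Return True once the sim has been up for a week or more."""
    return uptime_seconds >= ONE_WEEK_SECONDS

# A sim up for 8 days would be powered off; one up for an hour would not.
print(should_halt(8 * 24 * 60 * 60))  # True
print(should_halt(3600))              # False
```

The same shape would work for suggestion 2, with the action being "down the network interfaces" instead of a halt.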
Now that ONTAP 8.0.1 sim is released to the masses, do you know if it still has the same sim limitations as 7.3.x? Any improvements or new features in terms of sim functionality? I haven't had a chance to download 8.0.1, but I will be doing so shortly. Thanks.
To your direct question, the limitation of 56x1GB disks is still part of the Data ONTAP 8 simulator. We're working to increase this in the future, but any updates would need to be in concert with a new Data ONTAP release. We didn't have enough time to work through the processes needed to make that happen before the DOT8.0.1 code was locked down.
As far as enhancements in the DOT8.0.1 simulator, there are two major ones that come to mind:
There's a huge list of other bug fixes and enhancements that are part of Data ONTAP 8.0.1, but I'm not sure what most of those are. The best place to find out is the Data ONTAP 8.0.1 Release Notes.
Take care and hope that helps,
I use the simulator to demonstrate SMSQL, SME, SMO, SnapDrive, SMVI, VSC, and many of the other advanced tools. I see two main issues with the current simulator:
1: Lack of disk space, caused by the small disks.
2: Lack of performance, caused by the 2 MB NVRAM.
I understand the need to keep people from using it as a production device, but these two limitations sometimes cause demonstrations to go negative. I don't know how many times I've heard comments about the speed of the simulator while demonstrating the ability of SME or SMSQL to migrate databases from local storage to SAN storage. For the most part you can joke about it and laugh it off, but the negative feeling is there. So what are the possible fixes?
For item number 1, you could increase the size of each disk to 5 or 10 GB. At most, even with the sim hack, that would give someone 560 GB of raw space. In this day and age no one is going to use that for production; it's barely big enough to hold a decent MP3 collection. But it would let us demonstrate more advanced features and add more virtual machines into the environment, for example when demonstrating a virtualized SMMOSS server farm. Think about the dog and pony show you could give: virtualized vCenter, virtualized vSphere servers, virtualized SQL servers, virtualized simulator. I could do a complete demo from my laptop for a customer on how to set up a virtual SMMOSS/SMVI/VSC environment. That's some power. I can't do it now because I can't really set up a good SMMOSS farm in VMs, in an NFS datastore, running on the simulator; there's not enough room.
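To put numbers on that 560 GB figure, here's the back-of-the-envelope arithmetic. The 56-disk count is the hacked maxdisks limit discussed earlier in this thread; the 5 GB and 10 GB sizes are the hypothetical increases proposed above:

```python
# Raw-capacity math for the simulator under different per-disk sizes.
# 56 disks is the maxdisks hack limit mentioned in this thread; the
# larger disk sizes are the hypothetical increases proposed above.
MAX_DISKS = 56

for disk_gb in (1, 5, 10):
    raw_gb = MAX_DISKS * disk_gb
    print(f"{MAX_DISKS} disks x {disk_gb} GB = {raw_gb} GB raw")
# 56 disks x 1 GB = 56 GB raw
# 56 disks x 5 GB = 280 GB raw
# 56 disks x 10 GB = 560 GB raw
```

Note this is raw capacity; usable space after RAID-DP parity, spares, and aggregate overhead would be noticeably lower.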
For item number 2, you could increase the NVRAM. The 8.0 simulator is at 32 MB now and the performance is MUCH better than the 7.3 simulator. I believe 64 to 128 MB would be very nice (just a guess, as I have nothing to base this on). It still wouldn't be so fast as to endanger the physical storage system market, but it would be AWESOME for demos. Imagine demonstrating the things mentioned in the item 1 example with this new performance boost. Customers would love it. Students would love it. And nerds like me would love it. Even then it wouldn't be anywhere near as fast as the cheapest NAS box I could buy, nor as fast as any I could build using popular (unmentionable) software that's available.
Let's recap: with a modest increase in disk size, we gain the ability to do more complex customer demos with higher-end NetApp software solutions. Because it's only a modest increase, the unapproved-usage risk is negligible, but the potential market penetration is fantastic. With a modest increase in NVRAM you could double or triple the current performance, allowing more complex demonstrations and letting customers see more of that incredible software, again with negligible risk of unapproved usage. From my limited perspective, all I can see is upside.
I can confirm that the performance limitations in the 7G sim are not caused by the amount, or lack, of NVRAM. Many years ago I spent time with one of the developers trying to figure out what the issues with sim performance were, since I was running the entire thing out of a RAM disk and still getting hardly any performance. The fake-disk code added for the 7G version just can't cope particularly well with many IOPS: it makes requests serially, has an extremely short disk queue, etc.
Performance is mostly not an issue for the ONTAP 8 sims.
Hi Daniel, folks,
This doesn't completely address the topic, but we've had frequent requests to document the procedure for adding more simulated disks to the Data ONTAP 8 simulator. I've finally written up a document with the procedure:
Please let me know if you have any problems with the procedures!
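For flavor, the general shape of the procedure on the 8.x vsim looks roughly like the following. This is a hedged sketch from memory, not the official write-up: the disk counts, type codes, and adapter numbers below are illustrative only, and the type codes in particular vary by sim release, so check the helper's built-in help for the values your release actually supports.

```
# From the Data ONTAP 8.x simulator, drop into the FreeBSD systemshell
# (the diag account must be unlocked first) and use the vsim_makedisks
# helper that ships with the vsim. All values below are illustrative.
systemshell                           # enter the systemshell
setenv PATH "${PATH}:/usr/sbin"       # vsim_makedisks is outside the default PATH
cd /sim/dev
sudo vsim_makedisks -h                # list the supported disk type codes
sudo vsim_makedisks -n 14 -t 23 -a 2  # e.g. add 14 disks of one type on adapter 2
exit
# After a reboot, the new disks show up as unowned and can be assigned:
#   disk assign all
```

Please see the document itself for the authoritative steps; the sketch above is only meant to show the overall flow.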