Recipes for adding disks to a Data ONTAP 8 Simulator

by miroslav (Former NetApp Employee) on 2011-01-29 07:22 AM


Miroslav Klivansky
NetApp Technical Marketing Engineer


Part 1: Background

Data ONTAP 8 provides a user-mode system shell for rare diagnostic tasks. While most CPU cycles are spent in the various Data ONTAP kernel modules, a user space exists and is used to run some processes and for diagnostics. The simulator takes advantage of this user space to implement simulated disks, which are kept as files in a special directory. The default simulator ships with 28 simulated disks of 1GB each, and the count can be increased to 56; any disk files beyond the first 56 are ignored. The following procedures provide step-by-step instructions for doubling the disk count to 56 and making the new disks available for use.
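
For a sense of what this looks like under the covers, here is a quick check you can run once you have system shell access (described in the procedures below). This is only an illustrative sketch, assuming the default layout in which the disk files live under /sim/dev/,disks and the /sim filesystem holds them:

       df -h /sim
       ls /sim/dev/,disks/ | head

The df output shows how much space is left for simulated disks, and the listing shows the individual disk files (one file per simulated disk).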

During these procedures we will unlock the diagnostic user account to gain access to the system shell, use the diag account to create the new simulated disks, and then reboot the simulator so the new disks are recognized. The high-level process is the same for both 7-Mode and Cluster-Mode, but the command syntax differs. You can perform the procedure either through the console or through SSH.

Part 2: Adding Disks to a 7-Mode Simulator

1. We need to unlock the diag user and assign it a password:

       priv set advanced
       useradmin diaguser unlock
       useradmin diaguser password

2. Now log in to the system shell using the diag user account:

       systemshell
       login: diag
       password: <password>

3. First, we need to work around a glitch in how one of the utility programs was compiled. The following commands create a symbolic link to a shared library that the utility needs:

       cd /lib
       sudo mount -u -o rw /
       sudo ln -s libc.so.6 libc.so.7
       sudo mount -u -o ro /
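
If you want to verify the workaround took effect, a quick check (not part of the original recipe) is to list the library links:

       ls -l /lib/libc.so.*

You should see libc.so.7 pointing at libc.so.6.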

4. Add the directory with the simulator disk tools to the path:

       setenv PATH "${PATH}:/sim/bin"
       echo $PATH
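
To confirm the tools are now reachable, a simple check is to locate the disk creation utility before using it; this assumes the tool is installed as /sim/bin/makedisks.main, as in the steps below:

       which makedisks.main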

5. Go to the simulated devices directory:

       cd /sim/dev
       ls ,disks/

6. At this point you will see a number of files that represent the simulated disks. Notice that these files start with "v0." and "v1.". That means the disks are attached to adapters 0 and 1, and if you count the disk files you'll see that there are 14 on each adapter. This is similar to the DS14 shelf topology, with each shelf attached to its own adapter. We will now add two more sets of 14 disks to the currently unused adapters 2 and 3:

       makedisks.main -h
       sudo makedisks.main -n 14 -t 23 -a 2
       sudo makedisks.main -n 14 -t 23 -a 3
       ls ,disks/

The first invocation of the command prints usage information. The remaining two commands tell the simulated disk creation tool to create 14 additional disks ("-n 14") of type 23 ("-t 23") on adapters 2 and 3 ("-a 2" and "-a 3"). Data ONTAP 8.0.1 supports simulated disks of 1GB or smaller. Even if you see larger disks listed in the usage information, please resist the temptation to add them to the simulator. It will only cause Data ONTAP to panic on boot and force you to recreate the simulator from scratch.
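
As a rough sanity check after the disks are created, you can count the new disk files per adapter; this assumes the new files follow the same v<adapter>. naming convention as the existing ones (v2.* and v3.* for adapters 2 and 3):

       ls ,disks/ | grep -c '^v2\.'
       ls ,disks/ | grep -c '^v3\.'

Each count should come back as 14 if the creation succeeded.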

7. Now we're done with the system shell. We need to reverse some of the earlier steps and reboot the simulator so that it sees the new disks:

       exit
       useradmin diaguser lock
       priv set admin
       reboot

8. After the reboot completes, log back in and take ownership of all the new disks:

       disk show -n
       disk assign all
       disk show -v

You should now see 56 disks of 1GB each listed in the simulator. The new disks should be listed as already zeroed and ready to use inside an aggregate.
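
If you want to put the new disks to work right away, a minimal 7-Mode example (not part of the recipe itself; adjust the aggregate name and disk count to taste) is:

       aggr create aggr1 16
       aggr status -s

This creates a new aggregate named aggr1 from 16 of the spare disks and then shows the remaining spares.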


Part 3: Adding Disks to a Cluster-Mode Simulator

These are the steps for adding disks to a single Cluster-Mode simulator.  For a system with multiple nodes, you will need to perform this sequence for each node.

1. We need to unlock the diag user and assign it a password:

       security login unlock -username diag
       security login password -username diag

2. Now log in to the system shell using the diag user account:

       set -privilege advanced
       systemshell local
       login: diag
       password: <password>

3. First, we need to work around a glitch in how one of the utility programs was compiled. The following commands create a symbolic link to a shared library that the utility needs:

       cd /lib
       sudo mount -u -o rw /
       sudo ln -s libc.so.6 libc.so.7
       sudo mount -u -o ro /

4. Add the directory with the simulator disk tools to the path:

       setenv PATH "${PATH}:/sim/bin"
       echo $PATH

5. Go to the simulated devices directory:

       cd /sim/dev
       ls ,disks/

6. At this point you will see a number of files that represent the simulated disks. Notice that these files start with "v0." and "v1.". That means the disks are attached to adapters 0 and 1, and if you count the disk files you'll see that there are 14 on each adapter. This is similar to the DS14 shelf topology, with each shelf attached to its own adapter. We will now add two more sets of 14 disks to the currently unused adapters 2 and 3:

       makedisks.main -h
       sudo makedisks.main -n 14 -t 23 -a 2
       sudo makedisks.main -n 14 -t 23 -a 3
       ls ,disks/

The first invocation of the command prints usage information. The remaining two commands tell the simulated disk creation tool to create 14 additional disks ("-n 14") of type 23 ("-t 23") on adapters 2 and 3 ("-a 2" and "-a 3"). Data ONTAP 8.0.1 supports simulated disks of 1GB or smaller. Even if you see larger disks listed in the usage information, please resist the temptation to add them to the simulator. It will only cause Data ONTAP to panic on boot and force you to recreate the simulator from scratch.

7. Now we're done with the system shell. We need to reverse some of the earlier steps and reboot the simulator so that it sees the new disks:

       exit
       security login lock -username diag
       system node reboot local

8. After the reboot completes, log back in and take ownership of all the new disks. The example below is for a brand-new system where all disks except those in the root aggregate are unowned:

       storage disk show
       storage disk modify -disk <NODENAME>:v4.* -owner <NODENAME>
       storage disk modify -disk <NODENAME>:v5.* -owner <NODENAME>
       storage disk modify -disk <NODENAME>:v6.* -owner <NODENAME>
       storage disk modify -disk <NODENAME>:v7.* -owner <NODENAME>
       storage disk show

You should now see 56 disks of 1GB each listed in the simulator. The disks should be listed as already zeroed and ready to use inside an aggregate.
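
As with 7-Mode, you can then use the new disks in an aggregate. A minimal Cluster-Mode sketch (the aggregate name and disk count are arbitrary, and the exact option for specifying the owning node can vary slightly between releases) is:

       storage aggregate create -aggregate aggr1 -diskcount 16 -node <NODENAME>
       storage aggregate show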

Comments
arthursc0

Miroslav,

This is an excellent "hack", but I just wanted some elaboration on the disk types you see when you run the command ls ,disks/.

You see a whole list of different disk types. So, being the typical size-hungry admin, I selected disk type 31 (4000GB FC 15K); the type you specify in the instructions is SATA.

Shortly after completing the instructions and rebooting, the sim panicked, dumped core, and referred to the disk type used.

What is the reason for this?

What disk types CAN we use?

Regards

Arthursc0

miroslav Former NetApp Employee

Hi Arthur,

The reason for the panic is that the simulator for Data ONTAP 8.0.1 and earlier only supports simulated disks of 1GB or smaller. I made a reference to that in Step 6 of the Cluster-Mode instructions, but forgot to copy the same note to the 7-Mode instructions. I've just edited the document to make sure the following appears for both modes:

Data ONTAP 8.0.1 supports simulated disks 1GB or smaller. Even if you see larger disks listed in the usage information, please resist the temptation to add them to the simulator. It will only cause Data ONTAP to panic on boot and force you to recreate the simulator from scratch.

Thanks for pointing it out and sorry if it caused some wasted effort. This whole procedure is a bit of a kludge, and the makedisks.main program includes disk types that we're still testing internally and planning to support in a future version of the simulator. When Data ONTAP sees a disk larger than the allowed types, it panics as a form of "protection".

One other thing: the new disks I recommend adding are type 23, which are 1GB SAS/SCSI disks at 15K RPM. That should be the same type of disk that was already present on the two existing adapters.

Take care and hope this helps,

Miroslav

Hi Miroslav,

Nice document, thanks a lot!

I've got the additional disks, but after adding the "local_syncmirror" license (and rebooting), I get lots of errors and all disks are still in Pool0. Any idea what's missing or wrong?

Tue Mar  8 13:00:03 GMT [monitor.shelf.accessError:CRITICAL]: Enclosure services has detected an error in access to shelves on channel v0.
Tue Mar  8 13:00:03 GMT [monitor.shelf.accessError:CRITICAL]: Enclosure services has detected an error in access to shelves on channel v1.
Tue Mar  8 13:00:03 GMT [monitor.shelf.accessError:CRITICAL]: Enclosure services has detected an error in access to shelves on channel v2.
Tue Mar  8 13:00:03 GMT [monitor.shelf.accessError:CRITICAL]: Enclosure services has detected an error in access to shelves on channel v3.

The disks are:

txwx-801> disk show
  DISK       OWNER                      POOL   SERIAL NUMBER         HOME 
------------ -------------              -----  -------------         ------------- 
v4.16        txwx-801  (080335005)    Pool0  11650900              txwx-801  (080335005)
v4.17        txwx-801  (080335005)    Pool0  11650901              txwx-801  (080335005)
v4.18        txwx-801  (080335005)    Pool0  11650902              txwx-801  (080335005)
v4.19        txwx-801  (080335005)    Pool0  11650903              txwx-801  (080335005)
v4.20        txwx-801  (080335005)    Pool0  11651004              txwx-801  (080335005)
v4.21        txwx-801  (080335005)    Pool0  11651005              txwx-801  (080335005)
v4.22        txwx-801  (080335005)    Pool0  11651006              txwx-801  (080335005)
v4.24        txwx-801  (080335005)    Pool0  11651007              txwx-801  (080335005)
v4.25        txwx-801  (080335005)    Pool0  11651008              txwx-801  (080335005)
v4.26        txwx-801  (080335005)    Pool0  11651009              txwx-801  (080335005)
v4.27        txwx-801  (080335005)    Pool0  11651010              txwx-801  (080335005)
v4.28        txwx-801  (080335005)    Pool0  11651011              txwx-801  (080335005)
v4.29        txwx-801  (080335005)    Pool0  11651012              txwx-801  (080335005)
v4.32        txwx-801  (080335005)    Pool0  11651113              txwx-801  (080335005)
v5.16        txwx-801  (080335005)    Pool0  13891700              txwx-801  (080335005)
v5.17        txwx-801  (080335005)    Pool0  13891701              txwx-801  (080335005)
v5.18        txwx-801  (080335005)    Pool0  13891702              txwx-801  (080335005)
v5.19        txwx-801  (080335005)    Pool0  13891703              txwx-801  (080335005)
v5.20        txwx-801  (080335005)    Pool0  13891804              txwx-801  (080335005)
v5.21        txwx-801  (080335005)    Pool0  13891805              txwx-801  (080335005)
v5.22        txwx-801  (080335005)    Pool0  13891806              txwx-801  (080335005)
v5.24        txwx-801  (080335005)    Pool0  13891807              txwx-801  (080335005)
v5.25        txwx-801  (080335005)    Pool0  13891808              txwx-801  (080335005)
v5.26        txwx-801  (080335005)    Pool0  13891809              txwx-801  (080335005)
v5.27        txwx-801  (080335005)    Pool0  13891810              txwx-801  (080335005)
v5.28        txwx-801  (080335005)    Pool0  13891811              txwx-801  (080335005)
v5.29        txwx-801  (080335005)    Pool0  13891912              txwx-801  (080335005)
v5.32        txwx-801  (080335005)    Pool0  13891913              txwx-801  (080335005)
v6.16        txwx-801  (080335005)    Pool0  42104200              txwx-801  (080335005)
v6.17        txwx-801  (080335005)    Pool0  42104201              txwx-801  (080335005)
v6.18        txwx-801  (080335005)    Pool0  42104202              txwx-801  (080335005)
v6.19        txwx-801  (080335005)    Pool0  42104203              txwx-801  (080335005)
v6.20        txwx-801  (080335005)    Pool0  42104204              txwx-801  (080335005)
v6.21        txwx-801  (080335005)    Pool0  42104205              txwx-801  (080335005)
v6.22        txwx-801  (080335005)    Pool0  42104206              txwx-801  (080335005)
v6.24        txwx-801  (080335005)    Pool0  42104207              txwx-801  (080335005)
v6.25        txwx-801  (080335005)    Pool0  42104208              txwx-801  (080335005)
v6.26        txwx-801  (080335005)    Pool0  42104209              txwx-801  (080335005)
v6.27        txwx-801  (080335005)    Pool0  42104210              txwx-801  (080335005)
v6.28        txwx-801  (080335005)    Pool0  42104211              txwx-801  (080335005)
v6.29        txwx-801  (080335005)    Pool0  42104212              txwx-801  (080335005)
v6.32        txwx-801  (080335005)    Pool0  42104313              txwx-801  (080335005)
v7.16        txwx-801  (080335005)    Pool0  44204500              txwx-801  (080335005)
v7.17        txwx-801  (080335005)    Pool0  44204501              txwx-801  (080335005)
v7.18        txwx-801  (080335005)    Pool0  44204502              txwx-801  (080335005)
v7.19        txwx-801  (080335005)    Pool0  44204503              txwx-801  (080335005)
v7.20        txwx-801  (080335005)    Pool0  44204504              txwx-801  (080335005)
v7.21        txwx-801  (080335005)    Pool0  44204505              txwx-801  (080335005)
v7.22        txwx-801  (080335005)    Pool0  44204506              txwx-801  (080335005)
v7.24        txwx-801  (080335005)    Pool0  44204507              txwx-801  (080335005)
v7.25        txwx-801  (080335005)    Pool0  44204508              txwx-801  (080335005)
v7.26        txwx-801  (080335005)    Pool0  44204509              txwx-801  (080335005)
v7.27        txwx-801  (080335005)    Pool0  44204510              txwx-801  (080335005)
v7.28        txwx-801  (080335005)    Pool0  44204511              txwx-801  (080335005)
v7.29        txwx-801  (080335005)    Pool0  44204512              txwx-801  (080335005)
v7.32        txwx-801  (080335005)    Pool0  44204513              txwx-801  (080335005)

Pools:

txwx-801> aggr status -s                                

Pool1 spare disks (empty)

Pool0 spare disks

RAID Disk       Device  HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
---------       ------  ------------- ---- ---- ---- ----- --------------    --------------
Spare disks for block or zoned checksum traditional volumes or aggregates
spare           v4.16   v4    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v4.17   v4    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v4.18   v4    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v4.19   v4    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v4.20   v4    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v4.21   v4    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v4.22   v4    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v4.24   v4    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v4.25   v4    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v4.26   v4    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v4.27   v4    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v4.28   v4    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v4.29   v4    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v4.32   v4    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v5.19   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v5.20   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v5.21   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v5.22   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v5.24   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v5.25   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v5.26   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v5.27   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v5.28   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v5.29   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v5.32   v5    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v6.16   v6    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v6.17   v6    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v6.18   v6    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v6.19   v6    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v6.20   v6    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v6.21   v6    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v6.22   v6    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v6.24   v6    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v6.25   v6    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v6.26   v6    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v6.27   v6    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v6.28   v6    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v6.29   v6    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v6.32   v6    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v7.16   v7    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v7.17   v7    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v7.18   v7    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v7.19   v7    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v7.20   v7    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v7.21   v7    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v7.22   v7    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v7.24   v7    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v7.25   v7    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v7.26   v7    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v7.27   v7    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v7.28   v7    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v7.29   v7    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
spare           v7.32   v7    ?   ?   FC:B   0  FCAL 15000 1020/2089984      1027/2104448
txwx-801>

thanX

Peter

This is a nice way to add disks, it works...

BUT: as soon as your total data size on the filer exceeds 30GB, the host will hang with the message "filesystem full".

I managed to resize the VMDK disk, but the /sim volume cannot be extended (that is: we couldn't :-) ).

Is there a way to actually use the full capacity of 56 disks?

Kind regards,

Sjef Gielen

Found it out... They were "auto" assigned to pool0 and after reassigning them to pool1 everything was fine. Have a LocalSyncmirror Sim now ...

Cool stuff!

Thanks for your effort.  This should help with SRM testing.

We are also experiencing the issue with /sim reporting that it is out of space when we give it a full complement of fifty-six 1GB drives. See the screencap. Is there a resolution or a workaround?

sim.crash.jpg

-Tim-


Hi @sjefgielen,

There's no easy way to grow the filesystem holding the /sim directory and the simulated disks. The filesystems are formatted with UFS, and that's one of the filesystems GParted has trouble with. In the past I hacked around with GParted trying to do just that, with limited results. I think the main trick that worked was to create a second VMDK larger than the original, boot the VM from the GParted ISO, and then copy the partitions from the smaller VMDK to the larger one. I don't recall the exact results, but something along those lines worked.

Just googling right now, I found this how-to, which might also be useful: http://bsdbased.com/2009/11/30/grow-freebsd-ufs-filesystem-on-vmware-hdds

You might be able to use a FreeBSD live CD to boot and attach the larger VMDK, then edit the labels as described. I haven't tried that, but it's another approach that might work. And if it doesn't, just make sure you're trying this on a copy of the sim that you can throw away.
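
For reference, a couple of things you could try before and after such a resize. First, you can check how much headroom the disk files have from the system shell (assuming the default layout with the disk files under /sim/dev/,disks):

       df -h /sim
       du -sh /sim/dev/,disks

Second, the general FreeBSD sequence behind approaches like that how-to looks roughly like the following. This is only a sketch, assuming the simulator's data slice is ad0s4 with /sim on ad0s4b (as in the mount output quoted further down in these comments), and it should only be attempted on a throwaway copy of the simulator:

       # after enlarging the VMDK, boot from a FreeBSD live CD, then:
       fdisk -u ad0              # grow slice 4 to cover the added space
       bsdlabel -e ad0s4         # enlarge the 'b' partition within the slice
       growfs /dev/ad0s4b        # expand the UFS filesystem into the new space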

Take care and hope that helps,

Miroslav

I really think the how-to could use a clear, bold, flashing warning that this WILL eventually cause the simulator to run out of space, shit itself, and break beyond repair. Once the simulator has touched (written to) ~30 gigs(*) worth of simulated disks, it'll be over. From what (I think) I know about WAFL, you don't even have to have ~30 gigs on the disks at the same time; a total of ~30 gigs of "lifetime writes" could be enough. Without expanding the virtual disk (and the filesystem on it), you're not just adding shelves/disks, you're also planting a time bomb.

That said, I've just given the expand-disk-and-growfs-using-BSD-live-CD approach a first try and I'm confident it'll work. (I just examined the free space from the systemshell and it turns out I'm still a gig or two short, so I'm going to rinse and repeat.)

If anyone's interested, I could try to write up a how-to on how to do it.

Regards,

Mark.

(*) Took the 30 gigs from one of the other posts; I can't remember at which point my simulator died, but it did.

Ran into an issue while trying to add disks.

Node02*> version
NetApp Release 8.1X45 7-Mode: Sat Sep 10 02:31:17 PDT 2011

Node02% cd /lib
Node02% sudo mount -u -o rw /
mount: /dev/md1 : No such file or directory

Node02% mount
/dev/md0 on / (ufs, local, read-only)
devfs on /dev (devfs, local)
/dev/ad1 on /cfcard (msdosfs, local)
/dev/md1 on / (ufs, local, read-only, union)
/dev/md2 on /platform (ufs, local, read-only)
/dev/ad0s4b on /sim (ufs, local, noclusterr, noclusterw)
/dev/ad0s2 on /var (ufs, local, synchronous)
procfs on /proc (procfs, local)
/dev/md3 on /tmp (ufs, local, soft-updates)
localhost:0x80000000,0x317f4fb1 on /mroot (spin)
clusfs on /clus (clusfs, local)
/mroot/etc/cluster_config/vserver on /mroot/vserver_fs (vserverfs, union)

Hi,

Many thanks for the info,

However, when I added my extra disks they came up as "Broken" under Storage > Disks in FilerView.

I resolved it by unfailing the disks:-

priv set diag

disk unfail -s v6.16

(I ran this command for every disk that was broken)

priv set admin (when finished)

I can now add them to my aggregate

Regards,

Hi,

I'm starting to use this recipe to add disks for a training environment. It worked fine: when I create an aggregate including these 28 additional disks, I can work with the aggregate and later destroy it without problems. But when I do a "disk zero spares" (directly, or implicitly by creating another aggregate) later on, it always panics (WAFL hung) after completing some percentage of the zeroing.

After the panic I can retry the zeroing, but it will panic again at roughly the same percentage.

This behaviour could be reproduced on several already-existing simulators.

Regards


For the new Simulate ONTAP 8.1.1 image you need to use a different script to add disks.  See

I had the suspicion that there is not enough space in the simulator for 56 disks of 1GB, because the virtual machine's disk has only 48GB of capacity (additionally, I found somewhere that the BSD partition for the disks is only 44GB). Therefore I tried to add only 14 additional disks (for a total of 42GB across all the disks), but I ran into the same problems as described with 56 disks (a panic, now after a higher percentage of the disk zeroing).

The disk zeroing isn't the only thing causing problems: after some days of running with a certain workload, all of my ONTAP 8 simulators with additional disks have crashed.

I use release 8.0.1RC3X16 7-Mode of the ONTAP simulator.

Does anyone have an idea what the problem could be in my configuration, or is this a more general problem?

Is it worth trying version 8.1 or 8.1.1 of the simulator?
