I ran an 8.2.2 7-Mode Edge install against ESX 5.5 GA with the VC6 RC, and it worked. I don't have a 5.5U2 instance to try it against at the moment, but I know 5.5 GA is working with 8.2.2.
Things work differently in 8.3. In 8.3 your choices are:
system-defined
local-only
sfo-partner-only
ipspace-wide
disabled
broadcast-domain-wide
You may be following a guide for an older release of Clustered Data ONTAP.
Not sure what you are trying to accomplish, but this may be a good time to dig into the documentation. The express guides are a good place to start:
http://mysupport.netapp.com/documentation/docweb/index.html?productID=61885&language=en-US
For more detail you can go to the full documentation map:
http://mysupport.netapp.com/documentation/docweb/index.html?productID=61998&language=en-US
In particular, have a look at the network management guide:
https://library.netapp.com/ecm/ecm_download_file/ECMP1636021
I've seen cases where an API call to change VM resources fails if the host is managed by vCenter but the call is not placed through vCenter. I've also seen Edge deploy correctly but fail to boot on newer builds of ESX. On the IMT, 5.5 is listed as supported but the updates are not listed. So this could be a version compatibility issue, or the values you are giving VM setup could be exposing an issue in the script.
Ok. What edition of ESX is this on? Enterprise? And is it managed by vCenter? If so, what version? It looks like something goes wrong right around the time it starts adjusting settings on the VM, and then it can't clean up after itself. I suspect version/API incompatibilities, though permissions are still a candidate.
That makes more sense now. So autoassign left you with a 50/50 split, which in this case leaves 4 data partitions on each node, not enough to create the aggr. With only 8 drives to play with, I would reassign ownership of all the data partitions over to one node, then create the aggregate on that node. In advanced privilege level, use disk removeowner and disk assign with the -data true flag to juggle ownership of the data slices. Then aggr create should succeed. If you really want a 50/50 split, you can override the 5-disk minimum with the -force-small-aggregate true flag in advanced privilege level.
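For reference, the ownership juggling might look something like this. This is a sketch from memory, not a verified procedure: the disk name (1.0.4), node name, and aggregate name are placeholders, and exact option names can vary by 8.3 release, so check the man pages on your system first.

```
::> set -privilege advanced
::*> storage disk removeowner -disk 1.0.4 -data true
::*> storage disk assign -disk 1.0.4 -owner node-01 -data true
(repeat for each data partition you want to move over)
::*> storage aggregate create -aggregate aggr1 -node node-01 -diskcount 8
::*> set -privilege admin
```

With all 8 data partitions owned by one node, the 5-disk minimum is satisfied and the create should go through.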
Like the error says, you don't have enough spare disks to create an aggregate. The 4 you do have should be the SSDs. Typically you'd add those to a storage pool and assign cache capacity to your HDD aggregates. Before you go there, however, look at the existing aggr layout. Make sure it's laid out as you intended before you commit the SSDs.
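If the layout checks out, the SSD side might go roughly like this. This is a sketch only; the pool name, disk names, and allocation-unit count are placeholders, so confirm the syntax against the command reference for your release before running it:

```
::> storage pool create -storage-pool sp1 -disk-list 1.0.20,1.0.21,1.0.22,1.0.23
::> storage pool show-available-capacity
::> storage aggregate add-disks -aggregate aggr1 -storage-pool sp1 -allocation-units 2
```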
Right, which is why in a 2-node cluster you set cluster ha modify -configured true. That disables the epsilon mechanism and uses good old HA. A closer functional model would be a 4-node vsim cluster. There you can lose any 1 node, even the one with epsilon, and maintain quorum. The vserver root vols would also need to be on surviving nodes, because we don't have SFO either in this type of simulation. The downloads are still RC1. I've tried it on RC2 and SSD pools still didn't work. I would be really surprised if the vha code gets any attention before GA.
What version of Edge and what version/edition of ESX? I've seen problems deploying to 5.5 and newer updates of 5.1, but not the same errors you are seeing. Those look more like the API calls are failing. It could be a permissions issue with the account you are using, or maybe the host is managed by a vCenter you aren't connecting to, or maybe you are trying to run it against a free edition of ESX. Rather than grabbing a screenshot, you can ssh to the dvadmin instance (username/password are both netapp). Then you can capture the text of the entire session. Whatever the problem is, it's already scrolled off screen.
Who's got epsilon? If you halt the one with epsilon, or neither has epsilon, that's the behaviour I would expect. Back on the ADP front, I tried partitioning a simdisk by hand. Partitioning succeeds, but then the disk gets marked as unowned, and ownership assignment fails. Which leads back to thinking the vha simulated disk code doesn't know how to cope with a partitioned disk file.
Never mind. Still crashed, just took a while. Recovered by deleting the affected simdisk and the ,reservations file. Working theory is that VHA diskmodel simdisks don't survive being partitioned.
I tried it in 8.3RC2, and it still doesn't work, but it doesn't crash the sim either. Now it errors out sharing the first disk in the list and leaves a phantom record in the storage pool list. Better than a panic.
It may just not like your tftp server. Try pulling it from an http server. If that doesn't work, md5 the copy on the filer to see if it's intact:
set diag
systemshell * md5 /mroot/etc/software/83RC2_q_image.tgz.downloading
set admin
1.6GB isn't enough to boot ONTAP anymore; however, with a little tweaking you can pare it down to just 2GB/node. First shut it down and set the RAM to 2GB, then boot to the loader and enter the following:
setenv bootarg.init.low_mem 512
setenv bootarg.vnvram.size 64
setenv nvram_discard true
autoboot
It's stable enough for a lightweight laptop demo, but probably won't hold up if you push it.
See if you have the package locally:
system node image package show -node *
If you do, run the update from the local package:
system node image update -node * -package 83RC2_q_image.tgz -set-default true
If not, try not specifying the -replace-package parameter.
ipspace list
vfiler status -r
Is the vfiler running? Can you ping its IP address?
Is CIFS running in the vfiler?
vfiler run <vfiler> cifs restart
Is anything shared?
vfiler run <vfiler> cifs shares
Is it a member of a domain?
vfiler run <vfiler> cifs domaininfo
If so, can it talk to the DC?
vfiler run <vfiler> cifs testdc
Were those options set from the vfiler?
vfiler run <vfiler> options cifs
CDOT stores its logs in the root volume. If you leave the simulator running for an extended period of time, it will need a larger root aggr/vol. It seems reasonably stable with 4GB.
Oh, that's a fun scenario. I see 4 options:
1: Hope your typo happens to match another published serial number. Find that license key set and use it instead.
2: Treat it like a headswap/motherboard replacement. Fix the loader values, then follow the regular procedures aside from the ,reservations panic.
3: Remove the bad node from the cluster and recreate it.
4: Start over with fresh clean sims.
Option 2 is the most interesting, at least to me. It would require some work in the systemshell and at least one panic. Option 3 is more straightforward: vol move everything to the good node, then cluster unjoin the bad one, delete that VM, and build a replacement with the correct serial number.
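For option 3, the sequence would be roughly this. A sketch only: the vserver, volume, aggregate, and node names are all placeholders, and cluster unjoin sits at advanced privilege on some releases, so verify against your cluster before running anything:

```
::> vol move start -vserver vs1 -volume vol1 -destination-aggregate aggr1_goodnode
(repeat for every volume living on the bad node's aggregates)
::> set -privilege advanced
::*> cluster unjoin -node badnode
```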
You are probably adding the license key for the wrong node's serial number. Get the list of serial numbers for your nodes:
system node show -fields serialnumber
and compare it to the list of installed CIFS licenses:
system license show -package cifs
Figure out which one you are missing and add it back in from the license sheets.