Is this 7-Mode or cDOT? And are you able to get to a command prompt (not maintenance mode), or is it panicking before it gets to the login? As you noticed, you can't add disks, resize the volume, or delete snaps from the maintenance prompt. If you can get to a CLI, delete volume snaps, delete aggr snaps, and add disks. If you can't get to the ONTAP CLI, try boot_snap_delete at the boot menu. This special boot mode lets you interactively delete snapshots from the root aggr and the volumes it contains. If there are no snaps to delete and you have a spare disk, you can create_temp_root from the boot menu. Once you can boot the system from the temp root, you can fix the normal root aggregate, then set the original back to root and reboot.
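If you do get a CLI, the cleanup is only a few commands. A minimal 7-Mode sketch (the snapshot name and disk name are just placeholders; substitute your own root aggr, snaps, and spares - the cDOT equivalents live under "volume snapshot" and "storage aggregate add-disks"):

snap list vol0
snap delete vol0 nightly.0          (repeat for any other old volume snapshots)
snap delete -A aggr0 nightly.0      (aggregate-level snapshots)
aggr add aggr0 -d 0a.00.11          (add a spare disk to grow the root aggr)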
HWU lists the recommended cluster ports for that model as e0e and e0f. The reason the default ports share the same ASIC is that they are UTA ports: if there is a requirement for FC and a port has to be flipped to FC mode, both ports on the ASIC flip together. On larger systems (8040 and up, for example), there are enough ports to do things like split-ASIC connectivity and 4 links per node to the cluster network.
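If you want to confirm which personality the UTA ports are currently in, and which ports the cluster LIFs are riding on, something along these lines should show it (no arguments needed; filter by node or port as you like):

network port show
system node hardware unified-connect show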
So you are running a qtree consolidation to a staging volume on a 7-Mode controller, then using TDP volume SnapMirror to migrate the staging volume to the cluster, but there is some issue with the TDP relationship. Correct?
Simple concept. Two storage controllers working as an HA pair, each able to "take over" its partner's workload during a planned or unplanned outage, and later "give back" when the partner is ready.
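If this is clustered ONTAP, those operations map to a handful of commands; the node name below is just a placeholder:

storage failover show
storage failover takeover -ofnode node2     (planned takeover of the partner's workload)
storage failover giveback -ofnode node2     (return it once the partner is ready)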
SnapMirror from 7-Mode to clustered Data ONTAP is possible for transition purposes only. For details and procedures, see this document: 7-Mode Data Transition Using SnapMirror
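At a high level, the transition relationship is created from the cluster side with type TDP. A rough sketch (the SVM, filer, and volume names here are made-up examples; the document above has the full prerequisites):

vserver peer transition create -local-vserver svm1 -src-filer-name 7mode-filer
snapmirror create -source-path 7mode-filer:vol_src -destination-path svm1:vol_dst -type TDP
snapmirror initialize -destination-path svm1:vol_dst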
Users don't see volumes; they see shares and/or exports.

Check that the volume is online, the type is RW, the security style is correct (likely NTFS for a Windows share), the junction path is not "-", and the junction path is active:
volume show -vserver * -volume * -fields junction-path,state,security-style,type,junction-active

Check that CIFS is up on the SVM (vserver) that owns the volume:
vserver cifs show

Check that the volume's junction path is shared, the share is browsable, and the ACL includes the user or a group to which they belong:
vserver cifs share show

Check that the date and timezone are correct on the cluster:
date

Check that the LIF the user is connecting to is up, and that the port it is currently homed on is correct:
net int show -role data -fields curr-port,curr-node,address,netmask,status-oper

Verify that the time and date on the cluster and the domain controller(s) are all within 5 minutes of each other.

Verify that a DNS entry was created for the SVM's server name (as shown in the output of "vserver cifs show").

Verify the user can ping the SVM by IP and by name.
You can upgrade, but you have to make some adjustments to the sim prior to the upgrade so it has the resources to run 8.3. That should just involve more ram (3-4gb), a larger root vol (~4gb), and persistent VNVRAM in full mode.
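For example, assuming a VMware-hosted sim, the RAM bump is just the memsize line in the .vmx (edit it with the VM powered off), and the root vol can be grown from the nodeshell once it's back up; the values below are just the rough targets mentioned above, and you may need to add disks to aggr0 first if it's too small:

memsize = "4096"                    (in the simulator's .vmx file)

run -node local vol size vol0 4g    (grow the node root volume from the clustershell)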
That looks dicey. But I suspect the odds of success would improve if you did a takeover/giveback between step 1 and step 2, so each node gets to boot in a dual-stack, single-path HA config. Interesting lab experiment, but I wouldn't try it on a production filer. If you do try it, tell us if it panics on you.
I tried to reproduce this scenario, but when I moved root it also moved the mailbox disks. I ran through it on 8.2.1, so I am wondering: What version of ONTAP was the node running when root was moved?
That's an interesting scenario. I've hot-removed many times and not hit this case, fortunately. We can get the list of mailbox disks with "storage failover mailbox-disk show", then it's a question of finding the most elegant way to encourage ONTAP to pick a more appropriate disk. I suspect that if ownership of one mailbox disk at a time were removed, ONTAP would pick another disk; then you could repeat until the shelf was clear. Need some lab time to repro.
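Untested, but the sequence I'd try in the lab looks roughly like this (advanced privilege; the disk name is a placeholder for whichever mailbox disk sits on the shelf you want to empty):

storage failover mailbox-disk show -node node1
storage disk removeowner -disk 1.10.4              (release one mailbox disk at a time)
storage failover mailbox-disk show -node node1     (confirm a different disk was chosen)
storage disk assign -disk 1.10.4 -owner node1      (re-assign the disk afterwards if needed)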
Not sure. Deployment in my lab was uneventful. My environment consists of VCSA 6, ESX6, and NFS datastores hosted on CDOT 8.3p2. I've seen references to that error message on older ESX versions but they don't appear to correlate. Any other info in the logs?
It's about 220gb. The old tgz files had a 250gb drive to hold the simdisks; the ova has a 230gb drive. You can build them with 56x9gb disks, but you need to replace the default IDE drive with one large enough to hold them all.
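Roughly, once the larger drive is in place, the disks are rebuilt from the systemshell with vsim_makedisks; the type number that maps to the 9gb size can vary by sim build, so check the table it prints first (the "36" below is only an example):

sudo vsim_makedisks -h                 (lists the available disk types and sizes)
sudo vsim_makedisks -n 14 -t 36 -a 0   (14 disks of the 9gb type on adapter 0)
sudo vsim_makedisks -n 14 -t 36 -a 1   (repeat for adapters 1 through 3 to reach 56 disks)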
That's more disk than the sim can hold by default. Did you replace ide1:1 with a larger disk? The panic looks more like ram/vnvram. Did you adjust the ram allocation? You could try flushing the nvram at boot by setting this in the vloader:

setenv nvram_discard true
I see the GA versions are up, and they are now in OVA format. To whoever made that change, thank you. That should fix the long-standing multiextent issue, the problems with adding NICs, and other assorted weirdness with the old vmx.
That's much too large for the sim. The one we can download is good to about 220gb raw, but it could be configured to about 0.5t raw. Edge would be a much better fit, but you would need a full version key for FDvM200. The evaluation version has a 2tb limit but the full version can take 10tb raw. Even after WAFL reserve and peeling off a root vol it should comfortably hold your 6tb archive. Cloud ONTAP wouldn't work for you anyway since it can't run anything older than 8.3RC1.
Running the nodes in an HA pair did allow both to be selected, but timeouts during cluster creation prevented me from completing setup. I was able to complete cluster creation manually, but I discovered system setup had set the MTU on the cluster network ports to 9000, which doesn't work in Fusion, so the second node was never able to join, hence the timeouts. If that can be overcome it might work, but it's a pretty demanding mini lab; my 8gb i7 MBA could barely run it.
Cluster83::*> cluster show
Node Health Eligibility Epsilon
-------------------- ------- ------------ ------------
Cluster83-01 true true false
Cluster83-02 true true false
2 entries were displayed.
Cluster83::*> cluster ha show
High Availability Configured: true
High Availability Backend Configured (MBX): true
Cluster83::*> storage failover show
Takeover
Node Partner Possible State Description
-------------- -------------- -------- -------------------------------------
Cluster83-01 Cluster83-02 true Connected to Cluster83-02
Cluster83-02 Cluster83-01 true Connected to Cluster83-01
2 entries were displayed.
Cluster83::*> node show
Node Health Eligibility Uptime Model Owner Location
--------- ------ ----------- ------------- ----------- -------- ---------------
Cluster83-01
true true 00:58:18 SIMBOX
Cluster83-02
true true 00:09:39 SIMBOX
2 entries were displayed.
Cluster83::*> run * cf monitor
2 entries were acted on.
Node: Cluster83-01
current time: 23Jun2015 00:00:19
UP 00:55:35, partner 'Cluster83-02', CF monitor enabled
RDMA Interconnect is up (Link up), takeover capability on-line
partner update TAKEOVER_ENABLED (23Jun2015 00:00:19)
Node: Cluster83-02
current time: 23Jun2015 00:00:24
UP 00:10:15, partner 'Cluster83-01', CF monitor enabled
RDMA Interconnect is up (Link up), takeover capability on-line
partner update TAKEOVER_ENABLED (23Jun2015 00:00:23)
Cluster83::*>
I've been able to construct an HA pair running locally in Fusion 7.1.1, using the host's nfsd for shared sim disk access. I'll see if system setup works against it. In the past it's been temperamental running against HA sims, but if it's reliable enough for LoD, things may have improved since then.
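For reference, the host-side piece is just a small NFS export on the Mac; the path and subnet below are made-up examples for wherever the shared sim disks live:

# /etc/exports on the OS X host
/Users/me/vsim_shared -alldirs -maproot=root -network 192.168.100.0 -mask 255.255.255.0

sudo nfsd enable
sudo nfsd checkexports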
"Direct-attached configurations are not supported." See pg 11: https://library.netapp.com/ecm/ecm_download_file/ECMP1636036 I don't think you'll get very far with direct attached NPIV.