That’s how it should be. “diskroot” means the volume will become root on the next reboot. Keep in mind that the only way to clear “diskroot” is to set root on some other volume, and the longer you delay the reboot, the more the current and future root diverge. Also, any changes made on the current root will not be present on the copy, so if you are forced to reboot you end up with an inconsistent configuration. So do not delay rebooting, or reset root back to the original volume if you do not plan to reboot within the next hours. Also verify that the new root volume has the “diskroot” flag as well. “aggr options root” is used in emergencies when the original root aggregate is, for whatever reason, physically unavailable. You may want to submit documentation feedback regarding this as well. This is highly confusing, I agree, and the part about “aggr options root” was not present in any previous Data ONTAP version.
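A quick way to verify before rebooting (volume name “newroot” is a placeholder; adjust to your system):

    vol status newroot    # the options list should include "diskroot"
    aggr status           # the aggregate holding the new root should be marked "diskroot"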
I never had to use “aggr options root” in the past 10 years. Of course, it is possible that something has changed in 8.1, although I would rather suspect a documentation error. You can open a support call with NetApp and ask them to clarify. If you issue “aggr status” after “vol options xxx root”, which aggregate is marked as “diskroot” - the one containing the old root volume or the new one?
You do not assign a root aggregate, you assign a root volume. So the system is correct ☺ Take a new 64-bit volume and set the root option: “vol options xxx root”
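For example, assuming the new 64-bit volume is called “newroot” (a placeholder name):

    vol options newroot root

The volume then becomes root on the next reboot.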
The FAS3240 has 2 onboard FC ports and 2 cluster interconnect ports. You cannot use a cluster interconnect port for any other purpose, nor can you use any other port as cluster interconnect. NetApp does not support single-path shelf attachment for new systems. It will technically work for disk access, but some features will not be available and the system will overall be less resilient to errors.
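One way to check whether shelves are dual-pathed (a sketch; output details vary by release):

    storage show disk -p    # with multipath cabling each disk lists both a primary and a secondary path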
So why did you assign disks in this case? You really need to open a support case and wait for them to guide the next steps. At this point any incorrect move can result in data loss.
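If you want to gather information for the support case without changing anything, stick to read-only commands, for example:

    disk show -v     # current disk ownership
    aggr status -r   # RAID state of the aggregates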
This is a FAS3020, which is likely to use hardware-based disk ownership. Do you know for sure it was using software-based disk ownership before?
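One rough check, from memory: the software-ownership commands only function when software-based ownership is in use:

    disk show -v    # errors out on systems with hardware-based disk ownership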
“If my hosts are generating 32KB IO requests does this translate to 8 IOs on the NetApp system?”
The real answer is, as usual, “it depends”. WAFL uses 4KB blocks, so a 32KB request does span 8 blocks; but under optimal conditions those blocks are laid out contiguously, and a single 32KB host IO should result in a single 32KB NetApp disk IO.
“So how to schedule snapvault on secondary with a different policy from primary to always get snapvault updated from primary?”
You can schedule snapshots on the secondary independently from the primary. And you can schedule transfer-only updates without creating snapshots on the secondary (count = 0 in the schedule). Finally, you can use manual "snapvault update" and "snapvault snap create" and schedule them outside of NetApp (hint - Protection Manager ... ). See the sketch after this post.
“and when a monthly snapvault is required, which snapshot will be used?”
I am not sure I understand the question. When a monthly snapshot is required for what? For restore? Then it is up to you which snapshot to use.
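A minimal sketch of the variants, run on the secondary (volume, qtree and snapshot names are placeholders):

    snapvault snap sched -x sv_vol sv_weekly 4@sun@0    # transfer, then create snapshot; retain 4
    snapvault snap sched -x sv_vol sv_xfer 0@0-23       # transfer-only; count 0 creates no snapshot
    snapvault update /vol/sv_vol/qtree1                 # manual transfer, for external scheduling
    snapvault snap create sv_vol sv_monthly             # manual snapshot on the secondary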
Title: Domain Controller responds with 0xc0000022 (Access Denied) error to SMB2 tree connect request from the storage server on IPC$.
Description: This issue might occur under the following circumstances:
1. The storage server sends an SMB2 session setup request with signing required, regardless of the setting of the option smb2.signing.required.
2. The domain controller returns a signed session setup response.
3. The storage server sends an unsigned tree connect request to IPC$.
4. The domain controller refuses the unsigned request and responds with a 0xc0000022 error (Access Denied).
Workaround: Disable SMB2 on the storage server by entering the following command:
options cifs.smb2.client.enable off
Sounds similar to http://support.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=474548 For client-to-NetApp communication the option cifs.smb2.enable is relevant. IIRC the option cifs.smb2.client.enable governs NetApp-to-domain-controller communication. So you should still be able to enable SMB2 for clients.
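If I remember correctly the two options are independent, so something like this should keep SMB2 available to clients while working around the domain controller issue:

    options cifs.smb2.client.enable off   # SMB2 from storage server to domain controller
    options cifs.smb2.enable on           # SMB2 from clients to storage server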
You have a failed root aggregate. You should open a support case immediately so the cause can be investigated. Apparently one of the LUNs is missing.
Well … it looks like it is the correct LUN (at least, the serial number matches). Unfortunately, this Data ONTAP version is too old and does not show “Occupied size”. Does VMware have something like “thin provisioning” too? Is it possible that it has 600GB reserved, but not actually used? IIRC it has different VMDK modes; one of them is “allocate but zero out on first access”, which sounds like it may be the case. Have you tried asking on the VMware communities?
It would be interesting to see “lun show -v” for the actual space consumption (which version of Data ONTAP, BTW?) and “aggr show_space”. Are you absolutely sure it is really the same LUN and volume?
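For example (LUN path and aggregate name are placeholders; on newer releases “lun show -v” also reports the occupied size):

    lun show -v /vol/vmvol/lun0
    aggr show_space aggr0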
“not sure why the snapshot is using so much”
This is the amount of data changed on the LUN since the snapshot was taken. It looks like the LUN was almost completely rewritten after the snapshot had been taken. As an example, restoring a full backup would have this effect.
“not sure if I can delete the snapshots”
Well ... it is up to you, of course. But you are risking running out of space on this volume, which will result in the LUN going offline. You should either plan to increase the volume size or clean up unused snapshots and reconsider your snapshot policy.
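A sketch of the two options (volume and snapshot names are placeholders):

    snap list vol1               # see how much space each snapshot holds
    vol size vol1 +50g           # grow the volume, or:
    snap delete vol1 nightly.5   # remove an old snapshot to reclaim space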
1. Yes, you can. 2. I think the usual practice is to connect the primary channel (beginning of stack) to each head first, then the secondary channel (end of stack). But I do not think it really matters.
SyncMirror is unrelated to HA; you do not need this license in your configuration and should remove it. Using SyncMirror would reduce available storage by half; it is used to additionally mirror between two aggregates on the same filer head. Most people (me included) would recommend using RAID-DP. If space is at a premium, using RAID-DP without spares can be considered; in the end it depends on how fast a failed disk would get replaced. Each head must have its own disks, which means you cannot do much better than you described. The distribution of disks between heads is determined by storage needs, but each head must have an aggregate, which must have parity disks and should preferably have a spare as well.
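If you go with RAID-DP, the RAID type is set per aggregate (aggregate name is a placeholder):

    aggr options aggr0 raidtype raid_dp
    aggr status -r aggr0            # verify parity disks and spares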
What exactly do you mean by “lun restore” or “vol restore”? Such commands do not exist. There are “snap restore -t volume” and “snap restore -t file”. Comparing the two, file restore is very slow while volume restore is near-instant.
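For example (volume, snapshot and file names are placeholders):

    snap restore -t volume -s nightly.0 vol1
    snap restore -t file -s nightly.0 /vol/vol1/lun0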
Actually, I would disable all cf.takeover.on_* options just to be sure to avoid an accidental takeover. Write down the current values so you can restore them afterwards.
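A sketch, assuming a 7-Mode system (the exact set of cf.takeover.on_* options varies by release):

    options cf.takeover                  # list current values; write them down first
    options cf.takeover.on_panic off
    options cf.takeover.on_failure off
    options cf.takeover.on_reboot off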
Please
1. Re-enable cf for now
2. Disable option cf.takeover.on_panic on the good partner
3. Perform cf giveback on the good partner
4. Make sure no takeover happened
5. Disable cf again
6. Try option 4 once more
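As a command sequence on the good partner (a sketch of the steps above):

    cf enable
    options cf.takeover.on_panic off
    cf giveback
    cf status       # confirm no takeover happened
    cf disable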