There is a KB article describing how to remove those stale entries (how they appeared, I do not know), but it also says "These steps should be performed under the supervision of an NGS (NetApp Global Services) representative". Did you open a case with NetApp?
ONTAP 9 is still RC, not good for production systems. "What's in a name?" Today's RC was known as GA in the past, and today's GA was known as GD. So the difference is mostly in the amount of testing and the length of exposure to real-world workloads. Quoting the current release model definition: "RCs are fully tested and are suitable in development / test environments and in production environments, including those with business-critical workloads." I do not endorse using an RC, but at least we need to stick to the facts 🙂
Policy is per-volume. vol00 should be assigned an export policy that allows client access. In 8.3 you can enable listing exported volumes with showmount -e; the setting is per-SVM.
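A rough sketch of what this could look like (the SVM name svm1, the policy name allow_clients, and the client subnet are assumptions; adjust to your environment):

```
::> vserver export-policy create -vserver svm1 -policyname allow_clients
::> vserver export-policy rule create -vserver svm1 -policyname allow_clients -clientmatch 10.0.0.0/24 -rorule sys -rwrule sys -protocol nfs
::> volume modify -vserver svm1 -volume vol00 -policy allow_clients
::> vserver nfs modify -vserver svm1 -showmount enabled
```

After the last command, `showmount -e <data-LIF>` from an NFS client should list the exported paths.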
According to the documentation, AD group account access is supported only with the SSH and ontapi applications. Are you trying to use a group account?
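If that is what you want, a hedged sketch of granting an AD group access to those two applications (the domain, group, SVM, and role names are assumptions):

```
::> security login create -vserver svm1 -user-or-group-name "EXAMPLE\storage-admins" -application ssh -authmethod domain -role admin
::> security login create -vserver svm1 -user-or-group-name "EXAMPLE\storage-admins" -application ontapi -authmethod domain -role admin
```

Members of the group then log in as DOMAIN\username with their AD password.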
1. That's not possible. A parent cannot be more restrictive than a child (it can, but then the child is simply not accessible).
2. That's normal. You can still mount each volume and qtree individually as long as the volume has a junction path (i.e. is mounted in the namespace).
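Mounting a volume into the namespace is a one-liner; a minimal sketch (the SVM, volume, and path names are assumptions):

```
::> volume mount -vserver svm1 -volume vol_data -junction-path /vol_data
::> volume show -vserver svm1 -volume vol_data -fields junction-path
```

Once the junction path exists, an NFS client can mount `svm1:/vol_data` directly, or any qtree below it.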
Could you show an example of the problem output you mean? I'm afraid I do not understand what "iSCSI LUNs that have either single or no active paths to the filer" means.
Keep in mind that longer distance increases the likelihood of a cluster partition. It is far easier to lose half of your nodes if they are in another building than if they are in another rack. In that case, if you lose quorum, your cluster will be in a severely restricted mode, and if the outage is prolonged you will have a problem. That is the reason stretch MetroCluster exists. You have to weigh all the pros and cons and understand the impact of a cluster split. This is not a decision that can be made based on simple math and cable data sheets.
On the filer, the security context needs to be sec=none, but the client must mount the share with sec=sys. Yes, that's how it works on 7G as well, except there it also works using NFS v3.
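As a 7G-style example (the filer, volume, and mount point names are assumptions):

```
filer> exportfs -io sec=none,rw /vol/vol_x
client# mount -o sec=sys filer:/vol/vol_x /mnt
```

The export side advertises sec=none, while the client still negotiates AUTH_SYS; files created this way end up owned by the anonymous user.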
You can use "iscsi initiator show" to list initiators and the target portals they are logged into, and "iscsi interface show" or "iscsi portal show" to match portals to interfaces.
Yes, of course. The arrays must run the same Data ONTAP edition (7-Mode vs. C-Mode), and there are some considerations regarding versions. If you provide more information, you may get a more detailed answer.
According to TR-4075: "In all versions of Data ONTAP, a source volume of a SnapMirror relationship can be moved while the volume is being mirrored. SnapMirror services encounter a brief pause during the cutover phase of the volume move job."
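A minimal sketch of such a move (the SVM, volume, aggregate, and destination-path names are assumptions):

```
::> volume move start -vserver svm1 -volume vol_src -destination-aggregate aggr2
::> volume move show -vserver svm1 -volume vol_src
::> snapmirror show -destination-path svm_dr:vol_src_dp -fields state,status
```

You should see the relationship stay healthy, with at most a short pause around cutover.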
So are you saying that it is possible to have an SVM on the root aggregate, or not? It was (and I'm pretty sure still is) technically possible. That current Data ONTAP makes it harder to do by accident is a good thing. Once again: doing it is not, and never was, recommended. "I want (need) to migrate the above setup to cDOT with minimum redesign. Is it possible?" Yes, of course. Do you have specific questions or concerns?
"The only difference between the two is that the one that isn't working has had its policy changed from DPDefault to MirrorAllSnapshots"

Version-flexible SnapMirror transfers all snapshots since the common base snapshot. The DPDefault policy creates a new snapmirror snapshot when you start an update and keeps this and only this snapshot, which also becomes the next base snapshot. When you change the policy to MirrorAllSnapshots, any snapshot on the source volume that is older than the latest snapmirror snapshot will be ignored. What you can do to transfer the old snapshots:

1. destroy the existing snapmirror relationship
2. create a snapvault relationship (XDP with policy XDPDefault)
3. resync, explicitly using the oldest snapshot on the source and -preserve to keep the target content
4. destroy the relationship and recreate it again as XDP with policy MirrorAllSnapshots
5. update the new snapmirror

Step 3 creates a base snapshot from the oldest source snapshot; step 5 will transfer everything newer, skipping over existing snapshots.

P.S. I think using DPDefault as an alias for MirrorLatest was a mistake; it means that moving from traditional SnapMirror to version-flexible will suddenly do an entirely different thing for apparently the same configuration. DPDefault should have been an alias for MirrorAllSnapshots; that is exactly what traditional SnapMirror did.
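On the destination cluster, the steps above could be sketched roughly as follows (the SVM/volume paths and the snapshot name are assumptions; verify the exact syntax against your ONTAP version):

```
dst::> snapmirror delete -destination-path svm_d:vol_dst
dst::> snapmirror create -source-path svm_s:vol_src -destination-path svm_d:vol_dst -type XDP -policy XDPDefault
dst::> snapmirror resync -destination-path svm_d:vol_dst -source-snapshot oldest_snapshot_name -preserve true
dst::> snapmirror delete -destination-path svm_d:vol_dst
dst::> snapmirror create -source-path svm_s:vol_src -destination-path svm_d:vol_dst -type XDP -policy MirrorAllSnapshots
dst::> snapmirror update -destination-path svm_d:vol_dst
```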
I tested with 7.3.7 and now I can confirm that this is broken in 8.x:

filer1-1-co> exportfs -io sec=none,rw /vol/t
cn1:~ # mount filer1-1-co:/vol/t /mnt
cn1:~ # touch /mnt/foo
cn1:~ # su - tele
tele@cn1:~> touch /mnt/bar
tele@cn1:~> ll /mnt
total 0
-rw-r--r-- 1 nobody nogroup 0 Jul 19 12:22 bar
-rw-r--r-- 1 nobody nogroup 0 Jul 19 11:52 foo

On 8.2 I am not even able to mount it. Of course, I also have a different host OS here (RHEL vs. SLES), but as I get the error reply from Data ONTAP I do not think it depends on the client. I have had a case open with NetApp support for over a week now, and so far they have had zero luck fixing the issue (that's basically the point where I turned to the community for help). I do not mean opening a case and asking "how to implement all_squash". I mean opening a case about sec=none being completely broken in 8.x. Maybe there is already a bug about it; in that case, the more people complain, the more chances it gets fixed.
"I do not believe it's recommended to mix the size."

That's true.

"Your large size disk will only function as the smaller size"

And that's wrong, sorry. This applies to a single RAID group, not to the whole aggregate. You can have multiple RAID groups with different disk sizes; in each RAID group the whole disks will be used.
"new feature in ONTAP allow mixed disks in an aggregate"

This has been possible for as long as I can remember (which is 15+ years), so there is nothing new here. The obvious drawback of this approach is that the large disks are utilized more than the small ones, so some data is striped across a smaller number of disks. You cannot predict which data, and it is impossible to give a blanket statement about the impact. You should probably avoid it for a high-load OLTP application; OTOH, for a simple file server my guess would be that nobody will notice.
You are confusing INTRAcluster and INTERcluster. The "Cluster" ipspace is reserved for the INTRAcluster interconnect; there is no way to use it for INTERcluster peering. Either use the default ipspace (not specifying any ipspace will do this automatically) or create your own custom ipspace if you really need one.
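A hedged sketch of peering over the default ipspace (the cluster names, nodes, ports, and addresses are all assumptions; check the syntax for your ONTAP version):

```
c1::> network interface create -vserver c1 -lif ic1 -role intercluster -home-node c1-01 -home-port e0c -address 192.168.1.10 -netmask 255.255.255.0
c2::> network interface create -vserver c2 -lif ic1 -role intercluster -home-node c2-01 -home-port e0c -address 192.168.2.10 -netmask 255.255.255.0
c1::> cluster peer create -peer-addrs 192.168.2.10
```

No -ipspace argument is given, so the intercluster LIFs and the peer relationship land in the default ipspace automatically.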
OS type affects how the LUN is created; "windows" assumes the first partition starts at sector 63, so the LUN layout is optimized for that. If you already created a partition that matched this expectation, nothing changes after you connect the LUN to Windows 2008. If you created a partition that did not match this expectation, you already had misalignment, and that won't change after you connect the LUN to Windows 2008 either. Now, if you completely wipe the LUN and create a partition from scratch under Windows 2008, then yes, you are likely to get a misaligned partition, because Windows 2008 by default starts partitions at a different offset.
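On a 7-Mode filer you pick the ostype to match the host that will partition the LUN; a sketch (the volume path and sizes are assumptions):

```
filer> lun create -s 100g -t windows_2008 /vol/vol_luns/lun0   # for Windows 2008+ default partitioning
filer> lun create -s 100g -t windows /vol/vol_luns/lun1        # for pre-2008 Windows (partition at sector 63)
```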
There is no way to get OS-level space utilization from the controller. If you write a 1 TB file on the host and then delete it, the space remains allocated on the storage even though the host now accounts for it as free. The only way to get a more reliable estimate from the storage side is to use thin provisioning and space reclamation.