@bwells wrote: I was able to get the 9.5 simulator working in QEMU/KVM. I used virt-manager and GNS3 to manage it.
1. Convert the VMDKs to qcow2 format using qemu-img
2. Move the qcow2 files to a location where virt-manager can find or browse to them
3. Create your VM in virt-manager and select the import-disk option
4. Add disk one
5. Set memory to no less than 5 GB of RAM and CPUs to no less than 2
6. Name it and select customize
7. Add the remaining qcow2 disks and make sure they are all IDE
8. Add 3 more NICs of type e1000
9. Make sure you change the display from Spice to VNC
10. Boot the VM
11. When "CTRL-C for boot menu" appears, hit CTRL-C
12. In the boot menu, select option 4 (the disks have been set up for VMware and need to be zeroed out)
13. Follow the usual NetApp procedures from there
This worked fine for me with ONTAP 9.5, both in virt-manager and in GNS3.
This works, but in my experience it is very brittle: if the VM crashes for any reason (i.e. any improper shutdown), it will not come back up; it panics with lots of errors on the md devices and is unable to mount its root directory. The same happens on VMware, and it makes it very hard to actually use the simulator (i.e. you cannot show how the system recovers from an unexpected failure, because, well, it usually won't recover at all 😉 ). I think this works better with ONTAP Select, but setting that up without the Deploy VM is very poorly documented. Does anyone have a solution for that problem?
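For step 1 in the quoted procedure, a minimal sketch of the conversion would look roughly like this (the file names are placeholders; adjust them to the disks that ship with your simulator download, and repeat for each disk):
qemu-img convert -f vmdk -O qcow2 DataONTAP-sim.vmdk DataONTAP-sim.qcow2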
... View more
Hi Sean, what bootargs do you set in the env file? Is there a list somewhere on how to configure, say, 2-node clusters with IP addresses and all? Maybe you could share your script? thanks -Michael
... View more
Did you actually try that command? Because it was the first thing I tried, and it doesn't work:
PS C:\> set-ncnetdns -vserver ClusterF -NameServers ""
set-ncnetdns : Invalid value specified with "-name-servers". Specify a valid IP address.
At line:1 char:1
+ set-ncnetdns -vserver ClusterF -NameServers ""
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (10.91.81.206:NcController) [Set-NcNetDns], EAPIERROR
+ FullyQualifiedErrorId : ApiException,DataONTAP.C.PowerShell.SDK.Cmdlets.Net.SetNcNetDns
... View more
Hi,
I notice that there is Get-NcNetDns and Set-NcNetDns (to get the list of DNS servers, and to change the DNS servers), but there is no corresponding Remove-NcNetDns to remove the DNS config from an SVM or the Cluster.
Would it be possible to add that cmdlet to the next version of the PSTK? Currently we're relying on Invoke-NcSsh for that, and that has some other quirks that make it undesirable.
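For reference, our current workaround looks roughly like this (a sketch only; the SVM name is a placeholder, and it assumes your ONTAP release still accepts the "vserver services dns delete" CLI command and that a controller connection is already established):
Invoke-NcSsh "vserver services dns delete -vserver svm1"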
Thanks
-Michael
... View more
Have you ever found a solution for that issue? We have seen the same issue occur recently and we're equally baffled. Our config is very similar
... View more
You have to make sure the home port for the new LIF is on the newly created VLAN interface (a0a-1501); see the command sketch below. If you are sure that this is correct, the only thing left is a switch configuration issue: maybe the VLAN is not correctly tagged on the ports of your switch, or on the trunk between the switches. You should ask your networking team. -Michael
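Something along these lines should show and, if needed, fix the home port (the SVM and LIF names are placeholders):
network interface show -vserver svm1 -lif nfs_lif1 -fields home-node,home-port,curr-port
network interface modify -vserver svm1 -lif nfs_lif1 -home-port a0a-1501
network interface revert -vserver svm1 -lif nfs_lif1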
... View more
First, do not use the "mixed" security style. Ever. It only leads to problems. Use the security style of the majority of users/accesses; when in doubt, use NTFS. But not mixed. Second, you need something called "user name mapping" configured on your cDOT system, where you map UNIX users to Windows users. This is a required step. You can read about it in the NFS File Access Reference Guide (here) and in the CIFS File Access Reference Guide (here); a small example follows below. Note that it might be a bit much to take in if you're completely new to ONTAP... The partner that sold you the system should be able to help you with the implementation. -Michael
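To illustrate, a name-mapping rule that maps any Windows account in a domain to a UNIX user of the same name could look roughly like this (the SVM name and domain are made-up examples; adjust pattern and replacement to your environment):
vserver name-mapping create -vserver svm1 -direction win-unix -position 1 -pattern "DOMAIN\\(.+)" -replacement "\1"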
... View more
I would definitely separate CIFS and NFS interfaces. For example, NFS for ESX should be in a separate Layer-2 subnet and not routed, so it is usually different from CIFS and requires separate LIFs. Also, in larger environments, especially if you use NFSv3 with ESX or Oracle or something, it helps to use one NFS LIF per datastore/mount. Then, if you later decide to move a volume to an aggregate on a different node ("vol move", for performance reasons or similar), you can move the corresponding LIF to the destination node as well, so that the traffic does not go through the cluster backend network.
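As a sketch, moving a volume and its dedicated LIF together could look roughly like this (volume, LIF, node, port and aggregate names are placeholders):
volume move start -vserver svm1 -volume ds01 -destination-aggregate aggr1_node2
network interface modify -vserver svm1 -lif nfs_ds01 -home-node cluster1-02 -home-port a0a-100
network interface revert -vserver svm1 -lif nfs_ds01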
... View more
You can do a (disruptive) head upgrade from the single node to one of the new systems configured as a single node. Then, when the cluster is running on the new controller, add the second node (HA partner) to make it an HA pair. Or, find a (temporary) second controller of the same model as the single node you have (you can borrow one through your partner from NetApp; it's called "swing gear"), temporarily make a 2-node cluster out of your single system, and then use the regular procedure to upgrade your cluster hardware and remove the old controllers. -Michael
... View more
@Overz wrote: So then, as you initially suggested, the only way to solve this issue is to add cables. There is another option: you can set the failover policy of that LIF to "disabled". That signals the system that you do not intend this interface to fail over, and it suppresses the warning message.
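Roughly, the command would be something like this (the SVM and LIF names are placeholders):
network interface modify -vserver svm1 -lif mgmt_lif1 -failover-policy disabled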
... View more
If you're a complete NFS shop, what do you need CIFS for? If it's only for one share that one or two users access, you could also use Microsoft's NFS client for Windows, for example. That being said, it has been possible for a few months now (since ONTAP 9.0) to run an SVM in workgroup mode without AD. See the man page for the "vserver cifs create -workgroup" command. -Michael
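For reference, workgroup-mode setup would look roughly like this (the SVM, server and workgroup names are made up):
vserver cifs create -vserver svm1 -cifs-server FILER01 -workgroup WORKGROUP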
... View more
Hi, yes, this is definitely a supported configuration. The only shelves which cannot be mixed are DS4486 shelves (they can only be in a stack with other DS4486 shelves) and the new IOM12 shelves (they cannot mix with IOM3 or IOM6 shelves). For details see the Storage Subsystem Technical FAQ, TR-3437 and TR-3838. NetApp recommends having as few SAS<->SATA transitions as possible, for historical reasons, but experience has shown that there is no performance impact for spinning disks, no matter how many transitions you have. Still, I wouldn't put the single SATA shelf in the middle of the stack but rather at one end... Regards -Michael
... View more
There is the volume explore command, which is available from the cluster shell. But it's a diag-mode command and can potentially be dangerous, as it allows you to alter (i.e. destroy) arbitrary data on your disks, so be careful with it. The syntax can be found in the man page and with "volume explore help", but the "scope" parameter is rather complicated, so I'll just mention the general syntax for listing a directory:
volume explore -format dir -scope <vserver>:<volume>/<path...>
For example:
cl1::*> volume explore -format dir -scope Infra:iso/Microsoft
found 1454.64/Microsoft to be inode 1454.96
directory data from block 1454.96@0 at location 1454@42693152b
entry 0: inum 96, generation 156613115, name "."
entry 1: inum 64, generation 1565554427, name ".."
entry 2: inum 97, generation 156630775, name "Windows 2008R2" (8.3 "WINDOW~1")
entry 3: inum 98, generation 156631151, name "Windows 2012R2" (8.3 "WINDOW~2")
entry 4: inum 101, generation 156635340, name "Windows 2012" (8.3 "WINDOW~3")
entry 5: inum 103, generation 156640115, name "Windows 2003R2" (8.3 "WINDOW~4")
entry 6: inum 20097, generation 156643561, name "_WSUS Update ISOs" (8.3 "_WSUSU~1")
entry 7: inum 19467, generation 157064132, name "Windows 8.1" (8.3 "WINDOW~1.1")
entry 8: inum 19468, generation 157064706, name "Windows 7" (8.3 "WINDOW~5")
entry 9: inum 19470, generation 157069086, name "Windows 8" (8.3 "E0K0000~")
entry 10: inum 19475, generation 157471053, name "Windows 2003" (8.3 "K0K0000~")
entry 11: inum 19477, generation 157475446, name "Windows 2008" (8.3 "M0K0000~")
entry 12: inum 29674, generation 164062243, name "Windows 10" (8.3 "AWT0000~")
entry 13: inum 29675, generation 415758740, name "Windows 2016 TP5" (8.3 "BWT0000~")
You can also read file contents, file metadata (timestamps etc.), and even system files with that command. Metadata:
cl1::*> volume explore -format inode -scope Infra:iso/Microsoft
found 1454.64/Microsoft to be inode 1454.96
inode 1454.96 generation 156613115 at location 1454@45369601b+2368:192
type 2.3, flags 0x02, flags2 0x00
size 4096, blockcount 1, future_blocks 0, level 1
umask 0777, uid 0, gid 1, xinode 29673, sinode 0, nlink 14, av-gen-num 0
ctime 28-Apr-2016 14:11:12, mtime 28-Apr-2016 14:11:12, atime 03-Feb-2017 15:48:14
ptr[0]: pvbn 1308139785, vvbn 42693152
Note that if your directory or file name contains spaces, you cannot access it with that command; you have to use the 8.3 short name or the inode number instead. Hope that helps. Regards -Michael
... View more
True, but you failed to notice that the OP was talking about the cluster management LIF (not apparent from the text, but clear once you take a look at his "net int show" output) 😉 YES, the cluster management LIF can definitely share a port with the "regular" data LIFs. This is a fully supported configuration. -Michael
... View more
This is probably a caching issue on the client. Try setting the DirectoryCacheLifetime to zero as described here: https://technet.microsoft.com/en-us/library/ff686200(v=ws.10).aspx -Michael
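For reference, the registry value from that article can be set from an elevated PowerShell prompt roughly like this (a sketch only; back up the key first, and you may need a reboot or a restart of the Workstation service for it to take effect):
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters" -Name DirectoryCacheLifetime -PropertyType DWord -Value 0 -Force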
... View more
No, it is not possible. Please check your suggested solutions beforehand.
a) vol0 is a 7-Mode-style volume and cannot be modified with most cDOT commands
b) vol0 is owned by the "node" vserver, which cannot do NFS (only data vservers can serve file protocols)
c) you would need a data LIF to mount the volume, which cannot be created for node vservers
...and probably a dozen more reasons why it cannot work. Regards -Michael
... View more
You are right, there *should* be no problems. But the fact is, storage vendors usually don't test these scenarios (where a storage array from a different vendor is connected to the same path/fabric). Normally it should work, but if it doesn't, the problems can be very subtle and only manifest in certain specific configurations. We had an issue once where everything seemed to work fine at first, but if you triggered a storage snapshot on the NetApp you would get hundreds of errors in the event log, and on at least one occasion the machine even bluescreened. So while it should work, and probably will work for a while, you should be on the lookout for subtle problems...
... View more
You can of course connect HBAs to two different storage systems from different vendors. From my experience, what will cause trouble, however, is using management tools, snapshot integration drivers (VSS) or host settings tools from both vendors, as these will generally overwrite each other's settings and get confused easily. For example, SnapDrive might try to snapshot 3rd-party LUNs and report strange errors. If you're just connecting these storages for data migration purposes, or are not using any integration tools, you should be fine, though. -Michael
... View more
If you are running 7-Mode, try "ifconfig -a". If you're running cDOT, try "network interface show" to see the IP addresses of the system. Also, "sp status" (7-Mode) and "service-processor show" (cDOT) show the IP address of the management module (Service Processor). Regards, -Michael
... View more
The SVM has to be in an AD domain, otherwise you will not get any CIFS functionality. The clients that connect to the NetApp can be either other AD members (domain computers) or standalone computers (i.e. computers running in workgroup mode). Note that if you try to connect to the SVM on the NetApp cluster from a computer which is NOT part of the same domain, you will have to supply a username and password to connect. This can be either a domain user (using "DOM\User"-style usernames) or a local user created on the SVM (see the "vserver cifs users-and-groups local-user create" command and the sketch below). This is actually the same as with Windows servers and computers, without any NetApp involved. Also, if you really need CIFS with workgroup-style authentication, it is possible that this might be added to Data ONTAP at a later date (all documentation, for example, says it is "not yet available" or "currently not possible")... Regards -Michael
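For illustration, creating such a local user could look roughly like this (the SVM name, CIFS server name and user name are placeholders):
vserver cifs users-and-groups local-user create -vserver svm1 -user-name FILER01\localadmin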
... View more
I have never seen a system experience this particular issue (and we have a lot of FAS2xxx systems in the field). You can take a look at the related bug 682639 and see that a particular part of the problem has already been resolved in 8.1.3, so there's even less reason to worry...
... View more
I haven't tried it on anything recent myself, but at least the slightly older IBM-branded FAS270 filers did work with regular NetApp shelves without any problems, and I am pretty confident this still holds true for the newer systems. Anything else would be strange, since the shelf hardware and the operating system are identical (except for the copyright string). You probably won't have support from IBM (or Lenovo?) for these shelves, but I'm sure you're aware of that.
... View more
"Single powered" sounds to me that there's a PSU off in your filer/shelf, are you sure he's talking about the switch? If a PSU fails your NetApp should definitely have sent you an email already. If you want to manage your switches from (Clustered) DataOntap you can use the "system cluster-switch create" command. Which is best practice anyway, so I would ask the partner/company who installed your filers why they didn't bother doing that... They should have known this
... View more