The FAS3200-series filer has already been cabled and powered on. I am looking for a cookbook listing all the required steps to set it up, either at a high level or in detail for each step. At a high level, what I can think of is the following:
- configure the network
- set up CIFS
- set up aggregates, volumes...
- SnapMirror
- go through all options
- ...
Please add anything you can think of. Thanks!
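For what it's worth, here is the rough 7-Mode command sequence I have sketched out so far. It is only a sketch: the aggregate, volume, and share names are placeholders I made up, and the disk counts, sizes, and options will differ on your system.

> setup                                    # guided config of hostname, IPs, DNS; writes /etc/rc
> aggr create aggr1 -t raid_dp 24          # RAID-DP aggregate over 24 disks (placeholder count)
> vol create vol_cifs aggr1 500g           # flexible volume for CIFS data (placeholder size)
> cifs setup                               # join the domain or workgroup
> cifs shares -add share1 /vol/vol_cifs    # publish the share (placeholder name)
> license add <code>                       # add the SnapMirror license
> snapmirror initialize -S srcfiler:vol_cifs vol_cifs_mirror   # run on the destination filer
> options                                  # walk through the remaining options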
Hi Michael, Looking at your screenshot, I could not locate this "processor summary view" screen shown in your message. Would you please advise which tool (Operations Manager, Performance Manager, or ...) you used to get to this screen? Thanks for your time.
One of our filers sent a high cifs_latency alert (77 ms versus the normal 44 ms) from Performance Manager last night. Since this metric is measured at the filer level, as I understand it, are there any ways to dig further and find out which share(s) are causing the issue? Thanks!
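In case it helps, here is what I plan to try from the console to narrow it down; I am assuming per-client statistics can be enabled temporarily (I believe they add some CPU overhead while on):

> options cifs.per_client_stats.enable on    # required before cifs top will report anything
> cifs top -s ops                            # top CIFS clients sorted by operation count
> stats show volume:*:cifs_ops               # per-volume CIFS op counts
> stats show volume:*:avg_latency            # per-volume average latency, to find the busy volume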
Michael, Thanks for looking into it further and providing all the useful info. Within Performance Manager, can I trigger the execution of a script upon the alert, so that I can then run the ps command in diag mode as you suggested? When the high CPU happens is random; so far it seems to be outside office hours.
Hi Michael, We are running 7-Mode, 8.0.2. Thanks for your message, but I did not make myself clear. I am actually looking for historical data, since the high CPU utilization is happening in the evening. Is there any way to trace back which processes were eating most of the CPU time, for example by using Performance Manager? If not, can I trigger the execution of a script upon the alert within Performance Manager, so that I can then run the ps command in diag mode as you suggested?
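To make the script idea concrete, this is roughly what I have in mind on the DFM server. The script name and event name are placeholders, and the flags are my guess from the docs, so please correct me if the syntax is off:

dfm script add cpu-capture.zip                   # register a capture-script package with DFM
dfm alarm create -h <cpu-event> -s cpu-capture   # run it when the CPU event fires (flags unverified)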
We can see constantly high CPU utilization on a filer running CIFS and FC. Is there any way to track down which processes are taking the most CPU time? Thank you for your input!
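For reference, these are the console commands I know of for a live look when I can catch it in the act; I am assuming advanced/diag privilege is acceptable on a production filer:

> priv set advanced
> sysstat -M 1          # per-CPU / per-domain utilization, one-second samples
> priv set diag
> ps                    # per-process CPU accounting
> priv set admin        # drop back to normal privilege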
I need to set up the /etc/rc file on two 10G ports, and they are configured for LACP. However, it is not working; I have two obvious issues here. First, the interface file12345678NAS-88 could not be created: only the portion "file12345678NAS" was taken. Second, although the IP is associated, the connection is not there and I could not ping it. Please take a look. Any idea what went wrong? Thanks!

rdfile /etc/rc
#Auto-generated by setup Thu Aug 15 11:05:50 EDT 2013
hostname filer1
ifgrp create lacp file12345678NAS -b ip e5b e1b
vlan create file12345678NAS 88
ifconfig file12345678NAS-88 158.8.88.12 netmask 255.255.255.0 partner filer2-88 mtusize 1500
ifconfig e0M `hostname`-e0M flowcontrol full netmask 255.255.255.192 partner e0M mtusize 1500
ifconfig e0a `hostname`-e0a mediatype auto flowcontrol full mtusize 1500
route add default 158.8.88.1 1
routed on
options dns.domainname citnet.cit.com
options dns.enable on
options nis.enable off
savecore
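One thing I noticed while staring at this: "file12345678NAS" is already 15 characters, and if 7-Mode interface names are limited to roughly that length (I believe they are, though I have not confirmed the exact limit), then the VLAN interface name "file12345678NAS-88" would get truncated. Here is what I am going to try instead, with a shorter ifgrp name (ifgrp0 is just a placeholder):

ifgrp create lacp ifgrp0 -b ip e5b e1b
vlan create ifgrp0 88
ifconfig ifgrp0-88 158.8.88.12 netmask 255.255.255.0 partner filer2-88 mtusize 1500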
These two statements were excerpted from TR-3749, but why is it that with NFS I can see each VM's files, while with a LUN I see them all as one object? Please explore this a little bit more. Thanks!
"In NFS datastores, each virtual disk file has its own I/O queue directly managed by the NetApp FAS system. In LUN datastores, it doesn't natively allow a storage array to identify the I/O load generated by an individual VM." Would you please explore these two statements more? I could not fully understand them. For NFS, please explain in detail, or by example, how an I/O queue is directly managed by the filer. Why, in LUN datastores, can't the filer identify the I/O load? Also, a "best practices" document says that a virtual disk could be connected to all nodes in the cluster but is only accessed by a single VM; I thought "shared" meant it could be accessed by multiple VMs. Thanks for your inputs.
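To show where my confusion comes from, this is roughly how the difference looks from the filer console (the volume and LUN names are mine, and I believe ls is available at advanced privilege, but please correct me):

> priv set advanced
> ls /vol/nfs_ds1/vm1                  # NFS datastore: each VM's .vmx/.vmdk is an individual
                                       # file on the volume, so the filer can queue I/O per file
> lun show -v /vol/fc_vol/vmfs_lun1    # VMFS datastore: the filer sees one opaque LUN; the
                                       # VMDKs live inside it, invisible to the array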
I am new to integrating NetApp storage into a VMware environment, so my question may sound very basic. If I understand correctly, a LUN or an NFS share can be thought of as a datastore, and in order to share it between multiple ESX hosts, those hosts have to be clustered together. Multiple datastores can be shared by the same cluster. Such an environment can be managed through the vSphere web console. Am I right so far? If, for instance, 2 datastores are shared by a cluster, is there any way I can move a virtual machine from one datastore to the other? Also, can a datastore be shared by multiple different ESX clusters? Thank you very much in advance!
I feel that removing the ownership would be the right direction. However, it won't let me, even in advanced mode:

> disk remove_ownership 0b.26
disk remove_ownership: Disk 0b.26 is not owned by this node.
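Since the disk appears to be owned by a stale system ID (118059988) while the healthy disks show 101203034, I am thinking of forcing the assignment to this node. This is only a guess at the fix, and I would want confirmation that the force flag is safe here before running it:

> disk assign 0b.26 -s 101203034 -f    # force-assign to the system ID the healthy disks show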
> disk remove 0b.26
disk remove: Disk 0b.26 not found

It's been more than 5 hours now, and there are no changes. The interesting thing is that all of the following numbers on this disk differ from those on the rest of the good disks, and the good ones all show the same number. So it seems we need to find a way to change these numbers to match the others.

Current owner: 118059988
Home owner: 118059988
Reservation owner: 0
I found something interesting. "storage show disk 0b.26" works, but the following numbers are different from those of the rest of the drives:

Current owner: 118059988
Home owner: 118059988
Reservation owner: 0

When I run "storage show disk 0b.25", or the same for any other good disk, I get all the same numbers:

Current owner: 101203034
Home owner: 101203034
Reservation owner: 101203034

It seems to be an ownership issue... I tried "disk assign all", but it does not work:

disk assign: Could not find any unowned disks to assign.

Obviously, that is because "disk show -n" returns nothing.
> And (sorry, I have to ask) - you're SURE it's not in the "sysconfig -r" output? Maybe under partner disks, or broken disks - or even already part of an aggregate?

Yes, I am pretty sure that it is not in "sysconfig -r": not in broken disks, and not in spares. The address of the disk is 0b.26. I also used "aggr status -r"; it is not in any aggregate either.
> So you had a drive fail, replaced it, and now you can see it under "sysconfig -a" but not "sysconfig -r" or "disk show -n"?
Yes.
> Where does it show up in "sysconfig -a"? Just under the host adapter with the rest of the drives?
Yes, it is under host adapter 0b with the rest of the drives.
> Do you see the drive in "storage show disk" and "disk show"?
I can see the drive in "storage show disk", but not in "disk show".
Looking forward to hearing from you again. Thanks!
I know it should show up as a spare, but unfortunately it does not in this case. It is the same size as the other existing SATA drives on the filer, and the same firmware is already in use. Any idea?
What is the solution for that? It's not being used, because I cannot see it in "sysconfig -r": I don't see the disk in any RAID group, in any aggregate, or among the spares.
I have a failed SATA drive. Usually I can see the replacement by running "disk show -n", and then run "disk assign all" to add the disk. In this case, however, I don't see any disk. I tried "disk maint status" and "sysconfig -r" as well, and still cannot see it. I can only see it by running "sysconfig -a". Please let me know if there is anything else I can try. Thanks for helping me out.
As I understand it, a NetApp HA cluster can only fail over and fail back within about 60 miles. If I wanted to fail over between, for instance, NY and IN, is there any NetApp product that can do that?
I have been asked about NetApp "clustered disk", and I have no idea what it means. Would you please shed some light on it? I don't know under what circumstances this would be used; unfortunately, I don't know anything else about it either. Thanks for your ideas!
It sounds to me, then, that WFA covers a lot of the features of Provisioning/Protection Manager, so I should focus on WFA and not spend too much time on the latter?
If I already have the OnCommand Unified Manager provisioning capability, why do I need the Workflow Automation software? I know Workflow Automation can take over a lot of administration tasks, but do I really need it? Thanks!