Greetings! Threads already referenced on this topic:
https://communities.netapp.com/message/49600#49600
https://communities.netapp.com/message/20969
http://now.netapp.com/NOW/knowledge/docs/ontap/rel80/html/ontap/cmdref/man1/na_reallocate.1.htm

I have a volume that yields the following upon a "reallocate measure":

[afiler: wafl.reallocate.check.highAdvise:info]: Allocation check on '/vol/dbvol' is 23, hotspot 0 (threshold 4), consider running reallocate.

I'm looking to better understand the following concepts:

Allocation check - What does an allocation check of "23" mean?
Threshold - Same as above. 4 out of what? Is this an absolute or a relative measure?
Hotspot - Same as above.

How long should an actual reallocation take (is it comparable to the time taken to complete the "reallocate measure" command)? What's the CPU overhead, and what factors does that overhead depend on?

Thanks.
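For reference, a rough sketch of the 7-mode commands involved, using the volume from the message above (the -f/-p flags are my assumption for a one-time, snapshot-friendly run - check the na_reallocate man page linked above before running anything):

    reallocate on                      # enable the reallocation scanner on the controller
    reallocate measure /vol/dbvol      # the layout check that produced the "23 (threshold 4)" message
    reallocate start -f -p /vol/dbvol  # one-time full reallocation; -p limits snapshot space growth
    reallocate status -v               # monitor progress of measure/reallocate jobs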
An aggregate can only run on a single controller, so I'll want at least two (one for each controller). The size limit appears to be 16TB. On the first controller I was planning on running all 24 600GB drives in a single aggregate for our primary ESXi environment and Exchange systems. How would one configure these disks as a general rule? Using RAID-DP? Assuming that, plus one HS (hot spare), I have an odd number of disks (23). Is one HS enough?

Look at the System Configuration Guide for the 3140. Depending on the ONTAP version you want to run, the aggr/vol limits are different. If you go with 8.0.2, then you can have 64-bit aggregates and make them larger - 50TB, IIRC. The link to the SysConfig Guide is http://now.netapp.com/NOW/knowledge/docs/hardware/NetApp/syscfg/scdot802/index.htm
I wouldn't do anything other than RAID-DP. NetApp allows you to run with one hot spare per controller. If you enable the "disk maintenance center", you'll need two hot spares; that allows the system to proactively copy data off disks that are going suspect so you can switch out the drives. Some reading on these topics:
http://partners.netapp.com/go/techontap/matl/storage_resiliency.html
http://media.netapp.com/documents/tr-3437.pdf
http://www.netapp.com/us/library/technical-reports/tr-3786.html
http://now.netapp.com/NOW/knowledge/docs/bpg/ontap_plat_stor/data_avail.shtml

I've been reading up on using NFS instead of FC. I already have an entire FC environment with (I think) plenty of ports. The major point I've seen for running NFS is that dedupe will "release" the reclaimed space back to ESX when done via NFS, but not in a VMFS/FC environment. If block-level dedupe runs against VMFS volumes, where does the free space show up - or does it show up at all? Would you thin provision the VMFS LUNs and the space returned from the dedupe process? I have 8Gbit FC running, so 10Gbit Ethernet probably wouldn't make much difference on transport performance.

Depending on the application, you would be using either iSCSI or NFS. SME (SnapManager for Exchange) does have the ability to manage thin provisioning, etc. Not sure how much you would get out of dedupe. More discussion is probably in the offing.

I was planning on running CIFS on the other controller with the 2TB drives. If I'm reading things correctly, there is a 16TB limit for an aggregate, so I'll need multiple aggregates, right? Are hot spares assigned to an aggregate? I was also planning on running some test/low-IO VMs on SATA (I'm doing that now on the CX4 with R4+1 groups without any problems). Nearly all of the NAS data on our current systems runs on SATA drives and performs adequately. It seems to me that with 2TB drives you hit the 16TB limit with just 8 disks... two 10-disk RAID-DP sets, a hot spare, and I've got 3 disks that I'm not sure what to do with.

Again, look at the links above to get some insight and recommendations on hot spares, aggregate sizing, etc.

Currently we run two distinct AD domains that don't interact with each other. I have 3 CIFS servers in one domain, and the data is grouped in "failover" units, i.e., I can move a business function to the remote site without having to move the whole thing. The other domain has a single CIFS server with 3TB of data that is 99% of the time read-only. Is it possible to replicate this setup, or is it time to rethink things during the migration?

Multiple domains are possible using MultiStore (vfiler functionality).

We only use one CIFS server "by name" and everything else is presented via DFS.
On the one, I'll have to do a hard cut on the name - there are only about 5 shares referenced by name - but how easy is it to rename the CIFS server?

If that file server is not doing anything else and that name can be taken away from it, then you can transfer the name to the NetApp system using the netbios aliases feature.

Another question about dedupe... is this at the aggregate level? One of the issues on the Celerra (of many) is that dedupe does not cross filesystem boundaries. The Celerra has been able to dedupe/compress about 20% of the data. I know it isn't apples to apples, but as far as raw storage for comparison, I'm doubling in size and on the original system I have about 5TB that isn't even allocated to anything.

Dedupe is at the volume level.

HTH - I'll let others chime in with more info/direction.

rajeev
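To put the aggregate and dedupe points above in command form, a rough 7-mode sketch (aggregate and volume names are made up, and the disk count / RAID group layout would need to be sized against the System Configuration Guide and TR-3437 linked above):

    aggr create aggr_sas -t raid_dp -B 64 23      # 64-bit RAID-DP aggregate from 23 of the 600GB drives (8.0.x+)
    options disk.maint_center.enable on           # disk maintenance center; note it wants two hot spares

    sis on /vol/esx_datastore                     # enable dedupe on a (hypothetical) datastore volume
    sis start -s /vol/esx_datastore               # scan and dedupe the data that already exists
    df -s /vol/esx_datastore                      # show space saved - this is where the freed space shows up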
Got it. The link had a trailing ":" in it that was causing the error. It is indeed there. Now I guess we can get back to the original question.
I am getting an "Article not found" error when I go to that link - are you sure you are not seeing a cached page?

"Article Not Found. The article was not found, or is no longer available."
It's a bit late on this reply, but I was looking into the LDAP stuff because someone else asked me a question about it. The options ldap.name and ldap.passwd are used for SASL binding. You have a value set for ldap.name - blank that out, and for good measure blank ldap.passwd as well (note that you'll still see the six *s after this). That should set up a non-SASL bind with your settings.
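From the console, blanking them out would look something like this (a sketch - verify the current values first before changing anything):

    options ldap              # list the current ldap.* settings
    options ldap.name ""      # clear the bind name so SASL binding is not attempted
    options ldap.passwd ""    # clear the bind password (it will still display as ******)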
What is funny is that a lot of people at NetApp are asking questions about how people are using 64-bit aggregates, but are not willing to - or do not have - answers to the questions that are asked of them. Maybe no one wants to come out and say "Oh, we didn't think of that..". Yes, we all have our own individual methods - most of them depend on the specific environment - but it appears that some of these questions have some people, quite honestly, stumped. (/me shakes head/)
That does not seem right. It appears that the filer is configured to do local user authentication. Can you turn on cifs.trace_login and see what the error is? AFAIK, if you do Windows AD authentication, you do not need any /etc/passwd entries. http://media.netapp.com/documents/wp_3014.pdf
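If it helps, a minimal sketch of toggling the trace from the console (I believe the login trace output lands on the console and in /etc/messages):

    options cifs.trace_login on      # enable detailed CIFS authentication tracing
    # reproduce the failing login attempt, then review the messages
    options cifs.trace_login off     # turn it back off when done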
Doh! I should have read the post more carefully. Eugene is on point: if the file is created and deleted between snapshots, then it is not captured.
Thanks to Google Translate. Is this node part of an MSCS cluster? And if so, is SnapDrive installed on both nodes? If it is not part of a cluster, then maybe check for all the hotfixes. Looks like this is a catch-all error message from MSFT.
You should have received an estimated number of minutes to complete the initialization at the start of that process. I believe the 2TB drives take about 6 hours (it should not matter how many drives there are, since they are initialized in parallel). I am a bit surprised that you've been running it for over 6 hours and it is still going.
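Assuming the initialization in question is disk zeroing, a couple of read-only commands that show progress (a sketch - the exact output varies by ONTAP version):

    aggr status -s     # lists spare disks; disks still being zeroed show a "zeroing" state
    sysconfig -r       # RAID/disk layout, including percent-complete for disks being zeroed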
The array LUN groups should have separate paths to the V-Series via separate fabrics. With two host ports per controller, if you follow the traditional EVA practice of connecting FP1 of both controllers to switch1 and FP2 of both to switch2, then the array LUN group would not have EVA controller redundancy. From an EVA standpoint, the (traditional) odd and even connections also look to spread their connections across the switches. I am no EVA expert, but I am wondering if the below works (and is supported) - it *may* meet both the NetApp and EVA requirements:

SW1 - C1-FP1 -- C2-FP1 - SW2
SW2 - C1-FP2 -- C2-FP2 - SW1
My suggestion for the EVA-to-switch connections would be to put all odd FPs on one fabric and all the evens on the other: FP1 & FP3 to switch1, and FP2 & FP4 to switch2. I don't have any diagrams to share with you, I must admit. HTH
I believe the ProCurve 2510s are stackable, so in theory one can create an LACP VIF with members across the two switches. Having said that, the specifics in the diagram you laid out beg a few questions/clarifications:
a) On the FAS2020, "multi vif01 (lacp)" is indicated. In the NetApp world, a vif type is either multi or lacp - not both. Type "multi" is a static multimode vif (a static EtherChannel, where the switch ports must be configured as a static trunk), while type "lacp" is a dynamic multimode vif that negotiates the aggregate via LACP and likewise requires switch configuration. You'll need to pick one (there's a rough LACP example after point b below).
b) I can't speak for the switch side settings but I did dig up this doc that might be of use: http://www.hp.com/rnd/support/config_examples/2524_lacp.pdf
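If you go the LACP route, the 7-mode side would look something like the lines below (the interface names e0a/e0b and the IP are placeholders; both lines would also go into /etc/rc to persist across reboots, and the ProCurve ports must be configured as a dynamic LACP trunk per the HP doc above):

    vif create lacp vif01 -b ip e0a e0b
    ifconfig vif01 192.168.1.50 netmask 255.255.255.0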
HTH
I am not finding any flexscale outputs in the hourly performance archiver files. I can see them in the cm_hourly files, however. Can you confirm that this is expected behavior and can this be changed, so I can get more granular flexscale outputs? (And yes - I can always run statit - but... )
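In the meantime, if interactive sampling is acceptable, I believe the Flash Cache counters can also be pulled with the stats presets rather than statit (a sketch - preset availability depends on ONTAP version):

    stats show -p flexscale-access     # per-interval Flash Cache hit/miss breakdown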