Yes, this info is correct. I'm not sure why you are mixing everything up by mentioning schedules that won't run, etc... If you are going to script everything, then you can easily script the deletion of the oldest snapshots too (if that is what the problem is, but I can only guess really)... Again (and for the last time), if you tell us what you are trying to accomplish, someone might be able to help you more quickly. I have better things to do than re-read the documentation with you. S.
Arrange to have someone toss it in the bin or dismantle it for spare parts... You aren't far away from losing all of your data anyway, so this way you can just eliminate the element of surprise.
Hi, I think you are a bit confused, so trying to answer your questions is not entirely easy.

A snapshot is a snapshot, fundamentally; its contents depend on where the data comes from. A Snapvault snapshot is basically the contents of the remote NetApp/ONTap system's comparable snapshot (listing the snapshots will show you whether they are "snapvault" snapshots) or the last OSSV state. You can configure Snapvault to take a snapshot after the last transfer to keep a sort of checkpoint of the destination volume's contents over a longer period of time. Volume snapmirror is much the same, but you have a 1:1 relationship with the contents of the source volume.

Removing snapshots that are not locked is done with 'snap delete -V volXXX'. Snapvault (and volume snapmirror) snapshots that are "busy" are locked until the relationship is broken. This is also true for source snapshots of clone volumes, fwiw.

Still, if you share what you actually are trying to do, you might get more specific help. S.
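For illustration, a minimal sketch of checking and cleaning up on the destination (7-Mode console; volume and snapshot names are placeholders):

    snap list -V volXXX               # "busy" in the output means the snapshot is locked by a relationship or clone
    snap delete -V volXXX nightly.7   # removes an unlocked snapshot; it will refuse to delete a busy one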
Hi, Why don't you just set up (initialize) the relationship and then not create a schedule? 'snapvault snap create' will just create a snapshot... You can trigger that via a script or the like. You will also have to script the removal of snapshots you no longer need, using 'snap delete volXXX <snapname>'. Perhaps if you tell us what you are trying to accomplish (the "why" behind your questions), we can suggest a different approach that will get the job done. S.
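As a rough sketch of that scripted approach, run from an admin host with ssh access to the secondary (filer, volume and snapshot names are placeholders; the retention logic is up to you):

    #!/bin/sh
    FILER=secondary-filer    # placeholder
    VOL=volXXX               # placeholder
    # take a named checkpoint snapshot after your manual/scripted transfer
    ssh $FILER snapvault snap create $VOL sv_$(date +%Y%m%d)
    # drop a snapshot you no longer need
    ssh $FILER snap delete $VOL sv_20130101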
Hi, I have a similar setup, sort of thinking in "7-mode" when I ordered my net connections. This doesn't involve my 10G ports, but three ports on each controller: two are in an Etherchannel/Portchannel config and one is just a standalone connection. Basically, the home port is the etherchannel link and the next priority is the single link on each controller. I did this by using a hidden command to set priorities, the home port already having priority 0. You can find more here: https://library.netapp.com/ecmdocs/ECMP1196817/html/network/interface/failover/create.html So there is actually a way to do it without creating per-controller failover groups. You can put everything in a single failover group and then set up a policy per LIF to fail over in a pre-determined order or priority.
"They"? ls -laq .vSphere-HA results in? At worst, if you don't need to keep the volume, you can always destroy it and create a new one. S.
Yes. If you are running ONTap 8.1 or better, then vol move is your tool for moving volumes (and accompanying LUNs). The limitation, unfortunately, is that your volumes (and LUNs) must be in vfiler0 if you are using MultiStore on 7-Mode. Cluster Mode just does this as part of its normal magic, although I don't know how often you'll encounter 32-bit aggregates there. 64-bit aggregates in ONTap 7.3.x also have had some performance problems, so you might want to do a bit of testing before you jump in with both feet. S.
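A minimal sketch of the 7-Mode commands involved (8.1, placeholder names; check the 'vol move' manpage for the cutover options):

    vol move start srcvol dest_aggr64   # start replicating the volume to the destination aggregate
    vol move status                     # watch progress
    vol move cutover srcvol             # optionally force the cutover instead of waiting for it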
I think if you read the 'vol' manpage you will find that you can do this too, not that it really makes any difference. You could sync with snapmirror (also an element of 'vol move') for a year before you did the final move; it doesn't really matter when you start. When you decide to do the cutover, it just does all of the necessary operations in the background, with the added advantage that you don't even have to stop your server. I don't see the point in your objections. S.
You can't. By using LUNs, you leave any such usage considerations to the server using the LUNs. Suggest that they implement mail quotas or the like. S.
You can enable quotas on the volume/qtree, just don't set limits. There's really no point in limits here: the LUN size already limits the LUN data, and a quota won't limit any other per-user data because this isn't a CIFS- or NFS-exported filesystem. You really need to read the documentation on SAN/block data administration. S.
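For the record, tracking (no-limit) quotas on 7-Mode look roughly like this; the volume name is a placeholder:

    # /etc/quotas entry: account for all users on the volume without enforcing anything
    *    user@/vol/volXXX    -    -

    quota on volXXX       # enable quotas for the volume
    quota report volXXX   # show the per-user accounting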
Seeing as how what I suggested actually works and you don't seem to have a better solution, this seems to be more a matter of PEBKAC, from where I'm standing. Enjoy.
Go with the bigger controller and flash pools, if you can. It's easy to get CPU bound if you are going to do much of anything these days (compression of your CIFS data, for example). I don't have the max limits for each controller in my head, but I think the 2240 hits limits pretty quick wrt number of SSDs. S.
I'm not sure I see the problem. Given that one can internally (in ONTap) mount a volume pretty much anywhere in a custom namespace, one can almost use volumes like qtrees, just with a much better "container" for moving data in the background within the cluster. One just exports the volumes as part of the namespace. Admittedly, I haven't spent enough time with this yet to fully wrap my head around all the possibilities. I guess if you exported "hidden" directories below the old qtree level, you had a pretty quirky setup (although I've done CIFS shares a level or two below the qtree before; it wasn't very pretty though).

Migration, if you mean from 7-Mode to Cluster Mode, is pretty painful and the available tools really seem like lab experiments. Getting people over to Cluster Mode means migration for almost all of us, but it doesn't look like that fact has hit home at NetApp yet. VTW was crude; MTT is actually perhaps even a bit worse. Given that NetApp wants to be a major SAN vendor, the lack of migration tools for LUNs seems like a critical blind spot as well. S.
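As an example of that namespace flexibility, mounting a volume at an arbitrary junction path in Cluster Mode looks like this (vserver/volume/path names are placeholders):

    volume mount -vserver vs1 -volume proj_vol -junction-path /projects/proj_vol
    volume show -vserver vs1 -volume proj_vol -fields junction-path
    volume unmount -vserver vs1 -volume proj_vol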
Hi, Looks like you have a lot of spare time. Getting down to ms latencies on every piece of equipment would be hard enough in an environment where you had dedicated and isolated equipment every step of the way.

FC latencies are definitely a matter of the "moving parts", as you say, mostly queuing/buffering along the paths and possibly reordering of commands within switches with multiple paths/ISLs. NFS latencies are probably more complex, given congestion algorithms and the possibility of necessary TCP retransmissions (assuming/hoping you use TCP with NFS). NFS also needs to keep track of the status of files (and parts of files) and directories, and has quite a bit of RPC "chatter" depending on the version. Depending on the client, you also have a fair amount of buffering within the OS, not to mention normal TCP window sizing, NFS read/write size options, Ethernet jumbo frames, TCP offloading or other tuning options for the NICs, general TCP stack tuning, and the like.

NFS just has a lot more slack built in. That's not always a negative thing, but this resilience has its cost when it is needed, i.e. the latency spikes you see. FC is pretty rigid, but almost simple in comparison, and has generally lower latencies. FC also fails in spectacular ways during congestion scenarios that TCP handles with ease, at least if one doesn't use a much more complex fabric configuration and strict queue depth policies on servers. It's like a fine sports car while NFS is like your reliable jeep.

I wish you luck in your endeavor, but I would be slightly surprised if you succeed. Isolating all of the parts and settings would be a very complex matrix. S.
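Just to give a feel for how many knobs sit on the NFS side alone, a typical Linux mount with the sizing options spelled out might look like this (values are only examples, not recommendations):

    mount -t nfs -o vers=3,proto=tcp,hard,rsize=65536,wsize=65536 filer:/vol/volXXX /mnt/volXXX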
Hi, Thin provision everything (no volume guarantees, no LUN space reservation) and stop using snap reserve (set it to 0). Configure SME to not complain about the thin provisioning. Set up volume autosizing and warning levels on your aggregates and live a quiet life. S. (If it hurts, stop doing it...)
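In 7-Mode commands, that boils down to roughly the following (volume/LUN names and sizes are placeholders):

    vol options volXXX guarantee none                 # thin provision the volume
    lun set reservation /vol/volXXX/lunXXX disable    # no LUN space reservation
    snap reserve volXXX 0                             # stop using snap reserve
    vol autosize volXXX -m 1200g -i 50g on            # let the volume grow before it fills up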
Asked and answered. Give Scotty his 10 points. No QSM. Snapmirror gives you 1-to-Many mirroring. Given that you can mount volumes anywhere in custom namespaces and the ever increasing maximum number of volumes, the role of the qtree is taking a backseat to bigger ideas. S.
Hi, It's still all a bit confusing because you are not specific enough about which NetApp the results you sent are for. The first results here show that you aren't using SMB2; the new cifs stat output looks more like you are using SMB2. Try to keep your posts to the minimum of relevant information, and keep the tested combinations clearly labeled. Also, watch the output of 'sysstat -x 1' while you are doing your tests. Use a new, empty volume without any dedupe/compression. Try using a single identical file for all tests, or a single config in a program like "iometer". The 35MB/s you post is pretty typical "smb1" speed. S.
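For the test runs themselves, something along these lines on the filer console keeps the results comparable (a sketch):

    sysstat -x 1   # watch CPU, network and disk utilization while the copy runs
    cifs stat      # check the operation mix afterwards; SMB2 ops only show up here when SMB2 is actually negotiated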
Hi, The CIFS/SMB protocol is not very performant. If you have clients newer than Win2k3 and XP, SMB2 will give you much better performance. I see, however, that you aren't using it. Enable it on the filer CLI with 'options cifs.smb2.enable on'. Do take some time to read the documentation as well; many of your questions will be answered there with much better authority than you will get on Communities. You might also want to try updating to a much newer 7.3.x release to take advantage of some performance enhancements in ONTap. S.
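Concretely, on the filer console (clients still have to negotiate SMB2 themselves, so Win2k3/XP sessions won't benefit):

    options cifs.smb2.enable on   # allow SMB2 for new client sessions
    options cifs.smb2.enable      # re-run without an argument to verify the setting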
This would seem to only be available for machines running in Cluster Mode. I have wondered for years why there is such a disconnect between new functionality in ONTap and the corresponding monitoring capabilities. I assume that most new functionality gets its internal counters exposed either via XML or SNMP interfaces, but DFM (..., ...) has no way to monitor:
1. vscan performance ...
2. deduplication performance (when it runs, how long it runs, how many GB are scanned)
3. reallocation performance (when it scans and when it actually reallocates)
4. vol autosize ... when it runs and when it is close to maxsize... perhaps even better ways of setting thresholds...
Some of these monitoring activities could lead to more automation of certain more mundane tasks...
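Until that changes, the closest workaround I know of is polling the 7-Mode CLI from a script; a sketch (commands from memory, output parsing left out):

    sis status             # dedupe: state, progress and last run per volume
    reallocate status -v   # reallocation scan and job status
    vol autosize volXXX    # prints the current autosize setting, increment and maximum size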
Hi Adaikkappan, Yes, I think adding any possible references to the actual underlying technology used will be useful, probably most useful for those of us who have been doing things on the CLI on the filers for years. Perhaps even set up two policy views (expert and normal) or whatever... If you really get ambitious, then you can also update the "List of Performance Counter" documentation, as it is missing a good many of the counter descriptions that came with FlashCache (PAM). It should be easy to get a quick update going from 'stats explain counters'.
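For example, the descriptions could be pulled straight from the counter manager; the object name used for the Flash Cache counters below is from memory, so verify it first:

    stats list objects                     # list the available counter objects
    stats list counters ext_cache_obj      # counters for the Flash Cache / PAM object (assumed name)
    stats explain counters ext_cache_obj   # the per-counter descriptions the document is missing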