volume efficiency start -vserver vs1 -volume vol1 -scan-old-data true -b

I am running 8.2.1P1. There was an option, "-snapshot-blocks true", which should compress blocks locked in snapshots, as I understood it. However, the option did not work; the error message was "invalid argument". I used the "-b" option instead, and it was accepted. According to TR-42369, it should do the same as "-snapshot-blocks true". Is that true?

The following command does not work either, although according to TR-3966 it should:

volume efficiency start -vserver vs1 -volume vol1 -scan-old-data true -compression true -dedupe true -shared-blocks true -snapshot-blocks true

I wanted to make sure snapshots will be included in compression. The volume "vol1" that I am working on is a 1.6TB volume for VMware vCenter; about 1.2TB of it is logs and snapshots, which is why I wanted to use the "-snapshot-blocks true" option. Should it be considered normal that neither compression nor dedupe saved much on logs? I only saved 3% by deduping. Is there any logical explanation here? Please stay with me to get it done. Thank you!
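For reference, this is how I have been checking which parameters my release actually accepts (vs1/vol1 as above); if I understand the CLI correctly, typing "?" at the end of a partial command lists the remaining valid options:

clu1::> volume efficiency start -vserver vs1 -volume vol1 -scan-old-data true ?
# The "?" prints the remaining valid parameters; if -snapshot-blocks is
# not among them, the option simply does not exist on this release.
clu1::> volume efficiency show -vserver vs1 -volume vol1
# Confirms whether the scan actually started and how far it has gotten.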
Hello BOBSHOUSEOFCARDS, thanks a lot for the message. While I need some more time to go through it, I have the following specific question. Dedupe has already been enabled and saved 3%. I did the following to enable compression. My question is: should the second command start only compressing the volume, or start both deduping and compressing?

volume efficiency modify -vserver vs1 -volume vol1 -compression true
volume efficiency start -vserver vs1 -volume vol1 -scan-old-data true -b # If I understand correctly, -b is to compress locked snapshots.

As a result, if I run "vol show -vserver vs1 -volume vol1 -fields compression-space-saved", I get 0 compressed, which means I saved nothing by compression.
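For anyone following along, these are the fields I know of for breaking down savings on a volume; a quick sketch (vserver/volume names are from my environment, adjust as needed):

clu1::> volume show -vserver vs1 -volume vol1 -fields compression-space-saved, dedupe-space-saved, sis-space-saved
# compression-space-saved : savings from compression only
# dedupe-space-saved      : savings from deduplication only
# sis-space-saved         : combined efficiency savings

Comparing the three makes it easier to tell whether a scan that reports 0 compressed at least moved the dedupe number.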
I know NetApp dedupe, which takes place on the storage. What about backup software, for instance CommVault deduplication? Does it take place on the hosts? Can it also run on the storage? What would make you prefer one over the other? Thanks for sharing!
It worked out as you said, and very well. Now, why did I not get any space back when I enabled post-process compression on the deduplicated volume? What type of data would compression work better on, even after deduplication? Or is compression usually not needed if dedupe worked well?
Thanks a lot for the prompt message.
1. Yes, it is thin provisioned. Space Guarantee in Effect: true
2. It is an NFS volume, no LUN.
3. The only snapshot taken is the result of SnapMirror. Does this snapshot count as well? And is this the reason I could not get the space back before this snapshot is removed, which will not happen until the SnapMirror relationship is deleted? Correct? In that case, if we schedule the dedupe process to run before the snapshot is taken, would that change get the deduped space back?
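To see whether the SnapMirror base snapshot is what is pinning the old blocks, I have been using something like this (the volume and destination path are placeholders for my environment):

clu1::> volume snapshot show -vserver vs1 -volume vol1
# Lists every snapshot with its size; the snapmirror.* snapshot holds
# the blocks as they existed before dedupe ran.
clu1::> snapmirror show -destination-path dest_vserver:dest_vol
# Confirms which snapshot the relationship currently uses as its base.

A snapshot taken before dedupe still references the undeduplicated blocks, so the savings do not show up as free space until that snapshot rotates out or is deleted.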
Earlier I ran dedupe on a 1TB volume that was 94% full. After the process completed, it showed me 700GB of space saved (vol show -vserver xx -volume xx -fields dedupe-space-saved). However, when I run vol show on the volume, it still shows 94% full. What steps do I need to go through in order to get the space back?
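In case it helps to narrow things down, these are the checks I ran (the xx placeholders as in my command above):

clu1::> volume show -vserver xx -volume xx -fields size, used, available, percent-used
# Confirms the live usage numbers behind the 94%.
clu1::> volume snapshot show -vserver xx -volume xx
# If snapshots predate the dedupe run, they keep the old blocks
# referenced, and the freed space will not appear until they are
# deleted or age out.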
Could you please provide the right link to the document? Or what is the detailed command to fail back to a new volume name? I appreciate your patience; there are a lot of docs, and I am just not sure which one is the right one.
Please, could anybody share the following:
1. Would you please send me the link to the detailed doc about failback?
2. What is the command to fail back and update the original production with what has been changed on DR?
3. When I fail back, can I fail back to a new volume?
Thank you for sharing.
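While waiting for the official doc link, this is my understanding of the failback sequence in clustered ONTAP; a rough sketch only, with placeholder vserver:volume paths, so please correct me if the order is wrong:

clu1::> snapmirror create -source-path dr_svm:dr_vol -destination-path prod_svm:prod_vol -type DP
# Reverse the relationship so production becomes the destination.
clu1::> snapmirror resync -destination-path prod_svm:prod_vol
clu1::> snapmirror update -destination-path prod_svm:prod_vol
# Copy the changes made on DR back to production.
clu1::> snapmirror quiesce -destination-path prod_svm:prod_vol
clu1::> snapmirror break -destination-path prod_svm:prod_vol
# Production is writable again; afterwards the original direction can
# be re-established with another create/resync.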
Would you please send me the link to the doc, and what is the command to fail back and update the original production with what has been changed on DR? Thanks a lot!
These commands seem helpful, and I will look into them further. The configuration here looks like the following:

LIFNFS-1 and LIFVMWARE-1 are on node1; these 2 LIFs are on 2 different VLANs and go down to the same ifgrp (4 physical ports on node1).
LIFNFS-2 and LIFVMWARE-2 are on node2; same as above, but on the 4 physical ports of node2.
LIFNFS-1 and LIFNFS-2 are DNS load-balanced, and so are LIFVMWARE-1 and LIFVMWARE-2.

If I understand correctly, with this configuration, accesses from the same type of client should be balanced well between the 2 ifgrps on the 2 different nodes. The only contention might happen when two different types of access coincidentally hit LIFs/VLANs on the same node, and therefore the same ifgrp; the contention would then be at the node level. Even when that happens, accesses should still be balanced across the 4 physical ports. Am I understanding this correctly?
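To double-check the layout described above on the cluster itself, something like this should show it (node names are from my environment):

clu1::> network port ifgrp show -node node1
# Lists the ifgrp, its distribution function, and the 4 member ports.
clu1::> network interface show -fields home-node, home-port, curr-node, curr-port
# Confirms which node and ifgrp each LIF currently sits on.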
Thanks a lot for your message; it was very helpful. These two types of access go through 2 different LIFs, which are based on two different VLANs on the same ifgrp. The ifgrp is built on 4 physical ports at 10Gbit each, in multimode-lacp mode. The common assumption here is that contention on 4x10Gbit should not be a concern. What tools / commands can I use to verify or monitor whether we have any contention on the ifgrp or its ports?
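For what it's worth, these are the ways I know of to watch utilization; a sketch with placeholder node names:

clu1::> statistics show-periodic -node node1
# Periodic throughput counters for the node, updated each interval.
clu1::> node run -node node1 -command ifstat -a
# Drops to the nodeshell ifstat for per-port send/receive statistics,
# which is where saturation on individual ifgrp members would show up.
clu1::> node run -node node1 -command sysstat -x 1
# One-second samples including network KB/s in and out.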
Sorry for the confusion, and thanks for the correction. When I said VLANs, I really should have said LIFs, which are built on VLANs. As I said, although we have several SVMs, 2 major applications access volumes through the same SVM; therefore, most client traffic goes through the LIFs of that SVM. I am wondering whether this configuration would cause any traffic issues on the LIFs of this SVM. The rest of the SVMs are lightly used. Shouldn't we at least separate these two kinds of traffic onto two different SVMs, and then different LIFs? Or would this configuration cause any other issues? Hopefully you understand me better this time; if not, tell me what I am missing. Thanks!
Currently, we have two major types of access: NFS access by different hosts (Linux, Unix, etc.) and the VMware datastore. Both use the same major SVM. Both are on 2 different primary subnets via two different VLANs. Which part do I need to look into to find out whether or not there are any issues with this design? By using two different VLANs, should we then be okay using the same SVM? Thanks for your input!
Thanks a lot for your prompt response. I understand that I cannot remove files under .snapshot on the client. Can I then remove the entire snapshot "hourly.2015-04-06_1205" on the client?
-> pwd
/some/directory/.snapshot/hourly.2015-04-06_1205
-> rm file1
rm: remove write-protected regular file `file1'? y
rm: cannot remove `file1': Read-only file system

I can delete them from NetApp storage. Is there any way I can allow the user to delete their snapshots? Thanks,
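For reference, the storage-side deletion I mentioned is just this one command (the vserver and volume names are placeholders for my environment); as far as I know, the .snapshot directory is always read-only over NFS, so there is no client-side equivalent:

clu1::> volume snapshot delete -vserver vs1 -volume vol1 -snapshot hourly.2015-04-06_1205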
Right, understood: an NFS volume doesn't use F.R. Should I change the value to 0%, and are there any reasons to change it, since everything is set to 100% here?
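If changing it does turn out to be the recommendation, my understanding is that it is a one-liner per volume; a sketch with placeholder names:

clu1::> volume modify -vserver vs1 -volume vol1 -fractional-reserve 0
clu1::> volume show -vserver vs1 -volume vol1 -fields fractional-reserve
# Verify the new setting took effect.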
If a volume is exported over NFS and its F.R. is set to 100%, are there any impacts on the volume?
As I understand it, F.R. is only useful on a volume with a LUN.
If it is not useful on an NFS volume, do I have to change it to 0%, or can I just leave it at 100%?
Thanks!
I ran into the same situation, and I feel this parameter is not very meaningful. So, how can I disable the Global threshold? I know I can edit the value, but nowhere can I find how to disable it. Please advise. Thanks!
I found this link. We are getting the same error when we run an ssh command line to the filer: ssh id@filer. We cannot upgrade the version for now. Are there any solutions for that?
I am trying to set up a passwordless ssh connection, but got the following error. Using the same command, it was successful on the other cluster. Could you please advise what the cause could be? What I can tell is that these 2 clusters are running two different versions: one is 8.2.3 and the other is 8.2.1P1.

clu2::> security login create -username xyz -application ssh -authmethod publickey -profile xyz
Error: command failed: failed to set field "role" to "xyz"
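My own guess, and it is only an assumption, is that the error means no role named "xyz" exists on this cluster, since the field the command fails to set is "role". This is how I would check it, with "xyz" standing in for the real account and role names:

clu2::> security login role show -role xyz
# If nothing comes back, the role must be created first, or an existing
# role such as admin used instead.
clu2::> security login create -username xyz -application ssh -authmethod publickey -role admin
clu2::> security login publickey create -username xyz -index 0 -publickey "ssh-rsa AAAA... user@host"
# Attaches the actual public key to the account.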
OK, I am starting to recognize now that you cannot find the DNS load-balancing setup among the LIFs from the NetApp storage level if you are just starting to work on the environment, unless somebody who knows it tells you. Now, going back to my previous output: we have DNS load-balancing set up over clu1-nfs-2-g-01 and clu1-nfs-2-g-02. There is no DNS load-balancing pointing to clu1-nfs-2-01 and clu1-nfs-2-02; we could do that by creating a DNS pointer, say with the name clu1-nfs-2, for a given type of application or client access. Correct?
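To spell out the mechanism as I understand it (the zone name clu1-nfs-2.example.com is hypothetical, and the LIF names are from my earlier output):

clu1::> network interface modify -vserver vs1 -lif clu1-nfs-2-01 -dns-zone clu1-nfs-2.example.com -listen-for-dns-query true
clu1::> network interface modify -vserver vs1 -lif clu1-nfs-2-02 -dns-zone clu1-nfs-2.example.com -listen-for-dns-query true
# Both LIFs now answer for the same zone; with a DNS delegation for
# that zone pointing at the cluster, clients mounting
# clu1-nfs-2.example.com get balanced across the two LIFs.
clu1::> network interface show -fields dns-zone
# Shows which LIFs participate in which zone.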