Essentially they use the same process - a SnapVault is a qtree SnapMirror. For performance improvements you can look at network compression, data compression and data deduplication. SnapVault does not carry compression / deduplication savings across the transfer, though you can run them on the destination once the data has arrived. A volume SnapMirror would preserve the compression / deduplication savings.
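As a rough sketch (7-Mode syntax, made-up filer and volume names): network compression for a volume SnapMirror is enabled per relationship in /etc/snapmirror.conf on the destination (it needs a named connection line), and dedup can be run on the SnapVault destination volume after the data lands:

    # /etc/snapmirror.conf on the destination filer
    # a named connection is required before compression=enable can be used
    conn_dr=multi(src_filer,dst_filer)
    conn_dr:srcvol dst_filer:dstvol compression=enable 0 23 * *

    # run dedup on the SnapVault destination volume after updates complete
    dst_filer> sis on /vol/sv_dest
    dst_filer> sis start -s /vol/sv_dest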
You can use NDMP to back up the snapshots. You could also clone the volume off the latest snapshot and let the clone hold the older snapshots while the source volume cycles through the current ones. How many snapshots are you wanting to keep? Snapshots aren't really a great method of long-term backup since they live on the same storage as the original volume. SnapVault might be a possibility if you have the license.
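For example, with hypothetical volume and snapshot names (note the clone's base snapshot stays locked in the parent for as long as the clone exists):

    filer> snap list srcvol
    filer> vol clone create srcvol_archive -b srcvol nightly.0
    filer> snap list srcvol_archive    # the clone shows the parent snapshots that existed at clone time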
Your best option would be either to set up a logging server that captures the filer's events on a separate box, where you can keep the logs for as long as you like, or to copy the messages files themselves from the filer to another server and control the retention there. The messages files are written by the syslog daemon on the filer, and you can control some of its behavior via the syslog.conf file.
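A minimal sketch of the logging-server option, assuming a host named loghost that is resolvable from the filer - /etc/syslog.conf on the filer uses standard syslog.conf syntax:

    # /etc/syslog.conf on the filer
    # forward everything at info level and above to the remote log server
    *.info    @loghost
    # also keep errors in the local messages file
    *.err     /etc/messages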
Eventually, after many reboots of the RLM and some cluster failovers / givebacks, it decided to start working. Unfortunately, there was no particular action on my side that seemed to trigger it.
Changing the size of the parent of a clone is fine. Just turn fs_size_fixed off. That option generally gets set when a volume has been a SnapMirror destination.
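Something like the following, with a made-up volume name:

    filer> vol options parentvol fs_size_fixed off
    filer> vol size parentvol +100g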
It just means that if you are using volumes with qtrees, the qtree name on the DR side needs to match the qtree on the primary side, as does the containing volume. So, as long as all the volumes and qtrees are named the same on both primary and DR, then, as you say, updating DNS to point at the DR filer saves the clients from having to update their connection details. Most NFS mounts would likely have to be re-mounted anyway to avoid stale file handles.
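As an illustration (made-up filer, volume and qtree names), the qtree SnapMirror entries in /etc/snapmirror.conf on the DR filer would be defined so the paths line up on both sides:

    # same volume and qtree names on both ends
    prod_filer:/vol/projects/eng  dr_filer:/vol/projects/eng  - 0 1 * *
    prod_filer:/vol/projects/qa   dr_filer:/vol/projects/qa   - 0 1 * *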
What is on those LUNs? Can the data be backed up via the client server they are mapped to? You can use NDMP, but the backup won't be consistent if the volume contains LUNs and you don't have a way to flush the file system before the backup starts.
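If you do go the NDMP route anyway, one rough approach (hypothetical volume and snapshot names, assuming the application on the host can be quiesced first) is to quiesce on the host, take a snapshot on the filer, then back up from the snapshot rather than the live volume:

    # after quiescing / flushing the file system on the host that owns the LUN
    filer> snap create lunvol pre_backup
    # then point the backup at the snapshot copy rather than the active file system, e.g.
    filer> ndmpcopy /vol/lunvol/.snapshot/pre_backup /vol/backupvol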
You can get some info via netstat -a (on the filer) and showmount -a (from a Linux server on the network). The list may not be 100% complete, but it should be close enough to help narrow down the possibilities.
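For example (made-up filer name):

    filer> netstat -an                # established TCP sessions, e.g. CIFS / iSCSI / NFS connections
    client$ showmount -a filer01      # hosts with active NFS mounts, per the filer's mount table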
For test 1 - is it going from local disk on the server to the filer? For test 2 - is it from local disk on server A to local disk on server B? Are the servers / filers on the same network configuration - subnet or hop count? Would there be any other major activity happening on the filer at the test time? What is the aggregate layout on the filer?
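A few commands that may help answer the last two questions, run on the filer during the test window:

    filer> sysstat -x 1       # per-second CPU, network, disk and protocol load
    filer> aggr status -r     # raid group / disk layout of each aggregate
    filer> ifstat -a          # per-interface throughput and error counters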
You can use iSCSI LUNs on a VMware guest - MPIO might not work quite as it did before because the network is virtualized, but it should be fine if you configure the VM with the same number of NICs / networks as the physical host. I'm not sure you can easily switch between RDMs and in-guest iSCSI connectivity - it's probably simpler to just stick with iSCSI.
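On the filer side the mapping is the same as for a physical host; a sketch with hypothetical names, assuming the guest runs a software iSCSI initiator:

    filer> igroup create -i -t windows guest_vm_igroup iqn.1991-05.com.microsoft:guest-vm
    filer> lun map /vol/lunvol/lun0 guest_vm_igroup
    filer> iscsi initiator show     # confirm the guest has logged in over both NICs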
For a bit more info: are you wanting to physically detach the shelf from one filer (or filer pair) and move it to another filer (or filer pair)? Or is the shelf assigned to one partner and you want it assigned to the other? You can see which disks are part of an aggregate using sysconfig -r or aggr status -r. If the disks are assigned to a filer but not in an aggregate, they should be listed as spares (aggr status -s). If they aren't assigned to a filer at all, you can see them with disk show -n.
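A sketch of the reassignment case (made-up disk names, assuming software disk ownership and that the disks are not part of a live aggregate):

    filer2> disk show -n                    # disks not currently owned by anyone
    filer2> disk assign 0a.23 0a.24 0a.25   # take ownership of specific unowned disks
    filer2> aggr status -s                  # they should now show up as spares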
No problems at all. Having a higher-capacity connection on the NetApp is a good thing, since it can handle requests from multiple clients without issue. No traffic going to a single client will exceed 2 Gbps, and the switch will handle any buffering required.
Which version of Data ONTAP are you planning on using? There is an Install / Upgrade guide for each version, and the System Administration and Storage Administration guides provide additional details for configuration. The filer will start with a setup script that handles the basics and gets it online to the point where you can configure it via the web page or ssh. Each setup will be different depending on the purpose of the filer and the type of disks / networking / file access protocols that will be used. In general, once the cabling is completed and the components power up with green lights:

- run through the initial setup script
- change the networking to add vlan tagging or multi-level ifgrps if needed (update /etc/rc - a sketch follows after this list)
- secureadmin should already be enabled if the initial install is a version 8
- configure the aggregate(s) as desired - this may require moving vol0 around, depending on how the initial aggregate was configured during setup
- configure vol0 - this volume's snap sched / snap reserve / language will be used as the default for all new volumes created
- add all licenses
- verify fibre connectivity if needed
- start up all file access protocols that will be needed
- configure CIFS, join AD if required
- add local users for admin and other purposes
- set up SNMP
- set up syslog
- enable clustering
- configure and test autosupport
- change options as needed
- update firmware on disks, shelves, rlm/sp, etc.
- test failover, test network redundancy, etc.
- document the filer and set up monitoring
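As an example of the networking step above (hypothetical interface names, VLAN ID and addressing, 8.x 7-Mode syntax), the /etc/rc entries might look something like:

    # /etc/rc fragment - made-up names and addresses
    ifgrp create lacp ifgrp0 -b ip e0a e0b
    vlan create ifgrp0 100
    ifconfig ifgrp0-100 192.168.100.10 netmask 255.255.255.0 mtusize 1500
    route add default 192.168.100.1 1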
If a shelf fails entirely, you're going to have problems in any scenario. If the disks in that one shelf belong to both filers, you will lose access to part of the data on both at the same time. To reduce the overall risk, assign the disks in each shelf to only one filer when possible. The shelves have mostly redundant parts, so the chance of a complete shelf failure is low, but not impossible.
I don't know if it is documented, but it is good practice to assign all the disks in a shelf to a single controller. Of course, the real best practice is for each controller to have its own stack, but we all know how things work in the real world.

What we have done in the past, in scenarios where you start with only one or two shelves, is to split the disk assignment between the heads. Once more shelves are added, the new disks can be re-assigned so that each controller owns all the disks in its "own" shelves. You can do this via the disk replace commands (if you have enough spares, and the patience).

It is also good practice to spread the disks in an aggregate's raid groups across all the shelves that the controller is using. So with 6 shelves, maybe shelves 1, 2 and 3 belong to head 1 and shelves 4, 5 and 6 belong to head 2; in head 1's aggregate you would add the first disk from shelf 1, shelf 2 and shelf 3, then the second disk from all three shelves, and so on, so that IO to each raid group is spread across all three shelves.
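A rough sketch of both ideas (hypothetical disk names, and assuming enough spares): disk replace migrates data onto a spare so the original disk can be re-assigned, and adding disks to an aggregate by name lets you pull one disk from each of the controller's shelves at a time:

    filer1> disk replace start 0a.32 0b.48        # copy the contents of 0a.32 onto the spare 0b.48
    filer1> aggr add aggr1 -d 0a.16 0b.16 0c.16   # grow aggr1 with one disk from each shelf/stack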
What is the constraint? Are you hitting the 255-snapshot limit, or is it a space issue? If snap autodelete is doing the cleanup, you can change the trigger from volume to snap_reserve (snapshot space) if that helps. How are the snapshots being created?
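The trigger change would look like this (hypothetical volume name):

    filer> snap autodelete vol1 show
    filer> snap autodelete vol1 trigger snap_reserve
    filer> snap autodelete vol1 on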
You won't be able to start the FCP service without a license. In theory iSCSI is a free license, but I don't think you'd be able to get the code without a support contract. Perhaps the seller of the filer has licenses tied to the serial number that they could provide you.
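If you do get a code, adding it and starting the service is just (placeholder code shown):

    filer> license add XXXXXXX
    filer> iscsi start
    filer> iscsi status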