You still have not given enough information. If your “data” is in the form of LUNs on NetApp, it may be possible using external data migration appliances (or even just host-based tools). Or it may be possible using third-party storage capabilities (like EMC CLARiiON SanCopy). If your “data” is in the form of files on NetApp volumes which are accessed via NFS/CIFS, it is not possible.
Most limits are listed in the Storage Management Guide. The only minimum limits I am aware of are:
- You need at least 2 disks to build an aggregate
- There is a minimum root volume size, which is model dependent
- A FlexVol cannot be smaller than approximately 20MB
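To illustrate the first and last points, a minimal RAID4 aggregate and a FlexVol near the lower size limit could look like this (the aggregate and volume names are made up, and the exact minimums depend on model and Data ONTAP version):

```
aggr create small_aggr -t raid4 2   # 2 disks (data + parity) is the smallest possible aggregate
vol create tiny_vol small_aggr 20m  # a FlexVol cannot be smaller than roughly 20MB
```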
You are a genius! Seriously. @SVILLARDI: reassign the disks to the new FAS3240, do a netboot of version 7.3.5 (or whatever version you currently have) and then do a download from the booted system. It should work. The link is for the FAS31xx, but it works exactly the same (for 7G) on the FAS32xx as well. Be sure to do a netboot and not allow it to boot from the UFM.
It looks like the only way to do it is to take a couple of spares, install 8.0.2 on them, and then do a revert. You may unplug all other disks (and hook up just a single shelf to start with). That does mean longer downtime though. Have you already contacted support? I bet there should be a possibility; the situation does not look that rare. If you get it done without brute force, I would appreciate feedback. Thank you! P.S. Has someone found a way to connect a UFM to a commodity PC?
You really have to open a support case to get it analyzed. But it is starting to sound very much like bug http://support.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=510586
It is not enough to simply zone both NetApp controllers to the host. The hosts (and the NetApp) have to be properly configured for multipathing so they can handle path failures.
VDX does not use STP at all. It runs a proprietary protocol encapsulated in TRILL packets which ensures the shortest path between each pair of switches. All links between switches in a VCS fabric are always active. Very little configuration is necessary; the fabric is formed automatically. It also seamlessly interoperates with classic STP by virtue of being fully transparent: other switches see the VCS fabric as a single medium. It is also arguably more scalable than Nexus vPC, which allows 2 switches only. Brocade at the moment supports up to 8 switches, if I am not mistaken, and you can configure a LAG across any of the switches. But to have more than two switches in a VCS fabric you will need an additional license ...

VDX is used as a 10G backbone in one of the projects I am involved in. I am not aware of major issues. But neither have I seen major issues with the Nexus, which is used in other projects ...
Your output shows a large number of CPs in the “s” phase. One possible reason for this in the past was enabled atime updates (which would induce massive inode changes even in a read-mostly workload). Is the volume option “no_atime_update” set on your VMware volumes?
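You can check and, if needed, set it like this (the volume name is just an example, substitute your own):

```
vol options vmware_vol                     # list current options; look for no_atime_update
vol options vmware_vol no_atime_update on  # stop access-time updates on every read
```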
“I have a couple of servers connected to Node1 and a few other servers connected to Node2. So when I do a takeover from node2 or node1 and vice versa, how can I achieve HA?” You can't in this configuration. To achieve HA, a host must be connected to both NetApp controllers. How do you expect it to work otherwise? If a host is connected only to Node1 and Node1 is not available, how can the host access its data?
Could you please edit your message and change the font of the sysstat output to a fixed-width one (e.g. Courier)? It is nearly unreadable as is. Thank you.
You assign disks to pool0 and pool1 and then simply use the “aggr add” command. It will automatically select disks from pool0 and pool1 to keep the mirror. In your example you will need to add 40 disks: “aggr add aggr01 40”. You may use “-n” to preview the disk selection, and use the usual qualifiers, such as the desired disk size, to disambiguate.
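Put together, using the aggregate name from your post (the 450g size qualifier is only an example, use whatever disk size you need to select):

```
aggr add aggr01 -n 40     # preview which disks would be picked from pool0/pool1, commits nothing
aggr add aggr01 40@450g   # actually add 40 disks, restricting the selection to 450GB disks
```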
To redistribute data across disks you should use "reallocate -A". A simple reallocate will not touch snapshot data. You should also consider removing aggregate snapshots, as reallocation will cause them to "grow".
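A rough sketch of the sequence (the aggregate name and snapshot name are examples; check what actually exists on your system first):

```
snap list -A aggr0              # see which aggregate snapshots exist
snap delete -A aggr0 nightly.0  # remove existing aggregate snapshots
snap sched -A aggr0 0 0 0       # stop creating new aggregate snapshots
reallocate start -A aggr0       # redistribute existing data across all disks in the aggregate
reallocate status -v            # monitor progress
```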
Data ONTAP will automatically start zeroing disks if you add a non-zeroed spare to an aggregate. There is no need to remove disk ownership in this case. As the source system must be shut down anyway to remove the shelves, it is easier to assign the disks to the correct target system in maintenance mode on the source before removing the shelves, and then simply add them online to the target. One just has to be careful to add all disks at once. If you instead assign ownership on the running target, it is near to impossible to assign it to all new disks at the same time, which will result in scary “errors” about an incomplete aggregate and possibly an aggregate reconstruction. There are other ways to do it; that is why I said initially that there are some points to consider. And it is better done by a professional service which has experience with NetApp.
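A hedged sketch of the maintenance-mode step on the source system (the system IDs are placeholders; verify the exact procedure against the documentation for your Data ONTAP version, or leave it to professional services):

```
*> disk show -v                                  # note which disks belong to the shelves being moved
*> disk reassign -s <old_sysid> -d <new_sysid>   # hand ownership of those disks to the target system
```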
Removing disk ownership does not destroy the information on the disks (although in this case it may not be needed; rather, it makes sense to immediately reassign the disks to the target system). You must not create aggregates on the target system, as doing so will indeed destroy all previous disk content. Did you consider ordering this job from NetApp or a partner professional service?
The data in snapshots is located in the volume. Snap reserve neither allocates nor limits snapshot space; it only limits the available space visible on the volume.
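For example, with a 20% reserve on a 100GB volume (volume name is an example), df reports only about 80GB of volume size, but snapshots are still free to grow past 20GB into the active file system:

```
snap reserve vol1 20   # hides 20% of vol1 from the reported volume size; nothing more
df -h vol1             # shows ~80GB usable, plus a separate .snapshot line
```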
Yes, it is. You will need to assign the disks to the new controller, after which the aggregates become visible. It is advisable to rename the aggregates and volumes in advance to avoid name conflicts on the new system. There are several considerations when doing this though, and it requires downtime, at least on the source system.
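The renaming in advance can be as simple as this (the names are examples; the point is to avoid colliding with aggr0/vol0 on the target):

```
aggr rename aggr0 aggr0_old   # rename the aggregate before the move
vol rename vol0 vol0_old      # rename the root volume likewise
```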