This is up to you (or whoever configured MetroCluster). They should be configured to use different stacks, but as far as I know, nothing in Data ONTAP actually enforces it. For fabric MetroCluster (FMC) you normally mirror between sites, and the shelves at each site are, of course, in separate stacks.
I briefly tested this, and yes, you can make the LUN larger than its volume, at least when space reservation is disabled:

simsim> df s32
Filesystem          kbytes    used    avail  capacity  Mounted on
/vol/s32/            19456    2200    17256       11%  /vol/s32/
simsim> lun show /vol/s32/ttt.lun
        /vol/s32/ttt.lun   10t (10995116277760)  (r/w, online)
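For completeness, growing an existing LUN is an explicit administrative action done with lun resize; a hedged 7-Mode sketch using the names from the output above:

```
simsim> lun resize /vol/s32/ttt.lun 20t
```

With space reservation disabled, this succeeds even though the containing volume is tiny; physical blocks are consumed only as the host actually writes.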
Auto grow is for volumes, not for LUNs. NetApp won't grow a LUN on its own.

On 20.11.2012 at 1:56, "Joey Prewett" <xdl-communities@communities.netapp.com> wrote:

Hi all, I just got my FAS2240 up and I'm trying to fire up my VMware environment. My first task is setting up my ESXi cluster swap volumes. These are probably the only volumes for which I will enable autogrow, to avoid my vhosts freaking out if they need more swap space. On my first cluster, I have set up a volume as 50GB, with autogrow up to 100GB in 1GB increments. I then created the LUN as 48GB (the actual size of physical memory), but I have no autogrow options in the LUN dialogs (I'm doing all of this in OnCommand System Manager). Everything is thinly provisioned and I'm using iSCSI. How does the LUN know to grow into the volume, etc.? Am I supposed to set the LUN size to the full 100GB so that my vhosts see the max size and let the volume actually deal with the physical space consumption? Sorry, I'm sure this is asked elsewhere but I haven't found a satisfactory answer... (Full discussion: https://communities.netapp.com/message/94973#94973)
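Since autogrow is configured on the volume, the poster's setup maps to something like the following in the 7-Mode CLI (the volume name is hypothetical; System Manager exposes the same settings in the volume dialog):

```
fas2240> vol autosize esx_swap -m 100g -i 1g on
fas2240> vol options esx_swap guarantee none      # thin-provisioned volume
fas2240> lun create -s 48g -t vmware -o noreserve /vol/esx_swap/swap.lun
```

The LUN size is simply what the hosts see; there is nothing for the LUN to "grow into". If the hosts only ever need 48GB of swap, create the LUN at 48GB and let volume autosize handle physical space consumption underneath.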
Assuming you have a recent enough Data ONTAP release and a Linux filesystem that supports fstrim, it can be done; ext4, for example, supports it. I am not sure whether NetApp explicitly qualifies specific combinations of Data ONTAP version, Linux distribution, and filesystem type; you have to check the IMT. SDU cannot and does not perform space reclamation.
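On the Linux side, reclamation is triggered from the host; a hedged sketch assuming the LUN is mounted as ext4 on a hypothetical mount point /oradata:

```
# verify the filesystem type (fstrim needs a filesystem with discard support)
mount | grep /oradata
# return freed filesystem blocks to the thin-provisioned LUN
fstrim -v /oradata
```

The -v flag makes fstrim report how many bytes were trimmed, which is a quick sanity check that discards are actually reaching the storage.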
You should be able to just pull the failed disk out. Of course, I would contact NetApp Support first to make sure there are no known bugs that could result in a controller panic or the like. Otherwise, a disk copy usually takes less time and imposes less load on the system than a full rebuild. And you will need to do a rebuild anyway.
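By "disk copy" I mean proactively copying the suspect disk to a spare instead of failing it and rebuilding from parity; a hedged 7-Mode sketch (the disk IDs are hypothetical):

```
fas> disk replace start 0a.21 0a.29   # copy suspect disk 0a.21 to spare 0a.29
fas> aggr status -r                   # monitor the copy progress
```

A copy reads one disk sequentially, while a parity rebuild has to read every remaining disk in the RAID group, which is why the copy is both faster and gentler on the system.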
OK, thank you. I do need to take a closer look at Snap Creator. SMO/SMSAP would be overkill here (they complicate deployment without clear benefits).
Interesting. It is not actually garbage; the square brackets are just missing. It should have been :[fe80::20c:29ff:fea5:ac48]:[fe80::250:56ff:fe63:7152]:[fe80::250:56ff:fe66:28d7]:[fe80::250:56ff:fe66:fd86] Looks like a bug somewhere in the SRA. Disabling IPv6 in ESX is just a workaround; this needs to be solved properly. Did you open a case with NetApp?
Data Motion for Volumes from 32-bit to 64-bit aggregates is not supported. You can use either SnapMirror or vol copy; SnapMirror will give you minimal interruption. See also TR-3978 for details.
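The SnapMirror route might look like this (hostname, aggregate, and volume names are hypothetical; a SnapMirror license is required):

```
fas> vol create vol64 aggr64 500g               # new volume on the 64-bit aggregate
fas> snapmirror initialize -S fas:vol32 fas:vol64
fas> snapmirror update fas:vol64                # incremental catch-up, repeat as needed
fas> snapmirror break fas:vol64                 # brief cutover after quiescing clients
```

Because the updates are incremental, the source stays online until the final break, so the client-visible outage is only the cutover window.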
Yes. Do not forget to mark the new volume as root (vol options new_volume_on_small_aggregate root). And you will need to mark the volume on the new, bigger aggregate as root again when you copy the data back.
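Put together, the copy-off step might look like this in 7-Mode (all names are hypothetical; ndmpcopy is one way to copy the root volume contents, and the reverse direction is symmetrical when you copy back):

```
fas> aggr create small_aggr 3
fas> vol create tmp_root small_aggr 250g
fas> ndmpd on
fas> ndmpcopy /vol/vol0 /vol/tmp_root
fas> vol options tmp_root root
fas> reboot
# ...recreate the big aggregate, copy back, mark that volume as root, reboot again
```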
You can go either route: create a small aggregate, or completely recreate the root aggregate. The former has the advantage of preserving all existing settings. If you decide to reinstall, do not forget to capture your licenses (or, better, make sure you can fetch them from support.netapp.com); you will need to re-enter them. But in this case you do not actually "reinstall Data ONTAP" - you just recreate the root aggregate. The Data ONTAP 8.x binaries are located entirely on the internal boot device (a USB flash module in newer hardware). And if you ever need to replace the boot device, you use netboot with an image downloaded from support.netapp.com.
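Capturing the licenses beforehand is quick (the code shown is a placeholder):

```
fas> license                  # prints installed feature codes; save this output
fas> license add ABCDEFG      # re-enter each saved code afterwards
```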
I am again obliged to ask - which issue? This thread names at least half a dozen issues. And again - the RLW_Upgrading status/process is not an issue.
If you have a VSM relationship S => D and want to move the destination to D1:

1. snapmirror initialize -S D D1
2. snapmirror resync -S S D1

Now you have a fully functional SnapMirror relationship S => D1 and can destroy the original one.
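Spelled out with system:volume paths (hostnames and volume names are hypothetical), and noting that you may need to break the D => D1 relationship before resyncing D1 to its new source:

```
d1filer> snapmirror initialize -S dfiler:vol_d d1filer:vol_d1
d1filer> snapmirror break d1filer:vol_d1
d1filer> snapmirror resync -S sfiler:vol_s d1filer:vol_d1
```

The initialize seeds D1 with the Snapshot copies that D shares with S; the resync then re-points D1 at S using one of those common snapshots, so no new baseline transfer from S is needed.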
Please show an example of these symlinks (commands and output, or at least a screenshot). So far, to be honest, I do not understand the question. If you are speaking about Unix symlinks, they are entirely a client-side matter. As long as the volume is mounted on the same mount point on the client, everything will be fine.
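To illustrate the client-side nature of Unix symlinks (the paths are hypothetical; the target does not need to exist for the link itself to be valid):

```shell
# A symlink stores only a path string and is resolved entirely on the
# client; the filer never interprets it. So a link keeps working after
# a volume move as long as the client mount point stays the same.
mkdir -p /tmp/symdemo
ln -sfn /mnt/netapp/vol1/data /tmp/symdemo/mylink
readlink /tmp/symdemo/mylink
```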
Automatic background disk firmware updates are not enabled for non-mirrored RAID4 aggregates: http://support.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=594453 This bug is marked as fixed in 7.3.7P1. Does that imply it is also fixed in 7.3.7? I cannot find anything in the release notes...
I have a customer (IHAC) currently implementing disaster recovery for Oracle databases using a simple script plus SnapMirror. Oracle is split between two volumes: data and logs. I am considering whether this can be replaced by Protection Manager. One consideration is consistency: we need to make sure the data and log volumes get a coordinated snapshot that is then replicated. Is that possible with PM? Can I configure it to take consistent snapshots of the dataset volumes? The current functionality is actually very simple: the script takes coordinated snapshots, initiates replication, checks that replication has finished for all volumes, and removes old snapshots. It ensures that at least the last several versions are available in case the latest snapshot could not be fully transferred. The database is in a crash-consistent state, which is OK. Thank you!
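The described flow might look roughly like this as a script (hostnames, volume names, and snapshot naming are all hypothetical; this is a sketch of the logic described above, not the customer's actual script):

```
#!/bin/sh
# Take coordinated snapshots of the data and log volumes, replicate, prune.
FILER=srcfiler
DEST=dstfiler
TS=$(date +%Y%m%d%H%M)

# 1. Coordinated (near-simultaneous) snapshots on both volumes
ssh "$FILER" snap create oradata "dr_$TS"
ssh "$FILER" snap create oralogs "dr_$TS"

# 2. Kick off SnapMirror updates
ssh "$DEST" snapmirror update "$DEST:oradata"
ssh "$DEST" snapmirror update "$DEST:oralogs"

# 3. Wait until both transfers are idle
while ssh "$DEST" snapmirror status | grep -q Transferring; do
    sleep 60
done

# 4. Remove old snapshots, always keeping the last several versions
#    (pruning logic elided)
```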
"disk show -n" does not show all disks; it shows unowned disks. To see all disks, use "disk show -v". But that won't work if the disks have no ownership information anyway. I do not think 8.x supports hardware-based disk ownership, so the first step is to migrate from hardware to software ownership.
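A hedged sketch of the relevant commands (the ownership-conversion step is from memory; verify the exact procedure for your platform in the documentation before running it):

```
fas> disk show -v               # all disks, with the owner column
fas> disk show -n               # unowned disks only
# From maintenance mode, to convert hardware- to software-based ownership:
*> disk upgrade_ownership
```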