Have you verified that the tape drives (not the library itself) are supported by Data ONTAP? http://www.netapp.com/us/solutions/a-z/data-protection-devices.html
> - does the filer have to be connected directly to the tape drive or can it stream data through the server?

You can stream data to another server.

> - what plugins do I need to add for NetWorker to do this? My reseller indicates I'll need NetWorker's NDMP and SnapImage plugins. What role do both of these take?

A recent enough NetWorker includes all needed functionality. You will need an NDMP license (it is per head and tiered; I'd expect a FAS2020 to be Tier 1). You may need a DSA license as well for EMC NetWorker, I do not remember the exact license structure; Fujitsu NetWorker does not require it. The NDMP license enables NetWorker to talk to the filer and initiate backups; the DSA functionality accepts the data stream from the filer and saves it to a NetWorker device. IIRC NetWorker has supported DSA since 7.2 or 7.3.

Correction: the NetWorker feature is called DSA (Data Server Agent) and seems to be included in the base functionality; there is indeed a separate license for NDMP Tape Server, but as far as I can tell it is not required in this case.

SnapImage has absolutely nothing to do with your case.

> - is there any way I can backup both to an attached local disk (i.e. the XServes) as well as to tape?

Not sure I understand what you mean. Using NetWorker you certainly can, as long as you have the needed licenses (e.g. backup to disk).

> - does NDMP on the filer allow me to do incremental backups or do I have to dump the whole volume each time?

Yes, you can do incremental backups. Keep in mind that NDMP is (near to) useless for LUNs (you mentioned iSCSI ...). It is only really useful for file system data, i.e. volumes exported via NFS/CIFS.
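For reference, a minimal sketch of the filer-side NDMP setup (7-mode commands, in the style of the transcripts elsewhere in this thread; the filer name is an assumption):

host:~ # rsh filer 'options ndmpd.enable on'
host:~ # rsh filer 'ndmpd status'
host:~ # rsh filer 'ndmpd version'

Everything else (devices, pools, incremental levels) is then configured on the NetWorker side.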
> You do not run an aggregate reallocate after growing an aggregate. That will not gain you anything, and it says as much in the manual. That is not me saying it, that is NetApp.

E-h-h, no, that is not what NetApp is saying; it is how you read it. NetApp says: do not use -A after growing an aggregate if you wish to optimize the layout of existing data. But that is exactly what Jeremy was telling you all the time. Aggregate reallocation won't improve the layout of data, but it will improve the distribution of data over the disks.

If you are having performance issues, I would stop here and ask: which performance issues? Not all performance is the same. I have customers who never run reallocate and are quite happy, for their specific workload.
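To keep the two operations apart, a minimal sketch (7-mode syntax; the volume and aggregate names are assumptions):

host:~ # rsh filer 'reallocate start /vol/vol1'   # optimizes layout of existing data in a volume
host:~ # rsh filer 'reallocate start -A aggr0'    # redistributes used blocks across the disks of an aggregate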
> In fact I'd be happy just to know if aggr reallocates take care of everything under them. I can handle one large flood of snapshots if I'm ready for them but I'd prefer not to get ready if it's not worth my time...

According to the official NetApp manuals, aggregate reallocation does not optimize file layout (which is logical when you think about it; the aggregate does not know anything about files, which live too far above it). It compacts used blocks to create more contiguous free space. So aggregate reallocation may help with disk writes, but it shouldn't have any effect on large sequential disk reads.
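If sequential read layout is the actual concern, it can be checked without changing anything; a minimal sketch (7-mode; the path is an assumption):

host:~ # rsh filer 'reallocate measure -o /vol/vol1'

This reports an optimization rating for the volume once (-o), so you can decide whether a volume-level reallocate is worth running at all.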
Sounds great, since just last week we could have saved a trip to the DC to repower a shelf on an older system... could have saved 4 hours of downtime!

host:~ # rsh filer 'priv set -q diag; acpadmin'
Usage:
        acpadmin list_all
        acpadmin expander_reset <adapter_name>.<shelf_id>.<module_number>
        acpadmin expander_power_cycle <adapter_name>.<shelf_id>.<module_number>
        acpadmin post_data <adapter_name>.<shelf_id>.<module_number>
        acpadmin voltage_status <adapter_name>.<shelf_id>.<module_number>

Looks like there are some possibilities to remotely power-cycle a shelf. Of course, they are on diag level ...
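Going by the usage string, a power-cycle of one shelf module would look something like this (the adapter, shelf, and module IDs below are made up for illustration):

host:~ # rsh filer 'priv set -q diag; acpadmin expander_power_cycle 0a.2.1'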
> Fractional reserve is set to 100%, does this need to be turned off?

I would not put it this way. It may be turned off.

> Will that impact anything on the volume?

In the worst case, if you run out of free space while a snapshot exists, writes to your LUN will fail. That will likely upset the host using this LUN. Turning fractional reserve off requires a very good understanding of how space is managed, plus continuous monitoring.
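For reference, a minimal sketch of checking and changing it (7-mode; the volume name is an assumption):

host:~ # rsh filer 'vol options vol1'                      # current settings, including fractional_reserve
host:~ # rsh filer 'vol options vol1 fractional_reserve 0' # turn it off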
Just read about it in the announcement. Could someone give more pointers? NFS transparent migration would be a killer selling point for some projects. Thank you!
http://now.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=164329 I must honestly admit I fail to understand what exactly was fixed. The bug description visible to me sounds like a deliberate design decision. Do you have any details (i.e. what exactly was changed in the "fix")?
Egenera BladeFrame provides a virtualization environment where IO to raw LUNs is tunneled through control nodes running Linux. So the initiator type has to be set to Linux (because only the control nodes have direct access to the storage), but the LUN type has to be set to match the guest (called a pServer in BF) type, to ensure ... whatever is ensured by setting the LUN type. And if you consider the more widely known ESX with RDM, you have the same issue. It is the ESX initiator that the storage sees; there is no way to configure multiple initiator types to match every guest.
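To make the mismatch concrete, a minimal sketch (7-mode commands; LUN path, size, igroup name, and IQN are all made up): the igroup type matches the host that actually logs in, while the LUN type matches the guest that ends up using the disk:

host:~ # rsh filer 'lun create -s 20g -t windows /vol/vol1/guest_lun'
host:~ # rsh filer 'igroup create -i -t linux ctrl_nodes iqn.2001-04.com.example:ctrl1'
host:~ # rsh filer 'lun map /vol/vol1/guest_lun ctrl_nodes'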
> Correct me if I'm wrong; I think the "initiator group" type and LUN type should match

No, I know several cases (mostly involving virtualization of some sort) where the initiator type is required to be set differently from the LUN type. The lack of information is really frustrating. I once stumbled upon a KB article that described the LUN type in considerable detail. Unfortunately I did not save it, and apparently it was considered too dangerous and has been removed from NOW since then.
> Also, please check the info at: http://wikid.netapp.com/w/ONTAP_Blocks/Lun_Type

Unfortunately this is an internal-only link.

> Selecting the correct LUN Protocol Type

Was not the original question about the initiator type, not the LUN type? I am interested in this as well.
As per EMC knowledge base article esg106853: "An RFE (LGTsc31666) has been filed for NMM configuration to have client attributes that will allow the selection of the VSS hardware provider. Currently this feature is not available."
Happy New Year to everybody! Somehow I could not find information on where to download this wonderful software. I would appreciate any pointers. Thank you!
forgette wrote:
> I'd be interested in hearing about any other features that may be lost with using this model.

Host-based mirroring between multiple storage arrays. Usually this is done to build a disaster recovery solution, but it can also be used to increase the resiliency of a local config (e.g. there are known low-end RAID arrays that do not support online firmware upgrades). Since ESX does not offer any form of hypervisor-based disk mirroring, it has to be done inside the VM => volume manager.
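A minimal sketch of what that looks like inside a Linux guest (device names are assumptions; /dev/sdb and /dev/sdc would each be a virtual disk backed by a LUN from a different array):

guest:~ # mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
guest:~ # mkfs.ext3 /dev/md0
guest:~ # mount /dev/md0 /data

If one array goes away, the mirror keeps running degraded on the surviving leg.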
fajarpri2 wrote:
> Before I try to increase vol0 size, I don't understand this... the total LUN size on vol0 is still below vol0 size.

Please show df -r output.
Let me ask a very simple question. When NetApp creates a snapshot, does it take an existing CP on disk, or does it create a new CP by flushing the current NVRAM content to disk?
OK, you are right; PAM is PCIe and of course is not supported by the 3050; I confused it with the 3040 (where PAM II is not supported). But the point is different. I needed PCS to check whether a platform change to allow PAM would make sense (i.e. how much benefit PAM would bring). It was possible under 7.2.6.1 and is not possible now. To me that is a regression. You can read about PCS at http://ctistrategy.com/2009/02/27/netapp-cache-pcs/. The TR I mentioned is marked confidential, so you have to look it up on the Field Portal; but the blog above sums up this TR just fine.
Hmm ... question: does NetApp reserve the space immediately, or only when a snapshot is created? I remember having read something on this matter but forgot where. If the space is reserved immediately, the df output makes sense. aggr show_space still does not.

OK, answering myself. Quoting TR-3483: "Data ONTAP removes or reserves this space from the volume as soon as the first Snapshot copy is created." There are some snapshots on vol0 (as indicated by snap reserve being != 0), which explains the reservation. It does not explain aggr show_space, though.
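For anyone retracing this on their own system, a minimal sketch of the commands involved (7-mode; the aggregate name is an assumption):

host:~ # rsh filer 'snap list vol0'           # are there snapshots at all?
host:~ # rsh filer 'df -r vol0'               # per-volume view, including the reserve column
host:~ # rsh filer 'aggr show_space aggr0'    # aggregate-level accounting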
We just updated two systems (FAS3050C and FAS3040A) from 7.2.6.1 to 7.3.2. Running 7.2.6.1 I was able to activate PCS on both systems (as described in TR-3681). Now, running 7.3.2, the old ext_cache.* namespace is gone and I have only the flexscale.* namespace. I can activate PCS on the FAS3040A without issues; but on the FAS3050C I get:

host:~ # rsh fas options flexscale
flexscale.enable             off        (same value in local+partner recommended)
flexscale.lopri_blocks       off        (same value in local+partner recommended)
flexscale.normal_data_blocks off        (same value in local+partner recommended)
flexscale.pcs_high_res       off        (same value in local+partner recommended)
flexscale.pcs_size           0GB        (same value in local+partner recommended)
host:~ # rsh fas options flexscale.enable on
FlexScale PCS is not active and cannot be enabled.
host:~ # rsh fas options flexscale.enable pcs
FlexScale PCS is not active and cannot be enabled.
host:~ # rsh fas options flexscale.pcs_size 16
FlexScale PCS is not active and cannot be configured.

Is it a known bug? Even if I cannot use PAM II on the FAS3050, I can still use PAM I, and that is what I'd like to measure.
> Take Oracle as an example. Oracle does dependent writes, meaning Oracle will not ack write B unless it received an ack from write A. Therefore the snapshot you create will be one of three possibilities: a) both A and B are present; b) A is present but not B; c) neither A nor B is present.

Sorry for the stupid question, but I miss the logical connection between the two parts (the "therefore" above). Yes, Oracle does dependent writes. But a write IO is acknowledged as soon as the data is placed in (NV)RAM, while my concern is about the data on disk.

And you did not mention the split write problem. Consider an application which issues three (dependent) writes: A (4K), B (8K), C (4K). How can I actually be sure that they do not end up on disk as

CP1: A (4K), B' (first 4K of B)
CP2: B'' (second 4K of B), C (4K)

Such a case makes the content of CP1 effectively corrupted from the application's PoV, but the application has no way to know it, because it already got an ACK from NetApp for all three operations.

And to make an extreme example, let's consider the case of

fd = open("/file/on/netapp", O_DIRECT|O_SYNC, ...);
write(fd, buffer_10_MB, 10*1024*1024);

The operating system ensures that this write will not return until the full 10MB of data has been transferred to the underlying device (NetApp in this case). So the application has every right to assume that when "write" returned, the data is safe on stable storage. But 10MB is a fair amount of data, which could easily be split into multiple IOs between the system and NetApp. And when "write" returns, some of these IOs may still be sitting in NetApp memory, not yet flushed to disk. Again, the on-disk state is inconsistent with what the application expects.

Maybe this is due to my misunderstanding of how snapshots work. If NetApp does flush the current NVRAM state to disk when a snapshot is initiated, that does seem to solve the partial write issue. And actually both issues ... but I have never seen this mentioned anywhere.
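For completeness, a self-contained version of that extreme example (the path is the same hypothetical one as above; error handling trimmed to the minimum):

#define _GNU_SOURCE             /* for O_DIRECT */
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    const size_t len = 10 * 1024 * 1024;    /* 10 MB */
    void *buf;

    /* O_DIRECT requires a suitably aligned buffer */
    if (posix_memalign(&buf, 4096, len) != 0)
        return 1;

    int fd = open("/file/on/netapp",
                  O_WRONLY | O_CREAT | O_DIRECT | O_SYNC, 0644);
    if (fd < 0)
        return 1;

    /* write() does not return until all 10 MB have been transferred to
       the device and acknowledged, yet on the wire the transfer may be
       split into many smaller IOs, which is the whole point above. */
    ssize_t n = write(fd, buf, len);

    close(fd);
    return (n == (ssize_t)len) ? 0 : 1;
}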