I'd like to be able to enable compression when setting up a new SnapMirror relationship using PowerShell. I'm guessing this would be a feature request for the Set-NaSnapmirrorSchedule cmdlet. Currently I can only update snapmirror.conf with the source, destination, and schedule details; I then have to go back and manually add the compression option.
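In the meantime, here's the workaround I'm considering: set the schedule with the cmdlet, then rewrite /etc/snapmirror.conf to add the option. This is a rough, untested sketch; the host and volume names are placeholders, and the Read-NaFile/Write-NaFile parameter shapes are from memory, so verify with Get-Help.

Import-Module DataONTAP
Connect-NaController dstfiler

# Writes the source/destination/schedule entry into snapmirror.conf
Set-NaSnapmirrorSchedule -Source srcfiler:vol_src -Destination dstfiler:vol_dst -Minutes 0 -Hours 1

# Swap the empty options field ("-") for compression=enable on that entry.
# Note: ONTAP expects a named connection line (e.g. conn1=multi(src,dst))
# for compression to actually kick in; omitted here for brevity.
$conf = Read-NaFile /vol/vol0/etc/snapmirror.conf
$conf = $conf -replace 'dstfiler:vol_dst - ', 'dstfiler:vol_dst compression=enable '
Write-NaFile -Path /vol/vol0/etc/snapmirror.conf -Data $conf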
It sounds like you have things in order for removing the shelf. I'd remove the "Not Available" drive before you remove the shelf, though; better to have a clean system before you attempt to remove hardware. With disks in a "Not Available" state, the best way to remove them is simply to pull the drive; the system will then run through its failed-drive process. Before pulling it, confirm that it's not in an existing aggregate and associated with a back-end rebuild.

To remove a "Not Available" disk:

filer> priv set advanced
filer> blink_on <disk>

The blink_on option lights up the second drive light and makes it easier to find. Also note that the second octet of the disk number indicates the shelf number.
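A couple of standard 7-Mode checks I'd run before pulling the drive (adjust to your environment):

filer> sysconfig -r   # shows whether the disk belongs to an aggregate/RAID group or is reconstructing
filer> disk show -v   # confirms disk ownership and current state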
Thanks for the reply, beam. I actually got it to work and found that I was running into a syntax issue more than anything else. I was able to resolve the issue by simply removing the credential details from my execution string:

invoke-naNDMPCopy -l 0 "/vol/source volume/source folder/" "/vol/dest volume/dest qtree/"

This worked for moving folders into qtrees on a volume on the same filer.
I'm attempting to script moving home directories into qtrees using invoke-naNDMPCopy and have run into some problems.

Host OS version: Win2k8 64-bit
PowerShell Toolkit version: 2.2
NetApp Data ONTAP: 8.1.1 (run from within a vfiler)

All commands are being executed within a vfiler where I've been able to successfully run ndmpcopy from the command line with no errors. When running it from within PowerShell it tells me that "Controller x.x.x.x (vfiler ip) does not support NDMP version 4". Attempts to re-run the command specifying version 3 yield the same result.

Execution syntax:

invoke-naNDMPCopy <source ip> "/vol/source volume/source folder/" <dest. ip> "/vol/dest volume/dest qtree/" -SrcCredential $PSCred -DstCredential $PSCred -Level 0
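For reference, these are the 7-Mode commands I've been using to verify the NDMP state from within the vfiler context; as I understand it, ndmpd version reports the highest NDMP version the filer will negotiate:

vfiler> ndmpd status
vfiler> ndmpd version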
The blog post mentions that in some environments NetBIOS is disabled on domain controllers, which would lead to the filer making NetBIOS requests and never getting a response. Check with your AD team to see if this is the case in your environment. I will be checking shortly and will report back if this turns out to be the root cause.
Found a possible root cause... NetBIOS (Windows ports 137 and 138 UDP and 139 TCP are used for NetBIOS over TCP; TCP 445 is used for SMB over TCP) being disabled and/or blocked on the domain controller. The fix is to disable NetBIOS on the filer as well and then reset the filer's DC connection.

Reference/source: http://sysadmin-dayindayout.blogspot.com/2009/04/netapp-errors-nbt-cannot-send-broadcast.html
I'm seeing the same error on many of my filers as well. I'm running 8.1.1 7-Mode.

Exact error:

[filer: ems.engine.suppressed:debug]: Event 'nbt.nbss.network.error' suppressed 1 times in last 96 seconds.
I'm writing a script which clones volumes that are not being written to via CIFS. I get the CIFS write counter using the following line:

$cifs_ops = get-naperfdata -name volume -instances <volume name> -counters cifs_write_ops | select -expandproperty counters | select value

I've run into an issue where the counter value is persistent: the CIFS copy finishes, but the counter still reports 8000+ CIFS ops even though the share is no longer mounted and ops should be 0. Is there any way to force an update of the counter value through the PowerShell toolkit?
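If these counters are cumulative since boot rather than point-in-time rates (my working assumption), the value would never reset on its own, and the fix would be to sample twice and take the delta. A rough, untested sketch; the volume name is a placeholder:

# Assumes cifs_write_ops is a monotonically increasing counter
$s1 = Get-NaPerfData -Name volume -Instances vol_home -Counters cifs_write_ops |
    Select-Object -ExpandProperty Counters
Start-Sleep -Seconds 10
$s2 = Get-NaPerfData -Name volume -Instances vol_home -Counters cifs_write_ops |
    Select-Object -ExpandProperty Counters

# A zero delta over the interval means no active CIFS writes
$delta = [long]$s2.Value - [long]$s1.Value
if ($delta -eq 0) { "No CIFS writes in the last 10 seconds - safe to clone" }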
I am running into this problem as well. I'm seeing 600 millisecond other_latency spikes within the Volume Latency view in Performance Advisor. This is a Fibre Channel LUN storing Oracle data files.
With the workload you have (write intensive) I'd be curious to see your disk utilization. This can be viewed within Performance Advisor or by running the stats commands on the filer. Specifically, I'd like to see the output of this command:

filer> stats show disk:*:disk_busy

Also: how many disks do you have within your aggregate? What is your aggregate RAID group size? What disk speed and type (FC, SAS, SATA)?
If the shelf is completely empty (SSD shelves must be isolated), you should be able to use it as a new SSD shelf. Keep in mind that you'll want to add it as a new stack on a controller that only serves out SAS and SSD disk; this ensures that CP commits are not held back by slower disk on the back end. The outstanding question is whether NetApp sells just the disks.
Here's a link to the documentation covering why running a reallocation scan on a deduplicated volume is not recommended:

"A file reallocation scan using reallocate start or reallocate start -p does not rearrange blocks that are shared between files by deduplication on deduplicated volumes. Because a file reallocation scan does not predictably improve read performance when used on deduplicated volumes, it is best not to perform file reallocation on deduplicated volumes. If you want your files to benefit from a reallocation scan, store them on volumes that are not enabled for deduplication."

http://now.netapp.com/NOW/knowledge/docs/ontap/rel80/html/ontap/sysadmin/GUID-9447DB40-4537-4C4F-8C14-45BF8B0F40EF.html

The list that I put together is the ideal case, not a hard rule.
I discussed this option with the customer and they explained that it won't work due to an application constraint, so for now it's back to the drawing board. I'm looking at their performance and I/O requirements now to see if CIFS is a viable option, although based on the partition size requirement and their data change rate I'm guessing CIFS will fall short on the performance side.
I just ran into the Data ONTAP 8.x 16TB LUN limit within my own environment and thought I'd start a thread to discuss how customers are handling requirements for LUNs greater than 16TB in size. As NetApp has indicated that LUNs greater than 16TB will not be supported in the near future, I ask: what are my alternatives? I am limited to using Windows as my host OS, and further limited because SnapDrive does not support backups of dynamic disks. I'm guessing I'll end up having to write a script to snapshot a consistency group.

In my mind it makes no sense that I can do all of the following, but not create a 16TB+ LUN:

Aggregate: 100TB size limit (6280)
Volume: 100TB size limit
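If I do end up scripting the consistency group route, my first thought is to drive the cg-start / cg-commit ZAPIs directly through Invoke-NaSystemApi, since I haven't found dedicated cmdlets for this. A rough, untested sketch; the volume names, snapshot name, and the result property path are all assumptions on my part:

Import-Module DataONTAP
Connect-NaController filer01

# Fence writes across all volumes backing the dynamic disk set
$request = @"
<cg-start>
  <snapshot>cg_snap</snapshot>
  <timeout>relaxed</timeout>
  <volumes>
    <volume-name>vol_lun1</volume-name>
    <volume-name>vol_lun2</volume-name>
  </volumes>
</cg-start>
"@
$result = Invoke-NaSystemApi -Request $request

# cg-start hands back a cg-id token; the exact property path may vary by toolkit version
$cgId = $result.results.'cg-id'

# Commit the snapshot consistently across the whole group
Invoke-NaSystemApi -Request "<cg-commit><cg-id>$cgId</cg-id></cg-commit>"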
Reallocation should only be used when ALL of the following conditions are met:

Performance is degraded on a volume, share (CIFS/NFS), or LUN.
Deduplication is NOT turned on for the volume.
Snapshots are NOT configured for the volume.

Use the reallocate measure command to determine the fragmentation of the volume or LUN (see the example below). If the output shows an optimization level higher than 5-6, you can either manually run a reallocate against the volume/LUN or schedule a reallocation job outside of business hours.

In your case I would simply run a reallocate measure if you're curious to see what, if any, impact deleting the LUN will have. Typically you don't have to reallocate the remaining LUNs in a volume simply because you deleted one LUN within it.
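For reference, the measurement workflow looks roughly like this in 7-Mode (the volume name is a placeholder; double-check the flags against your ONTAP release):

filer> reallocate measure -o /vol/myvol
filer> reallocate status -v

As I recall, -o runs a one-shot measurement, and the optimization level then shows up in the status output (and in syslog).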
Where are you seeing that it's taking 180 seconds? i.e., are you seeing it within logs? If so, please provide them (the logs, that is). Failover to the cf partner should be near instantaneous; failback is another story, since the controller has to boot before it can take over services. I'm also curious to hear which transport protocols you're using. With FC you should see no downtime whatsoever, since both boxes should be configured in "single_image" mode. NFS/CIFS and iSCSI will be impacted and require the NetApp Host Utilities kit, which updates the timeout settings to 120 seconds for physical hosts. For virtual hosts, a separate host utilities kit is bundled with the ESX Host Utilities, which updates each host's disk timeout to 180 seconds.
After the download command completed, and upon reboot, did you run "update_flash" from the LOADER prompt? Here's a link to the 7.3.5.1 upgrade guide; if step 11 is not executed, the kernel will not be upgraded and the version command will continue to display the previous release.

https://now.netapp.com/NOW/knowledge/docs/ontap/rel7351/html/ontap/upgrade/frameset.html
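For reference, the command is run straight from the boot loader after the reboot (the prompt name varies by platform and firmware):

LOADER> update_flash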
So I see your point, but I'd like to reiterate that the Fibre Channel ports on this filer will never be used as initiators (disk initiators, to be exact). All of the shelves I plan on attaching are of the SAS variant, so my only use for the FC ports is as targets for host I/O. Soon I'll have the box fully cabled and will start my baseline performance testing; we'll see how hard I can punish the two onboard 4Gb FC ports...
Thanks for the input guys, I really appreciate the links and corrections. One of my concerns is that the onboard SAS ports reside on the same ASIC, so if the ASIC fails a controller failover will be initiated, whereas with two four-port cards I can lay out the redundant drops so that no single card is a single point of failure. Having re-read that statement: while true, the risk can be mitigated by mixing the drops between a local four-port SAS card and the onboard ports.

Radek, the post linked below the one you mentioned expands on my concern about mixing onboard and expansion FC cards. Since the onboard ports are 4Gb/s and an expansion card would be 8Gb/s, I could not simply add the expansion card and use both the onboard and expansion-card ports. That would leave my onboard 4Gb/s ports useless, since I'm running in single_image mode, which places all local FC target ports in the same group. Please correct me if this functionality has changed.
After recently receiving my new 3270 controller pair I ran into a few surprises... There is an additional issue which I either missed when reviewing the documentation or which is possibly not mentioned at all: with the new 10GbE interconnect (HA Active/Active) change, there are no longer four onboard Fibre Channel ports available per controller. If you wish to cluster a pair of 3200-series arrays, you will only have two 4Gb/s FC ports per controller. While I understand that an FC card can easily be added to the expansion slots, this seems like a glaring issue for those of us who wish to use our new 3200 as a multi-purpose SAN/NAS box. It also goes against existing best practices, which state that FC target traffic should be isolated to either a single card or the onboard FC ports, but not both.

I'd also like to get feedback on how others are planning on using the two onboard SAS ports built into each controller. From the cabling diagrams and best practice documents I've reviewed, I see no real use for them unless I wish to build out a non-redundant SAS stack. In my environment this is not an option, since uptime and performance are my main concerns.
Interesting, I just observed the same issue. I wonder if NetApp could fix this within a service pack; you would think that the -v output would include ALL information about a volume.