Legacy Product Discussions

FAS2020: only 30 MB/s throughput, and Tech Support says it's normal?!

bchiu2000
15,835 Views

I am very unhappy with my $35,000 FAS2020 purchase.  Unfortunately, I bought two of them.  They are populated with 12 x 1TB Seagate ES.2 7,200 RPM drives.

I am only getting 30 MB/s of throughput from it, and everyone in my company is complaining about its performance.  And NetApp Tech Support says it's normal!?

Even my single-disk Seagate Barracuda 7200.12 (100 MB/s read) is faster than it.

Man!!!

1 ACCEPTED SOLUTION

(Accepted solution: shaunjurr's reply below.)

40 REPLIES

brendanheading
12,543 Views

The FAS2020 isn't a speed demon but we get 50MB/sec from ours over NFS, and that's without doing any kind of tuning or using jumbo frames.

It's really impossible to diagnose speed problems without information on your network configuration, drivers, OS, network card types, or indeed what sort of test you're doing to measure the speed.

bchiu2000
12,544 Views

You are right.  So here's the info on our setup:

network configuration,


I am using dual multipath 4 Gb/s Fibre Channel connections from my FAS2020 to my Windows 2008 R2 Standard 64-bit server box, which has 2 FC HBAs supporting up to 10 Gb/s.  I created a LUN on the aggregate and formatted it as a GPT disk for my Windows environment.

This is what it looks like on my SAN: vol0 for the Data ONTAP OS, vol1 for my VMware via NFS, and lun1 for storage on my 2008 server.

Space allocated to volumes in the aggregate
Volume                          Allocated            Used       Guarantee
vol0                           13349636KB       1927004KB          volume
lun1                         5685045392KB    5640266040KB            none
vol1                          113463304KB     108664304KB            file
Aggregate                       Allocated            Used           Avail
Total space                  5811858332KB    5750857348KB     122454000KB
Snap reserve                  312435392KB      13172084KB     299263308KB
WAFL reserve                  694300876KB        537352KB     693763524KB


drivers,

You mean Data ONTAP?  7.2.3.

HBA driver, the latest version


OS,

Data ONTAP?  7.2.3.

Windows 2008 Std R2 64bit


network card types,

Not related, but it's the onboard Broadcom 1 Gbit network card x 4.


what sort of test you're doing to measure the speed.

I am using HD Tune Pro, HD Tach, ATTO Disk Benchmark, and PassMark (all the newest versions) running from my Win 2008 server host.  All the benchmarks I performed are consistent: around 30 MB/s.

From my other SAN, a Xraytex with 12 x 500GB (6TB total), same setup: 200 MB/s!

ivissupport
12,543 Views

I read the post, and as far as I understand, you report a data throughput over "????" of only 30 MB/s using a FAS2020.

I can also say that this could be normal if, for example, some other applications are running.

If you don't clarify the problem, no one can help you resolve the performance issues, or even understand what the problem is.

Is this a filer performance problem, or a problem with a database on SQL Server or Oracle?

A problem with CIFS, NFS or FCP?

You also said that Tech Support says this is normal...

Did you open a technical case? If so, paste the content of the case so we can better understand your problem in depth.

bchiu2000
12,543 Views

you report a data throughput over "????" of only 30 MB/s using a FAS2020

???? = accessing the filer's LUN via FCP from my Windows 2008 R2 64-bit server.  Mounted as a GPT drive.

Please note that I am talking about sequential access here!  I expect way better performance from a $35K filer.  I use the filer to keep backups of my company's system images.  All files are larger than 50GB.

Here's the response from the case owner:

I have reviewed the perfstats you have submitted and don't see any performance-related issues occurring. The filer is performing NFS operations exclusively, the CPU doesn't exceed 5% usage in any of the perfstat data submitted, and the disks are not overly taxed with any workload.


Like we had discussed previously, you are only able to work with the disks available (12 total, 10 in RAID and 2 spares) and you would probably benefit greatly from added disks. In addition, the disks you are using are 7,200 RPM SATA. These are low-performance disks typically deployed for non-IOP-intensive data like logs (sequential read data) or user home directories. It is typically recommended that 15K FC/SAS drives or greater be deployed for VMs and random-read type databases like Exchange.

I am not sure if I am supposed to post his name.

But I strongly disagree with him: 7,200 RPM SATA might not be as fast as 15K FC/SAS, but it is nowhere near as slow as 30 MB/s.  The SATA disks used by NetApp are low-end enterprise drives according to the specs I got from the Seagate website, but their speed should be way higher than 30 MB/s.  So the problem is the performance of the filer.

scottgelb
12,543 Views

Good points here.  The key question comes down to what the performance requirement is, and whether there was a sizing analysis prior to the system being quoted.  Due diligence to size the solution is important, even for an entry-level system like the 2020.  NetApp has very good sizing tools that take all workloads (sequential, random, read %, write %, working set size) and can combine workloads of different types to make sure the right controller and number of spindles are quoted.  Often we are able to troubleshoot network configuration (flow control, misconfigured LACP, CRC errors on the switch, etc.) or other issues when we know the system was sized properly.

To say 30MB/sec is normal isn't a good answer from support... it may or may not be depending on your spindle type, spindle count and workloads.  2020s often push 100MB/sec read and 50MB/sec write from some of the systems I have looked at in production, but that can vary by many factors already mentioned.

bchiu2000
12,543 Views

I would accept the filer if the performance were 100 MB/s read and 50 MB/s write, even though I am getting 200 MB/s read and write from my other SAN with 12 x 500GB 7,200 RPM SATA drives, which I got for the same price as the FAS2020 and which is even 4 years older than the FAS2020.

shaunjurr
12,986 Views

Well, there are a few differences to your laptop drive... or wherever the Barracuda is...

1) It isn't part of a raid set where parity has to be calculated and written

2) If the Barracuda fails, you get to find a compatible drive, unscrew your system, insert the new drive, install an OS, restore the rest. Even on a raid system, you get to rebuild the array manually, often enough.

3) The Barracuda is probably doing simple single file operations, where your NetApp is doing a lot more searching for multiple I/O operations

4) The Barracuda is simply responding to I/O instructions from its upper I/O stack.  The NetApp is pushing data over the semi-broken SMB 1.0 stack (I'm assuming you haven't gotten to SMB 2.0 yet)

NetApp produces enterprise-class storage systems that can do a lot of almost magical things.  They don't sell "disk".  If your only concern was $/GB, then a box of 4TB USB disks might have been a better idea.

You probably could get more out of your systems, but I think I'll leave that exercise for you.  Choosing SATA drives and then complaining about performance would seem to point to a poor pre-sales investigation.

bchiu2000
12,986 Views

Shaun,

I have been playing around with RAID storage for the last 15 years, so I am just going to ignore your 4-point statement.

As a worldwide company, we have had the chance to purchase various other enterprise-class storage systems, and I know what to expect from them for the price we pay.

You got that right: NetApp doesn't sell "disk."  As in my earlier replies, we also have a Xraytex 5604 SAN populated with 12 x 500GB (6TB in total), from which I am getting 5TB of usable space.

From my FAS2020, with 12 x 1TB (12TB in total), I am getting less than 6TB of usable space.

Xraytex: 6 years, 24/7 usage, 1 disk failed but no downtime, RAID60, 1 hot spare, never needed servicing.

FAS2020: 2 years, 1% usage, no disk failures, RAID-DP, 2 hot spares.  SnapMirror failed by itself after 6 months, the LUN goes offline when it fills up and hung our server (I had to disconnect the filer to get my server to start up properly), NetApp Tech still can't get it fixed, and production had downtime.  This is what you call magic?

I admit it was a poor pre-sales investigation, in that I trusted the sales guy who also got us the Xraytex.  We have blacklisted him for any future company purchases.

ivissupport
12,543 Views

Hi

Could you send the outputs from the following commands:

lun show -v

igroup show -v

ifstat -a

Did you create the LUN from the NetApp command-line interface or FilerView, or from Windows using SnapDrive?

What's the allocation status for this LUN? Run the following command to get a status for the fragmentation level:

reallocate measure /vol/volname/qtree/lunname.lun
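As a side note, reallocate measure writes its result to the filer's system log rather than to the console; a minimal sketch of reading it back afterwards, assuming the default /etc/messages log location:

rdfile /etc/messages     <--- look for the reallocate entry reporting the measured optimization level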

Also, from Windows W2K8, run msinfo32.exe and send me the output.

Thanks

bchiu2000
9,278 Views

Hi there,

First off, I want to thank you for looking at my problem.  I created the LUN from either FilerView or NetApp System Manager (I can't remember; it was 2 years ago).

Here's the info you asked for; see the attachment.

DELSNBKUP001> reallocate measure /vol/lun1/lun1

Reallocation scan will be started on '/vol/lun1/lun1'.

Monitor the system log for results.

This didn't give me any output; I assume it's in the log somewhere?
---------

msinfo32.exe

-----------

OS Name Microsoft Windows Server 2008 R2 Standard

Version 6.1.7601 Service Pack 1 Build 7601

Other OS Description Not Available

OS Manufacturer Microsoft Corporation

System Name DELSGBKUP001

System Manufacturer Dell Inc.

System Model PowerEdge R710

System Type x64-based PC

Processor Intel(R) Xeon(R) CPU           E5502  @ 1.87GHz, 1862 Mhz, 2 Core(s), 2 Logical Processor(s)

Processor Intel(R) Xeon(R) CPU           E5502  @ 1.87GHz, 1862 Mhz, 2 Core(s), 2 Logical Processor(s)

BIOS Version/Date Dell Inc. 1.2.6, 17/07/2009

SMBIOS Version 2.6

Windows Directory C:\Windows

System Directory C:\Windows\system32

Boot Device \Device\HarddiskVolume1

Locale Canada

Hardware Abstraction Layer Version = "6.1.7601.17514"

User Name Not Available

Time Zone Pacific Daylight Time

Installed Physical Memory (RAM) 16.0 GB

Total Physical Memory 16.0 GB

Available Physical Memory 13.7 GB

Total Virtual Memory 32.0 GB

Available Virtual Memory 29.6 GB

Page File Space 16.0 GB

Page File C:\pagefile.sys


shaunjurr
8,470 Views

Hi,

You need to check that your setup is a supported setup; there is a support matrix.  It will normally include the necessary MS patches and HBA firmware/driver levels.  Have you installed the Host Utilities from NetApp (it wasn't clear initially which protocol you were using)?  Are you using MPIO, with the Microsoft DSM?  I don't see ALUA set on the igroup either.
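A minimal sketch of those checks, assuming a hypothetical igroup named win2008_igroup; verify the exact commands against the Host Utilities documentation and the support matrix for your release:

On the filer:
igroup show -v                        <--- shows whether ALUA is enabled on the igroup
igroup set win2008_igroup alua yes    <--- enables ALUA on that (hypothetical) igroup

On the Windows host:
mpclaim -s -d                         <--- lists MPIO-claimed disks and which DSM owns them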

Basically, you are probably not getting the utilization from your disks because of a number of things, but you certainly could at least use a couple more disks.  With only 12 disks total, there is probably not a huge reason to use raid_dp, nor would you probably need 2 spares.  You have reduced the number of disks that you write data on to 8... from 12.  That is, of course, no fault of NetApp's, but rather a matter of having enough experience to know these things and to hack an option that will allow you to override raidgroup sizes.  You would then get 10 disks for data.  (1 parity, 1 spare as well).  That should give you over 20% more I/O assuming you do full reallocates of all of your volumes so that the data is spread out over all of the disks.  It might be a good idea to do this in any case.
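The spindle arithmetic behind that estimate, roughly:

12 disks, RAID-DP + 2 spares:  12 - 2 parity - 2 spares = 8 data disks
12 disks, RAID4 + 1 spare:     12 - 1 parity - 1 spare  = 10 data disks
(10 - 8) / 8 = 25% more data spindles, hence "over 20%" more raw disk I/O once the data is reallocated across all of them.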

I haven't seen any sysstat output that would tell me that the disks are at 100% load, even though the support person did comment that you were short on disk I/O.  Obviously you have enough CPU if it is only running at 5%.  The perfstat output should have been enough for support personnel to identify mis-aligned filesystems.

I would also suggest an upgrade to 7.3.5.1 because a few things on the performance side have changed and you have a few volume options that can be useful (extent (older) and read_realloc).
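A minimal sketch of setting those volume options after the upgrade, assuming a volume named vol1; confirm the exact option names and values in the 7.3.x documentation:

vol options vol1 read_realloc on    <--- re-optimizes block layout as data is read
vol options vol1 extent on          <--- the older option mentioned above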

Comparing a little SAN box to a NetApp is still a bit of apples and oranges, especially at the lowest levels.  Some of the overhead you are suffering under is there whether you take advantage of the increased functionality or not.  The WAFL filesystem is good at a lot of things, but lightning-fast I/O is not one of those things.  NetApp's history as a NAS vendor, and the fact that it can do lightning-fast consistent snapshots without I/O penalties, has probably set up a few roadblocks to high sequential I/O, especially compared to "disk boxes" that basically use no filesystem between the client host and the disk blocks.  The fact that your 2020 has very little internal memory makes all of this a lot worse as well.

Good luck

bchiu2000
8,470 Views

Hi Shaun,

I have the Host Utilities installed.  Yes, I am using MPIO, with the Data ONTAP DSM.  ALUA is not supported, according to mpclaim:

C:\Users\administrator.NTPROC> mpclaim -e
"Target H/W Identifier   "   Bus Type     MPIO-ed      ALUA Support
-------------------------------------------------------------------------------
"NETAPP  LUN             "   Fibre        YES          ALUA Not Supported
C:\Users\administrator.NTPROC>

"With only 12 disks total, there is probably not a huge reason to use raid_dp, nor would you probably need 2 spares.  You have reduced the number of disks that you write data on to 8... from 12. "

I have to 100% agree with you.  That is what I asked the NetApp tech to do; he said no.  I told him I don't need 2 hot spares.  He said the 2 spares are needed so that any failed disk can roll over to a spare, otherwise the filer will fail.  I think he's telling me a lie.

I have changed the raidgroup size to 16, and I was able to add a hot spare into the RAID group.  So now I have 9 data disks, 2 parity disks, and 1 hot spare.  But I still can't figure out how to change one of the parity disks to a data disk.

The NetApp tech suggested using RAID4 since it only uses 1 parity disk, but it can't be done: the maximum number of disks allowed in RAID4 is 8.

Would you be able to tell me what "mis-aligned filesystems" are?

I know that WAFL is good technology, but does it really need 700GB of space?  I see only 500MB being used.  And the snap reserve is 300GB?  I don't need snapshots at all, since this filer is used for disk-to-disk backup.  That's 1TB in total I could make use of.

Aggregate                       Allocated            Used           Avail
Total space                  5811858332KB    5750857348KB     122454000KB
Snap reserve                  312435392KB      13172084KB     299263308KB
WAFL reserve                  694300876KB        537352KB     693763524KB

I have two FAS2020s, so maybe clustering them will get me more I/O throughput?  Basically, we bought these two FAS2020s for a DR solution.  They are in different physical locations, but hooked up to a fibre switch.



shaunjurr
10,998 Views

Hi,

The Windows DSM supports ALUA, by the way... and it is part of Win2008.  We use it with FCP and it seems to do things correctly with MPIO.

If you don't think you need aggregate snapshots, just set the schedule to 0 0 0 ... 'snap sched -A <aggr_name> 0 0 0', then you can reduce the snap reserve down to basically nothing... 'snap reserve -A <aggr_name> 0'.  The process is basically the same for volumes.  You can also use the option "nosnap on" on volumes.  If you use that with aggregates, it just balloons the reserve it needs and you basically lose space.
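Putting those commands together, a minimal sketch for a hypothetical aggregate aggr0 and volume vol1:

snap sched -A aggr0 0 0 0      <--- no scheduled aggregate snapshots
snap reserve -A aggr0 0        <--- release the aggregate snap reserve
snap sched vol1 0 0 0          <--- same idea per volume
snap reserve vol1 0
vol options vol1 nosnap on     <--- volumes only, per the caveat above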

Beating the system on raidgroup restrictions is a bit of a hack (and whoever uses this uses it at their own peril) but there's an option for this 'options raid.raid4.raidsize.override on' .  Then you will be able to do an 'aggr options <aggr_name> raid4' ... you'll get a "disk missing" error... ignore it, zero the newly freed parity disk, add it to your aggregate.  Do a full reallocate on your volumes/luns.
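A minimal sketch of that sequence, assuming the aggregate is aggr0 and using the LUN path from earlier in the thread; at your own risk, as noted, and the exact 'aggr options' raidtype syntax may vary by release:

options raid.raid4.raidsize.override on
aggr options aggr0 raid4             <--- convert from RAID-DP to RAID4; expect the "disk missing" warning
disk zero spares                     <--- zero the freed parity disk
aggr add aggr0 1                     <--- add one disk (the freed one) back into the aggregate as data
reallocate start -f /vol/lun1/lun1   <--- full reallocate so the data spreads over all the disks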

"Mis-aligned disks" refers to filesystem alignment between your upper-layer NTFS and the underlying WAFL filesystem for block access.  If their block boundaries aren't aligned, then you cause extra I/O on the WAFL filesystem.  There are a number of TRs on filesystem alignment.  Support should have seen any problems there from the perfstat output.  You can check yourself, too, by looking at the LUN stats in the perfstat output.  It should look something like this:

lun:/vol/vol000/server_e_disk/server_e.lun0-W-BpOZUhNk4/:read_ops:0/s
lun:/vol/vol000/server_e_disk/server_e.lun0-W-BpOZUhNk4/:write_ops:0/s
lun:/vol/vol000/server_e_disk/server_e.lun0-W-BpOZUhNk4/:other_ops:0/s
lun:/vol/vol000/server_e_disk/server_e.lun0-W-BpOZUhNk4/:read_data:436b/s      <--- not a lot of traffic to this one
lun:/vol/vol000/server_e_disk/server_e.lun0-W-BpOZUhNk4/:write_data:17820b/s
lun:/vol/vol000/server_e_disk/server_e.lun0-W-BpOZUhNk4/:queue_full:0/s
lun:/vol/vol000/server_e_disk/server_e.lun0-W-BpOZUhNk4/:avg_latency:59.77ms
lun:/vol/vol000/server_e_disk/server_e.lun0-W-BpOZUhNk4/:total_ops:1/s
lun:/vol/vol000/server_e_disk/server_e.lun0-W-BpOZUhNk4/:scsi_partner_ops:0/s
lun:/vol/vol000/server_e_disk/server_e.lun0-W-BpOZUhNk4/:scsi_partner_data:0b/s
lun:/vol/vol000/server_e_disk/server_e.lun0-W-BpOZUhNk4/:read_align_histo.0:77%     <--- good
lun:/vol/vol000/server_e_disk/server_e.lun0-W-BpOZUhNk4/:read_align_histo.1:0%
lun:/vol/vol000/server_e_disk/server_e.lun0-W-BpOZUhNk4/:read_align_histo.2:0%
lun:/vol/vol000/server_e_disk/server_e.lun0-W-BpOZUhNk4/:read_align_histo.3:0%
lun:/vol/vol000/server_e_disk/server_e.lun0-W-BpOZUhNk4/:read_align_histo.4:0%
lun:/vol/vol000/server_e_disk/server_e.lun0-W-BpOZUhNk4/:read_align_histo.5:0%
lun:/vol/vol000/server_e_disk/server_e.lun0-W-BpOZUhNk4/:read_align_histo.6:0%
lun:/vol/vol000/server_e_disk/server_e.lun0-W-BpOZUhNk4/:read_align_histo.7:0%
lun:/vol/vol000/server_e_disk/server_e.lun0-W-BpOZUhNk4/:write_align_histo.0:91%    <---- good
lun:/vol/vol000/server_e_disk/server_e.lun0-W-BpOZUhNk4/:write_align_histo.1:0%
lun:/vol/vol000/server_e_disk/server_e.lun0-W-BpOZUhNk4/:write_align_histo.2:0%
lun:/vol/vol000/server_e_disk/server_e.lun0-W-BpOZUhNk4/:write_align_histo.3:0%
lun:/vol/vol000/server_e_disk/server_e.lun0-W-BpOZUhNk4/:write_align_histo.4:0%
lun:/vol/vol000/server_e_disk/server_e.lun0-W-BpOZUhNk4/:write_align_histo.5:0%
lun:/vol/vol000/server_e_disk/server_e.lun0-W-BpOZUhNk4/:write_align_histo.6:0%
lun:/vol/vol000/server_e_disk/server_e.lun0-W-BpOZUhNk4/:write_align_histo.7:0%
lun:/vol/vol000/server_e_disk/server_e.lun0-W-BpOZUhNk4/:read_partial_blocks:22%
lun:/vol/vol000/server_e_disk/server_e.lun0-W-BpOZUhNk4/:write_partial_blocks:8%

Having your "read_align_histo" and your "write_align_histo" in bucket "0" is a good thing.  If most of your writes are in other buckets or a majority in one of the other buckets, then you are causing more blocks to be read in the WAFL file system than necessary and causing artificially high I/O load.
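A quick way to pull just these counters out of a perfstat capture, as a sketch (perfstat_output.txt is a placeholder filename):

grep align_histo perfstat_output.txt     <--- misalignment shows up as the bulk of the percentage sitting in a bucket other than .0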

If your problems come from disk I/O, then you probably won't improve the situation any way other than adding more disks, until you are CPU bound.  Do try to upgrade to 7.3.5.1.

Good Luck.

aborzenkov
7,862 Views

options raid.raid4.raidsize.override on

And this is officially supported?

shaunjurr
7,862 Views

I think when I stated that "you use it at your own peril" that there was a hint there somewhere...  It's worked for years for what I needed it for.

YMMV

bchiu2000
7,862 Views

Hi Shaun,

I removed the snap reserve for the aggr now.

DELSNBKUP001> aggr show_space
Aggregate 'aggr0'
    Total space    WAFL reserve    Snap reserve    Usable space       BSR NVLOG           A-SIS
   7810884864KB     781088484KB             0KB    7029796380KB             0KB       1632188KB
Space allocated to volumes in the aggregate
Volume                          Allocated            Used       Guarantee
vol0                           13349636KB       1919944KB          volume
lun1                         5749348948KB    5702930176KB            file
vol1                          113650252KB     108851252KB            file
Aggregate                       Allocated            Used           Avail
Total space                  5876348836KB    5813701372KB    1134827160KB
Snap reserve                          0KB      16524184KB             0KB
WAFL reserve                  781088484KB      61661832KB     719426652KB

I get more total space, but the WAFL reserve also grew.  It seems like the WAFL reserve is 10% of my total space.  Now it's 781GB and only 61GB is being used.  Is it possible to reduce the WAFL reserve to 2-3% of my total space?

Thanks in Advance

shane_bradley
7,862 Views

No.

The WAFL reserve is formatting overhead.

It's the space reserved to store the filesystem information.
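That lines up with the aggr show_space output above: the WAFL reserve is a fixed 10% of the raw aggregate size, so it can't be dialed down to 2-3%.  Rough check:

7810884864 KB total x 10% = 781088486 KB, which is (within rounding) the 781088484 KB reserve shown.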

bchiu2000
7,289 Views

Thanks Shaun, the commands you posted work very well.  I now have 10 data disks, 1 parity disk, and 1 hot spare, just the way I like it.  I have tried to spot whether my filesystem is mis-aligned or not; however, I'm not experienced enough to do so.  Would you be able to see if I have a mis-aligned filesystem?  I have uploaded my perfstat in my other reply.  TIA

shaunjurr
7,289 Views

Hi,

As was already mentioned, you didn't have any FC traffic going when you did your perfstat, so it is hard to see if the blocks are being written over WAFL block boundaries.  If you do another perfstat, make sure you remove as much of your private information as necessary.  It might also be just as easy to attach the file here instead of leaving it at a site that has gambling pop-ups.  That sort of thing is not very popular where I work.

1. You do seem to have a few network problems.  I don't really know what sort of data you need to serve, but spending a little time with the Best Practices docs might improve things.  Lots of retransmissions.

2. If there are still snapshots in your main aggregate ('snap list -A') and you don't want to use snapshots, then remove them ('snap delete -a -A <aggr_name>') .

3. I would really recommend getting your system up to 7.3.5.1.  If you can re-create your LUN after the upgrade, use the windows_2008 lun type (see the sketch after this list).

4. I hope you reallocated your volumes after you added disks to the aggregate.
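Regarding point 3, a minimal sketch of re-creating the LUN with the newer OS type after the upgrade; the size, path and igroup name are placeholders based on earlier posts in the thread, so adjust to your setup:

lun create -s 5t -t windows_2008 /vol/lun1/lun1     <--- create the LUN with the windows_2008 ostype
lun map /vol/lun1/lun1 <igroup_name>                <--- map it to your existing igroup (see 'igroup show -v')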

You might also want to check for patches for your 2008 system.  We had huge problems with 2008 and I/O because it likes to use very small I/O chunks, especially over iSCSI.  Your system doesn't seem to be loaded at all, but your perfstat perhaps wasn't collecting samples while you were actually doing any work.  Try hitting the filer with some tests using IOMeter (freeware, easily found) and run perfstat (with the right options).

Hard to comment on the perfstat because there isn't much info there.

bchiu2000
7,289 Views

I was able to collect the perfstat again while the IOMeter benchmark was running, so there should be enough traffic now.

1. You do seem to have a few network problems.  I don't really know what sort of data you need to serve, but spending a little time with the Best Practices docs might improve things.  Lots of retransmissions.

Okay, I will need to check that out and see what is happening

2. If there are still snapshots in your main aggregate ('snap list -A') and you don't want to use snapshots, then remove them ('snap delete -a -A <aggr_name>') .

This was done already

3. I would really recommend getting your system up to 7.3.5.1 .  If you can re-create your LUN after the upgrade, use the windows_2008 lun type.

I will need to do that tomorrow

4. I hope you reallocated your volumes after you added disks to the aggregate.

Yes, I did that.

This is what I see

lun:/vol/lun1/lun1-P3agH4VnOypr:read_align_histo.0:0%
lun:/vol/lun1/lun1-P3agH4VnOypr:read_align_histo.1:0%
lun:/vol/lun1/lun1-P3agH4VnOypr:read_align_histo.2:0%
lun:/vol/lun1/lun1-P3agH4VnOypr:read_align_histo.3:0%
lun:/vol/lun1/lun1-P3agH4VnOypr:read_align_histo.4:0%
lun:/vol/lun1/lun1-P3agH4VnOypr:read_align_histo.5:0%
lun:/vol/lun1/lun1-P3agH4VnOypr:read_align_histo.6:100%
lun:/vol/lun1/lun1-P3agH4VnOypr:read_align_histo.7:0%
lun:/vol/lun1/lun1-P3agH4VnOypr:write_align_histo.0:0%
lun:/vol/lun1/lun1-P3agH4VnOypr:write_align_histo.1:0%
lun:/vol/lun1/lun1-P3agH4VnOypr:write_align_histo.2:0%
lun:/vol/lun1/lun1-P3agH4VnOypr:write_align_histo.3:0%
lun:/vol/lun1/lun1-P3agH4VnOypr:write_align_histo.4:0%
lun:/vol/lun1/lun1-P3agH4VnOypr:write_align_histo.5:0%
lun:/vol/lun1/lun1-P3agH4VnOypr:write_align_histo.6:100%
lun:/vol/lun1/lun1-P3agH4VnOypr:write_align_histo.7:0%

But if you have time, please take a look; I have removed most of the personal info as you suggested.

TIA
