
How to measure NetApp network performance (iSCSI)

Hi,

Does anyone know the best way to monitor/view network performance with a filer (FAS2020) or where I should start looking for problems?

We have a FAS2020 running CIFS shares, iSCSI LUNs and NFS for our ESX infrastructure.

The filer and ESX servers are all connected to a pair of stacked Cisco gigabit switches (3750s, I think) and my network guy set up port channels for the filer.

We also separate the iSCSI/VMkernel traffic into its own VLAN.

It's a fairly small deployment with 3 ESX servers, 30 virtual servers (not all using iSCSI LUNs - this is reserved for SQL & Exchange servers) and approx 150 users.

My problem is that iSCSI performance on the VMs with LUNs mapped seems quite bad. Copying files, for example, from C:\ (VMDK file on the NFS volume) to D:\ (iSCSI LUN) takes an age.

Also, if you map an iSCSI LUN from a normal PC/server and copy data to it, it also seems relatively slow. Whereas copying data to the normal CIFS shares is fine.

The CPU on the filer doesn't seem taxed, and I have checked the Cisco port channel ports on the filer and ESX servers (we use Orion NCM) - but they are at very low utilization.

Any ideas on how I go about looking for a problem, or how I perform some tests to try and measure the iSCSI performance?

Thanks,

Marc

Re: How to measure NetApp network performance (iSCSI)

Hi Marc,

The quick & dirty method is to issue the following command on the filer:

sysstat -x 1

while you are doing something that normally causes the bottleneck (e.g. copying files from LUN to LUN).

Other than that, there is the perfstat utility (you can find it on the Field Portal) for more thorough data collection & analysis.
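On the host side, a crude sequential-write test also gives a ballpark throughput figure. Something like this from a Linux box with the LUN mounted (TARGET is just an example path - point it at a file on the LUN):

```shell
# Crude sequential-write throughput check (GNU/Linux).
# TARGET is an example; point it at a file on the mounted iSCSI LUN.
TARGET=${TARGET:-/tmp/iscsi_write_test.bin}
# conv=fsync forces the data to storage before dd reports its rate,
# so the MB/s figure reflects the iSCSI path, not the page cache.
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fsync 2>&1 | tail -n 1
rm -f "$TARGET"
```

Run the same test against a CIFS share to see whether the gap really is iSCSI-specific.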

The first thing on my mind:

You haven't got physical separation for iSCSI traffic (e.g. dedicated physical ports & switches), have you? (the port count on FAS2020 is limited to say the least...)

Also - the switches are decent, but are you 100% sure jumbo frames are enabled end to end?

Regards,
Radek

Re: How to measure NetApp network performance (iSCSI)

Sysstat is a good starting point for looking at your filer-wide workload. You can also use the "lun stats" command to see per-LUN performance statistics.
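For example, to zero the counters and then print per-LUN statistics once a second (the LUN path here is just an illustration):

```
lun stats -z
lun stats -o -i 1 /vol/vol_sql/lun0
```

-o adds extended statistics and -i sets the sampling interval in seconds.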

Re: How to measure NetApp network performance (iSCSI)

Thanks guys.

Both ways are proving a good starting point for my performance testing. So far the only thing we've really seen is high CPU load during the day (seemingly related to Snapshots and SnapMirrors running), but nothing too drastic it seems.

And yes, our filers only have 2 NICs as far as I am aware - and I've since found out that the FAS2020s have rather limited memory too.

Thanks,

Marc

Re: How to measure NetApp network performance (iSCSI)

Hi Radek,

- No we don't have physical separation of iSCSI traffic (but we have a separate VLAN for it).

- And you're correct in that I think we only have 2 NIC ports per controller on the filer.

- Finally, your point about jumbo frames. What impact does this have and how important is it?

During installation we asked our NetApp partner about this and were told that it wasn't necessary, so we never enabled it. How important is this?

Thanks for the advice.

Marc

Re: How to measure NetApp network performance (iSCSI)

marcconeley wrote:

During installation we asked our NetApp partner about this and we were told that it wasn't necessary. Therefore we never enabled it. How important is this??

Well, it certainly "works" without jumbo frames, but our performance tests showed that under full load (2x 1Gb iSCSI, Round Robin MPIO) the difference between an MTU of 1500 and 9000 means 5% - 12% more CPU utilization (Win2k8 Server). You said you have a separate iSCSI VLAN, so there should be no reason NOT to enable jumbo frames.

Re: How to measure NetApp network performance (iSCSI)

the difference between an MTU of 1500 and 9000 means 5% - 12% more CPU utilization (Win2k8 Server).

Bear in mind it also makes a difference to your filer CPU utilization! And you've said it seems to be high.

More reading about jumbo frames:

http://now.netapp.com/NOW/knowledge/docs/ontap/rel732_vs/html/ontap/nag/GUID-D3AB10A1-D15A-490D-8DCE-34BE73C3DACF.html
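A quick way to check whether jumbo frames actually work end to end is a non-fragmenting ping with a jumbo-sized payload (Linux syntax; replace the host with your filer's storage-VLAN address, which is just a placeholder here):

```shell
HOST=${HOST:-localhost}     # replace with the filer's iSCSI address
PAYLOAD=$((9000 - 28))      # 9000-byte MTU minus IP (20) + ICMP (8) headers
# -M do sets "don't fragment"; if this fails, some hop in the path
# is not passing jumbo frames.
ping -c 1 -M do -s "$PAYLOAD" "$HOST" || echo "jumbo path check failed"
```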

Re: How to measure NetApp network performance (iSCSI)

I am glad I started this thread now - it sounds quite important.

Yes, I realised that during the day our filers always seem to run at over 50% CPU (really spiky, not consistent) - I always assumed it was normal.

So is there any downside to using jumbo frames, or any technical reason why our partner would have said it's not necessary?

We have all our ESX servers on the same GB switches as the Filer.

And all our VMKernel is configured on the Storage VLAN (which means that all iSCSI traffic goes through this VLAN if my understanding of the VMKernel is correct?).

And our VMs with iSCSI LUNs (like Exchange & SQL) all have a 2nd virtual NIC which is also configured for our storage VLAN & which the iSCSI initiator uses.

So all being good with our setup, do you think we should quite easily be able to enable jumbo frames without causing any problems?

I really appreciate the advice guys. I've opened up a whole new can of worms for my network admin!

Marc

Re: How to measure NetApp network performance (iSCSI)

So is there any downside to using jumbo frames, or any technical reason why our partner would have said its not necessary?

I am not a networking guru, but I've never come across any obvious downsides of jumbo frames.

Here are a couple of additional links on the topic:

http://www.networkworld.com/forum/0223jumboyes.html

http://sd.wareonearth.com/~phil/jumbo.html
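One caveat: the change has to be made end to end - filer interface, switch ports, vSwitch and VMkernel port - or you get fragmentation instead of a speed-up. On ESX 4, the service-console side looks roughly like this (the vSwitch name, port group name and address are made-up examples, not your actual config):

```
# Raise the MTU on the vSwitch that carries the storage VLAN
esxcfg-vswitch -m 9000 vSwitch1
# VMkernel NICs must be deleted and recreated to pick up the new MTU
esxcfg-vmknic -d "iSCSI"
esxcfg-vmknic -a -i 192.168.100.21 -n 255.255.255.0 -m 9000 "iSCSI"
```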

regards,

Radek

Re: How to measure NetApp network performance (iSCSI)

How exactly are your NICs configured?

Could you post your rc file here?

I do quite a bit of work with the 2000 series, so I understand the issues. I'm persuading customers to configure an LACP vif with one or more VLANs running over it; this allows jumbo frames and flow control to be turned on individually. Also make sure WINS is turned off for your iSCSI VLAN (it's like sticking a tractor on a motorway); alternatively, you can tell each protocol which interfaces to use, or not to use. Installing VSC 2 on vSphere will also help set the MPIO settings for all your ESX hosts.
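An /etc/rc along those lines would look roughly like this (the vif name, VLAN IDs and addresses are placeholders for illustration, not a recommendation for your network):

```
# LACP vif across both onboard ports, IP-based load balancing
vif create lacp vif0 -b ip e0a e0b
# Tagged VLANs on top of the vif (100 = storage, 200 = CIFS/NFS - example IDs)
vlan create vif0 100 200
# Storage VLAN interface with jumbo frames and flow control
ifconfig vif0-100 192.168.100.10 netmask 255.255.255.0 mtusize 9000 flowcontrol full
ifconfig vif0-200 192.168.200.10 netmask 255.255.255.0
```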

The 2020 is quite limited on performance. The sales pitch says "up to 68 drives", but to be honest, if you get to even half that you're hitting the limits of the CPU (single-core, 32-bit); it's really pitched at Windows applications and file serving. The redeeming feature is that you can do a head upgrade to a 2040.