
Settle an "argument" 32bit v 64 bit aggregates

Hi Guys

Can anyone help settle an argument?

We're starting to deploy ONTAP 8 on new filers, some of which have 2TB SATA disks. I am saying it would be a good idea to use 64-bit aggrs with these, but I'm getting "it's too new" and "nobody is using them yet". So the question is: are any of you using 64-bit aggrs in production? The other point they're making is that even if the aggr is set to over 16TB, any volumes will still have to be under 16TB. I'm not aware of anything that would stop a CIFS share being over 16TB; I understand LUNs needing to be below that.

Any thoughts welcome.

Re: Settle an "argument" 32bit v 64 bit aggregates

Hi,

I would challenge the objections against 64-bit aggregates and say that in the case of 2TB drives it just makes common sense to use them - otherwise the capacity overhead is rather high IMHO, as you can have only 9 data drives in a 32-bit aggregate.
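The arithmetic behind that 9-drive figure can be sketched roughly as follows. The right-sized capacity used here is an assumption (ONTAP right-sizes a "2TB" drive to roughly 1.69 TB usable; check your actual disk sizes with `sysconfig -r`), not an official NetApp number:

```python
# Rough sketch of why a 32-bit aggregate tops out at ~9 data drives
# with 2TB SATA disks. RIGHT_SIZED_TB is an assumed value.

AGGR_LIMIT_TB = 16.0    # 32-bit aggregate size limit
RIGHT_SIZED_TB = 1.69   # assumed usable capacity of one "2TB" SATA drive

max_data_drives = int(AGGR_LIMIT_TB / RIGHT_SIZED_TB)
print(max_data_drives)  # -> 9

# Capacity overhead of a RAID-DP group built around that cap:
# 9 data + 2 parity drives means 2 of every 11 drives go to parity.
parity_overhead = 2 / (9 + 2)
print(f"{parity_overhead:.0%}")  # -> 18%
```

With the same drives in a 64-bit aggregate the RAID groups can keep growing past the 16TB cap, so the parity fraction per aggregate stays the same but you stop paying it across many small aggregates.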

See a discussion here (plus some links to different docs):

http://communities.netapp.com/message/30810#30810

Regards,
Radek

Re: Settle an "argument" 32bit v 64 bit aggregates

Hi there,

We already have a few ONTAP 8 machines in production at our customers. We have both machines that actually need bigger (16TB+) aggregates and filers that don't need bigger aggregates but are still running 64-bit ones. So far it seems stable, and all SnapManager/SnapDrive products work, as do SnapMirror/SnapVault replication etc. The only showstopper might be SnapLock; it's not yet supported with ONTAP 8.

Especially in your case with 2TB drives, I would strongly recommend going for ONTAP 8, as you will only have 9 data disks in a 32-bit aggregate; that most likely isn't enough spindles to handle more than a few hundred users.

regards

Thomas

Re: Settle an "argument" 32bit v 64 bit aggregates

I already have 3 filers with ONTAP 8.0.1 in production, and all use only 64-bit aggregates.

nigelg1965 wrote:

The other point they're making is that even if the aggr is set to over 16TB, any volumes will still have to be under 16TB.

A volume larger than 16 TB is possible.

Filesystem               total       used      avail capacity  Mounted on
/vol/test/                18TB      204KB       17TB       0%  /vol/test/
/vol/test/.snapshot        0TB        0TB        0TB     ---%  /vol/test/.snapshot

Only volumes with deduplication enabled can't go beyond the 16 TB limit.

Re: Settle an "argument" 32bit v 64 bit aggregates

Hi,

We have two 3140s, each with two DS4243 SAS shelves with 2TB SATA drives, all on 64-bit aggrs. We are using these as SnapVault/SnapMirror targets. Using 32-bit aggrs we simply lost far too many drives to parity, with RAID groups that were honestly too small.
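To illustrate the parity loss Scott describes, here is a back-of-the-envelope comparison. The drive counts and layouts below are hypothetical, not taken from his actual config:

```python
# Hypothetical layout: 24 x 2TB drives as several small 32-bit
# aggregates vs one large 64-bit aggregate. RAID-DP costs 2 parity
# drives per RAID group; spares are ignored for simplicity.

TOTAL_DRIVES = 24

# 32-bit: the 16TB cap forces small aggregates, e.g. three
# 8-drive aggregates (6 data + 2 parity each) = 6 parity drives.
parity_32 = 3 * 2
data_32 = TOTAL_DRIVES - parity_32   # 18 data drives

# 64-bit: one aggregate with, say, two 12-drive RAID groups
# (10 data + 2 parity each) = 4 parity drives.
parity_64 = 2 * 2
data_64 = TOTAL_DRIVES - parity_64   # 20 data drives

print(data_32, data_64)  # -> 18 20
```

The gap widens as the shelf count grows, since every extra 32-bit aggregate brings its own parity (and spare) tax.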

One thing to keep in mind: SnapMirror between 32-bit and 64-bit aggrs is an issue. We simply hung a DS14 shelf on them for the few 32-bit replications we needed.

- Scott

Re: Settle an "argument" 32bit v 64 bit aggregates

nigelg1965 wrote:

Hi Guys

Can anyone help settle an argument?

We're starting to deploy ONTAP 8 on new filers, some of which have 2TB SATA disks. I am saying it would be a good idea to use 64-bit aggrs with these, but I'm getting "it's too new" and "nobody is using them yet". So the question is: are any of you using 64-bit aggrs in production? The other point they're making is that even if the aggr is set to over 16TB, any volumes will still have to be under 16TB. I'm not aware of anything that would stop a CIFS share being over 16TB; I understand LUNs needing to be below that.

Any thoughts welcome.

I can settle it. There is NO difference. ALL 64-bit means is support for LARGER aggregates (beyond 16TB). That's ALL it means.

Performance? Nah.. I have tested it. First of all, ESX is 64-bit and has support for 64-bit block addressing, and I see ZERO difference in performance, as I tested it in more than one way.

And we are using the latest ESX 4.1 build; I tried both ESX and ESXi (latest build), with the same result, which is ZERO difference.

So there you go, it's at the system level (they should not have used the 64-bit name, because it implies OS-level concerns, but that's not the case). It just overcomes size limitations.

See FAT vs FAT32: the same EXACT approach. A 64-bit aggregate means you can have more drives with larger space and go over the 16TB limit on an aggregate; that's ALL it means.

Re: Settle an "argument" 32bit v 64 bit aggregates

I would challenge the objections against 64-bit aggregates and say that in the case of 2TB drives it just makes common sense to use them - otherwise the capacity overhead is rather high IMHO, as you can have only 9 data drives in a 32-bit aggregate.

I have to argue this point; not only is this simply UNTRUE, we have 56-disk 32-bit aggregates. It's not the number of disks, it's the size of the TOTAL allocation, not the number of spindles.

We just upgraded to 8.0.1 last week, so until then anything OTHER than 32-bit aggrs was impossible; we've been using these 28/56 spindle counts for over 3 years...

Re: Settle an "argument" 32bit v 64 bit aggregates

I have to argue this point; not only is this simply UNTRUE, we have 56-disk 32-bit aggregates.

Dude, did you actually read what I wrote? You did quote me in your post, so it is hard to miss - you can't have more than 9 data drives of 2TB each in a 32-bit aggregate. So yes, you may have 56 spindles, but 300GB ones, not 2TB.

Re: Settle an "argument" 32bit v 64 bit aggregates

Performance? Nah.. I have tested it. First of all, ESX is 64-bit and has support for 64-bit block addressing, and I see ZERO difference in performance, as I tested it in more than one way.

To be pedantic, 64-bit addressing on ESX has nothing to do with storage - it is about addressing memory.

Performance-wise though, 64-bit aggregates will not be faster simply because they are 64-bit (in fact, on smaller systems they may even be slower due to more metadata needing to fit into RAM). They may be faster, though, if the higher capacity cap allows more spindles in an aggregate - exactly the case with 2TB drives.

Re: Settle an "argument" 32bit v 64 bit aggregates

Thanks to everyone for the feedback.

You confirmed what I thought, so I win the argument!

Meanwhile a colleague in the US, where the system in question is located, just created one anyway and started using it. That upset our long-serving NetApp guru, but it was a bit of a no-brainer in the circumstances.