PAMII(FlexScale) and A-SIS Deduplication

I recently upgraded my FAS3020 to a FAS3160 with PAMII modules.

Previously, with the FAS3020, I found that using deduplication was infeasible due to the increased CPU overhead of culling through metadata to get to the correct block. With my new FAS3160, however, I am interested in trying deduplication again. As stated above, the 3160 is outfitted with 256GB PAMII modules (one in each head).

My question for NetApp, or somebody in the know, is whether FlexScale is dedupe-aware. Take the following scenario to understand what I mean...

1) A-SIS consolidates block x which is used in both File A and File B

2) File A is accessed causing FlexScale to cache block x.

-- Here's the question: which of the following happens?

3a) File B is accessed and the block x already cached in the PAM is used.

3b) File B is accessed and another copy of block x is cached into the PAM.

The reason I am interested in this is that it could potentially allow me to cache a significantly larger effective data pool, increasing the potential IOPS from my filer heads.
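To make the two outcomes concrete, here is a minimal sketch (hypothetical code, not anything NetApp ships) contrasting a cache keyed by physical block ID (case 3a, dedupe-aware) with one keyed by file-and-offset (case 3b, which would cache block x twice):

```python
def cache_blocks(accesses, block_map, dedupe_aware):
    """Return the set of cache entries after serving `accesses`.

    accesses:    list of (file, offset) reads
    block_map:   (file, offset) -> physical block ID (shared after A-SIS)
    dedupe_aware: key the cache on physical blocks (3a) or on
                  per-file logical blocks (3b)
    """
    cache = set()
    for file, offset in accesses:
        key = block_map[(file, offset)] if dedupe_aware else (file, offset)
        cache.add(key)  # a miss inserts an entry; a hit is a no-op
    return cache

# Block x is shared by File A and File B after dedupe.
block_map = {("A", 0): "x", ("B", 0): "x"}
accesses = [("A", 0), ("B", 0)]

print(len(cache_blocks(accesses, block_map, dedupe_aware=True)))   # 3a: 1 entry
print(len(cache_blocks(accesses, block_map, dedupe_aware=False)))  # 3b: 2 entries
```

In the dedupe-aware case, the second access hits the already-cached block, so the shared data occupies one cache slot instead of two.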

Re: PAMII(FlexScale) and A-SIS Deduplication

I asked a group of technical NetApp folks this same question, and they assured me that both the PAM cards and the internal cache are dedupe-aware on recent Data ONTAP versions. This is apparently a big advantage on VDI stores.

Re: PAMII(FlexScale) and A-SIS Deduplication

Yeah, the volumes I am thinking about deduping again are used as backend datastores for my VMware environment. We currently have just over 200 highly active VMs, which pretty much brought our 3020 to the brink.

Re: PAMII(FlexScale) and A-SIS Deduplication

PAM cards cache physical data blocks and are dedupe-aware, so you'll get more of your working set into the PAM card if you can dedupe the data. We also have a number of VDI setups using V3170s with PAM I and PAM II. We always max out on IO before we get anywhere near maxing out disk space, so the only reason we use dedupe is to complement PAM.
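The back-of-envelope math behind "more of your working set" is simple: because the cache holds physical (deduplicated) blocks, the logical data it covers scales with the dedupe ratio. The numbers below are illustrative assumptions, not measurements:

```python
def effective_cache_gb(physical_gb, dedupe_ratio):
    """Logical data covered by a physical-block cache when each cached
    block is referenced `dedupe_ratio` times on average after dedupe."""
    return physical_gb * dedupe_ratio

# e.g. a 256 GB PAM II at an assumed 2:1 dedupe ratio
print(effective_cache_gb(256, 2.0))  # 512.0 GB of logical data
```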

200 is not very many images; I'd be interested in your read/write ratio and IO per VM.

Chris