
Thick or Thin Provisioning for High I/O Apps

cgeck0000

Hello NetApp Community,

Is there a performance gain from thick provisioning volumes for high I/O applications such as MSSQL and Oracle? I am doing some research and came across the following recommendation from an engineer I was speaking with:

"As far as the thick provisioning recommendation.  It is really being proactive and preventing any DB write delays.  By design "thin-provisioning" always takes a small hit when writing (inflating) the volume, this is just to avoid that hit.  It is always one of those nice to haves where high I/O apps SQL, Oracle, etc... come into play.  If it is something that would cost the customer more then we can discuss removing it as an option."

In all the NetApp documentation I have read, I have not come across this information, and as I understand it, writes are first done to NVRAM and then flushed to the disks every 10 seconds or when it is full.

Can anyone comment on the validity of this statement, since NetApp preaches thin provisioning? Thanks.

6 REPLIES

saranraj456

Hi,

It doesn't make any difference in I/O performance. With thin provisioning, you just have to keep monitoring the size of the volume/LUN.
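
If it helps, on a 7-Mode controller you can keep an eye on the space from the CLI like this (the volume and aggregate names below are just placeholders):

    # free space on the volume
    df -h dbvol
    # free space in the containing aggregate
    df -Ah aggr1

The clustered ONTAP equivalents differ, so check the syntax for your release.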

Regards,

saran

scottgelb

Agreed on NetApp. I don't know of any performance difference between thin and thick on NetApp; with the same spindle count/type in the aggregate, I haven't heard of a sizing difference either, unless there is something I haven't heard about.

sahil_anand

I do not think there is a performance gain either way, but I can understand that with databases like SQL and Oracle you do not want to be in a situation where there is a high I/O load and lots of transactions going on and you run out of space on your volume or LUN, which would be unacceptable. I guess that is what the NetApp engineer meant when he said it's nice to have.
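
Just to illustrate (7-Mode syntax, and the path below is only a made-up example), whether a LUN is thick or thin is itself just a single reservation flag:

    # create a 500g space-reserved (thick) LUN for the database
    lun create -s 500g -t windows_2008 /vol/sqlvol/sqllun
    # or thin-provision the same LUN by disabling the reservation
    lun set reservation /vol/sqlvol/sqllun disable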

Thanks,

Sahil

thomas_glodde

Hi there,

Whether thick or thin provisioned, it is just a matter of the NetApp internal space calculation. NetApp will not create, edit or delete any blocks or metadata when going from thick to thin or vice versa; it is just a matter of "accounting", so to speak.
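
You can verify this yourself: switching between thick and thin is just a volume option and completes instantly (7-Mode syntax, the volume name is a placeholder):

    # thin provision the volume (no space guarantee)
    vol options dbvol guarantee none
    # back to thick (full space guarantee)
    vol options dbvol guarantee volume
    # confirm the current setting
    vol status -v dbvol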

The next thing is space reservation and overwrite reservation: with thin provisioning you must have an active monitoring system to make sure your volumes and aggregates do not run out of space.

The last thing, and that is something to consider, is volume autogrow. If the volume is configured with autogrow, each time autogrow increases the volume there is a very slight performance hit due to the internal process of resizing and allocating new blocks. But this impact only lasts as long as the resize is running. So you are better off with thin provisioning and a fixed size, no autogrow.
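
For reference, this is how autogrow is set per volume in 7-Mode (the sizes and volume name are placeholders):

    # grow dbvol in 5g increments up to a 500g maximum
    vol autosize dbvol -m 500g -i 5g on
    # or keep a fixed size by switching autogrow off
    vol autosize dbvol off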

The same goes for snap autodelete: deleting a snapshot causes a certain performance impact as well, but usually, like autogrow, it is not really noticeable.
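
The matching snap autodelete commands, again 7-Mode with a placeholder volume name:

    # enable snapshot autodelete on the volume
    snap autodelete dbvol on
    # start deleting when the volume itself gets nearly full
    snap autodelete dbvol trigger volume
    # show the current autodelete settings
    snap autodelete dbvol show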

Kind regards,

Thomas

cgeck0000

Thanks everyone, I needed a sanity check on this. Like I said, it wasn't something I had come across.

kkaushal2

Hi,

The performance impact is minimal. According to NetApp TR-3965, in a performance test of MS Exchange on thin and thick volumes, the performance degradation was only 3% on the thin volume.
