
Fastest way to copy 12TB volume?

CCHDEVOPS

I'm a noob here, so I apologize if I break some rules.

What we have: production Oracle database data files (12TB) stored on a single data volume mounted to an Oracle RAC database

What we need: to create a physical copy (not a FlexClone) of the production data files on our stress test data volume

What we will do: mount these files to our stress test database and run stress tests

Question: What would be the fastest way to copy data files from one volume to another?

"Regular" copy process (Linux to Linux) will take a week,

I believe our engineers tried vol copy in the past without success (or at least, it was no faster than a regular copy).

Is there another way (using some NetApp feature/magic) to copy 12TB of data files from the "production" volume to the "stress test" volume in a day or less?

I'm not a NetApp expert, just a mortal "Ops" guy; all I can tell you about our NetApp setup is that it has two FAS3170 head units.

Thank you,

Alex.

6 REPLIES

DAVE_WITHERS

Curious as to why you would not want to use a FlexClone? It's exactly the same as having the physical copy, and the kind of testing you describe is exactly what it's for. I think you would find zero difference in your testing between actually doubling your used capacity by copying all of the data and using a FlexClone.

However, if you must do a physical copy, I would recommend doing a SnapMirror, then updating a few last times to catch the data up. There really aren't any fast methods for copying 12TB of data. SnapMirror gives you the most control as far as making sure it's up to date. 12TB is going to take a few days any way you cut it, unless you want to hook up a USB drive, drop the data on there, and copy it back onto a new volume.
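
For reference, a rough sketch of that workflow on Data ONTAP 7G/7-Mode. Filer, aggregate, and volume names here are placeholders, and the destination volume must be at least as large as the source:

    srcfiler> options snapmirror.access host=dstfiler
    dstfiler> vol create stresstest aggr1 14t
    dstfiler> vol restrict stresstest
    dstfiler> snapmirror initialize -S srcfiler:proddata dstfiler:stresstest
    (wait for the baseline transfer to finish, then catch up the deltas)
    dstfiler> snapmirror update stresstest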

You could also use ndmpcopy. If your data is large files it may go quicker, but if it's tons of small files, it's still going to take a few days.
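
For example, assuming NDMP is enabled on both heads (ndmpd on) and with placeholder paths:

    srcfiler> ndmpcopy -l 0 /vol/proddata dstfiler:/vol/stresstest

(You may need -da user:password to authenticate against the destination filer.)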

I should add: if this is on the same controller, do a FlexClone, then split it off. There you go, you have a full copy. The split might take a few hours to a day, and the volume is still usable during that process, but it's probably the quickest way.
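
A minimal sketch of that clone-and-split route, with placeholder snapshot and volume names:

    filer> snap create proddata clone_base
    filer> vol clone create stresstest -b proddata clone_base
    filer> vol clone split start stresstest
    filer> vol clone split status stresstest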

CCHDEVOPS

about FlexClone:

In order to prepare for stress testing, the production database will have to be modified to be in sync with changes made in our QA environment (a few columns added to existing tables, a couple of new tables, etc.).

The stress tests will not only read but also update a lot of data.

It was my understanding that FlexClone read performance is as close as possible to that of the original volume, but that it suffers (down to 50%) when data needs to be updated. Is that assumption wrong?

We use very large files (data files), so your proposal to use ndmpcopy could work.

We have three files (approximately 4TB each). I found this blog: http://netapp-blog.blogspot.com/2010/06/which-is-faster-ndmpcopy-or-vol-copy.html

Somebody (see the last comment) measured vol copy vs. ndmpcopy performance, and the ndmpcopy results look great!

DAVE_WITHERS

Understood. Splitting off a FlexClone may be the other option, then. Good luck!

CCHDEVOPS

Thank you, Dave.

We will test ndmpcopy on one of the smaller volumes (about 1TB) for another system and see how it performs. I appreciate your help!

SALBISTON

I recently made physical copies of large (3TB) volumes to new aggregates within the same filer. ndmpcopy was MUCH faster than SnapMirror. Just be careful, because ndmpcopy creates a snapshot that persists for the duration of the copy operation. Plan your space accordingly; otherwise, changes to the original volume over the course of a couple of days will cause the snapshot to grow and fill up the original volume.
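
One way to keep an eye on that snapshot while the copy runs (volume name is a placeholder):

    srcfiler> snap list proddata
    srcfiler> df -g proddata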

One limitation of ndmpcopy is that you only get three passes: the initial copy, plus two refreshes for the deltas. SnapMirror allows more flexibility, and perhaps more safety, but incurs more overhead on the filer while it does its block calculations.
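
To spell out that ndmpcopy limit: the three passes map to incremental levels 0 through 2 (paths are placeholders):

    srcfiler> ndmpcopy -l 0 /vol/proddata dstfiler:/vol/stresstest
    srcfiler> ndmpcopy -l 1 /vol/proddata dstfiler:/vol/stresstest
    srcfiler> ndmpcopy -l 2 /vol/proddata dstfiler:/vol/stresstest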

RUTURAJ_PIMPARKAR

If you have a SnapMirror license available, try mirroring the volume to a secondary volume. We have done migrations within filers and across different nodes for database and CIFS share volumes, and they have been pretty fast.
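
Once the mirror has caught up, quiesce and break it on the destination to make the copy writable for the stress tests (volume name is a placeholder):

    dstfiler> snapmirror update stresstest
    dstfiler> snapmirror quiesce stresstest
    dstfiler> snapmirror break stresstest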
