ONTAP Discussions

How to duplicate volumes / NFS exports and LUN config?

SEBPASTOR
9,189 Views

Hi All,

We are about to add a new shelf to our NetApp FAS3140 filer. We need to duplicate all volumes from the original aggregates to this new aggregate:

volumes, NFS exports and LUN config.

Is there a way to dump the config of the current aggregates, change the volume, NFS and LUN names, and have all of those created in the new aggregate?

[I had a look at config dump but I am not too sure whether it does exactly what we want...]

Thanks !


Sebastien


billshaffer
9,105 Views

If the new shelf has the same number and size of drives as the existing shelf, your best bet would be to mirror the aggregates.  This requires a SyncMirror license - but if you've just paid for the new shelf, chances are you'll be able to get a temp key.  Mirroring the aggregates is the simplest approach - there is no downtime, and because nothing changes at the volume level, all the volume config (LUNs, masking, shares, etc.) stays the same.
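
If you go that route, it boils down to something like this (7-Mode syntax; the license key and aggregate name are placeholders - double-check the details against the docs for your ONTAP version):

    license add XXXXXXX          # syncmirror_local license key
    aggr mirror aggr_data        # ONTAP picks matching spares from the new shelf for the second plex
    aggr status -v aggr_data     # verify both plexes are online and resyncing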

If this isn't an option, then there is not really a one-stop config dump available.  The NFS share config is all in /etc/exports.  The CIFS share config is split between /etc/cifs_share.cfg and /etc/registry.  I've not seen the LUN config in a file anywhere.  There are a couple of options for migrating data (vol copy, ndmpcopy, snapmirror, client-side copy) - but they all incur varying levels of downtime, and require a mostly manual reconfig of shares and mappings.
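
For instance, an NFS export is just a line in /etc/exports and an ndmpcopy is a one-liner, so the manual route looks roughly like this (the volume names and client subnet are made up):

    ndmpcopy /vol/vol_data /vol/vol_data_new               # copy the data to the new volume
    # then add a line like this to /etc/exports and re-export:
    /vol/vol_data_new  -sec=sys,rw=10.0.0.0/24,root=10.0.0.5
    exportfs -a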

Does that help?

Bill

SEBPASTOR
9,105 Views

Thanks Bill,

It does help. I should have mentioned that the new shelf will be added to a different NetApp filer from the one whose configuration we want to duplicate. So the mirror approach is definitely not an option.

I guess we will have to go the manual way then.

Thanks for your help.

Seb

paul_wolf
9,105 Views

Sebastien,

What is the end goal? Have you looked at creating the new aggregate on the target filer and then using SnapMirror to mirror all the volumes over?

SEBPASTOR
9,105 Views

Paul,

The goal is to prepare our new aggregate that will be used by our DRP platform.

I had a quick look at SnapMirror and it does look great to do what we want.

Is it possible to use it only to initialize the environment and then stop it?  (Our DRP process will be synced via another mechanism.)

Also, my understanding is that after a SnapMirror, LUNs will need to be manually re-mapped. Do you know of other things to be done manually at the LUN level?

Thanks for your help.

Seb

paul_wolf
9,105 Views

OK, that makes sense, but one thing you need to take into consideration is that SnapMirror is a block-level copy: it sends all blocks initially and then sends only the blocks that have changed.  Files are made up of one or more blocks, and your DRP process is most likely file-level replication, so if any block in a file changes it will have to send the whole file again. Not a major problem if the change rate isn't high, but something you need to be aware of.

So you can use SnapMirror to set up the initial replication and then have a schedule where changes are sent, but at some point you will have to break the SnapMirror relationship and start your DRP process.  I'm not sure what DRP product you are looking at, but my recommendation is to use one that integrates with SnapMirror (such as SRM for VMware, etc.), as SnapMirror will be the best option for replicating data between NetApp controllers since it only sends the changed blocks.
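
Roughly, the lifecycle looks like this (7-Mode commands; the filer, aggregate and volume names are just placeholders):

    src> options snapmirror.access host=dst_filer                      # allow the destination to pull from the source
    dst> options snapmirror.enable on
    dst> vol create dst_vol aggr_new 500g                              # create, then restrict, the destination volume
    dst> vol restrict dst_vol
    dst> snapmirror initialize -S src_filer:src_vol dst_filer:dst_vol  # baseline transfer
    dst> snapmirror update dst_vol                                     # incremental updates (or schedule them in /etc/snapmirror.conf)
    dst> snapmirror break dst_vol                                      # make the destination writable when you cut over to your DRP process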

As for LUNs on the target, you will need to zone your hosts to the storage controller (if FCP) or set up iSCSI.  Then you will need to create igroups with the FCP or iSCSI initiators and map the LUNs to the appropriate igroups.
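
For example (the igroup name, LUN path and WWPNs below are made up):

    igroup create -f -t linux dbservers_ig 50:01:43:80:11:22:33:44 50:01:43:80:11:22:33:45
    lun map /vol/dst_vol/db_lun dbservers_ig 0
    lun show -m        # verify the mappings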

Let me know if you have more questions.

SEBPASTOR
9,105 Views

Thanks again Paul!

Yes, our DRP process is roughly a big rsync... I would personally prefer to use SnapMirror to do this as it is optimized, as you indicated, and I hope I can push for this.

I've a couple of concerns though:

- Even though I understand we can set a specific time for replication updates, I am wondering if the load might be noticeable on the source system?

- Volumes on our source system are set with very little space for snapshots. Could this impact the SnapMirror process? Or are the snapshots created by SnapMirror different from the volume snapshots?

Thanks again.


Seb

paul_wolf
9,105 Views

Yeah, good luck with THAT 🙂  I understand the difficulty.

- Yes, there is some load on the controllers, but SnapMirror is a low-impact background process, and unless you are syncing hourly and have a high change rate, there isn't a noticeable impact.

- SnapMirror uses a volume snapshot to track which blocks need to be synced during the next update, so the disk space required for these snaps depends on the following:

1) The block change rate on the volume

2) How often you are syncing to the target.

The baseline snap for SnapMirror is updated once the sync takes place and the old baseline is released.
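
You can see them with snap list on either side; the SnapMirror baseline shows up with a name built from the destination filer, its system ID and the destination volume, flagged as busy by snapmirror - something like this (the name below is only illustrative):

    snap list src_vol        # look for e.g. dst_filer(0101234567)_dst_vol.2 (snapmirror)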

radek_kubka
9,105 Views

- Yes, there is some load on the controllers, but SnapMirror is a low-impact background process, and unless you are syncing hourly and have a high change rate, there isn't a noticeable impact.

I would argue with that - the truth is, your mileage may vary! It is better to be careful when running SnapMirror on a heavily loaded system during peak hours.

Updates can be easily scheduled outside of business hours, but doing the baseline is trickier - unless you can fit it into one night (or a weekend, or whatever maintenance window you have available). If we are talking about a lot of data for the baseline, with the initial transfer spanning multiple days, then throttling / switching off SnapMirror during peak business hours is strongly recommended (either manually or via a script).
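
For example, a throttled overnight schedule in /etc/snapmirror.conf on the destination could look something like this (the names and the kbs value are only an example):

    # source             destination          options     schedule: minute hour day-of-month day-of-week
    src_filer:src_vol    dst_filer:dst_vol    kbs=10240   0 22 * *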

Regards,
Radek

paul_wolf
9,105 Views

Good point, Radek.  I was referring to the scheduled updates, not the initial sync.

YMMV is an excellent way to express it. If the system is running at high utilization (sysstat -m 1 to see what each CPU is reporting), then scheduling the initial sync after hours and/or throttling the transfer rate (this isn't the greatest way of limiting resource utilization, but it's the only one available) is a way to limit the effect.  TR-3346 can give more detail on throttling.
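
Something along these lines (the throttle value is arbitrary):

    sysstat -m 1                                  # per-CPU utilization at one-second intervals
    snapmirror throttle 5120 dst_filer:dst_vol    # drop an active transfer to roughly 5 MB/s on the fly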

Also, there are limits on the number of SnapMirror sessions that can be active at the same time. If I recall correctly, a 3140 is limited to 16?

radek_kubka
8,411 Views

The 16 concurrent sessions limit is for sync / semi-sync only.

paul_wolf
8,411 Views

Gah, I'm getting too old for this stuff. 

Thanks.

SEBPASTOR
8,411 Views

Sorry for the delay! Thank you very much for your time and answers, Paul and Radek; highly appreciated!

I ran a sysstat -m on our production system and the load is an average of 40% CPU, so I guess we could say it is fairly loaded. Scheduling the initial sync during off-peak hours definitely seems like a good option anyway.

Regarding the possibility of doing a one-shot sync to get all the volume info properly replicated, I think I am not going to do it, because the current config seems pretty odd to me.

For instance, some space (up to 20% on some volumes) is reserved for snapshots while they have been deactivated on all volumes. And that would be my new question: is there a point in keeping a minimum of space for snapshots when they are definitely not in use (like I have been told)? Can I just simply use the whole space for data instead?

Oh... yes, another question if I may. We have realized our FC cards are not in target mode. As you know, this calls for a restart. In your experience, knowing that our system has been up for almost 2 years, and that we are dealing with 2 controllers in a cluster here, how risky do you feel it is to restart the whole thing (disk failures, controller failures, etc.)? Which measures would you take to limit the risk (config backup, snapshots, etc.)?

Thanks again!


paul_wolf
8,483 Views

40% is not what I would refer to as fairly loaded. I run systems at 65+% per CPU and don't see any performance issues.

Snap Reserve should only be there if you need to hold back space for snapshots. It sets aside a portion of the FlexVol's space that can only be used for snapshot data. This doesn't mean that snapshot data can't overflow the Snap Reserve and use up data space in the FlexVol.  Setting up snap autodelete can help prevent this, but that's a different issue.  SnapMirror uses snapshots to maintain sync between volumes, so you will have some snapshots if you are using SnapMirror; I would therefore set a smaller reserve of 5% or so (you can tune this as needed).

This can be changed dynamically with no impact, so if you have a FlexVol with a Snap Reserve set and don't need it, then reduce that percentage to 0 and use that space for active data.  Or shrink the FlexVol to a more reasonable size.
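
For example (the volume name and sizes are placeholders - make sure the data still fits before shrinking):

    snap reserve dst_vol 0        # give the whole reserve back to active data
    snap reserve dst_vol 5        # or keep a small reserve for the SnapMirror snapshots
    snap autodelete dst_vol on    # optionally let ONTAP delete old snapshots under space pressure
    vol size dst_vol -200g        # optionally shrink the FlexVol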

When you refer to FC cards, are you referring to the on-board FC ports?  Are you providing block-based fibre-channel access to hosts? If so, how are you doing that now? Are there other FC interfaces that are acting as targets?

If you need to convert the on-board ports to target mode, then you can make the change on both controllers, then perform a failover of controller 1 to controller 2, wait for controller 1 to reboot and come back up, and then perform a giveback of controller 1's resources.  Once that has stabilized, repeat the process by failing controller 2's resources over to controller 1, waiting for controller 2 to reboot and come back up, and then performing a giveback of controller 2's resources.
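
In outline, something like this on each controller (adapter names are from your listing; double-check the exact steps for your ONTAP release):

    fcadmin config                 # check the current mode of the on-board ports
    fcadmin config -d 0a           # take the adapter offline first
    fcadmin config -t target 0a    # set it to target mode (takes effect after the reboot)
    cf takeover                    # run on the partner to take this controller over while it reboots
    cf giveback                    # then give its resources back once it is up again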

SEBPASTOR
6,799 Views

Noted for the load perception.

Noted also for the Snap Reserve and the way SnapMirror is using it. I did not know this could be changed dynamically.

As for the FC cards, yes, I "think" so (sorry for not sounding so sure about myself, but I am still new to NetApp); here is the list returned by storage show adapter:

Slot:            0a
Description:     Fibre Channel Host Adapter 0a (QLogic 2432 rev. 2)

Slot:            0b
Description:     Fibre Channel Host Adapter 0b (QLogic 2432 rev. 2)

Slot:            0c
Description:     Fibre Channel Host Adapter 0c (QLogic 2432 rev. 2)

Slot:            0d
Description:     Fibre Channel Host Adapter 0d (QLogic 2432 rev. 2)
Firmware Rev:    4.5.2
FC Node Name:    5:00a:098100:26b21c
FC Packet Size:  2048
Link Data Rate:  1 Gbit
SRAM Parity:     Yes
External GBIC:   No
State:           Disabled
In Use:          No
Redundant:       Yes

Slot:            0e
Description:     IDE Host Adapter 0e


Currently on our "DRP" NetApp system, we do not provide any access via FCP, and that is what we need to change to make it as similar as possible to our PROD environment. In our PROD env., 2 DB servers access LUNs via their own fibre cards, going through an FCP-type initiator group. (If that makes any sense... veeeery new to me...) Each server is using multipath mode and (I am assuming) should be connected to both controllers' FC ports for redundancy.

So I am assuming we need to convert the on-board ports to make them targets. Following your procedure, will this operation be completely transparent for our end users?

Thanks again for your detailed answer.

Sebastien

radek_kubka
8,411 Views

Hi Sebastien,

is there a point in keeping a minimum of space for snapshots when they are definitely not in use?

No point in doing this; reducing snap reserve to 0% makes perfect sense. Even when you use snapshots, the right amount of snap reserve is subject to debate.

We have realized our FC cards are not in Target mode.

You mean actual cards, not the on-board FC ports? You can't change the mode of the former, only of the latter.

Regards,

Radek

paul_wolf
8,411 Views

Actually, there is a 4-port target/initiator card available (has been for about a year now).  Also, the 2-port FC card for the 2240 is a target/initiator card. So that's not entirely correct.

🙂

radek_kubka
6,800 Views

Fair enough, I stand corrected - I forgot about the "new" quad-port 8Gbit X1132A card.

I have always treated the mezzanine card for the 2240 as onboard, though.

SEBPASTOR
6,799 Views

You lost me a little here. I'm pretty sure what is installed is 4 ports, dual channel, 4 Gb per port.

radek_kubka
6,799 Views

If you are referring to ports 0a-0d listed above, these are the on-board ones, they are indeed 4Gbit, and they can be set to either target or initiator mode.
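
You can check what mode they are currently in with the command below (it just lists each adapter with its type and state):

    fcadmin config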
