ONTAP Discussions
Hi guys,
I'm a newbie to the NetApp world, and have a quick and easy (I hope!) query for you.
Both of our NetApp engineers are away at the moment, and a customer has requested a rename of their aggregates on a FAS2040.
I can see the command itself is pretty straightforward: aggr rename aggr1 aggr2
My question is can this be done online, or do I need an outage? Are there any other considerations that I need to be aware of?
Thanks a lot for your assistance,
Grant
If these are aggregates that contain flexvols, the operation is completely transparent and can be done at any time. If they are actually traditional volumes, there are some considerations. Could you post the output of "aggr status" and "vol status -v"?
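For reference, if they do turn out to be flexvol aggregates, the rename itself is a one-liner; for example (the new name "sas_aggr01" below is just a placeholder, use whatever fits your convention):

syd-san01-ctl1> aggr rename aggr1 sas_aggr01
syd-san01-ctl1> aggr status sas_aggr01

The second command just confirms the aggregate is still online under its new name; the flexvols inside it keep serving data the whole time.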
Grant,
You do not need to take an outage to rename an aggregate. One consideration I can think of is to update any scripts (cron tasks, bash scripts) that reference the old aggregate name. It is also somewhat helpful to use a naming convention that reflects disk type, department, and so on (sata01, finance_aggr, aggrSas...).
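For example, on whatever admin host runs your scheduled jobs, a quick search like this (the paths here are just examples) will flag any hard-coded references to the old aggregate name before you rename it:

grep -rl 'aggr1' /etc/cron.d /usr/local/scripts

Anything that turns up just needs the aggregate name updated to match once the rename is done.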
Regards,
Thanks both, (long) details below:
syd-san01-ctl1> aggr status
Aggr State Status Options
aggr2 online raid_dp, aggr raidsize=22
aggr0 online raid4, aggr root
aggr1 online raid_dp, aggr
syd-san01-ctl1> vol status -v
Volume State Status Options
CTL1_VOL2 online raid_dp, flex nosnap=on, nosnapdir=off,
sis minra=off, no_atime_update=off,
nvfail=off,
ignore_inconsistent=off,
snapmirrored=off,
create_ucode=on,
convert_ucode=off,
maxdirsize=28835,
schedsnapname=ordinal,
fs_size_fixed=off,
compression=off, guarantee=none,
svo_enable=off, svo_checksum=off,
svo_allow_rman=off,
svo_reject_errors=off,
no_i2p=off, fractional_reserve=0,
extent=off,
try_first=volume_grow,
read_realloc=off,
snapshot_clone_dependency=off
Containing aggregate: 'aggr1'
Plex /aggr1/plex0: online, normal, active
RAID group /aggr1/plex0/rg0: normal
Snapshot autodelete settings for CTL1_VOL2:
state=off
commitment=try
trigger=volume
target_free_space=20%
delete_order=oldest_first
defer_delete=user_created
prefix=(not specified)
destroy_list=none
Volume autosize settings:
state=off
vol0 online raid4, flex root, diskroot, nosnap=off,
nosnapdir=off, minra=off,
no_atime_update=off, nvfail=off,
ignore_inconsistent=off,
snapmirrored=off,
create_ucode=on,
convert_ucode=off,
maxdirsize=28835,
schedsnapname=ordinal,
fs_size_fixed=off,
compression=off,
guarantee=volume, svo_enable=off,
svo_checksum=off,
svo_allow_rman=off,
svo_reject_errors=off,
no_i2p=off,
fractional_reserve=100,
extent=off,
try_first=volume_grow,
read_realloc=off,
snapshot_clone_dependency=off
Containing aggregate: 'aggr0'
Plex /aggr0/plex0: online, normal, active
RAID group /aggr0/plex0/rg0: normal
Snapshot autodelete settings for vol0:
state=off
commitment=try
trigger=volume
target_free_space=20%
delete_order=oldest_first
defer_delete=user_created
prefix=(not specified)
destroy_list=none
Volume autosize settings:
state=off
CTL1_VOL7 online raid_dp, flex nosnap=on, nosnapdir=off,
sis minra=off, no_atime_update=off,
nvfail=off,
ignore_inconsistent=off,
snapmirrored=off,
create_ucode=on,
convert_ucode=off,
maxdirsize=28835,
schedsnapname=ordinal,
fs_size_fixed=off,
compression=off, guarantee=none,
svo_enable=off, svo_checksum=off,
svo_allow_rman=off,
svo_reject_errors=off,
no_i2p=off, fractional_reserve=0,
extent=off,
try_first=volume_grow,
read_realloc=off,
snapshot_clone_dependency=off
Containing aggregate: 'aggr1'
Plex /aggr1/plex0: online, normal, active
RAID group /aggr1/plex0/rg0: normal
Snapshot autodelete settings for CTL1_VOL7:
state=off
commitment=try
trigger=volume
target_free_space=20%
delete_order=oldest_first
defer_delete=user_created
prefix=(not specified)
destroy_list=none
Volume autosize settings:
state=off
CTL1_VOL8 online raid4, flex nosnap=on, nosnapdir=on,
sis minra=off, no_atime_update=off,
nvfail=off,
ignore_inconsistent=off,
snapmirrored=off,
create_ucode=on,
convert_ucode=off,
maxdirsize=28835,
schedsnapname=ordinal,
fs_size_fixed=off,
compression=off, guarantee=none,
svo_enable=off, svo_checksum=off,
svo_allow_rman=off,
svo_reject_errors=off,
no_i2p=off, fractional_reserve=0,
extent=off,
try_first=volume_grow,
read_realloc=off,
snapshot_clone_dependency=off
Containing aggregate: 'aggr0'
Plex /aggr0/plex0: online, normal, active
RAID group /aggr0/plex0/rg0: normal
Snapshot autodelete settings for CTL1_VOL8:
state=off
commitment=try
trigger=volume
target_free_space=20%
delete_order=oldest_first
defer_delete=user_created
prefix=(not specified)
destroy_list=none
Volume autosize settings:
state=off
CTL1_VOL3 online raid_dp, flex nosnap=on, nosnapdir=off,
sis minra=off, no_atime_update=off,
nvfail=off,
ignore_inconsistent=off,
snapmirrored=off,
create_ucode=on,
convert_ucode=off,
maxdirsize=28835,
schedsnapname=ordinal,
fs_size_fixed=off,
compression=off, guarantee=none,
svo_enable=off, svo_checksum=off,
svo_allow_rman=off,
svo_reject_errors=off,
no_i2p=off, fractional_reserve=0,
extent=off,
try_first=volume_grow,
read_realloc=off,
snapshot_clone_dependency=off
Containing aggregate: 'aggr1'
Plex /aggr1/plex0: online, normal, active
RAID group /aggr1/plex0/rg0: normal
Snapshot autodelete settings for CTL1_VOL3:
state=off
commitment=try
trigger=volume
target_free_space=20%
delete_order=oldest_first
defer_delete=user_created
prefix=(not specified)
destroy_list=none
Volume autosize settings:
state=off
CTL1_VOL4 online raid_dp, flex nosnap=on, nosnapdir=off,
sis minra=off, no_atime_update=off,
nvfail=off,
ignore_inconsistent=off,
snapmirrored=off,
create_ucode=on,
convert_ucode=off,
maxdirsize=28835,
schedsnapname=ordinal,
fs_size_fixed=off,
compression=off, guarantee=none,
svo_enable=off, svo_checksum=off,
svo_allow_rman=off,
svo_reject_errors=off,
no_i2p=off, fractional_reserve=0,
extent=off,
try_first=volume_grow,
read_realloc=off,
snapshot_clone_dependency=off
Containing aggregate: 'aggr1'
Plex /aggr1/plex0: online, normal, active
RAID group /aggr1/plex0/rg0: normal
Snapshot autodelete settings for CTL1_VOL4:
state=off
commitment=try
trigger=volume
target_free_space=20%
delete_order=oldest_first
defer_delete=user_created
prefix=(not specified)
destroy_list=none
Volume autosize settings:
state=off
CTL1_VOL9 online raid_dp, flex nosnap=off, nosnapdir=off,
sis minra=off, no_atime_update=off,
nvfail=off,
ignore_inconsistent=off,
snapmirrored=off,
create_ucode=on,
convert_ucode=off,
maxdirsize=28835,
schedsnapname=ordinal,
fs_size_fixed=off,
compression=off, guarantee=none,
svo_enable=off, svo_checksum=off,
svo_allow_rman=off,
svo_reject_errors=off,
no_i2p=off, fractional_reserve=0,
extent=off,
try_first=volume_grow,
read_realloc=off,
snapshot_clone_dependency=off
Containing aggregate: 'aggr2'
Plex /aggr2/plex0: online, normal, active
RAID group /aggr2/plex0/rg0: normal
Snapshot autodelete settings for CTL1_VOL9:
state=off
commitment=try
trigger=volume
target_free_space=20%
delete_order=oldest_first
defer_delete=user_created
prefix=(not specified)
destroy_list=none
Volume autosize settings:
state=off
CTL1_VOL5 online raid_dp, flex nosnap=on, nosnapdir=off,
sis minra=off, no_atime_update=off,
nvfail=off,
ignore_inconsistent=off,
snapmirrored=off,
create_ucode=on,
convert_ucode=off,
maxdirsize=28835,
schedsnapname=ordinal,
fs_size_fixed=off,
compression=off, guarantee=none,
svo_enable=off, svo_checksum=off,
svo_allow_rman=off,
svo_reject_errors=off,
no_i2p=off, fractional_reserve=0,
extent=off,
try_first=volume_grow,
read_realloc=off,
snapshot_clone_dependency=off
Containing aggregate: 'aggr1'
Plex /aggr1/plex0: online, normal, active
RAID group /aggr1/plex0/rg0: normal
Snapshot autodelete settings for CTL1_VOL5:
state=off
commitment=try
trigger=volume
target_free_space=20%
delete_order=oldest_first
defer_delete=user_created
prefix=(not specified)
destroy_list=none
Volume autosize settings:
state=off
CTL1_VOL10 online raid_dp, flex nosnap=off, nosnapdir=off,
sis minra=off, no_atime_update=off,
nvfail=off,
ignore_inconsistent=off,
snapmirrored=off,
create_ucode=on,
convert_ucode=off,
maxdirsize=28835,
schedsnapname=ordinal,
fs_size_fixed=off,
compression=off, guarantee=none,
svo_enable=off, svo_checksum=off,
svo_allow_rman=off,
svo_reject_errors=off,
no_i2p=off, fractional_reserve=0,
extent=off,
try_first=volume_grow,
read_realloc=off,
snapshot_clone_dependency=off
Containing aggregate: 'aggr2'
Plex /aggr2/plex0: online, normal, active
RAID group /aggr2/plex0/rg0: normal
Snapshot autodelete settings for CTL1_VOL10:
state=off
commitment=try
trigger=volume
target_free_space=20%
delete_order=oldest_first
defer_delete=user_created
prefix=(not specified)
destroy_list=none
Volume autosize settings:
state=off
CTL1_VOL1 online raid4, flex nosnap=on, nosnapdir=off,
sis minra=off, no_atime_update=off,
nvfail=off,
ignore_inconsistent=off,
snapmirrored=off,
create_ucode=on,
convert_ucode=off,
maxdirsize=28835,
schedsnapname=ordinal,
fs_size_fixed=off,
compression=off, guarantee=none,
svo_enable=off, svo_checksum=off,
svo_allow_rman=off,
svo_reject_errors=off,
no_i2p=off, fractional_reserve=0,
extent=off,
try_first=volume_grow,
read_realloc=off,
snapshot_clone_dependency=off
Containing aggregate: 'aggr0'
Plex /aggr0/plex0: online, normal, active
RAID group /aggr0/plex0/rg0: normal
Snapshot autodelete settings for CTL1_VOL1:
state=off
commitment=try
trigger=volume
target_free_space=20%
delete_order=oldest_first
defer_delete=user_created
prefix=(not specified)
destroy_list=none
Volume autosize settings:
state=off
syd-san01-ctl1>
So all your volumes are flexible volumes, in which case renaming the aggregate is fully transparent: the aggregate name is never exposed to anything outside the NetApp controller itself.
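If you want to double-check after the rename, running vol status against any volume in that aggregate will report the new name on its "Containing aggregate" line, e.g.:

syd-san01-ctl1> vol status CTL1_VOL2 -v

The volumes themselves keep their names and stay online throughout, so nothing changes from the client side.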
Thank you both very much for all of your help, it's truly appreciated.
Regards,
Grant