Adding disks to an existing aggregate


I am going to add some new disks to an existing aggregate. Currently my aggr0 has 10 disks, and I am going to add 22 more disks to the same aggregate. Please help me: how can I share/distribute the IOPS of the existing aggregate over the new disks?



Re: Adding disks to an existing aggregate


After adding the disks, you should run a reallocate job on each volume under aggr0.

reallocate start -f -p /vol/volumename

reallocate status -v

This will distribute the blocks used by each volume over all disks in aggr0.

While reallocate is running you may observe a performance degradation, so run it during a low-IOPS period.
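If it helps, the end-to-end sequence on the 7-Mode CLI looks roughly like this (the aggregate and volume names here are just examples; adjust the disk count and names to your setup):

aggr add aggr0 22

reallocate start -f -p /vol/vol1

reallocate start -f -p /vol/vol2

reallocate status -v

First grow the aggregate, then start a one-time scan per volume, then watch the status output until each scan completes.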

Re: Adding disks to an existing aggregate

Hi Said,

Thanks for the prompt reply. Please correct me here: is the reallocate command used for defragmentation or for freeing up space? Will this command share the existing 10 disks' IOPS load with the newly added disks?

Re: Adding disks to an existing aggregate

Reallocating effectively tells WAFL to move some of the existing data to the new disks, spreading the used space evenly over all disks.

Re: Adding disks to an existing aggregate

Will the reallocate command move data across all RAID groups in the same aggr0? In my case, RG0 and RG1.

Also, I have a virtual environment (volumes assigned to some ESXi servers). Is there any precaution to be taken while running the reallocate command?

In the reallocate start -f -p /vol/volumename command, what do -f and -p stand for?

Re: Adding disks to an existing aggregate

I did not test reallocate in a virtual environment.

The reallocate process will consume some resources, so it will have a small impact on performance. This impact depends on how and for what the volume is used (NFS, iSCSI, database, application, archive, ...); for reallocation itself, I think a virtual and a physical environment behave the same. So schedule it off peak.
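Before forcing a full pass, you can also let the filer gauge how badly a volume needs it; something like the following (the volume name is just an example):

reallocate measure -o /vol/volname

reallocate status -v

The measure scan reports an optimization rating for the current layout, so you can decide whether the forced -f run is worth the extra load.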

reallocate start -f [-p]   /vol/volname

(see the reallocate man page)

The -p option requests that reallocation of user data take place on the physical blocks in the aggregate, but the logical block locations within a flexible volume are preserved. This option may only be used with flexible volumes, or files/LUNs within flexible volumes.

Using the -p option may reduce the extra storage requirements in a flexible volume when reallocation is run on a volume with snapshots. It may also reduce the amount of data that needs to be transmitted by SnapMirror on its next update after reallocation is performed on a SnapMirror source volume.

Using the -p option may cause a performance degradation reading older snapshots if the volume has significantly changed after reallocation has been performed. Examples of reading a snapshot include reading files in the .snapshot directory, accessing a LUN backed by a snapshot, or reading a qtree snapmirror (QSM) destination. When whole-volume reallocation is performed with the -p option and snapshots exist, an extra redirection step is performed to eliminate this degradation.

The -f (force) option performs a one-time full reallocation of a file, LUN or an entire volume. A forced reallocation will rewrite blocks in the file, LUN or volume unless the change is predicted to result in worse performance.

If a reallocation job already exists for the path_name it will be stopped, and then restarted as a full reallocation scan. After the reallocation scan completes the job will revert to its previous schedule. If the job was previously quiesced, it will no longer be quiesced.
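If the impact turns out to be too high while a scan is running, the job can be controlled from the same CLI (the path is just an example):

reallocate quiesce /vol/volname

reallocate restart /vol/volname

reallocate stop /vol/volname

quiesce pauses the scan, restart resumes it, and stop removes the job entirely.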

When doing full reallocation the active filesystem layout may diverge significantly from the data stored in any snapshots. Because of this, volume-level full reallocation may not be started on volumes that have existing snapshots unless