Hi
I have an MS SQL database server with SnapManager for SQL that I'm having some problems with. From time to time I get the following alert from Operations Manager: "The volume is 85.44% full (using 59.8 GB of 70.0 GB)". I usually resize the volume, but that is not sustainable in the long run. SnapManager is configured to keep two snapshot copies. When I resize the volume the LUN stays the same size, so how can I make use of the space between the LUN and the snapshot copies?
Rgds
Tobias
Solved! See The Solution
14 REPLIES
Hi,
Can you provide the output of:
vol status <volname> -v
lun <lunpath> -v
df -r <volname>
df -s <volname>
Regards,
Radek
Hi
Here's the output from a volume with the same problem. I couldn't get any output from "lun <lunpath> -V", though.
faspri02> vol status Servernamn_fcdb04 -v
Volume State Status Options
SERVERNAMN_fcdb04 online raid_dp, flex nosnap=on, nosnapdir=off,
minra=off, no_atime_update=off,
nvfail=off,
ignore_inconsistent=off,
snapmirrored=off,
create_ucode=on,
convert_ucode=on,
maxdirsize=73400,
schedsnapname=ordinal,
fs_size_fixed=off,
compression=off,
guarantee=volume, svo_enable=off,
svo_checksum=off,
svo_allow_rman=off,
svo_reject_errors=off, no_i2p=on,
fractional_reserve=0, extent=off,
try_first=volume_grow,
read_realloc=off,
snapshot_clone_dependency=off,
nbu_archival_snap=off
Volume UUID: 21c40724-49ea-11e1-8c33-00a098143420
Containing aggregate: 'aggr1'
Plex /aggr1/plex0: online, normal, active
RAID group /aggr1/plex0/rg0: normal
RAID group /aggr1/plex0/rg1: normal
RAID group /aggr1/plex0/rg2: normal
Snapshot autodelete settings for SERVERNAMN_fcdb04:
state=on
commitment=try
trigger=volume
target_free_space=20%
delete_order=oldest_first
defer_delete=user_created
prefix=(not specified)
destroy_list=none
Volume autosize settings:
state=on
maximum-size=72 GB
increment-size=3 GB
Filesystem kbytes used avail reserved Mounted on
/vol/SERVERNAMN_fcdb04/ 62914560 50220444 12694116 0 /vol/SERVERNAMN_fcdb04/
/vol/SERVERNAMN_fcdb04/.snapshot 0 8087888 0 0 /vol/SERVERNAMN_fcdb04/.snapshot
Filesystem used saved %saved
/vol/SERVERNAMN_fcdb04/ 50220444 0 0%
Rgds
Tobias
Sorry, I missed one word! It should be:
lun show <lunpath> -v
I don't know if I'm doing something wrong, but I can't get any information from:
lun show /vol/SERVERNAMN_fcdb04/SERVERNAMN_lunnamn -V
Rgds
Tobias
Tobias,
You may wish to use the '-v' flag (not the capital -V).
Good luck
Henry
Hi
I have tried both -v and -V and it still doesn't work.
Rgds
Tobias
Hi Tobias,
Apologies - I got it wrong second time around! 😕
The correct syntax:
lun show -v <lunpath>
That said, looking at the previous outputs I don't see anything surprising or unusual so far: a 60g volume with ~48g of space taken. If the latter is roughly equal to your LUN size, everything looks normal to my eye.
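Just to show where the ~48g comes from, converting the df numbers above (1 GB = 1048576 KB):
62914560 KB / 1048576 = 60.0 GB volume size
50220444 KB / 1048576 ≈ 47.9 GB used in the active file system
8087888 KB / 1048576 ≈ 7.7 GB held in snapshots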
Regards,
Radek
Hi Radek
Here's the lun output:
faspri02> lun show -v /vol/SERVERNAMN_fcdb04/SERVERNAMN_qtree_db01/SERVERNAMN_db01
/vol/SERVERNAMN_fcdb04/HPASQLDB01_qtree_db01/SERVERNAMN_db01 40.0g (42952412160) (r/w, online, mapped)
Serial#: dfPsK4hrI0dc
Share: none
Space Reservation: enabled
Multiprotocol Type: windows_2008
Maps: SERVERNAMN=13
Occupied Size: 4.8g (5150593024)
Creation Time: Thu Jan 26 22:06:23 CET 2012
Cluster Shared Volume Information: 0x0
Rgds
Tobias
Hi,
Everything looks just 'normal' I'd say - with the exception of one thing: your snapshots occupy a *lot* of space!
You have a fully provisioned 60g volume with a 40g space-reserved LUN in it. It looks, though, like there is only 4.8g of data inside the LUN, whilst the snapshots are taking ~8g of space.
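To put rough numbers on it: the LUN is space-reserved, so the full 40g counts as used in the volume regardless of how much data is actually inside it, and with the snapshot reserve at 0 the snapshot blocks consume that same space. 40.0 GB reservation + ~7.7 GB of snapshots ≈ 47.7 GB, which is about the ~48g used that df reports.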
Are some of the snapshots really old? Or are you running any database maintenance tasks, reindexing, etc.? Or maybe disk defrag on Windows OS?
Regards,
Radek
In SnapManager for SQL I have set the retention to 2 days on the primary and 14 days on the secondary. Should I look in SnapDrive to see if there are any snapshots?
Rgds
Tobias
I'd suggest command line again:
snap list <volname>
snap delta <volname>
Hi again
Here's the output:
faspri02> snap list HPASQLDB01_fcdb04
Volume HPASQLDB01_fcdb04
working...
%/used %/total date name
---------- ---------- ------------ --------
2% ( 2%) 0% ( 0%) May 08 00:59 fassec02(1573838366)_SnapMgr_SQLServer_HPASQLDB01_mirror_faspri02_HPASQLDB01_fcdb_8.61 (snapmirror)
2% ( 0%) 0% ( 0%) May 07 23:02 sqlsnap__hpasqldb01_05-07-2012_23.00.59__daily (snapvault)
17% (16%) 2% ( 2%) May 06 23:02 sqlsnap__hpasqldb01_05-06-2012_23.01.00__daily
46% (39%) 7% ( 5%) Apr 25 00:48 fassec02(1573838366)_SnapMgr_SQLServer_HPASQLDB01_mirror_faspri02_HPASQLDB01_fcdb_8.40 (snapmirror)
59% (36%) 11% ( 5%) Apr 04 11:45 fassec02(1573838366)_SnapMgr_SQLServer_HPASQLDB01_mirror_faspri02_HPASQLDB01_fcdb_2.1 (snapmirror)
faspri02> snap delta HPASQLDB01_fcdb04
Volume HPASQLDB01_fcdb04
working...
From Snapshot To KB changed Time Rate (KB/hour)
--------------- -------------------- ----------- ------------ ---------------
fassec02(1573838366)_SnapMgr_SQLServer_HPASQLDB01_mirror_faspri02_HPASQLDB01_fcdb_8.61 Active File System 92304 0d 19:14 4795.220
sqlsnap__hpasqldb01_05-07-2012_23.00.59__daily fassec02(1573838366)_SnapMgr_SQLServer_HPASQLDB01_mirror_faspri02_HPASQLDB01_fcdb_8.61 2708 0d 01:56 1391.691
sqlsnap__hpasqldb01_05-06-2012_23.01.00__daily sqlsnap__hpasqldb01_05-07-2012_23.00.59__daily 956172 0d 23:59 39843.266
fassec02(1573838366)_SnapMgr_SQLServer_HPASQLDB01_mirror_faspri02_HPASQLDB01_fcdb_8.40 sqlsnap__hpasqldb01_05-06-2012_23.01.00__daily 3295884 11d 22:14 11514.396
fassec02(1573838366)_SnapMgr_SQLServer_HPASQLDB01_mirror_faspri02_HPASQLDB01_fcdb_2.1 fassec02(1573838366)_SnapMgr_SQLServer_HPASQLDB01_mirror_faspri02_HPASQLDB01_fcdb_8.40 2912496 20d 13:02 5907.207
Summary...
From Snapshot To KB changed Time Rate (KB/hour)
--------------- -------------------- ----------- ------------ ---------------
fassec02(1573838366)_SnapMgr_SQLServer_HPASQLDB01_mirror_faspri02_HPASQLDB01_fcdb_2.1 Active File System 7259564 34d 08:28 8805.077
faspri02>
Rgds
Tobias
Your last two snapshots are 20 and 34 days old respectively, and they are taking up a substantial amount of space.
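Before deleting anything you can check how much space you would actually get back with snap reclaimable (the volume and snapshot names below are copied from your snap list output; double-check them on your system):
snap reclaimable HPASQLDB01_fcdb04 fassec02(1573838366)_SnapMgr_SQLServer_HPASQLDB01_mirror_faspri02_HPASQLDB01_fcdb_8.40 fassec02(1573838366)_SnapMgr_SQLServer_HPASQLDB01_mirror_faspri02_HPASQLDB01_fcdb_2.1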
migration has accepted the solution
It looks like they were created by SnapMirror and are now orphaned (probably due to a break and a resync, or a DR test). They can be deleted, because the first snapshot in the list belongs to SnapMirror as well, and that is the one SnapMirror needs for further updates.
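If you decide to remove them, something like this should do it (again, verify the names against your own snap list output first):
snap delete HPASQLDB01_fcdb04 fassec02(1573838366)_SnapMgr_SQLServer_HPASQLDB01_mirror_faspri02_HPASQLDB01_fcdb_8.40
snap delete HPASQLDB01_fcdb04 fassec02(1573838366)_SnapMgr_SQLServer_HPASQLDB01_mirror_faspri02_HPASQLDB01_fcdb_2.1
Leave the newest snapmirror snapshot (the one ending in _8.61 at the top of the list) alone - that is the baseline SnapMirror needs for its next update.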
