Active IQ Unified Manager Discussions

DFM.Snapshots table imports

f_duranti
8,022 Views

Hi, does anyone have a good implementation of importing the snapshots table from DFM, with finders and filters, that they could share?

Thanks

Francesco


12 REPLIES

yaronh
7,970 Views

Hi Francesco,

Snapshot caching will be part of the 1.1 release (ETA March 2012).

Regards,

Yaron Haimsohn

f_duranti
7,970 Views

Can it be exported and reimported in 1.0.2? It would be a nice addition to have without having to rewrite it.

Francesco

dekel
7,970 Views

Hi Francesco,

It cannot be directly imported into 1.0.2, but it can be used as a reference for both the table structure and the queries.

Can you give some details on the use case? Why do you need to cache snapshots?

Dekel

f_duranti
7,970 Views

I need the snapshot list for 2 use cases:

1) To check for the existence of a static snapshot (before deleting it), used in a workflow that clones a volume onto a destination VSM volume.

2) To find the most recent snapshot whose name starts with a specified string, for a disaster recovery workflow (see the sketch below).

I was trying to create this as a new cache table, but I ran into some problems with the object definition and with linking to other tables via foreign keys (probably using a wrong name somewhere).
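Just to sketch what I mean for use case 2 (assuming a snapshot cache table with name and timestamp columns, like the one discussed further down in this thread, and WFA's '${...}' filter parameter syntax), the filter query would be something along these lines:

-- Sketch only: storage.snapshot, its columns, and the ${snapPrefix} parameter are
-- assumptions; the returned columns may need to match the dictionary entry definition.
SELECT
    snapshot.id,
    snapshot.name
FROM
    storage.snapshot snapshot
WHERE
    snapshot.name LIKE CONCAT('${snapPrefix}', '%')
ORDER BY
    snapshot.timestamp DESC
LIMIT 1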

f_duranti
7,970 Views

Looking at how the other tables are done, I was trying to do this:

1) Create a cache table (I've done this in a different schema, because it seems that I cannot otherwise change the standard DFM provider storage tables):

CREATE TABLE storage2.snapshot (
    `id` int NOT NULL AUTO_INCREMENT,
    `name` varchar(255) DEFAULT NULL,
    `vol_id` int,
    `aggr_id` int,
    `array_id` int,
    `snapid` bigint,
    `accessTime` timestamp NOT NULL,
    PRIMARY KEY (`id`)
) ENGINE=InnoDB;

I've set array_id, vol_id, and name as the natural key, and put foreign keys for array_id on storage.array.id, vol_id on storage.volume.id, and aggr_id on storage.aggregate.id.

Then I've created a new DFM data source table linked to the storage2 schema, with this selection from the dfm.snapshots table:

select
    snapshotId       as id,
    snapshotName     as name,
    volId            as vol_id,
    aggrId           as aggr_id,
    hostId           as array_id,
    snapshotUniqueId as snapid,
    accessTime       as accessTime
from dfm.snapshots

In the dictionary I changed aggr_id, vol_id, and array_id to aggregate, volume, and array (with the correct types), and in the ref column I've put aggr_id, array_id, and vol_id.

My problem is that when I create the data source and try to do the import, I get a "Foreign Key to a non existent table array" error.

Any suggestions?

dekel
8,021 Views

Hello,

First, let me say: nice progress!

The problem is that the snapshot query you created contains ID references taken from DFM (vol_id, aggr_id, array_id) and references to tables that already exist in the storage schema.

(We definitely need to change the UI so that such mistakes cannot be made.)

To solve your problem you should:

1) Remove the foreign keys to the array table (you will need to recreate the table).

2) Add the volume name and the array IP (or host name) to the SELECT query by joining with the relevant DFM tables.

3) Do the join between snapshot and volumes at the filter level.

Another, simpler option is to wait for 1.1, where WFA includes embedded support for snapshot caching as part of the certified content.

Another note:

If you choose to go ahead and implement the query on 1.0.2, note that the query you created also caches deleted snapshots.

You can use the CREATE and SELECT queries below as a reference:

CREATE TABLE storage.snapshot (
    `id` INT NOT NULL AUTO_INCREMENT,
    `name` VARCHAR(255) NOT NULL,
    `timestamp` DATETIME DEFAULT NULL,
    `volume_id` INT NOT NULL,
    PRIMARY KEY (`id`)
) ENGINE=InnoDB;

SELECT
    snapshot.snapshotId   AS id,
    snapshot.snapshotName AS name,
    snapshot.accessTime   AS timestamp,
    snapshot.volId        AS volume_id
FROM
    dfm.snapshots snapshot
    JOIN dfm.volumes volume  ON volume.volId = snapshot.volId AND volume.volState IS NOT NULL
    JOIN dfm.aggrs aggregate ON volume.aggrId = aggregate.aggrId
    JOIN dfm.objects obj     ON volume.volId = obj.objid AND obj.objDeleted IS NULL
                             AND ((obj.objFlags & 0x20000000000000) = 0)
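As a rough sketch of step 2 (the volName column of dfm.volumes, and a volume_name VARCHAR column in the cache table replacing volume_id, are assumptions here), the data source query could carry the volume name instead of the DFM volume ID:

-- Sketch only: the reference query above, extended with the volume name (step 2).
-- dfm.volumes.volName is an assumption; the array IP or host name could be
-- added the same way from the relevant DFM table.
SELECT
    snapshot.snapshotId   AS id,
    snapshot.snapshotName AS name,
    snapshot.accessTime   AS timestamp,
    volume.volName        AS volume_name
FROM
    dfm.snapshots snapshot
    JOIN dfm.volumes volume  ON volume.volId = snapshot.volId AND volume.volState IS NOT NULL
    JOIN dfm.aggrs aggregate ON volume.aggrId = aggregate.aggrId
    JOIN dfm.objects obj     ON volume.volId = obj.objid AND obj.objDeleted IS NULL
                             AND ((obj.objFlags & 0x20000000000000) = 0)

The filter-level join (step 3) would then match on that name instead of on an ID.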

f_duranti
7,970 Views

Hi dekel, thanks for the good explanation, and thanks for the information about deleted snapshots (I was not thinking about that...).

If I've understood correctly, the problem is mostly that the array/volume/aggregate tables are already created, so when I define a new table in another schema those old tables in the storage schema are not updated and the data is not imported. So only tables in the same schema can reference each other, is that correct?

I'll probably go with this until 1.1 is out, just to start doing some testing and put together some test workflows we will need, but I'll wait for 1.1 to start working with them properly.

Just out of curiosity: I saw that I cannot put the snapshot table in the default Data Source Type for DFM (because it will not let me add the query for the snapshot table), but can I create a new Data Source Type, copying all the old queries into the new one and adding the one for snapshots? This should create a schema with all the tables and the right foreign keys. Is this correct?

Francesco

dekel
7,970 Views

Hi.

Your understanding is indeed correct.

Your suggested workaround will work as well, but I advise against it. When you create your "own" storage schema you are detaching yourself from the mainline: you will not benefit from updates and bug fixes made to the certified "storage" schema.

Hope this answers your question.

Dekel

f_duranti
7,970 Views

You're right, I'll just go with a simple table and the join at the filter level for now. I just need to check for snapshot existence or for a newer snapshot, so it's not a problem to wait for 1.1 and the full implementation.

Thanks very much for the full explanation of how WFA works, it helped a lot.

Francesco

f_duranti
7,263 Views

I've done some tests and still have problems: it seems that in this case, with tables in different schemas, it is not possible to get the correct "volId", because the volId from the dfm.snapshots table is a number based on DFM internal data, while WFA's volume.id is an auto-increment. When I join the two tables I don't get a match between volumes and snapshots. I'm trying to include the volume name directly in the snapshot table, so at least this will solve my problem for now (I just need to check for snapshot existence).

Francesco

dekel
7,263 Views

Hello ,

Please see my response above, the second bullet: you cannot join by ID in this case, since one reflects the DFM row IDs and the other is the WFA ID.

2) Add the volume name and the array IP (or host name) to the SELECT query by joining with the relevant DFM tables.
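As a sketch of how that filter-level join by name could look (the volume_name and array_name columns in the cache table, the name columns on storage.volume and storage.array, and the '${snapshotName}' parameter syntax are assumptions here), something along these lines:

-- Sketch only: join the cached snapshots to the WFA volume/array tables by name, not by ID.
-- Column names and the returned columns may need adjusting to the dictionary entry definition.
SELECT
    snapshot.id,
    snapshot.name
FROM
    storage2.snapshot snapshot
    JOIN storage.volume volume ON volume.name = snapshot.volume_name
    JOIN storage.array arr     ON arr.id = volume.array_id
                               AND arr.name = snapshot.array_name
WHERE
    snapshot.name = '${snapshotName}'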

f_duranti
7,263 Views

You're right, I had misunderstood what was there. I've done it that way and it works correctly.

Thanks

Francesco
