Accepted Solution

DFM.Snapshots table imports

Hi, does anyone have a good implementation of importing the DFM Snapshots table from DFM, with finders and filters, to share?



Re: DFM.Snapshots table imports

Hi Francesco,

Snapshot caching will be part of the 1.1 release (ETA March 2012).


Yaron Haimsohn

Re: DFM.Snapshots table imports

Can it be exported and reimported in 1.0.2? It would be a nice addition to have without having to rewrite it.


Re: DFM.Snapshots table imports

Hi Francesco,

It cannot be directly imported into 1.0.2, but it can be used as a reference for both the table structure and the queries.

Can you give some details on the use case? Why do you need to cache snapshots?


Re: DFM.Snapshots table imports

I need the snapshot list for 2 use cases:

1) to check the existence of a static snapshot (before deleting it) used by a workflow that will clone a volume on a destination VSM volume

2) to find the most recent snapshot whose name starts with a specified string, for a disaster recovery workflow

I was trying to create this in a new cache table but ran into some problems with the object definition and with linking to the other tables via foreign keys (probably using a wrong name somewhere).
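For use case 2, a finder over a cached snapshot table could be backed by a query roughly like the following (just a sketch: the storage2.snapshot table and the ${snapshot_prefix} filter parameter are illustrative names from my attempt, not certified content):

SELECT
    snapshot.name
FROM
    storage2.snapshot snapshot
WHERE
    snapshot.name LIKE CONCAT('${snapshot_prefix}', '%')
ORDER BY
    snapshot.accessTime DESC
LIMIT 1;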

Re: DFM.Snapshots table imports

Checking how other tables are done, I was trying the following:

1) Create a cache table (I've done this in a different schema because it seems I cannot otherwise change the standard DFM provider storage tables):

CREATE TABLE storage2.snapshot (
    `name` varchar(255) DEFAULT NULL,
    `vol_id` int,
    `aggr_id` int,
    `array_id` int,
    `snapid` bigint,
    `accessTime` timestamp NOT NULL
);


I've set array_id, vol_id, and name as the natural key, and put foreign keys for array_id on the array table, vol_id on the volume table, and aggr_id on the aggregate table.

Then I've created a new DFM Data Source Table linked to the storage2 schema, with this selection from the dfm.snapshots table:


SELECT
    snapshotId AS id,
    snapshotName AS name,
    volId AS vol_id,
    aggrId AS aggr_id,
    hostId AS array_id,
    snapshotUniqueId AS snapid,
    accessTime AS accessTime
FROM
    dfm.snapshots;

In the dictionary I changed aggr_id, vol_id, and array_id to aggregate, volume, and array (with the correct types), and in the ref column I've put aggr_id, array_id, and vol_id.

My problem is that when I create the data source and try to do the import, I get a "Foreign Key to a non existent table array" error.

Any suggestion?

Re: DFM.Snapshots table imports

Hello,

First let me say: nice progress!

The problem is that the snapshot query you created contains ID references taken from DFM (vol_id, aggr_id, array_id) and references to tables already created in the storage schema.

(We definitely need to change the UI so such mistakes cannot be made.)

In order to solve your problem you should:

1) Remove the foreign keys to the array table (you will need to recreate the table).

2) Add the volume name and the array IP (or host name) to the SELECT query, doing a JOIN with the relevant DFM tables.

3) Do the JOIN at the filter level, between snapshots and volumes.
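For example, such a filter-level join might look roughly like this (purely illustrative: the volume_name and array_ip columns are the ones step 2 would add to the cache table, and the storage.volume / storage.array table and column names are assumptions, not the exact certified schema):

SELECT
    snapshot.name
FROM
    storage.snapshot snapshot
    JOIN storage.array array ON snapshot.array_ip = array.ip
    JOIN storage.volume volume ON snapshot.volume_name = volume.name AND volume.array_id = array.id
WHERE
    volume.name = '${volume_name}';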

Another, simpler option is to wait for 1.1, where WFA contains embedded support for snapshot caching as part of the certified content.

Another note:

If you choose to go ahead and implement the query on 1.0.2, note that the query you created also caches deleted snapshots.

You can use the Create and Select queries below as a reference:

CREATE TABLE storage.snapshot (
    `id` BIGINT NOT NULL,
    `name` VARCHAR(255) NOT NULL,
    `timestamp` TIMESTAMP NOT NULL,
    `volume_id` INT NOT NULL,
    PRIMARY KEY (`id`)
);

SELECT
    snapshot.snapshotId AS id,
    snapshot.snapshotName AS name,
    snapshot.accessTime AS timestamp,
    snapshot.volId AS volume_id
FROM
    dfm.snapshots snapshot
    JOIN dfm.volumes volume ON volume.volId = snapshot.volId AND volume.volState IS NOT NULL
    JOIN dfm.aggrs aggregate ON volume.aggrId = aggregate.aggrId
    JOIN dfm.objects obj ON volume.volId = obj.objid AND obj.objDeleted IS NULL AND ((obj.objFlags & 0x20000000000000) = 0);

Re: DFM.Snapshots table imports

Hi Dekel, thanks for the good explanation, and thanks for the information about deleted snapshots (I hadn't thought about that...).

If I've understood correctly, the problem is mostly that the array/volume/aggregate tables are already created, so when I define a new table in another schema, those old tables in the storage schema will not be updated and the data will not be imported. So only tables in the same schema can reference each other, is that correct?

I'll probably go with this until 1.1 is out, just to start doing some testing and to put together some test workflows we will need, but I'll wait for 1.1 to start working with them properly.

Just out of curiosity: I saw that I cannot put the snapshot table in the default Data Source Type for DFM (because it will not let me add the query for the snapshot table), but can I create a new Data Source Type, copying all the old queries into it along with the added one for snapshots? This should create a schema with all the tables and the right foreign keys. Is that correct?


Re: DFM.Snapshots table imports


Your understanding is indeed correct.

Your suggested workaround will work as well, but I advise against it: when you create your "own" storage schema you are detaching yourself from the mainline,

and you will not benefit from updates and bug fixes made to the certified "storage" schema.

Hope this answers your question.


Re: DFM.Snapshots table imports

You're right, I'll just go with a simple table and the join at the filter level for now. I only need to check for snapshot existence or for a newer snapshot, so it's not a problem to wait for 1.1 and the full implementation.

Thanks very much for the full explanation of how WFA works; it helped a great deal.