
WFA - UseQuotaManagement with WFA for cDOT - Cache Query => Duplicate rows found for cache table

Hi there,

I am in the process of building a WFA cache database that stores the values required for User Quota Management (UQM). UQM is to be implemented as a workflow and relies on activated default user quotas.

 

SVM:\\VolumeName\QtreeName\user_name <---- user_quota.users

 

To obtain all the storage-object details required for UQM, I am querying the OCUM DB with the following SQL statement:

 

####
SELECT
qtree_quota.objid AS id,
qtree_quota.diskLimit/1024 AS disk_limit_mb,
qtree_quota.fileLimit AS file_limit,
qtree_quota.qtreeId AS qtree_id,
qtree_quota.softDiskLimit/1024 AS soft_disk_limit_mb,
qtree_quota.softFileLimit AS soft_file_limit,
qtree_quota.quotaTarget AS target,
qtree_quota.threshold/1024 AS threshold_mb,
'tree' AS type,
NULL AS user_mapping,
qtree_quota.volumeId AS volume_id,
NULL AS user_name
FROM
netapp_model_view.qtree_quota
UNION
SELECT
user_quota.objid AS id,
user_quota.diskLimit/1024 AS disk_limit_mb,
user_quota.fileLimit AS file_limit,
user_quota.qtreeId AS qtree_id,
user_quota.softDiskLimit/1024 AS soft_disk_limit_mb,
user_quota.softFileLimit AS soft_file_limit,
user_quota.quotaTarget AS target,
user_quota.threshold/1024 AS threshold_mb,
user_quota.quotaType AS type,
NULL AS user_mapping,
user_quota.volumeId AS volume_id,
user_quota.users AS user_name
FROM
netapp_model_view.user_quota
####

 

This SQL statement works in a simple SQL editor and returns the storage-object details required for changing a quota for an existing user ("user_quota.users").
Unfortunately, a WFA cache update for this specific "table" fails. If I have understood the "Dynamic Home Share" function correctly, using default quotas results in
multiple entries with different users in the same qtree.
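To make that concrete, here is a toy sketch of the data shape (SQLite in Python, purely for illustration; the table and column names mimic the OCUM query above, but the data and user names are made up):

```python
import sqlite3

# Illustration only: a simplified stand-in for netapp_model_view.user_quota.
# With a default user quota active, each user who writes into the qtree
# gets its own derived quota row -- same volume, same qtree, different user.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE user_quota (objid, volumeId, qtreeId, quotaTarget, users)")
con.executemany("INSERT INTO user_quota VALUES (?, ?, ?, ?, ?)", [
    (1, 10, 20, "", "BUILTIN\\Administrators"),
    (2, 10, 20, "", "DOMAIN\\alice"),
    (3, 10, 20, "", "DOMAIN\\bob"),
])
rows = con.execute(
    "SELECT volumeId, quotaTarget, users FROM user_quota"
).fetchall()
print(rows)  # three rows sharing the same volumeId and quotaTarget
```

All three rows differ only in the user column, which is exactly the shape that ends up in the cache query's result set.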

 

####
Extract WFA Log Message:

2014-10-23 11:01:27,925 INFO [com.netapp.wfa.cache.job.CacheJobExecutorImpl] (Thread-6883 (HornetQ-client-global-threads-1685068325))
Acquisition job 7859 - 1 warning - Duplicate rows found for cache table 'cm_storage.quota_rule_cache',
Columns: (id,disk_limit_mb,file_limit,qtree_id,soft_disk_limit_mb,soft_file_limit,target,threshold_mb,type,user_mapping,volume_id,user_name)
Rows: (6237,0.0000,0,6236,0.0000,0,,0.0000,USER,null,6230,BUILTIN\Administrators)
####

So, how do I have to change the cache-DB query in order to get these multiple entries?

 

Environment: OCUM 6.1 and WFA 2.2.x. 

 

 

Thank you,
Rabé

Re: WFA - UseQuotaManagement with WFA for cDOT - Cache Query => Duplicate rows found for cache table

Hi Rtoubali,

 

The query looks correct to me. I tried it on my machine, which has a few qtrees and quotas, and it returned results.

From the log message you posted, I think WFA might be discarding the new entries because their natural keys are the same as those of existing records.
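To illustrate that behaviour, here is a hypothetical sketch (this is not WFA's actual cache code): if acquisition keys rows by the natural key, every later row whose key was already seen is dropped and reported, which matches the warning in the log.

```python
# Hypothetical sketch of natural-key based acquisition -- not the actual
# WFA implementation. Rows whose natural key is already in the cache are
# discarded and reported, like the "Duplicate rows found" warning.
def acquire(rows, natural_key):
    cache, duplicates = {}, []
    for row in rows:
        key = tuple(row[col] for col in natural_key)
        if key in cache:
            duplicates.append(row)  # would surface as the log warning
        else:
            cache[key] = row
    return cache, duplicates

rows = [
    {"id": 2671, "volume_id": 2663, "target": "", "user_name": "userA"},
    {"id": 2775, "volume_id": 2663, "target": "", "user_name": "userB"},
]
cache, dropped = acquire(rows, ("volume_id", "target"))
print(len(cache), len(dropped))  # 1 1 -- the second row is discarded
```

Both rows share the key (2663, ""), so only the first one survives acquisition.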

 

Can you share the dictionary entry definition to see if it needs any modification?

 

-regards,

chandank.

Re: WFA - UseQuotaManagement with WFA for cDOT - Cache Query => Duplicate rows found for cache table

Hi Chandank,

 

 

Many thanks for your analysis so far. Please find attached the dictionary entry in raw XML format. I appreciate your support here!

 

Thank you,

Rabé 


Re: WFA - UseQuotaManagement with WFA for cDOT - Cache Query => Duplicate rows found for cache table

Thanks for sending the dictionary entry. I was able to import it into my WFA installation and acquire a few quotas and qtrees.

I had all types of quotas on storage; I got the same warning in my log, and a few entries were not acquired by WFA.

This is due to a collision in the natural key. As per your dictionary entry, the natural key is volume_id + target, which is not unique across all the records.

You may have to choose a natural key that is unique; consider adding qtree_id to the natural key.

 

Following are the acquired rows from my log message; note the records that share the same natural key (same volume_id and target).

 

id   | disk_limit_mb | file_limit | qtree_id | soft_disk_limit_mb | soft_file_limit | target | threshold_mb | type | user_mapping | volume_id
2770 | 0.1211        | 123000     | 2664     | 0.0117             | 0               | *      | 0.1211       | tree | null         | 2663
2673 | 10.957        | 0          | 2665     | 0                  | 0               | *      | 0            | USER | null         | 2663
2777 | 0.2305        | 0          | 2669     | 0                  | 0               | *      | 0            | USER | null         | 2663
2671 | 0             | 0          | 2665     | 0                  | 0               |        | 0            | USER | null         | 2663
2775 | 0             | 0          | 2669     | 0                  | 0               |        | 0            | USER | null         | 2663
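A candidate natural key can be sanity-checked before changing the dictionary entry by grouping on the key columns and looking for counts above one. A sketch using SQLite in Python on the key-relevant columns of the rows above (assuming I have decoded them correctly): volume_id + target collides, while volume_id + target + qtree_id does not for these rows.

```python
import sqlite3

# Key-relevant columns (id, qtree_id, target, volume_id) of the rows above.
rows = [
    (2770, 2664, "*", 2663),
    (2673, 2665, "*", 2663),
    (2777, 2669, "*", 2663),
    (2671, 2665, "",  2663),
    (2775, 2669, "",  2663),
]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE quota_rule (id, qtree_id, target, volume_id)")
con.executemany("INSERT INTO quota_rule VALUES (?, ?, ?, ?)", rows)

# Current natural key from the dictionary entry: volume_id + target.
dupes = con.execute("""
    SELECT volume_id, target, COUNT(*) AS n
    FROM quota_rule
    GROUP BY volume_id, target
    HAVING n > 1
""").fetchall()
print(dupes)  # both key values collide

# Candidate natural key: volume_id + target + qtree_id.
dupes_extended = con.execute("""
    SELECT volume_id, target, qtree_id, COUNT(*) AS n
    FROM quota_rule
    GROUP BY volume_id, target, qtree_id
    HAVING n > 1
""").fetchall()
print(dupes_extended)  # no collisions left for these rows
```

Note that on real data with several users per qtree you may also need user_name in the key, since two user quotas can share volume, target, and qtree.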

 

-regards,

chandank