
On running WFA in an environment with many Qtrees

shuhei

I have a question about using WFA in an environment where I have tens of thousands of Qtrees in a cluster.

I would like to check the following two main points.

 

==Q1

Is there an upper limit or restriction on filters? In other words, is there a limit on the size of the CSV file that can be imported?

Also, are there any problems caused by having a very large number of entries in the dictionary?

 

I don't see any restrictions listed in the manual, but if you have actually run WFA at this scale and hit a size-related problem, please let me know what it was.
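
If it helps to probe this empirically, one idea is to generate a synthetic CSV of the expected shape and size and time the import into a test data source. The sketch below is only illustrative; the column layout (qtree name, parent volume, SVM, cluster) is an assumption and would have to match the attributes of the actual dictionary entry.

```python
# Rough sketch: generate a synthetic CSV with tens of thousands of Qtree rows
# to see whether the data source import slows down or fails at this size.
# The column layout (qtree name, parent volume, SVM, cluster) is an assumption;
# it has to match the attributes of the dictionary entry being populated.
import csv

NUM_QTREES = 50_000  # roughly the scale described above

with open("qtree.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for i in range(NUM_QTREES):
        writer.writerow([
            f"qtree_{i:05d}",        # qtree name
            f"vol_{i % 500:03d}",    # parent volume
            f"svm_{i % 10:02d}",     # owning SVM
            "cluster01",             # cluster name
        ])

print(f"wrote qtree.csv with {NUM_QTREES} rows")
```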

 

==Q2

In an environment with a large number of Qtrees, the time to register them into the dictionary (data source acquisition) is expected to be long.

Is there a possibility of conflicts with reference APIs (dictionary references)?

Could the periodically scheduled job registered for the data source and a job executed on demand (via the reference API) end up running at the same time?

If they do conflict, will the one started later be put into a waiting state?

(I assume the behavior differs depending on whether locking is per table or per database.)

 

Basically, jobs are executed sequentially, so I think the REST API will just wait as well.
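
One way I have been thinking of checking this is to call a read-only WFA REST resource in a loop while the scheduled acquisition runs and watch for a latency spike, which would suggest the on-demand call is waiting on a lock. The sketch below is just that idea; the server name, credentials, and the /rest/workflows path are placeholders rather than a confirmed procedure.

```python
# Rough sketch: poll a read-only WFA REST resource while the scheduled data
# source acquisition is running and log the response time. A sudden jump in
# latency would suggest the on-demand call is waiting on the acquisition.
# Server name, credentials, and the resource path are placeholders.
import time
import requests
from requests.auth import HTTPBasicAuth

WFA_BASE = "https://wfa.example.com/rest"   # hypothetical WFA server
AUTH = HTTPBasicAuth("admin", "password")   # placeholder credentials
QUERY_URL = f"{WFA_BASE}/workflows"         # assumed read-only resource

for _ in range(30):                         # ~5 minutes at one call per 10 s
    start = time.monotonic()
    resp = requests.get(QUERY_URL, auth=AUTH, verify=False, timeout=300)
    elapsed = time.monotonic() - start
    print(f"status={resp.status_code} latency={elapsed:.1f}s")
    time.sleep(10)
```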

1 Reply

shuhei

For Q1, I was able to confirm that there is no specific upper limit.
However, we are still not sure about Q2.

Is it by design that volumes can be created and deleted through the API while Qtree data is being registered in the database?
In other words, even if a volume is created or deleted during the light-blue step shown in the attached image, will the API call run without waiting?
In our tests the API call never waited, but we would like to know whether that is the expected behavior.
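
In case it is useful for digging further: since WFA keeps its cached data in a MySQL database, one way to see whether the acquisition actually holds blocking locks is to look for lock waits while the Qtree load is running. The connection details and account below are placeholders, and the exact lock-wait view depends on the MySQL version, so treat this as a rough sketch rather than a supported procedure.

```python
# Rough sketch: while the Qtree acquisition is loading the cache tables, look
# for MySQL transactions that are blocked waiting on InnoDB locks. Any rows
# returned would mean "the later job waits"; an empty result while volumes are
# being created/deleted via the API would match the behaviour we observed.
# Host, port, and the account are placeholders; the lock-wait view shown here
# (information_schema.innodb_lock_waits) applies to MySQL 5.x.
import mysql.connector

conn = mysql.connector.connect(
    host="127.0.0.1", port=3306,            # assumed local WFA MySQL instance
    user="wfa_readonly", password="***",    # hypothetical read-only account
)
cur = conn.cursor()
cur.execute(
    "SELECT r.trx_id, r.trx_mysql_thread_id, r.trx_query "
    "FROM information_schema.innodb_lock_waits w "
    "JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id"
)
for trx_id, thread_id, query in cur.fetchall():
    print(f"waiting transaction {trx_id} (thread {thread_id}): {query}")

cur.close()
conn.close()
```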
