Sorry for the long delay; I decided I'd better follow up with Liquidware directly to make sure I was not making assumptions. Here is the data I received:
We collect Read IOPS, Write IOPS and IO transfer rate (Kb/s) for both applications and machines.
Our CID keys (agents) collect the information locally on the machine, then send it back to our Hub. Each agent follows two schedules:
Call back frequency: how often the agent sends the information back to the Hub.
Sampling interval: how often we capture the local data (for continuous metrics such as CPU, memory, disk IOPS, etc.).
The sampling interval basically determines how granular the information will be and can be set as low as 1 minute.
For each sampling interval, we collect the average resource usage during that period, NOT the value at the time of collection.
In other words, we ask the system for the past sampling interval's AVG value with a single API call every sampling interval.
The system returns the past X minutes' AVG value, not the current realtime value; X may be as low as 1 minute.
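To make that sampling behavior concrete, here is a minimal sketch in Python. It is a simulation only, not Liquidware's API: `past_interval_average` stands in for the single call that returns the average over the past X minutes of raw readings rather than the latest instantaneous value.

```python
def past_interval_average(readings, interval_min):
    """Simulate the single API call made every sampling interval.

    `readings` is a list of raw per-minute metric values (hypothetical);
    the call returns the AVG over the past `interval_min` minutes,
    NOT the most recent realtime value.
    """
    window = readings[-interval_min:]          # only the past X minutes
    return sum(window) / len(window)           # the averaged sample

# Raw per-minute Read IOPS readings (made-up numbers for illustration)
readings = [10, 20, 30, 40, 50]
sample = past_interval_average(readings, 5)    # 5-minute sampling interval
```

Note that with a 5-minute window the returned sample (30 here) differs from the current reading (50), which is exactly the distinction the vendor is making.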
Example: 1-hour call back frequency, 5-minute sampling interval.
For each application, the CID key will report 12 values representing the average metric (Read IOPS, Write IOPS, …) observed over each 5-minute interval, plus one peak value, plus the average over the call back period ((value1 + value2 + … + value12) / 12).
The peak is the highest average value among the 12 samples.
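The hourly report described above can be sketched as follows. This is a simulation under the stated numbers (12 five-minute samples per call back), not Liquidware code, and the IOPS values are made up for illustration.

```python
def summarize_callback(samples):
    """Given the 12 five-minute average samples collected during one
    call back period, build what the CID key reports to the Hub:
    the per-sample averages, the peak, and the overall average.
    """
    peak = max(samples)                     # highest 5-minute average
    overall = sum(samples) / len(samples)   # (value1 + ... + value12) / 12
    return {"samples": samples, "peak": peak, "average": overall}

# 12 hypothetical Read-IOPS averages, one per 5-minute interval in an hour
iops = [120, 135, 110, 150, 300, 140, 130, 125, 160, 145, 155, 138]
report = summarize_callback(iops)
```

Here the brief spike to 300 survives as the peak value even though the hourly average smooths it out, which is the point of sending both.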
This method ensures a very accurate average and that any peaks are captured.