I have an NFS volume used for Oracle backups. To calculate the amount of data being written to the volume with a NetApp tool, I pulled up the MBps graph in OCUM and, based on the start and end times and the average MBps, computed the total as: total = duration x avg MBps.
Are there any problems with this method of estimating the total amount?
Is this just a test exercise? Taking average resource values for calculations is fine for performance metrics such as IOPS and throughput, but I wouldn't rely on it for used space. I suggest taking the estimated figure from your calculation and comparing it with the actual used space (to see the deviation) from the console output or from OCUM. In fact, OCUM also shows you the used data space, but it's not real-time; the data is pulled at the polling interval.
I am not asking how much space has been used, but how much has been written to the NFS volume during this period of time, because a lot of the data gets overwritten, and they also keep one month's worth of data.
Your query is clearer now. That's a perfectly reasonable method for establishing the amount of data written to the volume over a given time. Now, how much of that written data is actually new blocks can be gauged by looking at the snapshot delta.
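To illustrate the method discussed above, here is a minimal sketch of the duration-times-average calculation, assuming you have exported the write-MBps samples from OCUM at a fixed polling interval (the interval and sample values below are hypothetical):

```python
# Hypothetical OCUM write-throughput samples, polled every 5 minutes (assumption).
poll_interval_s = 300
mbps_samples = [42.0, 55.5, 60.2, 48.7, 51.1]  # MB/s readings over the backup window

# Method from the thread: total = duration x average MBps.
duration_s = poll_interval_s * len(mbps_samples)
avg_mbps = sum(mbps_samples) / len(mbps_samples)
total_mb_avg = duration_s * avg_mbps

# Equivalent per-sample sum; identical for evenly spaced samples.
total_mb_sum = sum(m * poll_interval_s for m in mbps_samples)

print(f"estimated data written: {total_mb_avg:.0f} MB")
```

Note that both forms give the same figure when samples are evenly spaced; the estimate will still miss short bursts that fall between polling intervals, which is why comparing against the snapshot delta is a useful sanity check.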