I have a question about NVRAM CPs that I hope someone can clarify.
When a CP occurs, does it write out the data sequentially in FIFO order? E.g. say NVRAM capacity is 100 units; in one CP interval, writes destined for SATA disks fill 35 units of NVRAM, and in the same interval, before the CP, there are 10 units of SAS writes. Will the SAS writes be queued behind the SATA writes when NVRAM is flushed to disk?
If so, can SAS writes be slowed down on controllers that have mixed SAS and SATA disks?
It is not a question of "ahead" or "behind". A consistency point happens as a single transaction, which means it waits for the slowest drives to complete. Even if the SAS writes have finished, the CP is not complete until SATA catches up.
OTOH, unless this causes back-to-back CPs, I do not see how it matters. Whether a CP completes in 0.5 seconds or 9.5 seconds makes no real difference, as long as it finishes before the second half of NVRAM fills up.
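To make that condition concrete, here is a rough sketch (hypothetical numbers, not actual FAS behavior): a CP is a single transaction gated by the slowest drive tier, and back-to-back CPs only occur if that flush takes longer than incoming writes take to fill the other NVRAM half.

```python
# Rough model of the back-to-back CP condition (hypothetical numbers).
# One NVRAM half is flushed while new writes fill the other half.

def cp_flush_seconds(tier_data_mb, tier_rate_mb_s):
    """The CP is one transaction: it takes as long as the slowest tier."""
    return max(data / rate for data, rate in zip(tier_data_mb, tier_rate_mb_s))

def back_to_back(half_size_mb, incoming_mb_s, flush_seconds):
    """Back-to-back CPs occur if the other half fills before the flush ends."""
    fill_seconds = half_size_mb / incoming_mb_s
    return flush_seconds >= fill_seconds

# Example mirroring the question: 350 MB bound for slow SATA (40 MB/s),
# 100 MB bound for fast SAS (200 MB/s); the CP is gated by SATA.
flush = cp_flush_seconds([350, 100], [40, 200])   # max(8.75, 0.5) = 8.75 s
print(back_to_back(half_size_mb=500, incoming_mb_s=50, flush_seconds=flush))
```

With these numbers the flush takes 8.75 s while the other half takes 10 s to fill, so the CP still completes in time even though SATA stretched it out, which is the point above.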
Thanks for the updates. The problem I fear is with our financial modelling applications: a few nodes spit out data constantly at about 50 MB/s of writes for a number of hours. With 4-5 of these nodes writing at the same time, it adds up and causes back-to-back CPs. We then see delayed-write failures in Windows, at which point 8 times out of 10 the application falls over and the run has to be started again. This pushes 200-250 MB/s of writes through a FAS3240 with 90-disk 15k SAS aggregates.
The same controller also has SATA disks running at about 90% utilization, while the SAS disks sit at 20-30%. The dataset with the heavy writes is going to SAS.
What would be the best way to solve performance issues for these large write workloads, other than giving them their own filers? 🙂