Data Backup and Recovery Discussions
A customer of ours has a Domino server for message archival.
So far it holds around 80'000 data files spread between Domino data and DAOS (about 2 TB data, 3 TB DAOS).
While setting backup mode, the debug log reports:
Insufficient memory - NSF pool is full.
That then causes the job to fail with
[ltd-00010] Errors encountered while opening DBs:
followed by a list of all the databases.
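For what it's worth, one way to see whether Domino really is running out of pool memory at the moment Snap Creator puts the databases into backup mode is to watch the server's own memory statistics from the Domino console. This is a generic diagnostic step, not specific to Snap Creator:

```
show stat mem       rem dumps the Mem.* statistics (allocated, availability, etc.)
```

Comparing those numbers before and during the backup-mode run should show whether the pool exhaustion is real or transient.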
The snapshot itself can still be taken, but snapping inconsistent databases is hardly desirable.
Does anybody have a reference implementation at a similar scale we could use to align our settings?
Or maybe a good hint on what to tweak in Domino for more stable operation, or to speed up setting backup mode.
Something to speed up processing after DBIIDs have changed would also be welcome: the backup has to be retaken after an estimated 10'000 databases got new IDs, and so far that takes roughly 4-5 hours.
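One thing worth checking on the DBIID churn (an aside from me, not something the original poster mentioned): the DBIID is typically reassigned by copy-style compaction, while in-place compaction preserves it. If the 10'000 new IDs come from a scheduled compact task, switching its options may reduce how often a new full backup is forced. A sketch of the distinction, as I understand the compact switches on 8.5:

```
load compact -b    rem in-place, recovers space, keeps the DBIID
load compact -B    rem in-place with file-size reduction; with transaction
                   rem logging enabled this assigns a new DBIID
load compact -c    rem copy-style compaction; assigns a new DBIID
```

Verify against the Domino Administrator help for your exact release before changing scheduled maintenance.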
To me it looks quite clearly like a "put in more memory" situation, but our customer claims this is something NetApp has to fix.
Any suggestion to help persuade him otherwise is appreciated.
This sure looks like an IBM Service Request that the customer needs to drive with IBM.
More details here.
Thanks for that link. I forwarded it to the customer for consultation.
Any best-practice settings for this kind of server you could share?
I hope "For example, running a mail server with 1000 users on 1GB of RAM." doesn't mean we have to install 80 GB of RAM in that machine.
Thanks in advance
PS: I found the following pool parameters in the current notes.ini:
I tend to say they are way too small.
By the way, it's Domino 8.5 on Windows Server 2008 R2 with 32 GB RAM.
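Since the actual values didn't make it into the post, here is an illustrative notes.ini fragment with the memory-related parameters usually discussed for this error. The parameter names are real Domino settings, but the values below are placeholder guesses for a 64-bit 8.5 server with 32 GB RAM, not the customer's actual configuration and not a tuning recommendation:

```ini
; fraction of physical RAM Domino may use (illustrative value)
PercentAvailSysResources=80
; NSF buffer pool size in MB (illustrative value)
NSF_Buffer_Pool_Size_MB=1024
; maximum number of databases held in the DB cache (illustrative value)
NSF_DbCache_Maxentries=10000
```

With 80'000 databases, the DB cache entry count in particular is worth reviewing, since the default is sized for far smaller servers.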
As Siva mentioned, this is a Domino error that Snap Creator is just passing on.
In this case we're acting as the messenger - Domino is reporting the insufficient-memory message.
They should certainly consider an IBM PMR if they don't have an in-house admin who can handle this.
Here is a link that may help: http://www-10.lotus.com/ldd/dominowiki.nsf/dx/Domino_Server_performance_troubleshooting_best_practices#Memory
Domino will eat up as much memory as you can throw at it.
There isn't really much best practice that can be provided, as workloads differ from company to company.