Active IQ Unified Manager Discussions
Hello.
I am working on a test system with a customer database of about 100 controllers in their network. OC 5.0 installed fine and has been working, but moving to the latest release has been troublesome. Searching the Support site for the error string below has yielded little help.
Has anyone else gotten this message?
[root@somelinuxhost netappfiles]# ./occore-setup-5-0-2-linux-x64.sh
Preparing to install OnCommand Core Package
Unpacking files needed for the installation ...
We recommend that you back up your DataFabric Manager server database before
upgrading. You may skip this step if you already have a recent backup.
Would you like to back up your database now? [y,n]: n
Beginning the OnCommand Core Package installation ...
warning: waiting for transaction lock on /var/lib/rpm/__db.000
Additional info:
[root@sjcfilermon01 ~]# dfm about
Version 5.0 (5.0)
Executable Type 64-bit
Serial Number 1-secret number
Edition Standard edition of DataFabric Manager server
Administrator Name root
Host Name dfmhost.company.com
Host IP Address 127.0.0.1
Host Full Name sjcfilermon01.sjc.ebay.com
Node limit 250 (currently managing 102)
Operating System Red Hat Enterprise Linux Server release 5.3 (Tikanga) 2.6.18-128.1.16.el5 x86_64
CPU Count 8
System Memory 31744 MB (load excluding cached memory: 11%)
Installation Directory /opt/NTAPdfm
39.0 GB free (39.3%)
Perf Data Directory /data/perfdata
Data Export Directory /opt/NTAPdfm/dataExport
Database Backup Directory /data
Reports Archival Directory /opt/NTAPdfm/reports
Database Directory /data
56.3 GB free (42.1%)
Licensed Features DataFabric Manager server: installed
Installed Plugins Storage System Config 6.5.1 (6.5.1) - storage systems and vFilers
Storage System Config 6.5.2 (6.5.2) - storage systems and vFilers
Storage System Config 6.5.3 (6.5.3) - storage systems and vFilers
Storage System Config 6.5.4 (6.5.4) - storage systems and vFilers
Storage System Config 6.5.5 (6.5.5) - storage systems and vFilers
Storage System Config 6.5.6 (6.5.6) - storage systems and vFilers
Storage System Config 6.5.7 (6.5.7) - storage systems and vFilers
Storage System Config 7.0 (7.0.0.1) - storage systems and vFilers
Storage System Config 7.0.1 (7.0.1.1) - storage systems and vFilers
Storage System Config 7.0.2 (7.0.2) - storage systems and vFilers
Storage System Config 7.0.3 (7.0.3) - storage systems and vFilers
Storage System Config 7.0.4 (7.0.4) - storage systems and vFilers
Storage System Config 7.0.5 (7.0.5) - storage systems and vFilers
Storage System Config 7.0.6 (7.0.6) - storage systems and vFilers
Storage System Config 7.0.7 (7.0.7) - storage systems and vFilers
Storage System Config 7.1 (7.1.0.1) - storage systems and vFilers
Storage System Config 7.1.1 (7.1.1.1) - storage systems and vFilers
Storage System Config 7.1.2 (7.1.2.1) - storage systems and vFilers
Storage System Config 7.1.3 (7.1.3) - storage systems and vFilers
Storage System Config 7.2 (7.2) - storage systems and vFilers
Storage System Config 7.2.1 (7.2.1.1) - storage systems and vFilers
Storage System Config 7.2.2 (7.2.2) - storage systems and vFilers
Storage System Config 7.2.3 (7.2.3) - storage systems and vFilers
Storage System Config 7.2.4 (7.2.4) - storage systems and vFilers
Storage System Config 7.2.5 (7.2.5.1) - storage systems and vFilers
Storage System Config 7.2.6 (7.2.6.1) - storage systems and vFilers
Storage System Config 7.2.7 (7.2.7) - storage systems and vFilers
Storage System Config 7.3 (7.3) - storage systems and vFilers
Storage System Config 7.3.1 (7.3.1.1) - storage systems and vFilers
Storage System Config 7.3.2 (7.3.2) - storage systems and vFilers
Storage System Config 7.3.3 (7.3.3) - storage systems and vFilers
Storage System Config 7.3.4 (7.3.4) - storage systems and vFilers
Storage System Config 7.3.5 (7.3.5) - storage systems and vFilers
Storage System Config 8.0 (8.0) - storage systems and vFilers
Hi,
What is the output of "ps -aef | grep -i rpm" on your DFM host?
Did you run "dfm rpm" manually before running the script ./occore-setup-5-0-2-linux-x64.sh?
Thanks
Nikhil
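To expand on the diagnostic question above, a quick one-liner along these lines would show whether any rpm-related process is still running (and therefore could be holding the transaction lock). This is a minimal sketch; the `[r]` in the pattern is just a common trick to keep grep's own command line out of the results:

```shell
# Look for any process that might be holding the RPM transaction lock.
# The [r] in the pattern keeps grep's own command line out of the results.
ps -aef | grep -i '[r]pm' || echo "no rpm-related processes running"
```

If this prints anything other than the "no rpm-related processes" message, wait for (or investigate) that process before retrying the installer.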
Most often it means that another instance of RPM is already running (note that it could also be something like a background upgrade checker, not necessarily the rpm command itself). This is usually a transient condition and goes away automatically.
If you have verified that no other RPM-related program is active, it could mean an unclean shutdown of a previous RPM run (e.g. kill -9). That leaves the RPM database inside an open transaction. The files __db.000 and others are effectively Berkeley DB transaction logs. They are normally removed on reboot, so the brute-force fix is simply to do the same manually (rm /var/lib/rpm/__db.*). The more prudent approach is to try to recover the database: if you have the Berkeley db-utils installed, try "db_recover -h /var/lib/rpm/", which should replay the logs and remove them. The command could be db45_recover or similar, because multiple versions of db-utils can coexist.
Finally, the worst case is when the installation script tries to invoke RPM from within another RPM invocation. But I would expect that to show up on every system, so hopefully that is not the case here.
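As a concrete illustration of the cleanup described above, here is a minimal sketch run against a scratch directory standing in for /var/lib/rpm, so it is safe to try. On the real host you would only do this after confirming no rpm process is active, and the db_recover binary name may differ (db45_recover, etc.), as noted:

```shell
# Sketch of the brute-force cleanup for a stale RPM transaction lock.
# A scratch directory stands in for /var/lib/rpm so this is safe to try;
# on a real system, first confirm that no rpm process is still running.
RPMDB=$(mktemp -d)
touch "$RPMDB/__db.000" "$RPMDB/__db.001"   # stand-ins for the stale Berkeley DB logs

# The gentler option, if the Berkeley db-utils are installed, would be:
#   db_recover -h "$RPMDB"
# which replays the transaction logs and removes them.

# Brute force, equivalent to what happens on reboot:
rm -f "$RPMDB"/__db.*

ls -A "$RPMDB"    # prints nothing: the stale lock files are gone
```

On the real host you would point at /var/lib/rpm itself, and could follow up with `rpm --rebuilddb` if the database indexes look damaged.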
Hello ... sorry I was not able to report back right away; I got swept away by higher-priority items.
The short answer is that I rebooted the host; it had been up for 500+ days, and it could have been a resource issue, a memory buffer, etc. Impossible to say at the moment. I also killed off the __db.000 files and rebuilt the RPM database.
After rebooting, it worked like a charm and the upgrade completed in 20 minutes.
This was a test machine, so rebooting was easy (no change request or impact on dependent services), but rebooting a production system may not be as easy next time. There will be more upgrades in the future, and we will see how they go.