I have done this task often; how it goes depends on the nature of your application.
I would say there is a performance hit, but not big enough for me to notice. In other words, yes, I have had to increase the number of inodes.
I am assuming you are using 7-Mode. There are ways to do this, but they have caveats.
There is a rule of thumb for this: you can increase the base inode count by 20% once, then increase the resulting total by another 20% once more. After this second increase, it is not recommended to increase again.
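To make the 20/20 rule concrete, here is a small sketch of the arithmetic with a hypothetical starting count (the 31 million figure is made up for illustration; in 7-Mode the actual count is viewed and set with the `maxfiles` command):

```python
# Hypothetical example: a volume whose current (base) inode count is 31 million.
base_inodes = 31_000_000

# First increase: the base plus 20% of the base.
first_increase = int(base_inodes * 1.20)

# Second (and final recommended) increase: the new total plus 20% of that total.
second_increase = int(first_increase * 1.20)

print(first_increase)   # 37200000
print(second_increase)  # 44640000
```

Past `second_increase`, the guideline in this post says stop; plan a data migration instead.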
You must be very careful here, however. NDMP jobs of any type are notoriously sensitive to inode counts, which means backups and ndmpcopy will take longer on the same-size volume once you increase the inode count. SnapVault and qtree SnapMirror jobs are also affected by increasing inodes, as they have an NDMP component.
Once you increase the count, there is really no way to go back. So if you run into performance issues, there is only one fix: migrate the data to other volumes using host-based copies, then destroy the original volume.
Let me state this out in the open: I have done the 20/20 increase many times and have not seen major problems beyond the backup times increasing. I have also had my team forced to go beyond the 20/20 increase, and seen a controller taken down simply by running an "ls" (the Unix equivalent of "dir"). The guidelines are in place for a very good reason and should be followed.
Lastly, there used to be a bug where volumes over 1TB in size did not have their inode count calculated properly. It was not scheduled to be fixed, so if this is a large volume, you may have to do the required calculation yourself and set the inode count appropriate to your volume size. The bug ID is 199233.
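For the manual calculation, a rough sketch follows. It assumes the commonly cited WAFL default of roughly one inode per 32KB of volume space; that ratio is my assumption, not something from the bug report, so verify it against the documentation for your ONTAP release before setting anything:

```python
# Sketch of the manual inode calculation for a large volume.
# ASSUMPTION: ~1 inode per 32KB of volume space (typical WAFL default;
# confirm the exact ratio for your ONTAP version).

volume_size_kb = 2 * 1024 * 1024 * 1024  # hypothetical 2TB volume, expressed in KB

inode_count = volume_size_kb // 32  # one inode per 32KB

print(inode_count)  # 67108864
```

Compare the result against what `maxfiles` reports for the volume; if the reported count is far below this, the volume may have been hit by the miscalculation described above.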