Data ONTAP Discussions


Netapp migration from 7Mode to Ontap 9.3P4 NFS3 issue with Universe Database

I'm not certain this is the correct place to post this query but here goes.

We recently, this past weekend, finally migrated the last of our systems off of 7-Mode, from an 8040 to an AFF A200 running ONTAP 9.3P4. Immediately after this move we started to experience issues with one of our database systems, "Rocket UniVerse". The system uses an NFSv3 volume.


Since the 7MTT move this particular system has been suffering performance problems. It seems that the database is unable to effectively open multiple files at the same time to create joins and return query results. If we open up individual files and query those, the problem does not manifest itself. It only appears when multiple files are opened and read at the same time.

If anyone has seen something similar to this, I'd be grateful for any information that can be shared.

 

I have not seen any latency issues other than some NFSv3 latency on readdirplus and high IOPS from getattr.

 

Thank you,

larry


Re: Netapp migration from 7Mode to Ontap 9.3P4 NFS3 issue with Universe Database

Is this dNFS/Oracle, or just a VM sitting on a datastore?

 

 

Re: Netapp migration from 7Mode to Ontap 9.3P4 NFS3 issue with Universe Database

Thanks for the reply.   

 

It's actually a Solaris Zone in a volume.

 

The issue has turned out to be a bug in the UniVerse database with NFSv3. The DB doesn't work properly with inode numbers that fall outside the signed 32-bit range.

 

When we moved to the cluster from 7-Mode, it appears that all the files in the volume were assigned inode numbers in the 2-billion range. The DB chokes on this because it expects values that fit in a signed 32-bit integer, not an unsigned one.

 

The DB truncates the inode number, and that causes it to not work properly.
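To illustrate the failure mode described above, here is a minimal sketch (my own, not from the UniVerse code) of what happens when an inode number in the 2-billion range is reinterpreted as a signed 32-bit integer. The specific value is hypothetical; any inode between 2^31 and 2^32 triggers it.

```python
import struct

# Hypothetical inode number in the 2-billion-plus range, as seen after
# the move to the cluster (any value between 2**31 and 2**32 shows the bug).
inode = 2_500_000_000

# Reinterpret the low 32 bits as a *signed* 32-bit integer, which is
# effectively what a DB built around signed 32-bit inode fields does.
truncated = struct.unpack('<i', struct.pack('<I', inode & 0xFFFFFFFF))[0]

print(inode)      # 2500000000
print(truncated)  # -1794967296 -- a negative, nonsensical inode number
```

A negative (or wrapped) inode number like this no longer matches what the filesystem reports, which is consistent with the DB failing once multiple such files are open.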

 

Our immediate fix was to move to NFSv4, which handles inode numbering a little differently than v3, but ultimately we need to patch the DB to the newer version that fixes this flaw.
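For anyone who wants to check whether a volume has crossed the signed 32-bit inode boundary before hitting this, here is a rough sketch using standard POSIX stat calls. The helper name `find_large_inodes` and the mount path `/mnt/universe_data` are my own placeholders, not anything from NetApp or UniVerse.

```python
import os

# Hypothetical helper: walk a mount point and report any files or
# directories whose inode number exceeds the signed 32-bit range,
# i.e. the values that tripped up the database over NFSv3.
def find_large_inodes(root, limit=2**31):
    hits = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            try:
                ino = os.lstat(path).st_ino
            except OSError:
                continue  # file vanished or permission denied; skip it
            if ino >= limit:
                hits.append((path, ino))
    return hits

# Example: scan a (placeholder) mount and print any offenders.
for path, ino in find_large_inodes('/mnt/universe_data'):
    print(f'{ino}\t{path}')
```

If this reports any hits, an application that stores inodes in signed 32-bit fields will see wrapped values for those files.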

 

LB
