ONTAP Discussions

Storage efficiency in Oracle databases

Jackiez
Hi experts,

Would you share some experience with storage efficiency for Oracle databases? The customer wants to achieve 3:1 ONTAP storage efficiency.

Rgds,
Jackie

6 REPLIES

TMACMD

For starters, make sure they are not encrypting their databases. That's a sure way to get ZERO efficiency. The whole point of encryption is to eliminate patterns in the data, which is exactly what the efficiency features count on.
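A quick way to see this point in action is to compare how well patterned data compresses versus random bytes. This is just a minimal Python sketch, nothing ONTAP-specific; it uses os.urandom as a stand-in for encrypted output, since well-encrypted data is statistically indistinguishable from random bytes:

import zlib
import os

# Highly repetitive data: the kind of patterns compression and dedupe exploit.
patterned = b"ORACLE_BLOCK" * 8192

# Random bytes as a stand-in for encrypted data: no patterns left to find.
encrypted_like = os.urandom(len(patterned))

for name, data in [("patterned", patterned), ("encrypted-like", encrypted_like)]:
    ratio = len(data) / len(zlib.compress(data))
    print(f"{name}: ~{ratio:.1f}:1")

The patterned buffer compresses by orders of magnitude; the random one stays at roughly 1:1, which is exactly what happens to efficiency ratios when the database is encrypted before it reaches the storage layer.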

Jackiez

Thanks, TMACMD!

dkrenik

 There's a whole lot of "it depends" here...

Are the workloads on these DBs transactional? Analytical? Which Oracle DB options (assuming Enterprise Edition) is the customer utilizing? Etc.

Tastes like chicken

Jackiez

Hi dkrenik,

Yes, the workload is transactional, on Oracle EE.

steiner

If the data isn't already encrypted or compressed, then 3:1 is about the median. As Dave said, there's also a lot of "it depends". We evaluated some internal production datafiles here at NetApp, taken at random, and we found between 2:1 and 6:1 efficiency.

We've also had customers with a lot of datafiles that were largely empty, and that sort of data gets about 80:1 efficiency because all that actually gets stored is the datafile block headers/trailers.

We also had a support case a while back where efficiency was basically 1:1. The data wasn't compressed; it was just a massive index of flat files stored elsewhere. It was an extremely efficient way to store data, which also made the bytes super-random. Conceptually it was like compressed data, but it wasn't "compression" as we know it.
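If you want a rough read on how compressible a given datafile is before promising a ratio, a small sampling script can help. This is only a sketch: zlib on fixed chunks is a crude proxy for ONTAP's inline compression (and it ignores deduplication and compaction entirely), the 8 KB chunk size is my assumption, and the datafile path in the usage comment is a made-up example:

import sys
import zlib

CHUNK = 8192  # assumed chunk size; a crude analog of a compression group

def estimate_ratio(path, sample_every=64):
    """Compress every Nth chunk of a file and report a rough zlib ratio."""
    raw = packed = 0
    with open(path, "rb") as f:
        i = 0
        while True:
            chunk = f.read(CHUNK)
            if not chunk:
                break
            if i % sample_every == 0:
                raw += len(chunk)
                packed += len(zlib.compress(chunk, 1))
            i += 1
    return raw / packed if packed else 0.0

if __name__ == "__main__":
    # e.g.: python estimate_ratio.py /u01/oradata/ORCL/users01.dbf
    print(f"~{estimate_ratio(sys.argv[1]):.1f}:1 (zlib proxy; excludes dedupe/compaction)")

A result near 1:1 on this test usually means the data is already compressed or encrypted, like the support case above, and no storage-side efficiency setting will change that.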

Jackiez

Hi Steiner,

Thank you for such a profound explanation! Very useful guidance for moving forward!

Rgds,

Jackie
