ONTAP Discussions

storage efficiency in Oracle database

Jackiez
2,706 Views
Hi experts, would you share some experience with storage efficiency for Oracle databases? The customer wants to achieve 3:1 ONTAP storage efficiency. Rgds, Jackie

6 REPLIES

TMACMD
2,685 Views

For starters, make sure they are not encrypting their databases. That's a sure way to get zero efficiency: the whole point of encryption is to eliminate patterns in the data, and patterns are exactly what storage efficiency counts on.
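
A quick way to see why, as a minimal Python sketch (my illustration, not anything ONTAP-specific: zlib stands in for any pattern-based efficiency engine, and os.urandom stands in for encrypted data):

import os
import zlib

SIZE = 1024 * 1024  # 1 MiB of test data

# Patterned data, standing in for typical unencrypted datafile
# contents: repeated structures compress well.
patterned = (b"ORCL_ROW_HEADER" + b"\x00" * 49) * (SIZE // 64)

# Encrypted data is statistically indistinguishable from random
# bytes, so there is no pattern left for the engine to exploit.
encrypted_like = os.urandom(SIZE)

for label, buf in [("patterned", patterned), ("encrypted-like", encrypted_like)]:
    ratio = len(buf) / len(zlib.compress(buf, 6))
    print(f"{label:15s} ~{ratio:.2f}:1")

The patterned buffer compresses at hundreds to one; the random one lands at roughly 1:1, which is exactly what an encrypted database looks like to the storage layer.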

Jackiez
2,591 Views

Thanks, TMACMD!

dkrenik
2,669 Views

 There's a whole lot of "it depends" here...

Are the workloads on these DBs transactional? Analytical? Which Oracle DB options (assuming Enterprise Edition) is the customer utilizing? Etc.

Tastes like chicken

Jackiez
2,591 Views

Hi dkrenik,

Yes, the workload is transactional, on Oracle EE.

steiner
2,663 Views

If the data isn't already encrypted or compressed, then 3:1 is about the median. As Dave said, there's also a lot of "it depends". We evaluated some internal production datafiles here at NetApp, taken at random, and we found between 2:1 and 6:1 efficiency.
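
If you want a rough feel for where a particular database will land before committing to 3:1, one option is to sample the datafiles and see how compressible they are. A minimal Python sketch, assuming a hypothetical /oradata/PROD datafile directory; zlib is only a crude proxy here, since real ONTAP efficiency also includes deduplication and compaction savings this won't capture:

import zlib
from pathlib import Path

def estimate_ratio(path: Path, chunk_size: int = 1024 * 1024,
                   max_chunks: int = 64) -> float:
    """Compress a sample of the file and return the raw:compressed
    ratio. A crude proxy only: dedupe and compaction savings on top
    of compression are not reflected in this number."""
    raw = compressed = 0
    with path.open("rb") as f:
        for _ in range(max_chunks):
            chunk = f.read(chunk_size)
            if not chunk:
                break
            raw += len(chunk)
            compressed += len(zlib.compress(chunk, 6))
    return raw / compressed if compressed else 0.0

# /oradata/PROD is a made-up location; point this at real datafiles.
for datafile in sorted(Path("/oradata/PROD").glob("*.dbf")):
    print(f"{datafile.name}: ~{estimate_ratio(datafile):.1f}:1")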

 

We've also had customers with a lot of datafiles that didn't have data in their blocks, and that sort of data gets about 80:1 efficiency because all that actually gets stored is the datafile block header/trailer.
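
That 80:1 behavior is easy to reproduce: an initialized-but-empty block is a few header/trailer bytes wrapped around zeros, and anything pattern-based collapses it. A sketch along those lines, with an assumed 8192-byte block and made-up header/trailer contents (not the real Oracle block format):

import zlib

BLOCK_SIZE = 8192  # a common Oracle datafile block size

# Fake an initialized-but-empty block: small header, small tail,
# zeros in between. Illustrative layout only, not Oracle's.
header = b"\x06\xa2" + b"\x01" * 18   # 20-byte stand-in header
trailer = b"\x01\x06\x00\x00"         # 4-byte stand-in tail
block = header + b"\x00" * (BLOCK_SIZE - 24) + trailer

# 10,000 such blocks: ~78 MiB of "data" that holds nothing.
empty_blocks = block * 10_000
ratio = len(empty_blocks) / len(zlib.compress(empty_blocks, 6))
print(f"~{ratio:,.0f}:1")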

 

We also had a support case a while back where efficiency was basically 1:1. It wasn't compressed; it was just a massive index of flat files stored elsewhere. It was an extremely efficient way to store data that was also super-random. Conceptually it was like compressed data, but it wasn't "compression" as we know it.

Jackiez
2,592 Views

Hi Steiner,

 

Thank you for such a thorough explanation! Very useful guidance for my next steps!

Rgds,

Jackie
