Architecture Solution - Question

knssree

One of my friends reached out to me for a solution to the assignment below. Can someone answer the questions?

You have been tasked with architecting a NetApp storage solution for a new application environment. The environment consists of an Oracle database and CIFS shares holding multimedia image files.

  • The long-term plan for this storage environment is to host multiple customer environments, with the cluster growing across multiple FAS nodes in the future. Keep this in mind when planning the implementation, to take advantage of NetApp storage features and efficiencies (see the SVM sketch after this list).
  • You have 2 x FAS8080 heads.
  • It has been decided that each server will run only a single protocol, SAN or NAS.
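
Since multi-tenancy and protocol separation are explicit requirements, one natural carve-up is one SVM per protocol today, adding per-customer SVMs as the cluster grows. A minimal sketch, assuming ONTAP 9 CLI; the SVM, root volume, and aggregate names are all placeholders:

```
vserver create -vserver svm_san -rootvolume svm_san_root -aggregate aggr1 -rootvolume-security-style unix
vserver create -vserver svm_nas -rootvolume svm_nas_root -aggregate aggr1 -rootvolume-security-style ntfs
```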

 

First, the Oracle database will serve a heavily transactional application.

 

The database will be mounted on a Linux cluster (linux-1 & linux-2) with the following mount points:

 

/DB/data (Oracle datafiles) – 1024 GB

/DB/redo (Oracle online redo logs) – 100 GB

/DB/arch (Oracle archived redo logs) – 300 GB

 

As this is a heavily transactional database, it is critical that writes to the redo area have very low latency. Writes to the archive area are less latency-critical, but the DBAs often request that /DB/arch grow to several times its size when they have to keep many more archive logs online than usual. Therefore /DB/arch needs to be expandable to 1.5 TB on request. After a day or so they'll delete the logs, so you can reclaim the space. The data area must sustain a high IOPS rate.
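
This grow-then-shrink pattern maps naturally onto thin provisioning plus volume autosize in ONTAP. A minimal sketch, assuming ONTAP 9 CLI and placeholder names (if /DB/arch is a LUN rather than an NFS export, the LUN itself would also need resizing, but the volume-level mechanics are the same):

```
volume create -vserver svm_san -volume db_arch -aggregate aggr1 -size 300GB -space-guarantee none
volume autosize -vserver svm_san -volume db_arch -mode grow_shrink -minimum-size 300GB -maximum-size 1536GB
```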

 

To keep things simple, assume:

  • The storage will be mounted by two Linux hosts.
  • Standard Active/Passive Veritas clustering (see the igroup sketch after this list).
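
If the database ends up on FC or iSCSI LUNs, the usual pattern for an Active/Passive pair is a single igroup containing both hosts' initiators, so the passive node sees the same LUNs on failover. A minimal sketch, assuming ONTAP 9 CLI, with placeholder names and WWPNs:

```
lun igroup create -vserver svm_san -igroup linux_cluster -protocol fcp -ostype linux -initiator <wwpn-linux-1>,<wwpn-linux-2>
lun create -vserver svm_san -path /vol/db_data/data_lun -size 1024GB -ostype linux
lun map -vserver svm_san -path /vol/db_data/data_lun -igroup linux_cluster
```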

 

Second, the CIFS environment will require a 10 TB CIFS share along with a 40 TB share.

 

The 10 TB CIFS share will be used for initial storage of the image files while they are manipulated and analysed, so it has a high-performance, low-latency requirement. The 40 TB share will be used for long-term storage, with storage efficiency and capacity more important than performance.
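
The 40 TB tier is where ONTAP storage efficiency does the heavy lifting. A minimal sketch, assuming ONTAP 9 CLI and placeholder names, with dedupe and compression enabled on a thin-provisioned volume:

```
volume create -vserver svm_nas -volume img_archive -aggregate aggr1 -size 40TB -space-guarantee none -junction-path /img_archive
volume efficiency on -vserver svm_nas -volume img_archive
volume efficiency modify -vserver svm_nas -volume img_archive -compression true
vserver cifs share create -vserver svm_nas -share-name img_archive -path /img_archive
```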

 

1) How many shelves would you buy, of what type, and why?

2) How would you configure your physical environment and why?


AlexDawson

It all comes down to budget, which this question doesn't cover.

 

The question proposes roughly a 2.5 TB DB workload and a 50 TB CIFS workload.
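
(That's 1024 GB data + 100 GB redo + up to 1536 GB arch, about 2.6 TB at peak, plus 10 TB + 40 TB = 50 TB of CIFS.)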

 

I'd buy a shelf of 3.8TB SSDs, which would give about 70TB of usable space, then use QoS on the long-term archive vols if needed. (Rough math: 24 x 3.84 TB is roughly 92 TB raw; after RAID-DP parity, spares, and WAFL overhead you land in the neighbourhood of 70 TB usable.)
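
The QoS bit might look like this; a minimal sketch, assuming ONTAP 9 CLI, with the policy-group name and the IOPS ceiling as placeholder values:

```
qos policy-group create -policy-group archive_limit -vserver svm_nas -max-throughput 1000iops
volume modify -vserver svm_nas -volume img_archive -qos-policy-group archive_limit
```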

 

But I suspect the answer whoever wrote this is looking for will talk about using SAS disks for some workloads and SATA drives for others; even so, the cost/benefit of a simple config with just 3.8TB SSDs versus multiple shelves of different drive types should win out at this point. There's no point making things hard for yourself.

 

Spinning SAS will eventually go the way of the dodo - think of our 15TB SSDs: one shelf of them is usually cheaper to buy and operate than 15 shelves of 900GB SAS drives. SATA drives, meanwhile, keep getting larger while per-drive IOPS stay flat, so the IO available per TB keeps shrinking; even they will eventually be relegated to secondary-only storage.

 

Obligatory "imho" for all this stuff, for certain values of "h".
