Tech ONTAP Blogs

Hardware planning for a Bare Metal StorageGRID® Installation

ajeffrey
NetApp Alumni

If you are familiar with the NetApp® StorageGRID® product line, you are probably aware that NetApp offers several appliances. You may not be as familiar with the software-only offering, though, so let’s dive into what it is and how you can deploy it. StorageGRID can be installed on the Ubuntu, CentOS, RHEL, and Debian operating systems. Always check the NetApp Interoperability Matrix for StorageGRID software at the time of installation to see which versions are supported.

 

The recently released TR-4882 Bare Metal Quick Start guide provides a practical, step-by-step set of instructions that produce a working installation of NetApp® StorageGRID®. The installation can be on bare metal or on virtual machines running CentOS or RHEL 7.8. The approach is an opinionated installation of six StorageGRID containerized services onto three machines in a single grid network layout. It’s a good proof-of-concept installation, not the one “right” design; it’s simply easier to wrap your head around the installation with a concrete example. If you’re like me, you usually skip the docs and go straight to the example, and you’re also in a hurry to get your hands on a working StorageGRID! This is a Quick Start, after all, so let’s get started.

 

The Bare Metal guide walks you through the basic software and hardware requirements for the installation and suggests hardware node sizing based on them. This can be very useful in planning special use cases where you might want to optimize your hardware.

Before beginning the deployment, let’s look at the compute, storage, and networking requirements for NetApp® StorageGRID® software. StorageGRID runs as a containerized service within Docker. In this model, some requirements refer to the host operating system (the OS that hosts Docker, which runs the StorageGRID software), and some resources are allocated directly to the Docker containers running within each host. In this deployment, to maximize hardware usage, we deploy two services per physical host. For more, read on…

Compute Requirements

Table 1 shows the supported minimum resource requirements for each type of StorageGRID node.

Table 1) Minimum resources required for StorageGRID nodes.

Node Type | CPU Cores | RAM
Admin     | 8         | 24GB
Storage   | 8         | 24GB
Gateway   | 8         | 24GB

In addition, each physical Docker host should have a minimum of 16GB of RAM allocated to it for proper operation. So, for example, to host any two of the services described in Table 1 together on one physical Docker host, you would do the following calculation:

 

24 + 24 +16 = 64GB RAM

and

8 + 8 = 16 Cores
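As a sanity check, the arithmetic above can be sketched in a few lines of Python. The per-node numbers come from Table 1; which two node types share a host is a choice of this layout, not a fixed rule:

```python
# Minimum per-container requirements from Table 1, plus host OS overhead.
NODE_REQS = {"admin": (8, 24), "storage": (8, 24), "gateway": (8, 24)}  # (cores, GB RAM)
HOST_OS_RAM_GB = 16  # RAM reserved for the Docker host OS itself

def host_requirements(node_types):
    """Return (total_cores, total_ram_gb) for one physical host running the given containers."""
    cores = sum(NODE_REQS[n][0] for n in node_types)
    ram = sum(NODE_REQS[n][1] for n in node_types) + HOST_OS_RAM_GB
    return cores, ram

# Two services per physical host, as in this deployment:
print(host_requirements(["admin", "storage"]))  # (16, 64)
```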

 

Because many modern servers exceed these requirements, we have combined six services (StorageGRID containers) onto three physical servers.

 

Networking Requirements

The three types of StorageGRID traffic are:

  • Grid traffic. The internal StorageGRID traffic that travels between all nodes in the grid. Required. 10GbE or higher recommended.
  • Admin traffic. The traffic used for system administration and maintenance. Optional.
  • Client traffic. The traffic that travels between external client applications and the grid, including all object storage requests from S3 and Swift clients. Optional.

 

You can configure up to three networks for use with the StorageGRID system. Each network type must be on a separate subnet with no overlap. If all nodes are on the same subnet, a gateway address is not required.
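In the bare metal install, each node declares its networks in a node configuration file under /etc/storagegrid/nodes/. As an illustrative sketch only — the interface names and addresses below are assumptions, not values from this document — a storage node using just the Grid Network might look like:

```
# /etc/storagegrid/nodes/sn1.conf  (hypothetical example values)
NODE_TYPE = VM_Storage_Node
GRID_NETWORK_TARGET = bond0.520        # VLAN-tagged bonded interface
GRID_NETWORK_IP = 192.168.50.11
GRID_NETWORK_MASK = 255.255.255.0
GRID_NETWORK_GATEWAY = 192.168.50.1    # omit if all nodes are on the same subnet
```

Admin and Client networks, when used, are declared the same way with ADMIN_NETWORK_* and CLIENT_NETWORK_* keys on their own non-overlapping subnets.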

 

 

Storage Requirements

The nodes each require either SAN-based or local disk devices of the sizes shown below.

 

Table 2) Storage requirements per node type.

Node Type | LUN Purpose | Number of LUNs | Minimum Size of LUN | Manual File System Required | Suggested Node Config Entry
All | Node system space: /var/local (SSD helpful here) | 1 for each node | 90GB | No | BLOCK_DEVICE_VAR_LOCAL = /dev/mapper/ADM-VAR-LOCAL
All Nodes | Docker storage pool at /var/lib/docker for container pool | 1 for each host (physical or VM) | 100GB per container | Yes – ext4 | NA – format and mount as a host file system (not mapped into the container)
Admin | Admin node audit logs: /var/local/audit/export | 1 for each Admin node | 200GB | No | BLOCK_DEVICE_AUDIT_LOGS = /dev/mapper/ADM-Audit
Admin | Admin node tables: /var/local/mysql_ibdata | 1 for each Admin node | 200GB | No | BLOCK_DEVICE_TABLES = /dev/mapper/ADM-MySQL
Storage Nodes | Object storage (block devices): /var/local/rangedb0, /var/local/rangedb1, /var/local/rangedb2 (SSD helpful here) | 3 for each Storage container | 4000GB | No | BLOCK_DEVICE_RANGEDB_000 = /dev/mapper/SN-Db00, BLOCK_DEVICE_RANGEDB_001 = /dev/mapper/SN-Db01, BLOCK_DEVICE_RANGEDB_002 = /dev/mapper/SN-Db02

 

In this example, the disk sizes shown in Table 3 are needed per container type. The requirements per physical host are shown in “Physical Host Layout and Requirements,” later in this document.

 

Table 3) Disk sizes per container type.

Admin Container
Name | Size (GiB)
Docker-Store | 100 (per container)
Adm-OS | 90
Adm-Audit | 200
Adm-MySQL | 200

Storage Container
Name | Size (GiB)
Docker-Store | 100 (per container)
SN-OS | 90
Rangedb-0 | 4096
Rangedb-1 | 4096
Rangedb-2 | 4096

Gateway Container
Name | Size (GiB)
Docker-Store | 100 (per container)
/var/local | 90

 

Physical Host Layout and Requirements

Sample Layout for Three Hosts

By combining the compute and network requirements shown previously, you arrive at a basic hardware list for this installation: three physical (or virtual) servers, each with 16 cores, 64GB of RAM, and two network interfaces. If higher throughput is desired, you can bond two or more interfaces on the Grid or Client network and use a VLAN-tagged interface such as bond0.520 in the node configuration file. If you expect more intense workloads, more memory for both the host and the containers is better.

 

As shown in Figure 1, these servers host six Docker containers, two per host. The RAM is calculated by allowing 24GB per container plus 16GB for the host OS itself.

 

 

Total RAM required per physical host (or VM) is 24 x 2 + 16 = 64GB.

 

Table 4) Disk storage required for hosts 1, 2, and 3.

Host 1 | Size (GiB)
Docker Store: /var/lib/docker (file system) | 200 (100 x 2)
Admin Container: BLOCK_DEVICE_VAR_LOCAL | 90
Admin Container: BLOCK_DEVICE_AUDIT_LOGS | 200
Admin Container: BLOCK_DEVICE_TABLES | 200
Storage Container: SN-OS /var/local (device) | 90
Storage Container: Rangedb-0 (device) | 4096
Storage Container: Rangedb-1 (device) | 4096
Storage Container: Rangedb-2 (device) | 4096

Host 2 | Size (GiB)
Docker Store: /var/lib/docker (shared) | 200 (100 x 2)
Gateway Container: GW-OS /var/local | 100
Storage Container: /var/local | 100
Storage Container: Rangedb-0 | 4096
Storage Container: Rangedb-1 | 4096
Storage Container: Rangedb-2 | 4096

Host 3 | Size (GiB)
Docker Store: /var/lib/docker (shared) | 200 (100 x 2)
Gateway Container: /var/local | 100
Storage Container: /var/local | 100
Storage Container: Rangedb-0 | 4096
Storage Container: Rangedb-1 | 4096
Storage Container: Rangedb-2 | 4096

 

The Docker Store size was calculated by allowing 100GB of Docker storage per container x two containers = 200GB.
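To double-check the totals, a short sketch can sum the per-host disk allocations from Table 4 (the figures are taken directly from the tables above; treating GB and GiB interchangeably here mirrors their loose usage in the tables):

```python
# Per-host disk allocations in GiB, taken from Table 4.
HOSTS = {
    "host1": {"docker_store": 200, "adm_var_local": 90, "adm_audit": 200,
              "adm_mysql": 200, "sn_os": 90, "rangedbs": 3 * 4096},
    "host2": {"docker_store": 200, "gw_os": 100, "sn_os": 100, "rangedbs": 3 * 4096},
    "host3": {"docker_store": 200, "gw_os": 100, "sn_os": 100, "rangedbs": 3 * 4096},
}

def total_gib(host):
    """Total disk (GiB) needed on one physical host."""
    return sum(HOSTS[host].values())

for h in HOSTS:
    print(h, total_gib(h))
```

Unsurprisingly, the three object-storage LUNs (3 x 4096 GiB) dominate each host's footprint.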

 

You now have a properly sized hardware resource list for your NetApp StorageGRID deployment. For details on how to prepare the nodes and the actual deployment process please refer to TR-4882.
