The first ESRP using Clustered Data ONTAP, the FAS3220 21,000-mailbox ESRP, was published in November 2012. Back then, I wrote a very brief blog post announcing its publication. Here I'll discuss its configuration, results, and analysis in more detail.
The Exchange 2010 user profile and DAG (database availability group) configuration is outlined below:
And the FAS3220 storage system configuration is as follows:
Figure 1. Topology and DAG architecture of the FAS3220 mailbox resiliency ESRP testing.
Figure 1 illustrates the design topology used for the FAS3220 ESRP testing. The design is a 21,000-mailbox Exchange 2010 mailbox resiliency storage solution using a six-server DAG (three active and three passive) and two copies of each database (one active and one passive). Because the storage configuration is identical for both copies, only the active copy was built out. The tested scenario is the worst case: all of the active mailboxes reside on the three active servers, and all of the active databases are placed on the first node (fas3220-1).
Figure 1 also shows that inside the first node there are three active Vservers (1, 2, 3), each with its own dedicated aggregate. Each aggregate contains seven database LUNs, and each database LUN holds one database, for a total of twenty-one databases.
Figure 2 shows the expected vs. achieved total database IOPS (the sum of both reads and writes). The achieved IOPS is 49% higher than the expected (targeted) value.
Figure 2. Achieved database IOPS is 49% higher than the expected number (higher is better)
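A quick back-of-the-envelope check puts these numbers in context. Using only figures quoted in this post (the 0.120 IOPS-per-user profile and the 49% overshoot), a short sketch:

```python
# Back-of-the-envelope check of the headline IOPS numbers,
# using only figures quoted in this post.
mailboxes = 21_000
iops_per_mailbox = 0.120   # user profile from the ESRP report
overshoot = 0.49           # achieved IOPS exceeded the target by 49%

target_iops = mailboxes * iops_per_mailbox
achieved_iops = target_iops * (1 + overshoot)

print(f"Target database IOPS:   {target_iops:,.0f}")    # 2,520
print(f"Achieved database IOPS: {achieved_iops:,.0f}")  # 3,755
```

That roughly 1,200-IOPS gap between target and achieved is the headroom referenced in the summary quote below.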
Figures 3 and 4 show the measured read and write latencies, respectively, in comparison to the 20ms upper limit (set by Microsoft). Clearly, both read and write latencies are excellent and well below the 20ms limit.
Figure 3. Database read latency in comparison to the 20ms limit (lower is better)
Figure 4. Database write latency in comparison to the 20ms limit (lower is better)
It is interesting to note that in the design topology (see Figure 1), for the active database copy, there are three mailbox servers, three Vservers, and three aggregates. Why is that?
First of all, three mailbox servers are required because we have 21,000 mailboxes, and each mailbox server typically hosts 10,000 or fewer mailboxes.
Since the FAS3220 ESRP was tested on Clustered ONTAP, Vservers are required. The question is how many. In theory, we could create just one Vserver to service all 21,000 mailboxes. However, it is common practice in Exchange solution design to use a "pod" or "building block" approach. That means establishing a base configuration with a set number of (a) mailboxes on a single server and (b) hard disk drives in an aggregate or aggregates in a storage controller. When more mailboxes are added, one simply replicates the base configuration as needed. This "pod" approach works well in 7-Mode. And the FAS3220 ESRP demonstrates that this approach can be preserved and work equally well in Clustered ONTAP, if we pair a Vserver with each mailbox server. Therefore, the design topology has three Vservers, each servicing I/Os for one mailbox server.
By the same token, even though it is feasible to have one very large aggregate supporting all three active mailbox servers, the "pod" approach requires that each mailbox server/Vserver pair have its own dedicated aggregate or aggregates. Therefore, in the FAS3220 ESRP, there are three aggregates, one per Vserver.
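The pod sizing logic described above can be sketched in a few lines. The per-pod parameters below simply mirror the FAS3220 ESRP layout (7,000 mailboxes and seven databases per server/Vserver/aggregate pod); they are illustrative, not a sizing recommendation:

```python
# Sketch of the "pod" (building-block) sizing logic described above.
# Pod parameters mirror the FAS3220 ESRP layout; they are assumptions
# for illustration, not general sizing guidance.
import math

MAILBOXES_PER_POD = 7_000   # one mailbox server + one Vserver + one aggregate
DATABASES_PER_POD = 7       # seven database LUNs, one database each, per aggregate

def pods_needed(total_mailboxes: int) -> int:
    """Each pod pairs a mailbox server with a dedicated Vserver and aggregate."""
    return math.ceil(total_mailboxes / MAILBOXES_PER_POD)

total = 21_000
pods = pods_needed(total)
print(f"{total:,} mailboxes -> {pods} pods, "
      f"{pods * DATABASES_PER_POD} databases")  # 3 pods, 21 databases
```

Scaling to more mailboxes then just means stamping out additional pods rather than re-architecting the layout, which is the whole point of the building-block approach.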
To sum it up, I'd like to quote the TechOnTap article dated January 8, 2013, by Steven Miller (Senior Technical Director and Platform Architect, NetApp):
“NetApp recently tested the FAS3220 as part of the Microsoft Exchange Solution Reviewed Program (ESRP). We found that the system is capable of supporting 21,000 Exchange 2010 users at 0.120 IOPS per user and a 1.5GB mailbox size in the Mailbox Resiliency (dual-copy) configuration. Since it achieved 49% more IOPS than targeted, it’s clear that the tested solution still had significant IOPS headroom. This result compares favorably with those from competing midrange storage systems.”
Thanks for reading.