
This is the next chapter of our Blog Series: "Why it makes sense for enterprise applications to be hosted on NetApp"

My colleague Andreas Krügel has already written an article about our SQL Server integration, and I am happy to offer you the translation:

 

Today's challenge ...
I think rapidly growing data volumes, driven by new business trends such as Industry 4.0 (the Internet of Things) as well as new, modern business applications, place more demands than ever on the future SQL Server infrastructure. For example:
- Demanding SLAs in terms of availability and, of course, performance
- Faster deployment of database copies to support additional business processes such as analytics/BI, and to speed up test and development cycles.
Using our application integration for SQL Server in conjunction with flash storage, we help customers meet current and future requirements. Our solutions are characterized mainly by the following points:

a) Consistent, low-latency database performance (under 1 ms), even during peak load.
b) A variety of storage efficiency technologies that reduce data (and costs) and extend the life of the flash media.
c) Dramatically reduced backup and, above all, recovery times: seconds or minutes instead of hours.
d) Provisioning of database copies within seconds, without additional storage, to accelerate business processes such as test and development or analytics.

In today's blog I will focus on the topics of Performance (a) and Storage Efficiency (b). Of course, I will also cover the other points; there will be something new in the coming weeks. So stay tuned ;-)

Performance

I think it cannot be dismissed out of hand that a large part of today's database performance problems is related to the storage layer.

But why is that?

In the early days of computing, processor clock speeds were below 1 MHz (e.g. the Intel 4004), so disk access times and latencies were fast enough.
Figure 1 - IBM Disk.png
Here's a picture from the 1950s: the IBM 350, the first hard drive, weighing several hundred kilograms. The available capacity was approximately 5 MB, and the average access time was about 0.6 seconds. You could not buy the drive; it was only available for rent, at several thousand DM per month.

In recent decades, HDD development focused mainly on increasing packing density: more capacity, smaller footprint, lower price. By comparison, the read/write speeds and access times of hard disks have not changed significantly; access times are still in the single-digit millisecond range, an improvement over recent decades by only a factor of ~100.

Not so with the processors: their clock speeds have increased by a factor of roughly 10,000 (from hundreds of kHz to several GHz). And right here the discrepancy in data access arises. Processors can and want to "process" faster, but often have to wait for disk IO.
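To put the gap into numbers, here is a minimal back-of-the-envelope sketch. The clock rate and latency figures are assumptions for illustration, not measurements from this article:

```python
# Rough illustration: how many CPU clock cycles elapse while a single
# IO is outstanding. All numbers are assumed, order-of-magnitude values.
cpu_hz = 3e9  # ~3 GHz modern core (assumption)

latencies_s = {
    "IBM 350 (1956)": 0.6,    # ~0.6 s average access, as noted above
    "15k SAS HDD": 0.005,     # single-digit milliseconds
    "Flash / SSD": 0.0008,    # sub-millisecond, as discussed below
}

for device, seconds in latencies_s.items():
    print(f"{device:>15}: ~{cpu_hz * seconds:,.0f} cycles spent waiting per IO")
```

Even at sub-millisecond flash latencies, a core could in theory execute millions of cycles per IO, which is why reducing storage latency pays off so directly.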

Therefore, virtually all storage manufacturers put a lot of intelligence into the storage controller software (aggregating many disks, intelligent read/write caching, and more) in order to increase bandwidth while optimizing latencies.
    
Flash - the way from hybrid arrays to All Flash Arrays

Flash is a technology that has been available for a number of years now and that significantly improves performance and, especially, latency. However, the high cost of flash has in the past tended to hold back widespread adoption.

For this reason, in 2008 we were one of the first storage vendors to offer a hybrid solution (a mix of flash and disk) in the form of our Performance Accelerator Module (PAM card, later re-branded Flash Cache). Today, by using these hybrid solutions (Flash Pool / Flash Cache), you can significantly decrease latency in a database environment compared to purely disk-based storage, while at the same time maintaining a sensible cost balance between disk and flash.



Due to current price trends, it is estimated that in the near future the price per GB for flash could drop below that of a classic 10k or 15k SAS disk; a small illustrative sketch follows the list. This is made possible by:
a) new, additional storage efficiency technologies
b) higher data consolidation ratios
c) lower infrastructure costs for flash (less power and cooling)
d) lower maintenance costs.
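How efficiency changes the comparison can be sketched in a few lines. All prices and the efficiency ratio below are hypothetical placeholders, purely to show the effective-$/GB arithmetic:

```python
# Hypothetical price comparison: storage efficiency (dedupe, compression,
# cloning) lowers the *effective* price per GB of flash.
ssd_raw_per_gb = 1.50    # assumed raw $/GB for SSD (placeholder)
sas_raw_per_gb = 0.60    # assumed raw $/GB for 10k/15k SAS (placeholder)
efficiency_ratio = 3.0   # assumed combined savings ratio (placeholder)

effective_ssd_per_gb = ssd_raw_per_gb / efficiency_ratio
print(f"effective SSD $/GB: {effective_ssd_per_gb:.2f} "
      f"vs raw SAS $/GB: {sas_raw_per_gb:.2f}")
```

With a 3:1 efficiency ratio, flash that costs 2.5x more per raw GB already comes out cheaper per stored GB.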

Figure 2 - Wikibone.png

Flash optimization in clustered ONTAP® 8.3.1

With our new operating system version, clustered Data ONTAP® 8.3.1, we have set a new milestone in terms of flash. Starting with this release, we offer for the first time what we call All Flash FAS systems: storage based entirely on SSDs.

In addition, a variety of new storage efficiency features have been incorporated into this release, also optimized for flash. Examples (a toy sketch of the underlying idea follows the list):
- Inline zero-block deduplication
- Always-on Deduplication
- Inline Compression (enabled by default)
- ...
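To make the first two bullet points more concrete, here is a deliberately simplified toy sketch of the general concept behind zero-block elimination and block-level deduplication. It is not NetApp's implementation; the block size, hashing, and data structures are illustrative assumptions:

```python
# Toy model: all-zero blocks are never written to media, and identical
# blocks are stored only once and shared via a fingerprint reference.
import hashlib

BLOCK_SIZE = 4096             # assumed block size for the illustration
store: dict[str, bytes] = {}  # fingerprint -> physical block

def write_block(data: bytes) -> str | None:
    """Return a reference to the stored block, or None for a zero block."""
    if data == bytes(BLOCK_SIZE):     # inline zero-block detection
        return None                   # nothing reaches the flash media
    fp = hashlib.sha256(data).hexdigest()
    store.setdefault(fp, data)        # duplicates share one physical copy
    return fp

blocks = [bytes(BLOCK_SIZE), b"A" * BLOCK_SIZE,
          b"A" * BLOCK_SIZE, b"B" * BLOCK_SIZE]
refs = [write_block(b) for b in blocks]
print(f"{len(blocks)} logical blocks -> {len(store)} physical blocks stored")
```

Fewer physical writes mean both capacity savings and, on flash, less media wear, which is exactly why these features matter for SSD lifetime.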

Some of these new features are described in more detail in TR-4428.

Of course, the "flash performance" was also improved in 8.3.1. For example, a read optimization was developed that reduces latency by 300-400 microseconds per IO.

Well ... maybe you're wondering what that buys you? Basically, it goes back to my first point about performance, except that now, in the era of flash, we no longer speak of average response times on the order of 10 or 20 milliseconds. No, we are talking about latencies of less than 1 millisecond, where saving 300-400 microseconds per IO results in a reduction in average read latency of 30-40%.
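The percentage follows directly from the numbers above; a two-line check (assuming a ~1 ms baseline, as stated) makes the step explicit:

```python
# Back-of-the-envelope check: shaving 300-400 us off a ~1 ms read
# corresponds to the quoted 30-40% latency reduction.
baseline_us = 1000  # ~1 ms average read latency in the flash era
for saved_us in (300, 400):
    print(f"{saved_us} us saved -> {saved_us / baseline_us:.0%} lower latency")
```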

Clustered ONTAP® 8.3.1 also includes some write optimizations, primarily aimed at significantly extending flash lifetime.

AFF8080EX TPC-E Benchmark

As already mentioned, one of the biggest challenges in the SQL database environment is to maintain optimal performance even during load peaks.

To prove the performance of our systems, we ran the Transaction Processing Performance Council's OLTP Benchmark E (TPC-E) on an AFF8080EX storage system equipped with 48 800 GB SSDs.

As the figure below shows, the system delivers up to 184K IOPS at a latency of 800 microseconds, or 280K IOPS at a latency of 1 millisecond.

Figure 3 - AFF Performance.png
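As a side note, one can sanity-check what these operating points imply about concurrency using Little's Law (outstanding IOs = IOPS x latency); this is my own illustration, not part of the benchmark report:

```python
# Little's Law: N = throughput x response time. The quoted operating
# points imply a healthy number of IOs in flight at the same time.
points = [(184_000, 800e-6), (280_000, 1e-3)]  # (IOPS, latency in seconds)
for iops, latency_s in points:
    print(f"{iops:,} IOPS @ {latency_s * 1e6:.0f} us "
          f"-> ~{iops * latency_s:.0f} IOs in flight")
```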


During this benchmark we achieved a storage saving of 1.8:1 through active inline compression alone. Had additional efficiency technologies such as thin provisioning, inline zero-block deduplication, always-on deduplication, or cloning been used as well, significantly higher savings would have been possible.

Added value of Flash for SQL Server 2014

As you may know, Microsoft has changed the licensing policy for SQL Server 2012/2014. While SQL Server 2008 was licensed per physical CPU (socket), SQL Server 2012/2014 is licensed per core. In practice this means significantly higher license costs for SQL Server. To give some rough numbers, I have put together a small table based on MS SQL Server list prices and the current Intel® Xeon® E7 v3 family.

Figure 4 - Excel.png
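The shape of that math is easy to reproduce. The pack price and core counts below are placeholders, not the list prices from the table above:

```python
# Sketch of per-core licensing math: SQL Server 2012/2014 is sold in
# 2-core packs, so cost scales with core count rather than sockets.
price_per_2core_pack = 13_748      # assumed US list price (placeholder)
sockets, cores_per_socket = 4, 18  # e.g. a 4-socket Xeon E7 v3 server

total_cores = sockets * cores_per_socket
license_cost = (total_cores // 2) * price_per_2core_pack
print(f"{total_cores} cores -> ${license_cost:,} in SQL Server licenses")
```

Every core you can avoid licensing, e.g. by consolidating onto fewer, better-utilized servers, saves real money.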

In this respect, many customers are of course interested in consolidating databases onto a smaller number of servers. This in turn requires that the underlying storage infrastructure be able to deliver the performance such a consolidation demands.
If it cannot, the expensively licensed CPUs in the server would simply sit idle.

As the benchmark above proves, our systems are able to support such SQL Server database consolidations, helping to reduce server investment and the associated database license fees significantly.

As part of an ROI study, a business case analysis was performed for a customer. Our All Flash Array saved around $1M in server and licensing costs, and payback was achieved within just 6 months. More information on this study can be found in TR-4403.
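For readers who want to plug in their own numbers, the payback formula is simple. The investment and savings figures below are invented placeholders, not the values from the study:

```python
# Payback period = incremental investment / monthly savings.
investment = 500_000        # assumed incremental AFF spend (placeholder)
annual_savings = 1_000_000  # e.g. server + license savings per year (placeholder)
print(f"payback: ~{investment / (annual_savings / 12):.0f} months")
```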

Figure 5 - Slide.jpg
Conclusion

With our application integration in conjunction with NetApp Flash Storage, we are able to significantly accelerate your business processes and reduce your application license costs.

Our integrated storage efficiency technologies, such as compression and deduplication, help to reduce the cost of all-flash significantly. Because of that, cost should NEVER be the reason NOT to implement flash.

NetApp offers the best performance for the best price.

A testament to the success of this solution is the example of our customer, the German Weather Service:

https://www.youtube.com/watch?v=b67bhdI7nL0

 

Author: Andreas Krügel; translated by the Product, Solutions and Alliances Marketing EMEA team

 

 

Over the next few weeks you will find four blogs here dealing with the topics of storage, application integration, and flash.

 

The days of JBODs are probably long gone. Nevertheless, more and more storage startups are pushing into the market, some offering one storage management feature or another (e.g. deduplication or thin provisioning), but without any application integration. Without integration into databases or server virtualization, for example, you are catapulted back to the Stone Age of storage administration. We will explain how application integration with NetApp works using the examples of SAP HANA, followed by Microsoft SQL, Oracle, and VDI.

 

Part 1: NetApp and SAP HANA

Read more

All Flash Array vendors are currently springing up like mushrooms. The reason is clear: the market is currently about US $2 billion in size, with a growth rate of 56%. Especially in the storage industry, which saw only single-digit growth in the last couple of years, that is a huge opportunity. So it's no wonder that venture capital is being invested very generously in this technology.

Read more

With the advent of storage technologies such as All Flash FAS, the need to evaluate, tune, and manage performance levels becomes even more critical to ensure that you gain the maximum advantage from what the technology has to offer. Add cloud, and the need for the management, configuration, monitoring, and reporting tools provided by NetApp's OnCommand suite becomes even greater.

Read on to find out more ...

Read more

Opportunity comes and goes. Workloads, applications, protection, security, and regulation requirements change with regularity. Our goal is to help future-proof your IT investment, drive competitive advantage, and control costs while maintaining consistent, predictable, repeatable modes of operation. The element that endures is the ecosystem, and the AFF8000 component helps build the Data Fabric vision of flexibility, choice, and enterprise-class performance.

Read more