If you can’t measure IT, you can’t manage IT

A long time ago I was heavily involved in performance and capacity management in a large supercomputing and mainframe environment, spending much of my time collecting, analysing, modelling and forecasting in order to better inform management on hot topics such as accounting, chargeback and IT service delivery.


Unsurprisingly it is a topic that I still care about, and I believe it has increasing relevance in today’s changing market, where the order of the day remains time and budget constraints, complex operations, and changing needs driven by the Cloud, flash and software-defined technologies. It is crystal clear that the “What, Why, How and When” questions must be answered by best-in-class forecasting, business intelligence and optimisation if you want to find that competitive edge and win.


The headline “If you can’t measure it, you can’t manage it” will be a familiar business mantra to many readers, and it holds true today, all the more so as organisations strive to monitor and manage fast-growing, multi-location, heterogeneous environments where a mixture of servers, networks and storage feeding both physical and virtual environments is the norm. (Phew.)


Given the complexity that many organisations face, along with the requirement for end-to-end visibility, operations must deliver accurate and meaningful information to satisfy all staff levels, from the CIO to the storage administrator. There is no doubt that this is becoming a mandatory requirement as the attention of executive staff focuses on the delivery of services. Indeed, as organisations move services to the Cloud, tools that provide an end-to-end vertical view of a service, as opposed to a horizontal, hardware-only view, become increasingly important.


Today, almost without exception, compute, network and storage systems produce continuous streams of telemetry that tell the story of factors such as well-being, fitness and resource consumption. From this stream of information a consistent, accurate view of operations, service quality and cost can be derived.
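To make that idea a little more concrete, here is a minimal sketch in generic Python (not OnCommand Insight itself; the system names, metric names and the 80% threshold are purely illustrative assumptions) of how a raw telemetry stream might be rolled up into a simple per-system view of health and resource consumption:

```python
from collections import defaultdict
from statistics import mean

# Illustrative telemetry samples, one dict per measurement interval.
# Real systems emit these continuously via SNMP, REST APIs, syslog, etc.
samples = [
    {"system": "array-01", "metric": "cpu_busy_pct", "value": 62.0},
    {"system": "array-01", "metric": "capacity_used_pct", "value": 81.5},
    {"system": "array-02", "metric": "cpu_busy_pct", "value": 35.0},
    {"system": "array-02", "metric": "capacity_used_pct", "value": 44.0},
]

# Roll the raw stream up into an average per system and metric.
rollup = defaultdict(list)
for s in samples:
    rollup[(s["system"], s["metric"])].append(s["value"])

for (system, metric), values in sorted(rollup.items()):
    avg = mean(values)
    # A crude service-quality flag: anything averaging over 80% is "hot".
    status = "HOT" if avg > 80 else "ok"
    print(f"{system:10s} {metric:20s} {avg:6.1f}%  {status}")
```

A real monitoring pipeline will of course persist the stream, correlate across layers and apply far richer analytics, but the shape of the problem is the same: continuous samples in, consolidated views out.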


If this is sounding like a Big Data analytics exercise then you are right, it is. However, adding Big Data project complexity to your business, when your prime objective is to gain insight into your end-to-end operations in order to improve your decision-making, is probably not on your agenda. Tools that deliver this capability for you, on the other hand, probably are.


Focusing on information delivery is the rationale behind NetApp OnCommand Insight. OnCommand Insight is a complete solution that combines data ingest, storage, analytics and presentation. This enables you to improve business processes through the management of costs, performance, capacity and alerts. Importantly, OnCommand Insight makes all this possible in a multivendor environment. Indeed, over 75% of the storage base managed by OnCommand Insight is multivendor.


This is particularly important for those of you looking to improve corporate risk detection processes for the purposes of compliance and audit.


Indeed, one of the prime reasons for selecting OnCommand Insight is its ability to report accurately across multivendor environments. It also scores highly on scalability, along with the key role it can play in root cause analysis, migration planning and, of course, forecasting, trending and performance/capacity planning.
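To give a flavour of what the forecasting and capacity-planning side involves, here is a minimal sketch assuming monthly capacity measurements and a simple linear trend. The figures are invented, and a product such as OnCommand Insight will use considerably richer models than ordinary least squares:

```python
# Fit a linear trend to monthly capacity usage (TB) and estimate
# when an array will hit its configured limit. Figures are illustrative.
usage_tb = [120.0, 126.5, 133.0, 140.2, 146.8, 154.1]  # last six months
capacity_tb = 200.0

n = len(usage_tb)
xs = range(n)
mean_x = sum(xs) / n
mean_y = sum(usage_tb) / n

# Ordinary least-squares slope: average growth in TB per month.
slope = (
    sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, usage_tb))
    / sum((x - mean_x) ** 2 for x in xs)
)

months_left = (capacity_tb - usage_tb[-1]) / slope
print(f"Growth: {slope:.1f} TB/month; ~{months_left:.0f} months to full")
```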


NetApp OnCommand Insight is certainly a key tool on my watch list for this year, for the simple reason that it has a proven record of reducing risk while greatly enhancing service management, compliance auditing, visibility and reporting in complex environments. I am already seeing some great OnCommand Insight success stories coming through, and I will share these with you in the coming months.


For those of you keen to learn more, here are some links that will help you dig a little deeper.


NetApp OnCommand Insight

NetApp OnCommand Insight Library

NetApp OnCommand Insight Datasheet


Follow me on Twitter @lozdjames