
The Song Remains the Same?

Last week I found myself at Mercedes-Benz World at Brooklands in the UK. I say 'found myself' as I was pre-booked, but it had been a long, long time since I had been anywhere near the old Brooklands racetrack. What a change: businesses had moved in and modern, futuristic buildings had sprung up. Where once there had been an old runway and scrubland, now stood a new race track and skid pan, with drivers having lots of fun spinning various Mercedes models in circles. However, I was there for a serious reason – to help present at the Ideal Data Centre event with my colleagues on behalf of our partner Arrow ECS. My topic was All things Flash – an exciting topic I am getting very familiar with these days!

This time I wanted to do something a little different. The previous week the team had been in Manchester in the UK, and I remembered that Manchester University had been instrumental in the development of modern computing. A little investigation turned up some very interesting documents telling stories of computer history from the pioneering early days, going back over 50 years. The early research references the development of the Atlas Computer, with great pictures of the Atlas data centre in 1960. It is interesting to see how data centres have changed since then. Some things are similar, for example the racks; others are very different, such as the cabling and power standards, and you can even see someone smoking a pipe. But this data centre, and the people who worked in it at the time, were instrumental in the development of virtualisation.


Bearing in mind that this was over 50 years ago, they identified the need for memory virtualisation in order to make the Atlas computer more agile and to enable it to do more work. Today, most computers on the planet implement memory virtualisation. It's invisible and we take it for granted, just as it should be. Other areas that focused the minds of those pioneers were the need to automate the transfer of data between primary and secondary memory, and the need to exploit the latest developments in RAM and ROM technology to match the speed of the processor.
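
To make that idea concrete, here is a minimal sketch in Python of what memory virtualisation boils down to: the program uses virtual addresses, a page table maps them to physical frames, and a 'page fault' triggers the kind of automated transfer from secondary storage that the Atlas team pioneered. All names and numbers here are illustrative, not any real operating system interface.

```python
# A toy model of memory virtualisation, in the spirit of Atlas's one-level store.
# Everything here is illustrative; real hardware does this in the MMU.

PAGE_SIZE = 4096  # bytes per page

class ToyVirtualMemory:
    def __init__(self):
        self.page_table = {}   # virtual page number -> physical frame number
        self.next_frame = 0

    def translate(self, virtual_address):
        """Map a virtual address to a physical one, faulting pages in on demand."""
        vpn, offset = divmod(virtual_address, PAGE_SIZE)
        if vpn not in self.page_table:
            # Page fault: a real system would fetch the page from secondary
            # storage here; this sketch simply allocates a fresh frame.
            self.page_table[vpn] = self.next_frame
            self.next_frame += 1
        return self.page_table[vpn] * PAGE_SIZE + offset

vm = ToyVirtualMemory()
print(vm.translate(0x12345))  # first access faults the page in
print(vm.translate(0x12345))  # same virtual address, same physical address
```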


Hold on a minute – both of these sound very familiar. Have we moved on 50 years and yet the challenges are similar? 1) Exploiting new technologies and 2) feeding the processor quickly enough. It would appear so. Looking at how Gordon Moore's Law was met over time, processor development and performance kept pace, but mechanical hard disk technologies could not, leaving a significant performance gap between processor and storage. Today the challenges for many IT folk are similar: meeting the demands of the business, on time and on budget. But the landscape has changed. Gone are the days when you could negotiate with the business to take the system down for planned maintenance at the weekend or overnight.
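
To put rough numbers on that performance gap, here is a quick back-of-envelope calculation in Python. The latency figures are typical ballpark values of my own choosing, not measurements from any particular system.

```python
# Back-of-envelope comparison of how many CPU cycles fit into one storage access.
# Ballpark figures only: a 3 GHz core, ~5 ms disk seek, ~100 microsecond flash read.

cpu_hz = 3e9            # cycles per second for a 3 GHz core
hdd_seek_s = 5e-3       # ~5 ms mechanical disk seek
flash_read_s = 100e-6   # ~100 microsecond flash read

print(f"Cycles per HDD seek:   {cpu_hz * hdd_seek_s:,.0f}")    # ~15,000,000
print(f"Cycles per flash read: {cpu_hz * flash_read_s:,.0f}")  # ~300,000
```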


The new focus is on non-disruptive operations, efficiency, and seamless scaling of performance and capacity. Clustered Data ONTAP 8.2 addresses these challenges by integrating new technologies that enable you to scale performance and manage the physical growth of storage. You can now meet these challenges while reducing risk and cost, adding performance and capacity as you grow without the business suffering the impact of disruptive upgrades. These new features are fundamental to the effective management of the IT lifecycle – introducing the new, retiring the old and upgrading the existing.


Finally, it is good to know that the pioneering virtualisation work started in Manchester is being carried forward by NetApp, and that today adding more storage capacity and performance is no longer disruptive.


If you want to know more about Clustered Data ONTAP 8.2, the following link will help you:


http://www.netapp.com/us/products/platform-os/data-ontap-8/?ref_source=bnrcdot-hp


Follow me:


@lozdjames