With the proliferation of virtual machines, as discussed by Mark Ross, we start to see another issue – the server that just won’t die. Historically, when the hardware started to wheeze and the systems were beginning to creak, a new server was bought and an upgrade was mandatory. But now there are Windows NT and 2000 servers which were virtualized as the hardware gave up the ghost, and which carry on through the magic of virtualization. Because of the cost and hassle of upgrading or replacing the server, and of finding all those old install disks for apps that haven’t been touched for years, we are now seeing virtual servers on their second or even third virtual infrastructure, transferred each time the virtual storage and computing platforms have been upgraded.
Are the vendors making it too easy? VMware supports guest systems at least as far back as Windows 95, and migration between virtualisation platforms is a relatively easy process. With automatic tiered storage available from several vendors, migrating storage platforms is no longer a difficult and time-consuming activity, but can happen seamlessly in the background. This can make the commissioning of new virtual platforms a really easy, no-impact process.
When we migrate the old server onto our new virtual platform it’s like getting a free server. The original cost of the software has long since been depreciated, and “if it ain’t broke, don’t fix it”, so why change? What’s the problem?
The answer is that while that very old VM may be running fine, the rest of the world has moved on, and if there are no security patches being released for it, you are not really managing that data as effectively as you could.
Secondly, there is the cost of support and of managing the support risk. If you don’t really have the skills in-house to fully support those old servers now that the vendors don’t, you have a growing support risk. Do you know whether your nice new management toolset will work with these old servers? The cost of manually maintaining the systems can very quickly outweigh any savings from not buying a new OS and base systems, as can the cost of running multiple tool sets just to manage these legacy servers.
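As a rough illustration of how you might even find these servers in the first place, here is a minimal sketch assuming a vSphere environment reachable through VMware’s pyVmomi Python SDK. The hostname, credentials and the list of “legacy” patterns are all placeholders, and a real report would be driven by your own support policy rather than a hard-coded list.

```python
# Sketch only: walk the vCenter inventory and flag guests whose reported OS
# looks like it has fallen out of support. Host, user, password and the
# LEGACY_PATTERNS list are illustrative placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

LEGACY_PATTERNS = ("Windows NT", "Windows 2000", "Windows Server 2003")  # example list

def list_legacy_vms(host, user, pwd):
    ctx = ssl._create_unverified_context()  # lab use only; verify certificates in production
    si = SmartConnect(host=host, user=user, pwd=pwd, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        legacy = []
        for vm in view.view:
            guest = vm.summary.config.guestFullName or ""
            if any(pattern in guest for pattern in LEGACY_PATTERNS):
                legacy.append((vm.summary.config.name, guest))
        return legacy
    finally:
        Disconnect(si)

if __name__ == "__main__":
    for name, guest in list_legacy_vms("vcenter.example.com", "readonly", "secret"):
        print(f"{name}: {guest}")
```

Even a simple report like this gives you a defensible list of the servers that your shiny new toolset may not be able to manage.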
When considering the support costs, the dependencies need to be worked through in great detail. It is not uncommon to end up with systems that are so far behind the technical curve that they cannot be upgraded in a single step, but must be upgraded in stages to maintain hardware and software compatibility. This may be fine as part of a fully managed project, but it is horrendous when you are trying to push through an emergency change because a component has failed and won’t come back to life.
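To make the stepwise point concrete, the small sketch below treats the vendor’s supported in-place upgrades as a graph and searches for a chain from the current version to the target. The upgrade map shown is purely illustrative; any real exercise would use the vendor’s published compatibility matrix.

```python
# Sketch only: find the shortest chain of supported in-place upgrades.
# SUPPORTED_UPGRADES is an illustrative map, not a definitive compatibility matrix.
from collections import deque

SUPPORTED_UPGRADES = {
    "Windows NT 4.0":      ["Windows 2000"],
    "Windows 2000":        ["Windows Server 2003"],
    "Windows Server 2003": ["Windows Server 2008"],
    "Windows Server 2008": ["Windows Server 2012"],
    "Windows Server 2012": [],
}

def upgrade_path(current, target):
    """Breadth-first search for the shortest sequence of supported upgrade steps."""
    queue = deque([[current]])
    seen = {current}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in SUPPORTED_UPGRADES.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no supported route; a migration rather than an upgrade is needed

print(upgrade_path("Windows NT 4.0", "Windows Server 2012"))
```

Working this out calmly in advance is a world away from discovering, mid-incident, that the only route forward is four upgrades long.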
Almost all software vendors have an initial period of full support followed by a shorter period of extended support – the names may vary, but the function is the same. My recommendation is to be aware of the software lifecycle for the products you are managing, and to assess their ongoing viability and lifetime at three points: before installation, at the end of full support, and at the end of extended support (i.e. end of life). Explicit risk analysis at these three points allows control and governance to be applied to an emerging and increasing risk.
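As a simple illustration of those three checkpoints, the sketch below classifies a product against its lifecycle dates so that a risk review can be triggered at the right moment. The dates and the wording of each stage are hypothetical examples, not vendor-published figures.

```python
# Sketch only: flag which lifecycle checkpoint a product has reached.
# The dates passed in below are hypothetical and the stage wording is an example
# of what a risk review might record.
from datetime import date

def lifecycle_stage(today, full_support_end, extended_support_end):
    """Classify a product against its support lifecycle to prompt a risk review."""
    if today < full_support_end:
        return "full support: confirm viability before installation and at the next review"
    if today < extended_support_end:
        return "extended support: plan the upgrade or replacement now"
    return "end of life: unsupported; record and govern as an explicit risk"

print(lifecycle_stage(date.today(),
                      full_support_end=date(2026, 1, 13),       # hypothetical date
                      extended_support_end=date(2031, 1, 14)))  # hypothetical date
```

Embedding a check like this in your regular service reviews is enough to stop those “free” servers from quietly becoming the riskiest assets in the estate.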