Virtualisation and Its Evolution
Though virtualisation gained popularity only in the twenty-first century, the concept had taken root as far back as the 1960s. Today, the era of cloud computing is taking it to the next stage of evolution!
Early on, IT administrators began to realise that conventional methods of handling IT environments were no longer effective because of the rapidly changing requirements of agile business environments. The demand for faster time-to-market for applications, frequent installation and upgrade requests, the need to quickly apply security patches to operating systems and applications, and many other management complications led to a new strategy for server handling and management.
IT organisations need a more nimble strategy for managing environments, one that adapts easily to rapidly changing needs and allows new functions to be deployed in days rather than weeks. Given these problems, it is natural that organisations have progressively turned to technologies such as virtualisation.
Advances over the decades
Virtualisation, a technology long associated with mainframe systems, has been transforming IT infrastructure because of its ability to consolidate hardware resources and cut energy costs. It has since grown into a practical technology for mobile phones and personal systems as well, and underpins the shift to agile and cloud computing.
In this new era of cloud computing, virtualisation is driven by the need to spend budgets effectively, to achieve agility and to meet other challenges of the traditional environment.
In the 1960s, time-sharing systems came to be preferred over batch processing. Virtualisation became a means to fully utilise hardware components and to make optimum use of systems on a time-sharing basis.
The term ‘hypervisor’ was first used in 1965, referring to the software that accompanied an IBM RPQ for the IBM 360/65. A hypervisor, or Virtual Machine Monitor (VMM), allows multiple guest operating systems to run on a single host computer.
In the mid-60s, IBM’s Cambridge Scientific Center developed CP-40, the first version of CP/CMS. It went into production in January 1967 and was designed to implement full virtualisation. IBM mainframes have supported several fully virtualised operating systems since the early 70s.
In the 80s and into the 90s, virtualisation was mostly overlooked, as affordable PCs and Intel-based hosts became popular. Over time, the expenditure on physical facilities, failover and disaster-protection needs, the high cost of system maintenance, and low server utilisation became problems that demanded a new solution.
In the late 90s, x86 virtualisation was achieved through complex software techniques that worked around the processor’s lack of virtualisation support while still delivering acceptable performance. Virtualisation of Intel-based machines thus became a viable option. This was followed by the arrival of VMware, which overcame the hardware limitations that had blocked virtualisation on Intel-based architectures. Since then, advances in virtualisation have led to what could be called a virtualisation rebirth: the current generation is rediscovering what had been done long ago, but is applying the benefits to the present technical landscape. In 1999, VMware used these x86 techniques to address many of these difficulties and to turn x86 machines into a general-purpose, shared hardware infrastructure offering full isolation, flexibility and a choice of operating systems for application environments.
Since 2000, there has been a lot of progress in the area of virtualisation. In the mid-2000s, both Intel and AMD added hardware assistance for virtualisation to their processor chips, which made virtualisation software simpler; later hardware revisions offered considerable speed improvements.
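As a quick illustration (not part of the original article), the minimal sketch below assumes a Linux host and checks whether the processor advertises this hardware assistance: the ‘vmx’ CPU flag indicates Intel VT-x and ‘svm’ indicates AMD-V.

```python
# Minimal sketch, assuming a Linux host: /proc/cpuinfo lists the CPU feature
# flags, where 'vmx' denotes Intel VT-x and 'svm' denotes AMD-V support.
def has_hardware_virtualisation(cpuinfo_path='/proc/cpuinfo'):
    with open(cpuinfo_path) as f:
        flags = f.read().split()
    return 'vmx' in flags or 'svm' in flags

if __name__ == '__main__':
    print('Hardware virtualisation supported:', has_hardware_virtualisation())
```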
Subsequently, software vendors have developed virtualisation solutions, and organisations have implemented virtualisation to solve business needs.
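To make the idea of a hypervisor hosting several guest operating systems concrete, here is a minimal sketch, assuming a Linux host running the QEMU/KVM hypervisor with libvirt and its Python bindings installed; it simply lists the guest virtual machines defined on that host.

```python
# Minimal sketch, assuming libvirt and its Python bindings are installed and a
# local QEMU/KVM hypervisor is running; it enumerates the guest VMs on the host.
import libvirt

conn = libvirt.open('qemu:///system')   # connect to the local hypervisor
try:
    for dom in conn.listAllDomains():
        state = 'running' if dom.isActive() else 'shut off'
        print(f'{dom.name()}: {state}')
finally:
    conn.close()
```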
The benefits of virtualisation
There has been a natural progression towards organisations wanting their own servers, data centres, networks and desktop environments, and being able to manage them. The preferred end-state would be an environment that allows resources anywhere on the network to be dynamically provisioned and consumed, based on application and user requirements. All this would run on a dynamic IT infrastructure that is highly automated, inter-linked and structured to support business processes, instead of data being isolated in silos.
Typically, organisations advance through various stages of virtualisation. Virtualisation drives agile business solutions, for instance, by resolving specific issues like chargebacks, and global concerns such as workload balancing and time-to-market.
The first stage involves virtualising the ‘low-hanging fruit’. Consolidation and disaster recovery strategies are initially targeted at earning returns on capital investment by virtualising programmes that have a low business impact. Server virtualisation is also a very popular practice, with many organisations wanting to bring down both CAPEX and OPEX levels.
The second level of virtualisation is when things get complicated, mainly due to the complex design choices made in Level 1. Many CIOs now have a ‘virtualisation first’ policy, to enjoy the cost benefits. In this second level, companies have begun to use applications, servers, storage and networks as pools of resources that can be managed in aggregate rather than as isolated silos. Organisations may have experienced unexpectedly below-par performance during Level 1 virtualisation, so they will need help to guarantee the performance of the business-critical applications that are virtualised in Level 2 and beyond.
Desktop virtualisation does need a significant change in infrastructure, so it could be the end of 2012 before we see desktop virtualisation adoption in the millions, with enhanced security, manageability and adaptability. The cost of desktop virtualisation is a hurdle to adoption, but over the next few years, the price per user will come down steadily.
Many enterprise-level implementations of Level 2 virtualisation store the ‘virtualised’ desktop on a remote server, instead of on local storage. Thus, when users work from their local machines, all the applications, processes and data used are kept on the hosting server and run centrally. This allows users on mobile phones or thin clients with very rudimentary hardware specs to run operating systems and applications that would normally be beyond their capabilities.
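The following toy sketch (entirely hypothetical, not taken from any product mentioned here) illustrates that division of labour on a Linux machine: the ‘thin client’ only sends input and displays results, while the actual work runs on the ‘hosting server’. Real deployments would use a remote-display protocol such as RDP or SPICE rather than this bare socket.

```python
# Toy sketch of the Level 2 desktop virtualisation model described above:
# the client owns only input and display; execution happens on the server.
import socket
import subprocess
import threading

HOST, PORT = '127.0.0.1', 5999   # arbitrary local port for the demo

def hosting_server(listener):
    """Accepts one request, runs the command centrally and returns its output."""
    conn, _ = listener.accept()
    with conn:
        command = conn.recv(1024).decode()
        result = subprocess.run(command.split(), capture_output=True, text=True)
        conn.sendall(result.stdout.encode())

def thin_client(command):
    """Sends input and displays output; does no computation of its own."""
    with socket.create_connection((HOST, PORT)) as sock:
        sock.sendall(command.encode())
        print(sock.recv(65536).decode())

listener = socket.create_server((HOST, PORT))   # bind before the client connects
worker = threading.Thread(target=hosting_server, args=(listener,), daemon=True)
worker.start()
thin_client('uname -a')          # the 'desktop' work is done on the server side
worker.join()
listener.close()
```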
Mobile virtualisation is a technology that allows several operating systems or virtual machines to run at the same time on a mobile phone or connected wireless device. It uses a hypervisor to create a secure separation between the hardware and the applications that run on top of it. In 2008, the telecom industry became interested in using the benefits of virtualisation for mobile phones and other gadgets like tablets, netbooks and machine-to-machine (M2M) modules. With mobile virtualisation, manufacturing feature-rich phones has become easier through the re-use of software and hardware, which reduces development time. One example is the use of mobile virtualisation to make low-cost Android smartphones. Semiconductor companies such as ST-Ericsson have adopted mobile virtualisation as part of their low-cost Android platform strategy.