A quick ABC guide to Virtualization
Virtualization is hotter than ever, and these days we can virtualize not just servers but also our laptops and workstations.
Gartner research indicates that at the end of 2009 only 18% of enterprise data center workloads that could be virtualized had actually been virtualized, and Gartner predicts that this number will grow to more than 50% by the close of 2012. In other words, with only 18% of eligible workloads virtualized, we still have a long way to go. You may therefore have some questions about virtualization: what it is, what it does, what to choose, and so on. This quick introductory ABC guide to virtualization should help you get some of those questions answered. It is not a definitive guide to the virtualization field, but more of a get-to-know-virtualization guide from Ervik.as. As you may know, Ervik.as is one of the biggest online resources for virtualization news and support, so Stian Hill and I (Alexander Ervik Johnsen) decided to put down some facts and introductory material on virtualization. Enjoy!
The term “virtualization” was coined in the 1960s to refer to a virtual machine (sometimes called a pseudo machine), a term which itself dates from the experimental IBM M44/44X system. More recently, the creation and management of virtual machines has been called platform virtualization, or server virtualization.
Platform virtualization is performed on a given hardware platform by host software (a control program), which creates a simulated computer environment, a virtual machine, for its guest software. The guest software is not limited to user applications; many hosts allow the execution of complete operating systems. The guest software executes as if it were running directly on the physical hardware, with several notable caveats. Access to physical system resources (such as network access, display, keyboard, and disk storage) is generally managed at a more restrictive level than access to the host processor and system memory. Guests are often restricted from accessing specific peripheral devices, or may be limited to a subset of the device’s native capabilities, depending on the hardware access policy implemented by the virtualization host.
Virtualization refers to technologies designed to provide a layer of abstraction between computer hardware systems and the software running on them. By providing a logical view of computing resources, rather than a physical view, virtualization solutions make it possible to do a couple of very useful things: They can allow you, essentially, to trick your operating systems into thinking that a group of servers is a single pool of computing resources. And they can allow you to run multiple operating systems simultaneously on a single machine.
In the 1990s, virtualization was used primarily to re-create end-user environments on a single piece of mainframe hardware. If you were an IT administrator and wanted to roll out new software, but wanted to see how it would work on a Windows NT or a Linux machine, you used virtualization technologies to create the various user environments.
But with the advent of the x86 architecture and inexpensive PCs, virtualization faded and seemed to be little more than a fad of the mainframe era. It’s fair to credit the recent rebirth of virtualization on x86 to the founders of the current market leader, VMware. VMware developed the first hypervisor for the x86 architecture in the 1990s, planting the seeds for the current virtualization boom.
Why should you care about virtualization?
The industry buzz around virtualization is just short of deafening. This need-to-have capability has fast become going-to-get-it technology, as new vendors enter the market and enterprise software providers weave it into the latest versions of their product lines. The reason: virtualization continues to demonstrate additional tangible benefits the more it’s used, broadening its value to the enterprise at each step.
Server consolidation is definitely the sweet spot in this market. Virtualization has become the cornerstone of every enterprise’s favorite money-saving initiative. Industry analysts report that between 60 percent and 80 percent of IT departments are pursuing server consolidation projects. It’s easy to see why: By reducing the numbers and types of servers that support their business applications, companies are looking at significant cost savings.
Lower power consumption, both from the servers themselves and from the facilities’ cooling systems, and fuller use of existing, underutilized computing resources translate into a longer life for the data center and a fatter bottom line. And a smaller server footprint is simpler to manage.
However, industry watchers report that most companies begin their exploration of virtualization through application testing and development. Virtualization has quickly evolved from a neat trick for running extra operating systems into a mainstream tool for software developers. Rarely are applications created today for a single operating system; virtualization allows developers working on a single workstation to write code that runs in many different environments, and perhaps more importantly, to test that code. This is a noncritical environment, generally speaking, and so it’s an ideal place to kick the tires.
Once application development is happy, and the server farm is turned into a seamless pool of computing resources, storage and network consolidation start to move up the to-do list. Other virtualization-enabled features and capabilities worth considering: high availability, disaster recovery and workload balancing.
What are the different types of virtualization?
There are five basic categories of virtualization:
- Application virtualization is an umbrella term that describes software technologies that improve the portability, manageability and compatibility of applications by encapsulating them from the underlying operating system on which they are executed.
- Storage virtualization melds physical storage from multiple network storage devices so that they appear to be a single storage device.
- Network virtualization combines computing resources in a network by splitting the available bandwidth into independent channels that can be assigned to a particular server or device in real time.
- Server virtualization hides the physical nature of server resources, including the number and identity of individual servers, processors and operating systems, from the software running on them.
- Desktop virtualization, sometimes also called client virtualization, separates a personal computer desktop environment from the physical machine using a client–server model of computing.
What is a hypervisor?
In today’s terms, a hypervisor, also called a virtual machine monitor (VMM), allows multiple operating systems to run concurrently on a host computer, a feature called hardware virtualization. The hypervisor presents the guest operating systems with a virtual platform and monitors the execution of the guest operating systems. In that way, multiple operating systems, including multiple instances of the same operating system, can share hardware resources. These concepts have become an important part of the technique known as virtualization. The hypervisor is also the most basic form of a virtualization component. It’s the software that separates the operating system and applications from their physical resources. The hypervisor has its own kernel and is installed directly on the hardware, or “bare metal”, which is a frequently used term within virtualization. The hypervisor is inserted between the hardware and the OS, and interacts directly with the hardware of the machine it has been installed on.
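To make this a little more concrete, here is a minimal sketch that talks to a hypervisor and lists the guests it is currently running. It assumes the libvirt Python bindings are installed and that a local QEMU/KVM hypervisor answers at the qemu:///system URI; adjust the URI for your own setup.

```python
# Minimal sketch: connect to a hypervisor through libvirt and list its guests.
# Assumes the libvirt-python bindings and a local QEMU/KVM hypervisor reachable
# at qemu:///system (an assumption; adjust the URI for your environment).
import libvirt

conn = libvirt.openReadOnly("qemu:///system")   # read-only connection to the hypervisor

host = conn.getInfo()   # [CPU model, memory (MB), CPUs, MHz, NUMA nodes, ...]
print("Hypervisor:", conn.getType(), "with", host[2], "CPUs and", host[1], "MB RAM")

# Running guests are identified by numeric IDs.
for dom_id in conn.listDomainsID():
    dom = conn.lookupByID(dom_id)
    state, max_mem, mem, vcpus, cpu_time = dom.info()   # memory values are in KiB
    print(f"{dom.name()}: {vcpus} vCPU(s), {mem // 1024} MB in use")

conn.close()
```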
Which types of hypervisors are there?
There are two types of hypervisors; they are somewhat alike, but here is the difference:
- Type 1 (or native, bare metal) hypervisors run directly on the host’s hardware to control the hardware and to monitor guest operating systems. A guest operating system thus runs on another level above the hypervisor.
This model represents the classic implementation of virtual machine architectures.
- Type 2 (or hosted) hypervisors run within a conventional operating system environment. With the hypervisor layer as a distinct second software level, guest operating systems run at the third level above the hardware.
Note: Microsoft Hyper-V (released in June 2008) exemplifies a type 1 product that is often mistaken for a type 2. Both the free stand-alone version and the version that is part of the commercial Windows Server 2008 product use a virtualized Windows Server 2008 parent partition to manage the Type 1 Hyper-V hypervisor. In both cases the Hyper-V hypervisor loads prior to the management operating system, and any virtual environments created run directly on the hypervisor, not via the management operating system.
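A small side note: a guest OS can usually tell that it is virtualized, regardless of which hypervisor type sits underneath. On Linux, the CPUID “hypervisor present” bit shows up as a hypervisor flag in /proc/cpuinfo. A minimal, Linux-only sketch:

```python
# Minimal sketch: check whether the current Linux system is itself a guest
# running under a hypervisor (Type 1 or Type 2), using the CPUID
# "hypervisor present" bit that the kernel exposes in /proc/cpuinfo.
# Linux-only; other platforms expose this differently.

def running_under_hypervisor(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                return "hypervisor" in line.split()
    return False

if __name__ == "__main__":
    if running_under_hypervisor():
        print("This OS appears to be running as a virtual-machine guest.")
    else:
        print("No hypervisor flag found; this looks like a physical machine.")
```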
What is a virtual machine?
A virtual machine was originally defined by Popek and Goldberg as “an efficient, isolated duplicate of a real machine”. Current use includes virtual machines which have no direct correspondence to any real hardware.
Virtual machines are separated into two major categories, based on their use and degree of correspondence to any real machine. A system virtual machine provides a complete system platform which supports the execution of a complete operating system (OS). In contrast, a process virtual machine is designed to run a single program, which means that it supports a single process. An essential characteristic of a virtual machine is that the software running inside is limited to the resources and abstractions provided by the virtual machine—it cannot break out of its virtual world.
A virtual machine (VM) is a self-contained operating environment—software that works with, but is independent of, a host operating system. Virtualization technologies are sometimes called dynamic virtual machine software.
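To illustrate what a system virtual machine looks like in practice, here is a hedged sketch that registers a small guest with a hypervisor through libvirt. The guest name, memory size and disk image path are illustrative placeholders only, and the sketch assumes libvirt-python and a QEMU/KVM host at qemu:///system.

```python
# Minimal sketch: define a small "system virtual machine" via libvirt.
# Assumes libvirt-python and a QEMU/KVM host at qemu:///system; the guest name,
# memory size and disk image path below are placeholders for illustration.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>demo-guest</name>
  <memory unit='MiB'>512</memory>
  <vcpu>1</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/demo-guest.img'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")
dom = conn.defineXML(DOMAIN_XML)    # register the guest with the hypervisor (persistent, not running)
print("Defined guest:", dom.name())
# dom.create() would boot it, once the disk image referenced above actually exists.
conn.close()
```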
What is paravirtualization?
Paravirtualization is a type of virtualization technique that presents a software interface to virtual machines that is similar but not identical to that of the underlying hardware. The entire OS runs on top of the hypervisor and communicates with it directly, typically resulting in better performance. The kernels of both the OS and the hypervisor must be modified, however, to accommodate this close interaction.
The intent of the modified interface is to reduce the portion of the guest’s execution time spent performing operations which are substantially more difficult to run in a virtual environment than in a non-virtualized environment. Paravirtualization provides specially defined ‘hooks’ that allow the guest(s) and host to request and acknowledge these tasks, which would otherwise be executed in the virtual domain (where execution performance is worse). Hence, a successful paravirtualized platform may allow the virtual machine monitor (VMM) to be simpler (by relocating execution of critical tasks from the virtual domain to the host domain), and/or reduce the overall performance degradation of machine execution inside the virtual guest.
Paravirtualization requires the guest operating system to be explicitly ported for the para-API — a conventional O/S distribution which is not paravirtualization-aware cannot be run on top of a paravirtualized VMM. However, even in cases where the operating system cannot be modified, components may be available which confer many of the significant performance advantages of paravirtualization; for example, the XenWindowsGplPv project provides a kit of paravirtualization-aware device drivers, licensed under GPL, that are intended to be installed into a Microsoft Windows virtual-guest running on the Xen hypervisor.
Paravirtualization relies on a virtualized subset of the x86 architecture. Separately, Intel and AMD have developed processor extensions (Intel VT-x and AMD-V) designed to allow simpler virtualization code and the potential for better performance of fully virtualized environments.
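Those extensions are advertised by the CPU as the vmx (Intel) and svm (AMD) flags on Linux, so a quick check for hardware virtualization support can look like this minimal, Linux-only sketch:

```python
# Minimal sketch: check whether the host CPU advertises the Intel VT-x (vmx)
# or AMD-V (svm) hardware-virtualization extensions mentioned above.
# Linux-only: it simply scans the CPU flags listed in /proc/cpuinfo.

def hardware_virtualization_flags(cpuinfo_path="/proc/cpuinfo"):
    flags = set()
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split())
    return flags & {"vmx", "svm"}

found = hardware_virtualization_flags()
print("Hardware virtualization support:", ", ".join(sorted(found)) or "not advertised")
```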
What is a “bare metal” hypervisor?
VMware states that the ESX product runs on “bare metal”. In contrast to other VMware products, it does not run atop a third-party operating system, but instead includes its own kernel. Up through the current ESX version 4.1, a Linux kernel is started first and is used to load a variety of specialized virtualization components, including VMware’s ‘vmkernel’ component. This previously booted Linux kernel then becomes the first running virtual machine and is called the service console. Thus, at normal run time, the vmkernel is running on the bare computer and the Linux-based service console runs as the first virtual machine. The vmkernel itself, which VMware says is a microkernel, has three interfaces to the outside world: hardware, guest systems and the service console (Console OS).
On the other hand, with the introduction of Citrix XenClient, there is now also a “bare metal” hypervisor that runs on desktop PC hardware. XenClient is a bare metal hypervisor intended for use on a client computing device, that is, desktop PC hardware rather than server hardware. XenClient is being created by Citrix in partnership with hardware vendors such as HP. With XenClient, users can run their company desktop alongside their own Windows or Linux OS on a single desktop or laptop PC.
What is Xen?
The Xen Project has developed and continues to evolve a free, open-source hypervisor for x86. Available since 2003 under the GNU General Public License, Xen runs directly on the hardware as a bare-metal (Type 1) hypervisor and is best known for its use of paravirtualization, in which guest operating systems are modified to cooperate with the hypervisor.
It allows several guest operating systems to execute on the same computer hardware concurrently. The University of Cambridge Computer Laboratory developed the first versions of Xen. As of 2010, the Xen community develops and maintains Xen as free software.
Xen systems have a structure with the Xen hypervisor as the lowest and most privileged layer. Above this layer come one or more guest operating systems, which the hypervisor schedules across the physical CPUs. The first guest operating system, called in Xen terminology “domain 0” (dom0), boots automatically when the hypervisor boots and receives special management privileges and direct access to all physical hardware by default. The system administrator can log into dom0 in order to manage any further guest operating systems, called “domain U” (domU) in Xen terminology.
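As a small illustration of the dom0/domU split, the sketch below shells out to the Xen management tool from inside dom0 and prints the domains it reports. It assumes the classic xm toolstack is on the PATH; newer Xen releases ship xl with essentially the same listing.

```python
# Minimal sketch: from dom0, list the running Xen domains by calling the Xen
# management tool. Assumes it runs inside dom0 with the "xm" toolstack on the
# PATH (newer Xen releases use "xl" with equivalent output).
import subprocess

output = subprocess.check_output(["xm", "list"], text=True)
lines = output.strip().splitlines()

# The first line is a header: Name  ID  Mem  VCPUs  State  Time(s)
for line in lines[1:]:
    fields = line.split()
    name, dom_id, mem, vcpus = fields[0], fields[1], fields[2], fields[3]
    role = "dom0" if name == "Domain-0" else "domU"
    print(f"{name} ({role}): id={dom_id}, {mem} MB, {vcpus} vCPU(s)")
```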
The project originated as a research project at the University of Cambridge led by Ian Pratt, who later left the school to found XenSource, the first company to implement a commercial version of the Xen hypervisor. A number of large enterprise companies now support Xen, including Microsoft, Novell and IBM. XenSource (not surprisingly) and SAP-backed startup Virtual Iron offer Xen-based virtualization solutions.
What is application virtualization?
Application virtualization is an umbrella term that describes software technologies that improve portability, manageability and compatibility of applications by encapsulating them from the underlying operating system on which they are executed. A fully virtualized application is not installed in the traditional sense, although it is still executed as if it is. The application is fooled at runtime into believing that it is directly interfacing with the original operating system and all the resources managed by it, when in reality it is not. In this context, the term “virtualization” refers to the artifact being encapsulated (application), which is quite different to its meaning in hardware virtualization, where it refers to the artifact being abstracted (physical hardware).
Virtualization in the application layer isolates software programs from the hardware and the OS, essentially encapsulating them as independent, moveable objects that can be relocated without disturbing other systems. Application virtualization technologies minimize app-related alterations to the OS, and mitigate compatibility challenges with other programs.
Full application virtualization requires a virtualization layer. Application virtualization layers replace part of the runtime environment normally provided by the operating system. The layer intercepts all file and Registry operations of virtualized applications and transparently redirects them to a virtualized location, often a single file. The application never knows that it’s accessing a virtual resource instead of a physical one. Since the application is now working with one file instead of many files and registry entries spread throughout the system, it becomes easy to run the application on a different computer and previously incompatible applications can be run side-by-side. Examples of this technology for the Windows platform are Ceedo, InstallFree, Citrix XenApp, Novell ZENworks Application Virtualization, Endeavors Technologies Application Jukebox, Microsoft Application Virtualization, Software Virtualization Solution, and VMware ThinApp.
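Each of the products above implements this interception in its own way. Purely as a toy illustration of the redirection idea (and not how any of these vendors actually do it), the sketch below wraps Python’s built-in open() so that an application’s file writes quietly land in a private sandbox directory:

```python
# Toy illustration only: redirect an application's file writes into a private
# sandbox directory, loosely mimicking how an application virtualization layer
# intercepts file operations. This is NOT any vendor's actual mechanism; it
# only demonstrates the redirection concept.
import builtins
import os

SANDBOX = "/tmp/appvirt-sandbox"   # illustrative location
os.makedirs(SANDBOX, exist_ok=True)
_real_open = builtins.open

def sandboxed_open(path, mode="r", *args, **kwargs):
    # Writes are redirected into the sandbox; reads fall back to the real path
    # unless the file was previously written inside the sandbox.
    redirected = os.path.join(SANDBOX, os.path.basename(str(path)))
    if any(flag in mode for flag in ("w", "a", "x")) or os.path.exists(redirected):
        path = redirected
    return _real_open(path, mode, *args, **kwargs)

builtins.open = sandboxed_open

# The "application" below believes it is writing its normal settings file,
# but the data actually lands in the sandbox directory.
with open("settings.ini", "w") as f:
    f.write("theme=dark\n")
print("Sandbox now contains:", os.listdir(SANDBOX))
```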
Also somewhat included in application virtualization is application streaming, a technique that virtualizes and streams the application to an end user without altering the user’s operating system.
What is a virtual appliance?
A virtual appliance (VA) is not, as its name suggests, a piece of hardware. It is, rather, a prebuilt, preconfigured application bundled with an operating system inside a virtual machine. The VA is a software distribution vehicle, touted by VMware and others, as a better way of installing and configuring software. The VA targets the virtualization layer, so it needs a destination with a hypervisor. VMware and others are offering the VA as a better way to package software demonstrations, proof-of-concept projects and evaluations.
Citrix, for its part, has started to offer a broad range of its own products as VPX editions, its virtual appliance term. As of 11.08.2010, Citrix offers Citrix Access Gateway VPX, NetScaler VPX, Branch Repeater VPX, Merchandising Server, Citrix Licensing Server and XenDesktop Synchronizer (currently in Tech Preview for synchronization with XenClient).
What is Desktop Virtualization?
Desktop virtualization stores the resulting “virtualized” desktop on a remote central server, instead of on the local storage of the remote client; thus, when users work from their remote desktop client, all of the programs, applications, processes, and data used are kept and run centrally. This scenario allows users to access their desktops on any capable device, such as a traditional personal computer, notebook computer, smartphone, or thin client. Virtual desktop infrastructure, sometimes referred to as virtual desktop interface (VDI), is the server computing model enabling desktop virtualization, encompassing the hardware and software systems required to support the virtualized environment.
Desktop virtualization involves encapsulating and delivering either access to an entire information system environment or the environment itself to a remote client device. The client device may use an entirely different hardware architecture than that used by the projected desktop environment, and may also be based upon an entirely different operating system.
The desktop virtualization model allows the use of virtual machines to let multiple network subscribers maintain individualized desktops on a single, centrally located computer or server. The central machine may operate at a residence, business, or data center. Users may be geographically scattered, but all may be connected to the central machine by a local area network, a wide area network, or the public Internet.
The main competing vendors in the desktop virtualization space at the moment are Citrix and VMware, but there are also smaller vendors on the horizon with products that can compete in some areas.
Here is a list of current Virtual Desktop Vendors:
- Cendio ThinLinc
- Citrix XenDesktop
- Ericom WebConnect
- Leostream
- Microsoft Remote Desktop Services
- MokaFive Suite
- NComputing
- NX technology
- Pano Logic
- Parallels Virtual Desktop Infrastructure
- Red Hat Enterprise Virtualization for Desktops
- RingCube vDesk
- Sun/Oracle Virtual Desktop Infrastructure
- Systancia AppliDis Fusion
- ThinDesk
- Userful
- VMware View
- Wyse
Got something to add? Please post in the Comment field below!