Beyond virtualization: Envisioning true cloud computing
When operating systems break their hardware bonds and deepen their ties with hypervisors, virtual machines will seem almost quaint
I find it puzzling whenever I come across any reasonably sized IT infrastructure that has little or no virtualization in place, and my puzzlement turns to amazement if there’s no plan to embrace virtualization in the near future. Whether it’s due to the “get off my lawn” attitude that eventually killed off many AS/400 admins or simply a budget issue, clinging to a traditional physical infrastructure today is madness.
For one thing, what are these companies buying for servers? If they’re replacing old single- and dual-core servers with new quad-core boxes (at minimum) and simply moving the services over, then they’ve got a whole lot more hardware than they need. Each of their server workloads is running on hardware that could easily handle a half-dozen virtual servers, even with free hypervisors. Are these companies only going to embrace server virtualization when the rest of us have already moved past it?
If we look forward a few years, we can expect to see fundamental changes in the way we manage virtual servers. As if the advent of stem-to-stern enterprise virtualization wasn’t already revolutionary, I think there’s plenty more to come. The next leap will occur when we start seeing operating systems that were explicitly designed and tuned to run as VMs — and won’t even function on physical hardware due to the lack of meaningful driver support and other attributes of the physical server world. What’s the point in carrying a few thousand drivers and hardware-specific functions around in a VM anyway? They’re useless.
Eventually we’ll start to see widely available OS builds that dispense with the vast majority of today’s underpinnings, excising years of fat and bloat in favor of kernels highly specific to the hypervisor in use, with different ideas on how memory, CPU, and I/O resources are allocated and managed. We’re backing into that now by introducing CPU and RAM to running VMs through hot-plug extensions that were originally conceived for physical RAM and CPU additions.
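To see why the current approach is a back alley, consider how a Linux guest handles hot-added RAM today: the new memory appears as blocks under sysfs that must be explicitly brought online before the kernel can use them. The sketch below is a simplified illustration (it assumes a Linux guest with the standard memory-hotplug sysfs layout, and returns an empty list where that interface is absent):

```python
# Sketch: how a Linux guest sees hot-plugged RAM today. Hot-added memory
# appears as blocks under sysfs that must be brought online by the guest.
import os

SYSFS_MEMORY = "/sys/devices/system/memory"

def memory_blocks():
    """Return (block_name, state) pairs, e.g. ("memory0", "online")."""
    blocks = []
    if not os.path.isdir(SYSFS_MEMORY):
        return blocks  # non-Linux guest, or kernel without memory hotplug
    for entry in sorted(os.listdir(SYSFS_MEMORY)):
        state_file = os.path.join(SYSFS_MEMORY, entry, "state")
        if entry.startswith("memory") and os.path.isfile(state_file):
            with open(state_file) as f:
                blocks.append((entry, f.read().strip()))
    return blocks

def online_block(name):
    """Bring a newly hot-added block online (requires root in the guest)."""
    with open(os.path.join(SYSFS_MEMORY, name, "state"), "w") as f:
        f.write("online")
```

That extra onlining step — whether done by hand or by a udev rule — is exactly the indirection a hypervisor-native kernel would make unnecessary.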
But this is the long way around — it’s a kludge. When an OS kernel can instantly request, adapt, and use additional compute resources without going through these back alleys, we’ll be closer to the concept of true cloud computing: server instances with no concern for underlying hardware, no concept of fixed CPUs, no concept of fixed or static RAM. These will be operating systems that tightly integrate with the hypervisor at the scheduling level, that care not one whit about processor and core affinity, about optimizing for NUMA, or about running out of RAM.
(I should note that Microsoft has already done something like this in the forthcoming Windows Server 8 Hyper-V, enabling the hypervisor to hot-add RAM to a Microsoft OS even as old as Windows Server 2003 R2. It just says, “Oh, more RAM,” and goes with it — good show.)
This will naturally take years to be fully realized, but I’m sure it’s coming. The hypervisor will become strong and smart enough to take over these functions for any compliant guest OS and essentially turn each VM into an application silo. As the load on the application increases, the kernel’s job becomes communicating the growing resource requirements to the hypervisor, which then makes the lower-level decisions about which physical resources to let the VM consume, transitioning other instances to different physical servers to make room when necessary.
In turn, this will dispense with the notion of assigning emulated CPU and RAM allocations to VMs; instead, we’ll set minimum, maximum, and burst limits — like paravirtualization, but completely OS-agnostic, without relying on a host OS to play middleman or on a fixed kernel. Each VM may know it’s running as a VM, but it will also run its own kernel and be able to address hardware directly, with the hypervisor managing the transaction.
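To make the minimum/maximum/burst idea concrete, here is a toy policy sketch in Python. Nothing in it corresponds to a real hypervisor API — all the names and numbers are hypothetical — but it shows the shape of the decision: grant what the guest asks for, clamped to a guaranteed floor and a steady-state ceiling, with short excursions above the ceiling allowed while a burst budget lasts.

```python
# Toy illustration of a min/max/burst allocation policy. All names and
# values are hypothetical; no real hypervisor exposes this exact interface.
from dataclasses import dataclass

@dataclass
class VMLimits:
    minimum_mb: int      # always-guaranteed floor
    maximum_mb: int      # steady-state ceiling
    burst_mb: int        # extra headroom for short spikes
    burst_budget_s: int  # seconds of burst allowed before clamping

def grant(limits: VMLimits, requested_mb: int, burst_used_s: int) -> int:
    """Decide how much RAM to actually grant a VM for this interval."""
    ceiling = limits.maximum_mb
    if burst_used_s < limits.burst_budget_s:
        ceiling += limits.burst_mb           # spike tolerated, for now
    return max(limits.minimum_mb, min(requested_mb, ceiling))

web = VMLimits(minimum_mb=512, maximum_mb=8192, burst_mb=4096, burst_budget_s=30)
print(grant(web, 256, 0))     # below the floor -> 512
print(grant(web, 10000, 0))   # spike within the burst window -> 10000
print(grant(web, 10000, 60))  # burst budget exhausted -> clamped to 8192
```

The point of the sketch is the contract, not the arithmetic: the guest never names a fixed RAM size, it only expresses demand, and the hypervisor resolves that demand against policy.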
We’re talking about fixed-purpose servers that have embedded hypervisors, probably redundant, with the ability to upgrade on the fly by upgrading a passive hypervisor, transitioning the load nearly instantly, then upgrading the other. While we’re at it, we’re also talking about small-footprint physical servers that are essentially CPU and RAM sockets with backplane-level network interfaces completely managed in software.
Are these advanced virtual servers virtual servers at all? Or are they services? If they evolve to become software services like databases and application servers that understand their environment, there may be no need to run more than one or two instances — period.
Rather than dozens of Web server VMs, you might have just two — but two that can grow from, say, 500MHz to 32GHz in an instant as the load jumps, from consuming 6GB RAM to 512GB in the same timeframe, but without the need to assign any fixed value. With a suitably high-speed backplane, it’s conceivable that a single server instance could span two or more physical servers, with the hypervisors divvying up prioritized tasks, keeping RAM local to the individual process, wherever that happens to be. Naturally, we’re talking about massively multithreaded apps, but isn’t that the whole idea behind virtualization and multi-core CPUs?
Perhaps, at some point in the future, we’ll look at a blade chassis or even several racks of them, and instead of seeing a few dozen physical servers running our hypervisor du jour, we instead view them as a big pile of resources to be consumed as needed by any number of services, running on what might be described as an OS shim. If there’s a hardware failure, that piece is easily hot-swapped out for replacement, and the only loss might be a few hundred processes disappearing from a table containing tens of thousands, and those few hundred would be instantly restarted elsewhere with no significant loss of service.
Maybe, if we ever get to a reality such as this, those stalwarts still nursing a decrepit physical data center will finally have warmed up to the idea of running more than one server per physical box. Heck, at that point, they won’t have a choice.
This entry was posted on 2012/04/09 and is filed under Tech.